Integrate Red Hat’s single sign-on technology 7.4 with Red Hat OpenShift
source link: https://developers.redhat.com/blog/2021/03/25/integrate-red-hats-single-sign-on-technology-7-4-with-red-hat-openshift/
In this article, you will learn how to integrate Red Hat’s single sign-on technology 7.4 with Red Hat OpenShift 4. For this integration, we’ll use a PostgreSQL database, which requires persistent storage provided by an external Network File System (NFS) server partition.
Prerequisites
To follow the instructions in this article, you will need the following components in your development environment:
- An OpenShift 4 or higher cluster with leader and follower nodes.
- Red Hat’s single sign-on technology version 7.4, deployed on Red Hat OpenShift.
- An external NFS server.
Note: For the purpose of this article, I will refer to leader and follower nodes, although the code output uses the terminology of master and worker nodes.
Setting up the external NFS server
NFS allows remote hosts to mount file systems over a network and interact with them as though they are mounted locally. This lets system administrators consolidate resources on centralized servers on the network. For an introduction to NFS concepts and fundamentals, see the introduction to NFS in the Red Hat Enterprise Linux 7 documentation.
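As a quick illustration I'm adding here (it is not part of the original article), the partition used below could be exported on the NFS server through an /etc/exports entry like the following; the path and client subnet are assumptions, so adjust them for your environment:

```
# /etc/exports on the NFS server (illustrative entry; adjust path and client range)
# rw = read-write, sync = commit writes before replying,
# no_root_squash = keep root's UID (often needed so container UIDs can write,
# but it weakens security -- review before using it in production)
/persistent_volume1 192.168.0.0/24(rw,sync,no_root_squash)
```

After editing the file, apply the change on the NFS server with exportfs -ra.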
Creating persistent storage for an OpenShift 4.5 cluster
OpenShift 4.5 doesn’t provide persistent storage volumes out of the box. You can either map persistent storage manually or you can define it using the OpenShift control plane’s Machine Config Operator. See Machine roles in OpenShift Container Platform for more about the Machine Config Operator. In the next sections, I will show you how to map persistent storage manually.
Mapping OpenShift persistent storage to NFS
To access an NFS partition from an OpenShift cluster’s follower (worker) nodes, you must manually map persistent storage to the NFS partition. In this section, you will do the following:
- Get a list of nodes.
- Access a follower node.
- Ping the NFS server.
- Mount the exported NFS server partition.
- Verify that the file is present on the NFS server.
Get a list of nodes
To get a list of the nodes, enter:
$ oc get nodes
The list of requested nodes displays as follows:
NAME                   STATUS   ROLES    AGE   VERSION
master-0.example.com   Ready    master   81m   v1.18.3+2cf11e2
worker-0.example.com   Ready    worker   72m   v1.18.3+2cf11e2
worker-1.example.com   Ready    worker   72m   v1.18.3+2cf11e2
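Because the next steps must be repeated on each follower node, a small helper can pull the follower names out of that listing. This is a convenience sketch I'm adding, not a command from the article; it assumes `oc get nodes --no-headers`-style output:

```shell
# worker_nodes: print the names of follower (worker) nodes.
# Reads `oc get nodes --no-headers`-style output on stdin and keeps
# only the rows whose ROLES column (field 3) is "worker".
worker_nodes() {
  awk '$3 == "worker" { print $1 }'
}

# Example usage: oc get nodes --no-headers | worker_nodes
```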
Access the follower node
To access the follower node, use the oc debug node command and type chroot /host, as shown here:
$ oc debug node/worker-0.example.com
Starting pod/worker-example.com-debug ...
Run chroot /host before issuing further commands:
sh-4.2# chroot /host
Ping the NFS server
Next, ping the NFS server from the follower node in debug mode:
sh-4.2# ping node-0.nfserver1.example.com
Mount the NFS partition
Now, mount the NFS partition from the follower node (still in debug mode):
sh-4.2# mount node-0.nfserver1.example.com:/persistent_volume1 /mnt
Verify that the file is present on the NFS server
Create a dummy file from the follower node in debug mode:
sh-4.2# touch /mnt/test.txt
Verify that the file is present on the NFS server:
$ cd /persistent_volume1
$ ls -al
total 0
drwxrwxrwx.  2 root      root       22 Sep 23 09:31 .
dr-xr-xr-x. 19 root      root      276 Sep 23 08:37 ..
-rw-r--r--.  1 nfsnobody nfsnobody   0 Sep 23 09:31 test.txt
Note: You must issue the same command sequence for every follower node in the cluster.
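The note above says to repeat the sequence on every follower node. As a convenience sketch of my own (not a command from the article), the loop below automates that, assuming you are logged in with oc and that the NFS server and partition names match this example:

```shell
# mount_test_all_followers: run the NFS mount check on every follower node.
# Sketch only -- assumes `oc` is logged in to the cluster and that the NFS
# export is node-0.nfserver1.example.com:/persistent_volume1 as shown above.
mount_test_all_followers() {
  for node in $(oc get nodes --no-headers | awk '$3 == "worker" { print $1 }'); do
    # `oc debug node/<name> -- <cmd>` runs the command in a debug pod on that node.
    oc debug node/"$node" -- chroot /host sh -c \
      'mount node-0.nfserver1.example.com:/persistent_volume1 /mnt && touch /mnt/test.txt'
  done
}

# Example usage: mount_test_all_followers
```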
Persistent volume storage
The previous section showed how to define and mount an NFS partition. Now, you’ll use the NFS partition to define and map an OpenShift persistent volume. The steps are as follows:
- Make the persistent volume writable on the NFS server.
- Map the persistent volume to the NFS partition.
- Create the persistent volume.
Make the persistent volume writable
Make the persistent volume writable on the NFS server:
$ chmod 777 /persistent_volume1
Map the persistent volume to the NFS partition
Define a storage class and make it the default storage class. For example, the following YAML defines a StorageClass named slow:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
Next, make the storage class the default class:
$ oc create -f slow_sc.yaml
$ oc patch storageclass slow -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
storageclass.storage.k8s.io/slow patched
Note: The StorageClass is common to all namespaces.
Create the persistent volume
You can create a persistent volume either from the OpenShift admin console or from a YAML file, as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  nfs:
    path: /persistent_volume2
    server: node-0.nfserver1.example.com

$ oc create -f pv.yaml
persistentvolume/example created

$ oc get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
example   5Gi        RWO            Retain           Available           slow                    5s
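For reference, a claim that would bind to a volume of this class looks roughly like the following. The claim name here is hypothetical and only for illustration; the SSO template used later creates its own claim (sso-postgresql-claim) for you:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim        # hypothetical name, for illustration only
spec:
  accessModes:
    - ReadWriteOnce          # must match an access mode offered by the PV
  resources:
    requests:
      storage: 5Gi           # must fit within the PV's capacity
  storageClassName: slow     # binds the claim to the class defined above
```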
Deploy SSO on the OpenShift cluster
Next, you’ll deploy Red Hat’s single sign-on technology on the OpenShift cluster. The steps are as follows:
- Create a new project.
- Download the sso-74 templates.
- Customize the sso74-ocp4-x509-postgresql-persistent template.
Create a new project
Create a new project using the oc new-project command:
$ oc new-project sso-74
Import the OpenShift image for Red Hat’s single sign-on technology 7.4:
$ oc -n openshift import-image rh-sso-7/sso74-openshift-rhel8:7.4 --from=registry.redhat.io/rh-sso-7/sso74-openshift-rhel8:7.4 --confirm
Note: If you need to delete and re-create the SSO project, first delete the secrets, which are project-specific.
Download the sso-74 templates
Here is the list of available templates:
$ oc get templates -n openshift -o name | grep -o 'sso74.\+'
sso74-https
sso74-ocp4-x509-https
sso74-ocp4-x509-postgresql-persistent
sso74-postgresql
sso74-postgresql-persistent
Customize the sso74-ocp4-x509-postgresql-persistent template
Next, you’ll customize the sso74-ocp4-x509-postgresql-persistent template to allow a TLS connection to the persistent PostgreSQL database:
$ oc process sso74-ocp4-x509-postgresql-persistent -n openshift \
    SSO_ADMIN_USERNAME=admin SSO_ADMIN_PASSWORD=password \
    -o yaml > my_sso74-x509-postgresql-persistent.yaml
Manually control pod replica scheduling
You can manually control pod replica scheduling for the first rollout. Within the updated template file, my_sso74-x509-postgresql-persistent.yaml, set the replicas value within the deployment configs of both sso and sso-postgresql.
Setting the replicas to zero (0) within each deployment config lets you manually control the initial pod rollout. If that’s not enough, you can also increase the initialDelaySeconds value for the liveness and readiness probes. Here is the updated deployment config of sso:
kind: DeploymentConfig
metadata:
  labels:
    application: sso
    rhsso: 7.4.2.GA
    template: sso74-x509-postgresql-persistent
  name: sso
spec:
  replicas: 0
  selector:
    deploymentConfig: sso
Here is the updated config for sso-postgresql:
kind: DeploymentConfig
metadata:
  labels:
    application: sso
    rhsso: 7.4.2.GA
    template: sso74-x509-postgresql-persistent
  name: sso-postgresql
spec:
  replicas: 0
  selector:
    deploymentConfig: sso-postgresql
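If the probes still time out on slow hardware, the initialDelaySeconds mentioned earlier can be raised in the same file. The fragment below is illustrative only; the probe commands and delay values are assumptions, not values taken from the article, so check them against your generated template:

```yaml
# Fragment of the sso container spec in the template (illustrative values).
livenessProbe:
  exec:
    command: ["/bin/bash", "-c", "/opt/eap/bin/livenessProbe.sh"]
  initialDelaySeconds: 120   # raised to give the server more time to boot
readinessProbe:
  exec:
    command: ["/bin/bash", "-c", "/opt/eap/bin/readinessProbe.sh"]
  initialDelaySeconds: 120
```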
Process the YAML template
Use the oc create command to process the YAML template:
$ oc create -f my_sso74-x509-postgresql-persistent.yaml
service/sso created
service/sso-postgresql created
service/sso-ping created
route.route.openshift.io/sso created
deploymentconfig.apps.openshift.io/sso created
deploymentconfig.apps.openshift.io/sso-postgresql created
persistentvolumeclaim/sso-postgresql-claim created
Upscale the sso-postgresql pod
Use the oc scale command to upscale the sso-postgresql pod:
$ oc scale --replicas=1 dc/sso-postgresql
Note: Wait until the PostgreSQL pod has reached a ready state of 1/1. This might take a couple of minutes.
$ oc get pods
NAME                      READY   STATUS      RESTARTS   AGE
sso-1-deploy              0/1     Completed   0          10m
sso-postgresql-1-deploy   0/1     Completed   0          10m
sso-postgresql-1-fzgf7    1/1     Running     0          3m46s
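The wait described in the note above can also be scripted. This polling helper is my own sketch rather than a command from the article, and it assumes `oc get pods` output shaped like the listing shown:

```shell
# wait_ready: block until a pod whose name starts with the given prefix
# reports READY 1/1. Sketch only; assumes `oc get pods --no-headers`
# columns NAME READY STATUS RESTARTS AGE as in the listing above.
wait_ready() {
  prefix=$1
  until oc get pods --no-headers | awk -v p="$prefix" \
      '$1 ~ "^" p && $2 == "1/1" { found = 1 } END { exit !found }'; do
    sleep 5
  done
}

# Example usage: wait_ready sso-postgresql
```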
When the sso-postgresql pod starts correctly, it provides the following log output:
pg_ctl -D /var/lib/pgsql/data/userdata -l logfile start
waiting for server to start....2020-09-25 15:13:01.579 UTC [37] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-09-25 15:13:01.588 UTC [37] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2020-09-25 15:13:01.631 UTC [37] LOG:  redirecting log output to logging collector process
2020-09-25 15:13:01.631 UTC [37] HINT:  Future log output will appear in directory "log".
 done
server started
/var/run/postgresql:5432 - accepting connections
=> sourcing /usr/share/container-scripts/postgresql/start/set_passwords.sh ...
ALTER ROLE
waiting for server to shut down.... done
server stopped
Starting server...
2020-09-25 15:13:06.147 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2020-09-25 15:13:06.147 UTC [1] LOG:  listening on IPv6 address "::", port 5432
2020-09-25 15:13:06.157 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-09-25 15:13:06.164 UTC [1] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2020-09-25 15:13:06.206 UTC [1] LOG:  redirecting log output to logging collector process
2020-09-25 15:13:06.206 UTC [1] HINT:  Future log output will appear in directory "log".
Upscale the sso pod
Use the oc scale command to upscale the sso pod as follows:
$ oc scale --replicas=1 dc/sso
deploymentconfig.apps.openshift.io/sso
Next, use the oc get pods command to verify that the SSO pod is fully up and running. It reaches a ready state of 1/1, as shown:
$ oc get pods
NAME                      READY   STATUS      RESTARTS   AGE
sso-1-d45k2               1/1     Running     0          52m
sso-1-deploy              0/1     Completed   0          63m
sso-postgresql-1-deploy   0/1     Completed   0          63m
sso-postgresql-1-fzgf7    1/1     Running     0          57m
Testing
Use the oc status command to verify the deployment:
$ oc status
In project sso-74 on server https://api.example.com:6443

svc/sso-ping (headless):8888

https://sso-sso-74.apps.example.com (reencrypt) (svc/sso)
  dc/sso deploys openshift/sso74-openshift-rhel8:7.4
    deployment #1 deployed about an hour ago - 1 pod

svc/sso-postgresql - 172.30.113.48:5432
  dc/sso-postgresql deploys openshift/postgresql:10
    deployment #1 deployed about an hour ago - 1 pod
You can now directly access the OpenShift 4.5 platform using the single sign-on technology admin console at https://sso-sso-74.apps.example.com. Log in with your admin credentials.
Conclusion
This article highlighted the basic steps for deploying Red Hat’s single sign-on technology 7.4 on OpenShift. Deploying SSO on OpenShift makes its SSO features available out of the box. As one example, it is easy to increase your workload capacity by adding new single sign-on pods to your OpenShift deployment during horizontal scaling.