The following 1-minute article is a follow-up to my previous article about how to use Keycloak as an authentication provider for OpenShift. In this article, I will show you how to configure Keycloak and OpenShift for Single Log Out (SLO). This means that when you log out from OpenShift, you are automatically logged out from Keycloak as well. This requires some additional configuration in both Keycloak and OpenShift, but it is not too complicated.
Prerequisites
The following prerequisites are required to follow this article:
OpenShift 4.16 or higher with cluster-admin privileges
Keycloak Configuration
First, configure the logout URL in Keycloak. This is done by adding a new entry to the Valid post logout redirect URIs field in the Keycloak client configuration. In this case, we want to call the OpenShift logout URL, which is: https://oauth-openshift.apps.<your-cluster-name>/logout
This is done in the client settings:
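For reference, this is roughly how the setting shows up in the client's JSON representation (for example in a client or realm export). The client name openshift is just an example, and the attribute key shown is, to my knowledge, the one recent Keycloak versions use to store the Valid post logout redirect URIs field:

```json
{
  "clientId": "openshift",
  "attributes": {
    "post.logout.redirect.uris": "https://oauth-openshift.apps.<your-cluster-name>/logout"
  }
}
```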
OpenShift Configuration
Now we need to configure OpenShift to redirect to the Keycloak logout endpoint when a user logs out. This is done by adding a logout redirect URL to the OpenShift Console configuration.
Modify the existing Console resource named cluster and add the logoutRedirect parameter to the authentication section.
The important pieces of the logout URL are the client_id and the post_logout_redirect_uri. The client ID must be the same as the Keycloak client ID, and the post logout redirect URI must be the OpenShift logout URL.
(Also replace <keycloakURL> and <realm> with your Keycloak URL and realm name.)
Strictly speaking, id_token_hint should be used instead of client_id, but since OpenShift does not store the ID token, we use the client ID instead.
This is the Keycloak logout URL that the console calls when you log out from OpenShift. It must contain the client ID and the post logout redirect URI (the OpenShift logout URL we allowed in Keycloak earlier).
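Putting the pieces together, the Console resource could look like the following sketch (replace the placeholders with your values; Keycloak installations still running on the legacy /auth context path need that prefix in the URL):

```yaml
apiVersion: config.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  authentication:
    # Keycloak end-session endpoint, called by the console on logout.
    # client_id must match the Keycloak client; post_logout_redirect_uri is the
    # OpenShift logout URL we allowed in Keycloak above.
    logoutRedirect: "https://<keycloakURL>/realms/<realm>/protocol/openid-connect/logout?client_id=<client_id>&post_logout_redirect_uri=https://oauth-openshift.apps.<your-cluster-name>/logout"
```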
Wait a few moments until the OpenShift Console Operator applies the new configuration.
Testing the configuration
Now that the configuration is done, we can test the Single Log Out functionality.
We are already logged in to OpenShift as user testuser.
Now we log out from OpenShift using the Log out button in the upper right corner.
This will redirect us to the Keycloak logout page where we need to confirm the logout.
This will log us out from Keycloak and redirect us back to the OpenShift logout page.
This is it. You are now logged out from both Keycloak and OpenShift. You can also check the Sessions in Keycloak to see that the session for the user is terminated.
Conclusion
As promised, this was a short article about how to configure Keycloak and OpenShift for Single Log Out. This is a useful feature if you want to ensure that users are logged out from all applications when they sign out, and it is good security practice in general.
One of the most commonly deployed operators in OpenShift environments is the Cert-Manager Operator. It automates the management of TLS certificates for applications running within the cluster, including their issuance and renewal.
The tool supports a variety of certificate issuers by default, including ACME, Vault, and self-signed certificates. Whenever a certificate is needed, Cert-Manager will automatically create a CertificateRequest resource that contains the details of the certificate. This resource is then processed by the appropriate issuer to generate the actual TLS certificate. The approval process in this case is usually fully automated, meaning that the certificate is issued without any manual intervention.
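For context, a typical Certificate resource might look like the sketch below (all names and DNS entries are examples); cert-manager turns it into a CertificateRequest behind the scenes:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-app-tls
  namespace: my-app
spec:
  # The Secret in which the issued certificate and key will be stored.
  secretName: my-app-tls
  dnsNames:
    - my-app.apps.example.com
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
```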
But what if you want more control? What if certificate issuance must follow strict organizational policies, such as requiring a specific country code or organization name? This is where the CertificateRequestPolicy resource, provided by the cert-manager Approver Policy, comes into play.
This article walks through configuring the Cert-Manager Approver Policy in OpenShift to enforce granular policies on certificate requests.
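As a preview, a CertificateRequestPolicy enforcing an organization and a country code could look roughly like this sketch (field names follow the approver-policy v1alpha1 API as I understand it; all values are examples):

```yaml
apiVersion: policy.cert-manager.io/v1alpha1
kind: CertificateRequestPolicy
metadata:
  name: org-policy
spec:
  allowed:
    dnsNames:
      values: ["*.apps.example.com"]
    subject:
      organizations:
        required: true
        values: ["Example Corp"]
      countries:
        required: true
        values: ["AT"]
  selector:
    # Apply this policy only to requests referencing this issuer.
    issuerRef:
      name: my-ca-issuer
      kind: ClusterIssuer
      group: cert-manager.io
```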
I was recently asked how to use Keycloak as an authentication provider for OpenShift: how to install Keycloak using the Operator, and how to configure Keycloak and OpenShift so that users can log in to OpenShift using OpenID Connect. I have to admit that the exact steps are not easy to find, so I decided to write a blog post describing each step in detail. This time I will not use GitOps, but the OpenShift and Keycloak web consoles to walk through the steps, because before we put it into GitOps, we need to understand what is actually happening.
This article tries to explain every step required so that a user can authenticate to OpenShift using Keycloak as an Identity Provider (IDP) and so that groups from Keycloak are imported into OpenShift. It does not cover a production-grade installation of Keycloak, but only a test installation, so you can see how it works. For production, you might want to consider a proper database (maybe external, but at least with a backup), high availability, etc.
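To give you an idea where we are heading, the resulting OAuth configuration in OpenShift will look roughly like the following sketch (client name, secret name, and claim mappings are examples and depend on your Keycloak setup):

```yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: keycloak
      mappingMethod: claim
      type: OpenID
      openID:
        clientID: openshift
        clientSecret:
          # Secret in openshift-config holding the Keycloak client secret.
          name: openid-client-secret
        issuer: https://<keycloakURL>/realms/<realm>
        claims:
          preferredUsername:
            - preferred_username
          groups:
            - groups
```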
During my day-to-day business, I discuss the following setup with many customers: Configure App-of-Apps. There I explain how I use an ApplicationSet that watches over a folder in Git and automatically adds a new Argo CD Application whenever a new folder is found. This works great, but there is a catch: the ApplicationSet uses the same Namespace, default, for all Applications. This is not always desired, especially when different teams work on different Applications.
Recently, a customer asked me whether this can be changed and whether it is possible to define a different Namespace for each Application. The answer is yes, and I would like to show you how to do this.
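As a teaser, one possible approach is to derive the destination Namespace from the folder name using the Git directories generator, roughly like this (repository URL and paths are examples, not the exact setup from the article):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-apps
  namespace: openshift-gitops
spec:
  generators:
    - git:
        repoURL: https://github.com/example/gitops-repo.git
        revision: main
        directories:
          - path: apps/*
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/gitops-repo.git
        targetRevision: main
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        # Instead of a hard-coded "default", use the folder name as Namespace.
        namespace: '{{path.basename}}'
      syncPolicy:
        syncOptions:
          - CreateNamespace=true
```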
Classic Kubernetes/OpenShift offers a feature called NetworkPolicy that allows users to control the traffic to and from their assigned Namespace. NetworkPolicies are designed to give project owners or tenants the ability to protect their own Namespace. Sometimes, however, I work with customers where the cluster administrators or a dedicated (network) team needs to enforce these policies centrally.
Since the NetworkPolicy API is namespace-scoped, it is not possible to enforce policies across Namespaces. The only solution was to create custom (project) admin and edit roles and remove the ability to create, modify, or delete NetworkPolicy objects. Technically, this is possible and easily done, but it shifts the entire responsibility for network security to the cluster administrators.
Luckily, this is where AdminNetworkPolicy (ANP) and BaselineAdminNetworkPolicy (BANP) come into play.
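To give a first impression, an AdminNetworkPolicy that a cluster administrator could use to deny ingress into one tenant's Namespaces from another tenant might look roughly like this (a sketch against the v1alpha1 policy API; label keys, values, and names are examples):

```yaml
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: deny-cross-tenant-ingress
spec:
  # Lower number = higher precedence; ANP rules are evaluated before NetworkPolicies.
  priority: 50
  subject:
    namespaces:
      matchLabels:
        tenant: team-a
  ingress:
    - name: deny-from-team-b
      action: Deny
      from:
        - namespaces:
            matchLabels:
              tenant: team-b
```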
Lately, I have come across several situations where a Helm Chart must be modified after it has been rendered by Argo CD. Argo CD runs helm template to render a Chart. Sometimes, especially when you work with Subcharts or when a specific setting is not yet supported by the Chart, you need to modify the rendered output afterwards … you need to post-render the Chart.
In this very short article, I would like to demonstrate this with a real-life example I had to solve: injecting annotations into Route objects so that a certificate can be injected. This is done by the cert-utils operator. For the post-rendering, the Argo CD repo server pod is extended with a sidecar container that watches the repositories and patches the rendered manifests if required.
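To illustrate the kind of change such a post-renderer applies, here is a sketch of a Kustomize overlay that adds the cert-utils annotation to all rendered Route objects (the Secret name is a placeholder, and the wiring of this file into the sidecar plugin is not shown here):

```yaml
# kustomization.yaml consumed during post-rendering (sketch).
# Adds the cert-utils operator annotation to every Route in the rendered output,
# so the operator injects the certificate stored in the referenced Secret.
# "my-cert-secret" is a placeholder name.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - all.yaml        # the manifests produced by "helm template"
patches:
  - target:
      kind: Route
    patch: |-
      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: placeholder          # ignored, the target selector decides
        annotations:
          cert-utils-operator.redhat-cop.io/certs-from-secret: my-cert-secret
```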