Welcome to Yet Another Useless Blog
Despite the name, we hope you'll find these articles genuinely helpful! 😊
Who are we?
We're Thomas Jungbauer and Toni Schmidbauer — two seasoned IT professionals with over 20 years of experience each. Currently, we work as architects at Red Hat Austria, helping customers design and implement OpenShift and Ansible solutions.
What's this blog about?
Real-world problems, practical solutions. We document issues we've encountered in the field along with step-by-step guides to reproduce and resolve them. Our goal: save you hours of frustrating documentation searches and trial-and-error testing.
Feel free to send us an e-mail or open a GitHub issue.
Recent Posts
A second look into the Kubernetes Gateway API on OpenShift
This is our second look at the Kubernetes Gateway API and its integration into OpenShift. This post covers TLS configuration.
The Kubernetes Gateway API is a new implementation of the ingress, load-balancing, and service mesh APIs. See upstream for more information.
The OpenShift documentation also provides an overview of the Gateway API and its integration.
We demonstrate how to add TLS to our Nginx deployment, how to implement a shared Gateway, and finally how to implement HTTP to HTTPS redirection with the Gateway API. Furthermore, we cover how HTTPRoute objects attach to Gateways and dive into the ordering of HTTPRoute objects.
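To give a first impression of what the post covers, here is a minimal sketch of a shared Gateway terminating TLS and an HTTPRoute attaching to it. The gateway class name, hostnames, and certificate Secret are placeholders, not configuration taken from the post:

```yaml
# Minimal sketch; gatewayClassName, hostnames, and Secret name are placeholders.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: openshift-ingress
spec:
  gatewayClassName: openshift-default   # placeholder class name
  listeners:
    - name: https
      port: 443
      protocol: HTTPS
      hostname: "*.gwapi.example.com"
      tls:
        mode: Terminate
        certificateRefs:
          - name: gwapi-wildcard-cert   # Secret of type kubernetes.io/tls
      allowedRoutes:
        namespaces:
          from: All                     # this is what makes the Gateway "shared"
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: nginx
  namespace: demo
spec:
  parentRefs:                           # attaches the route to the shared Gateway
    - name: shared-gateway
      namespace: openshift-ingress
  hostnames:
    - nginx.gwapi.example.com
  rules:
    - backendRefs:
        - name: nginx
          port: 8080
```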
A first look into the Kubernetes Gateway API on OpenShift
This blog post summarizes our first look into the Kubernetes Gateway API and how it is integrated in OpenShift.
[Ep.14] Reusable Argo CD Application Template
When working with Argo CD at scale, you often find yourself creating similar Application manifests repeatedly. Each application needs the same basic structure, but with different configurations for source repositories, destinations, and sync policies. Additionally, it becomes tricky when you need to conditionally control, based on sync options, whether Argo CD should manage namespace metadata.
In this article, I’ll walk you through a reusable Helm template that solves these challenges by providing a flexible, DRY (Don’t Repeat Yourself) approach to creating Argo CD Applications. This template is available in my public Helm Chart library and can easily be used by anyone.
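The actual template lives in the linked Helm Chart library; purely as an illustration of the idea, a template like the following can range over a list in the values file and stamp out one Application per entry (all values keys here are hypothetical):

```yaml
# templates/application.yaml - illustrative sketch, not the actual chart code
{{- range .Values.applications }}
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: {{ .name }}
  namespace: openshift-gitops
spec:
  project: {{ .project | default "default" }}
  source:
    repoURL: {{ .repoURL }}
    targetRevision: {{ .targetRevision | default "main" }}
    path: {{ .path }}
  destination:
    server: https://kubernetes.default.svc
    namespace: {{ .namespace }}
  syncPolicy:
    automated:
      prune: {{ .prune | default false }}
      selfHeal: {{ .selfHeal | default false }}
{{- end }}
```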
Cert-Manager Policy Approver in OpenShift
One of the most commonly deployed operators in OpenShift environments is the Cert-Manager Operator. It automates the management of TLS certificates for applications running within the cluster, including their issuance and renewal.
The tool supports a variety of certificate issuers by default, including ACME, Vault, and self-signed certificates. Whenever a certificate is needed, Cert-Manager will automatically create a CertificateRequest resource that contains the details of the certificate. This resource is then processed by the appropriate issuer to generate the actual TLS certificate. The approval process in this case is usually fully automated, meaning that the certificate is issued without any manual intervention.
But what if you want more control? What if certificate issuance must follow strict organizational policies, such as requiring a specific country code or organization name? This is where the CertificateRequestPolicy resource, provided by the Approver Policy, comes into play.
This article walks through configuring the Cert-Manager Approver Policy in OpenShift to enforce granular policies on certificate requests.
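As a rough idea of what such a policy looks like, here is a hedged sketch (the allowed values are placeholders; the exact fields are documented by the approver-policy project):

```yaml
# Sketch of a CertificateRequestPolicy; country and organization values are placeholders.
apiVersion: policy.cert-manager.io/v1alpha1
kind: CertificateRequestPolicy
metadata:
  name: enforce-subject-fields
spec:
  allowed:
    commonName:
      value: "*.apps.example.com"   # placeholder wildcard
    subject:
      countries:
        values: ["AT"]              # requests must use this country code
      organizations:
        values: ["My Organization"]
  selector:
    issuerRef:                      # apply the policy to every issuer
      name: "*"
      kind: "*"
      group: "*"
```

Note that approver-policy additionally requires RBAC that allows the requester to "use" a given policy; without it, requests stay unapproved.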
Single log out from Keycloak and OpenShift
The following 1-minute article is a follow-up to my previous article about how to use Keycloak as an authentication provider for OpenShift. In this article, I will show you how to configure Keycloak and OpenShift for Single Log Out (SLO). This means that when you log out from Keycloak, you will also be logged out from OpenShift automatically. This requires some additional configuration in Keycloak and OpenShift, but it is not too complicated.
Step by Step - Using Keycloak Authentication in OpenShift
I was recently asked how to use Keycloak as an authentication provider for OpenShift: how to install Keycloak using the Operator, and how to configure Keycloak and OpenShift so that users can log in to OpenShift using OpenID. I have to admit that the exact steps are not easy to find, so I decided to write a blog post describing each step in detail. This time I will not use GitOps, but the OpenShift and Keycloak web consoles, because before we put anything into GitOps, we need to understand what is actually happening.
This article tries to explain every step required so that a user can authenticate to OpenShift using Keycloak as an Identity Provider (IDP) and so that Groups from Keycloak are imported into OpenShift. It does not cover a production-grade installation of Keycloak, only a test installation, so you can see how it works. For production, you might want to consider a proper database (maybe external, but at least with a backup), high availability, and so on.
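The heart of the OpenShift side is the cluster OAuth resource; a hedged sketch with placeholder issuer URL, client ID, and Secret name (the post itself walks through the web consoles):

```yaml
# Sketch of the cluster OAuth config pointing at a Keycloak realm;
# issuer URL, client ID, and Secret name are placeholders.
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: keycloak
      mappingMethod: claim
      type: OpenID
      openID:
        issuer: https://keycloak.example.com/realms/openshift   # placeholder realm URL
        clientID: openshift                                     # placeholder client
        clientSecret:
          name: openid-client-secret   # Secret in openshift-config with the client secret
        claims:
          preferredUsername:
            - preferred_username
          email:
            - email
          groups:
            - groups                   # requires a groups mapper on the Keycloak client
```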
[Ep.13] ApplicationSet with Matrix Generator
In my day-to-day work, I discuss the following setup with many customers: Configure App-of-Apps. There I explain how I use an ApplicationSet that watches a folder in Git and automatically adds a new Argo CD Application whenever a new folder is found. This works great, but there is a catch: the ApplicationSet uses the same Namespace, default, for all Applications. This is not always desired, especially when you have different teams working on different Applications.
Recently a customer asked whether this could be fixed and whether it is possible to define a different Namespace for each Application. The answer is yes, and I would like to show you how.
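The post itself builds on a matrix generator; just to sketch the namespace part of the idea, a git files generator can read a small config file per application folder, whose JSON keys (such as a hypothetical destinationNamespace) become template parameters:

```yaml
# Sketch only: repoURL and the destinationNamespace key are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: app-of-apps
  namespace: openshift-gitops
spec:
  generators:
    - git:
        repoURL: https://github.com/example/gitops.git
        revision: main
        files:
          - path: "applications/**/config.json"  # one config.json per application folder
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/gitops.git
        targetRevision: main
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{destinationNamespace}}'    # taken from config.json
      syncPolicy:
        automated: {}
```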
Introducing AdminNetworkPolicies
Classic Kubernetes/OpenShift offers a feature called NetworkPolicy that allows users to control the traffic to and from their assigned Namespace. NetworkPolicies are designed to give project owners or tenants the ability to protect their own namespace. Sometimes, however, I have worked with customers where the cluster administrators or a dedicated (network) team must enforce these policies.
Since the NetworkPolicy API is namespace-scoped, it is not possible to enforce policies across namespaces. The only solution was to create custom (project) admin and edit roles and remove the ability to create, modify, or delete NetworkPolicy objects. Technically, this is possible and easily done, but it shifts the whole responsibility for network security to the cluster administrators.
Luckily, this is where AdminNetworkPolicy (ANP) and BaselineAdminNetworkPolicy (BANP) come into play.
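As a first impression, a hedged sketch of an AdminNetworkPolicy that denies ingress from sandbox namespaces cluster-wide; the API is still alpha, so fields may shift, and the labels are placeholders:

```yaml
# Sketch; the policy.networking.k8s.io API is alpha, and the labels are placeholders.
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: deny-ingress-from-sandbox
spec:
  priority: 10              # lower number = higher precedence
  subject:
    namespaces:
      matchLabels:
        env: production     # which namespaces the policy protects
  ingress:
    - name: deny-from-sandbox
      action: Deny          # tenants cannot override this with their own NetworkPolicies
      from:
        - namespaces:
            matchLabels:
              env: sandbox
```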
[Ep.12] Using Kustomize Post-Renderer
Lately I have come across several issues where a given Helm Chart must be modified after it has been rendered by Argo CD. Argo CD runs helm template to render a Chart. Sometimes, especially when you work with Subcharts or when a specific setting is not yet supported by the Chart, you need to modify it afterwards … you need to post-render the Chart.
In this very short article, I would like to demonstrate this with a real-life example I had to solve: injecting annotations into Route objects so that the certificate can be injected by the cert-utils operator. For the post-rendering, the Argo CD repo-server pod is extended with a sidecar container that watches the repositories and patches them if required.
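The patch itself can be as small as a kustomization adding the annotation to the rendered manifests; a sketch (the Secret name is a placeholder, and the patch assumes the Route already carries an annotations map):

```yaml
# kustomization.yaml - sketch of a post-render patch; Secret name is a placeholder.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - all.yaml        # the manifests rendered by `helm template`
patches:
  - target:
      kind: Route   # patch every Route in the rendered output
    patch: |-
      - op: add
        path: /metadata/annotations/cert-utils-operator.redhat-cop.io~1certs-from-secret
        value: my-certificate-secret
```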
[Ep.11] Managing Certificates
The article SSL Certificate Management for OpenShift on AWS explains how to use the Cert-Manager Operator to request and install a new SSL Certificate. This time, I would like to leverage the GitOps approach using the Helm Chart cert-manager I have prepared to deploy the Operator and order new Certificates.
I will use an ACME Letsencrypt issuer with a DNS challenge. My domain is hosted at AWS Route 53.
However, any other integration can be easily used.
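The resulting issuer looks roughly like this (e-mail address, region, and credential references are placeholders):

```yaml
# Sketch of an ACME ClusterIssuer with a Route 53 DNS-01 solver; values are placeholders.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-account-key   # ACME account key, created by cert-manager
    solvers:
      - dns01:
          route53:
            region: eu-central-1
            accessKeyID: AKIAEXAMPLE  # placeholder AWS credentials
            secretAccessKeySecretRef:
              name: route53-credentials
              key: secret-access-key
```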
[Ep.10] Update Cluster Version
At some point during a GitOps journey, the question arises: how do you update a cluster? Nowadays it is very easy to update a cluster using the CLI or WebUI, so why bother with GitOps in that case? The reason is simple: using GitOps, you can be sure that all clusters are updated to the correct, required version, and the version of each cluster is also managed in Git.
All you need is the channel you want to use and the desired cluster version. Optionally, you can define the exact image SHA. This might be required when you are operating in a restricted environment.
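In Git, this boils down to a tiny ClusterVersion manifest; a sketch with placeholder channel and version:

```yaml
# Sketch of the ClusterVersion resource; channel and version are placeholders.
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  channel: stable-4.16
  desiredUpdate:
    version: 4.16.21
    # image: quay.io/openshift-release-dev/ocp-release@sha256:...  # optional exact SHA
```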
[Ep.9] Multiple Sources for Applications
Argo CD or OpenShift GitOps uses Applications or ApplicationSets to define the relationship between a source (Git) and a cluster. Typically, this is a 1:1 link, meaning one Application uses one source to compare against the cluster status. This can be a limitation. For example, if you are working with Helm Charts and a Helm repository, you do not want to re-build (or re-release) the whole chart just because you made a small change in the values file that is packaged into the chart. You want to separate the configuration of the chart from the Helm package.
The most common scenarios for multiple sources are (see: Argo CD documentation):
Your organization wants to use an external/public Helm chart
You want to override the Helm values with your own local values
You don’t want to clone the Helm chart locally as well because that would lead to duplication and you would need to monitor it manually for upstream changes.
This small article describes three different approaches, each with a working example, and tries to cover the advantages and disadvantages of each. The conclusions might be opinionated, but some approaches proved easier to use and manage than others.
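As a flavor of what multiple sources look like, here is a sketch of the $values reference pattern from the Argo CD documentation (URLs, chart name, and versions are placeholders):

```yaml
# Sketch of an Application with two sources; URLs, chart, and version are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: openshift-gitops
spec:
  project: default
  sources:
    - repoURL: https://charts.example.com        # external/public Helm repository
      chart: my-chart
      targetRevision: 1.2.3
      helm:
        valueFiles:
          - $values/my-app/values.yaml           # values come from the second source
    - repoURL: https://github.com/example/gitops.git
      targetRevision: main
      ref: values                                # exposes this repo as $values
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
```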
[Ep.8] Installing OpenShift Logging
OpenShift Logging is one of the more complex things to install and configure on an OpenShift cluster. Not because the service or the Operators are so complex to understand, but because of the dependencies logging has: besides the Logging operator itself, the Loki operator is required, and the Loki operator in turn requires access to an object storage, which might still need to be configured or may already be available.
In this article, I would like to demonstrate the configuration of the full stack using an object storage from OpenShift Data Foundation. This means:
Installing the logging operator into the namespace openshift-logging
Installing the Loki operator into the namespace openshift-operators-redhat
Creating a new BackingStore and BucketClass
Generating the Secret for Loki to authenticate against the object storage
Configuring the LokiStack resource
Configuring the ClusterLogging resource
All steps will be done automatically. If you already have S3 storage available, or you are not using OpenShift Data Foundation, the setup will be slightly different; for example, you do not need to create a BackingStore or the Loki authentication Secret.
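To illustrate the kind of resources involved, a hedged sketch of a LokiStack pointing at the object storage Secret (size, storage class, and schema date are placeholders):

```yaml
# Sketch of a LokiStack; size, storageClassName, and schema date are placeholders.
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.small
  storage:
    schemas:
      - version: v13
        effectiveDate: "2024-01-01"
    secret:
      name: logging-loki-s3   # the generated object storage Secret
      type: s3
  storageClassName: gp3-csi
  tenants:
    mode: openshift-logging
```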
[Ep.7] Configure Buckets in MinIO
MinIO is a simple, S3-compatible object storage, built for high-performance and large-scale environments. It can be installed as an Operator on OpenShift. In addition to a command-line tool, it provides a WebUI where all settings can be managed, especially creating and configuring new buckets. Currently, this is not possible in a declarative, GitOps-friendly way. Therefore, I created the Helm chart minio configurator, which starts a Kubernetes Job that takes care of the configuration.
Honestly, when I say I created it, the truth is that it is based on an existing MinIO Chart by Bitnami, which does much more than just set up a bucket. I took out the bucket configuration part, streamlined it a bit, and added some new features that I required.
This article explains how to achieve this.
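Conceptually, the Job does little more than what you would type manually with the MinIO client; a sketch of the idea, not the chart's actual template (endpoint, bucket, and Secret names are placeholders):

```yaml
# Sketch of a bucket-creating Job; endpoint, bucket, and Secret names are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: minio-bucket-setup
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: mc
          image: quay.io/minio/mc:latest
          command:
            - /bin/sh
            - -c
            - |
              mc alias set minio https://minio.example.com "$ACCESS_KEY" "$SECRET_KEY"
              mc mb --ignore-existing minio/my-bucket
          env:
            - name: ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: minio-credentials
                  key: access-key
            - name: SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: minio-credentials
                  key: secret-key
```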
Copyright © 2020 - 2026 Toni Schmidbauer & Thomas Jungbauer