Welcome to Yet Another Useless Blog
Despite the name, we hope you'll find these articles genuinely helpful!
Who are we?
We're Thomas Jungbauer and Toni Schmidbauer, two seasoned IT professionals with over 20 years of experience each. Currently, we work as architects at Red Hat Austria, helping customers design and implement OpenShift and Ansible solutions.
What's this blog about?
Real-world problems, practical solutions. We document issues we've encountered in the field along with step-by-step guides to reproduce and resolve them. Our goal: save you hours of frustrating documentation searches and trial-and-error testing.
Feel free to send us an e-mail or open a GitHub issue.
Recent Posts
[Ep.8] Installing OpenShift Logging
OpenShift Logging is one of the more complex components to install and configure on an OpenShift cluster. This is not because the service or the Operators are hard to understand, but because of the dependencies logging brings with it. Besides the logging operator itself, the Loki operator is required, and the Loki operator in turn needs access to an object storage, which may already be available or must first be configured.
In this article, I would like to demonstrate the configuration of the full stack using object storage provided by OpenShift Data Foundation. This means:
Installing the logging operator into the namespace openshift-logging
Installing the Loki operator into the namespace openshift-operators-redhat
Creating a new BackingStore and BucketClass
Generating the Secret for Loki to authenticate against the object storage
Configuring the LokiStack resource
Configuring the ClusterLogging resource
All of these steps will be done automatically. If you already have S3-compatible storage available, or you are not using OpenShift Data Foundation, the setup will be slightly different; for example, you do not need to create a BackingStore or the Loki authentication Secret.
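To make the result more tangible, here is a minimal sketch of the two central resources. The secret name, storage class, and sizing below are placeholders, and exact fields can vary between Logging and Loki Operator versions:

```yaml
# Minimal sketch of the LokiStack and ClusterLogging resources.
# Names, size, storage class, and the secret are illustrative placeholders.
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.extra-small
  storage:
    secret:
      name: logging-loki-s3      # Secret holding the object storage credentials
      type: s3
  storageClassName: ocs-storagecluster-ceph-rbd
  tenants:
    mode: openshift-logging
---
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: lokistack
    lokistack:
      name: logging-loki
  collection:
    type: vector
```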
[Ep.7] Configure Buckets in MinIO
MinIO is a simple, S3-compatible object storage built for high-performance and large-scale environments. It can be installed as an Operator on OpenShift. In addition to a command-line tool, it provides a WebUI where all settings can be managed, especially creating and configuring new buckets. Currently, this is not possible in a declarative, GitOps-friendly way. Therefore, I created the Helm chart minio configurator, which starts a Kubernetes Job that takes care of the configuration.
Honestly, when I say I created it, the truth is that it is based on an existing MinIO chart by Bitnami, which does much more than just set up a bucket. I took out the bucket configuration part, streamlined it a bit, and added some new features I required.
This article explains how to achieve this.
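Stripped of the chart's templating, the underlying idea is a Job that runs the MinIO client (mc) against the MinIO service. The following is a minimal sketch, not the chart's actual template; the endpoint, credentials Secret, and bucket name are assumptions:

```yaml
# Sketch of a Job that creates a bucket with the MinIO client (mc).
# Endpoint, Secret name, and bucket name are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: minio-bucket-setup
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: mc
          image: quay.io/minio/mc
          envFrom:
            - secretRef:
                name: minio-credentials   # assumed to provide MINIO_USER and MINIO_PASSWORD
          command:
            - /bin/sh
            - -c
            - |
              mc alias set local https://minio.example.com "$MINIO_USER" "$MINIO_PASSWORD"
              mc mb --ignore-existing local/my-bucket
```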
[Ep.6] Setup & Configure Advanced Cluster Security
Today I want to demonstrate the deployment and configuration of Advanced Cluster Security (ACS) using a GitOps approach. The required operator is installed, verified to be running, and then ACS is initialized. This initialization deploys several components:
Central - the UI and main component of ACS
SecuredClusters - installs the Scanner, Controller pods, etc.
Console link into OpenShift UI - to directly access the ACS Central UI
Job to create an initialization bundle to install the Secured Cluster
Job to configure authentication using OpenShift
Let's start …
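For orientation, the Central deployment itself boils down to a single custom resource that the Operator reconciles. A minimal sketch (the namespace and exposure settings are common defaults, not taken from the article):

```yaml
# Sketch of a minimal Central custom resource for ACS.
apiVersion: platform.stackrox.io/v1alpha1
kind: Central
metadata:
  name: stackrox-central-services
  namespace: stackrox
spec:
  central:
    exposure:
      route:
        enabled: true   # expose the Central UI through an OpenShift Route
```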
[Ep.5] Setup & Configure Compliance Operator
In the previous articles, we discussed the Git repository folder structure and the configuration of the App-of-Apps. Now it is time to deploy our first configuration. One of the first things I usually deploy is the Compliance Operator. This Operator is recommended for any cluster and can be deployed without any additions to the Subscription.
In this article, I will describe how it is installed and how the Helm Chart is configured.
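Because no special Subscription settings are needed, the installation reduces to the standard OLM resources. A sketch, assuming the usual channel and catalog names:

```yaml
# Sketch of installing the Compliance Operator via OLM.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-compliance
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: compliance-operator
  namespace: openshift-compliance
spec:
  targetNamespaces:
    - openshift-compliance
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: compliance-operator
  namespace: openshift-compliance
spec:
  channel: stable            # channel name may differ per OpenShift version
  name: compliance-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```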
[Ep.4] Configure App-of-Apps
In the article "Install GitOps to the cluster", OpenShift GitOps is deployed using a shell script. This should be the very first installation and the only deployment that is done manually on a cluster. The procedure automatically installs the so-called App-of-Apps named Argo CD Resources Manager, which is responsible for all further Argo CD Applications and ApplicationSets. If possible, no other configuration should be done manually.
This article will demonstrate how to configure the App-of-Apps in an easy and declarative way, mainly using ApplicationSets.
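As a taste of the declarative approach, an ApplicationSet with a Git generator can stamp out one Argo CD Application per folder in the repository. A minimal sketch with a placeholder repository URL:

```yaml
# Sketch of an ApplicationSet creating one Application per config folder.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-config
  namespace: openshift-gitops
spec:
  generators:
    - git:
        repoURL: https://github.com/example/cluster-config.git   # placeholder
        revision: main
        directories:
          - path: configs/*
  template:
    metadata:
      name: '{{path.basename}}'        # one Application per matched folder
    spec:
      project: default
      source:
        repoURL: https://github.com/example/cluster-config.git   # placeholder
        targetRevision: main
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
      syncPolicy:
        automated: {}
```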
OpenShift Data Foundation - Noobaa Bucket Data Retention (Lifecycle)
Data retention, or lifecycle configuration, for S3 buckets is handled by the S3 provider directly. The provider keeps track of the objects and automatically rotates them out after the requested time.
This article is a simple step-by-step guide to configuring such a lifecycle for OpenShift Data Foundation (ODF), where buckets are provided by Noobaa. Knowledge of ODF is assumed; however, similar steps can be reproduced for any S3-compliant storage.
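The lifecycle policy itself is a small document sent to the bucket through the S3 API, for example with aws s3api put-bucket-lifecycle-configuration. It is shown here in YAML form for readability; the API expects the JSON equivalent, and the 30-day expiry is only an example:

```yaml
# S3 lifecycle configuration (YAML rendering of the JSON payload).
Rules:
  - ID: expire-after-30-days   # example rule name
    Status: Enabled
    Filter:
      Prefix: ""               # apply the rule to all objects in the bucket
    Expiration:
      Days: 30                 # delete objects 30 days after creation
```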
[Ep.3] Setup OpenShift GitOps/Argo CD
"If it is not in GitOps, it does not exist" is a mantra I hear quite often and also try to practice at customer engagements. The idea is to have Git as the single source of truth for what happens inside the environment. That said, Everything as Code is a practice that treats every aspect of the system as code. Storing this code in Git provides a shared understanding, traceability, and repeatability of changes.
While there are many articles about how to bring GitOps into the deployment process of applications, this one focuses instead on cluster configuration and the tasks system administrators usually have to perform.
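In practice, this means even the cluster configuration is described by an Argo CD Application pointing at a Git repository. A minimal sketch with placeholder repository details:

```yaml
# Sketch of an Argo CD Application tracking cluster configuration in Git.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-config
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/cluster-config.git   # placeholder
    targetRevision: main
    path: base
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true
      selfHeal: true   # revert manual drift, keeping Git the source of truth
```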
[Ep.2] Choosing the right Git repository structure
One of the most popular questions asked before adopting the GitOps approach is how to deploy an application to different environments (Test, Dev, Production, etc.) in a safe and repeatable way.
Each organisation has different requirements, and the choice will depend on a multitude of factors that also include non-technical aspects.
Therefore, it is important to state: "There is no unique 'right' way; there are common practices."
Copyright © 2020 - 2025 Toni Schmidbauer & Thomas Jungbauer