Welcome to Yet Another Useless Blog
Despite the name, we hope you'll find these articles genuinely helpful!
Who are we?
We're Thomas Jungbauer and Toni Schmidbauer, two seasoned IT professionals with over 20 years of experience each. Currently, we work as architects at Red Hat Austria, helping customers design and implement OpenShift and Ansible solutions.
What's this blog about?
Real-world problems, practical solutions. We document issues we've encountered in the field along with step-by-step guides to reproduce and resolve them. Our goal: save you hours of frustrating documentation searches and trial-and-error testing.
Feel free to send us an e-mail or open a GitHub issue.
Recent Posts
Secrets Management - Vault on OpenShift
Sensitive information in OpenShift or Kubernetes is stored as a so-called Secret. Managing these Secrets is one of the most important questions when it comes to OpenShift. Secrets in Kubernetes are encoded in base64, which is an encoding, not an encryption format. Even if etcd is encrypted at rest, anybody can decode a given base64 string stored in a Secret.
For example: The string Thomas encoded as base64 is VGhvbWFzCg==. This is simply masked plain text, and it is not safe to share these values, especially not in Git.
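You can see this for yourself in any shell, using the example string from above:

```
# base64 is an encoding, not encryption - anyone can reverse it
echo "Thomas" | base64            # prints: VGhvbWFzCg==
echo "VGhvbWFzCg==" | base64 -d   # prints: Thomas
```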
To make your CI/CD pipelines or GitOps process secure, you need to think of a secure way to manage your Secrets; your Secret objects must be encrypted somehow. HashiCorp Vault is one option to meet this requirement.
Overview of Red Hat's Multi Cloud Gateway (Noobaa)
This is my personal summary of experimenting with Red Hat's Multi Cloud Gateway (MCG) based on the upstream Noobaa project. MCG is part of Red Hat's OpenShift Data Foundation (ODF). ODF bundles the upstream projects Ceph and Noobaa.
Overview
Noobaa, or the Multicloud Gateway (MCG), is an S3-based data federation tool. It allows you to take S3 backends from various sources and
- sync them,
- replicate them,
- or simply use existing S3 buckets.
Currently, the following sources, or backing stores, are supported:
Adventures in Java Land: JPA disconnected entities
An old man tries to refresh his Java skills and does DO378. He fails spectacularly at the first real example but learns a lot along the way.
Automated ETCD Backup
Securing etcd is one of the major Day-2 tasks for a Kubernetes cluster. This article explains how to create a backup using an OpenShift CronJob.
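For context, such a CronJob essentially automates the documented manual backup command, sketched below; the node name is a placeholder for one of your control plane nodes:

```
# Run the documented etcd backup script on a control plane node;
# "master-0" is a placeholder node name.
oc debug node/master-0 -- chroot /host /usr/local/bin/cluster-backup.sh /home/core/assets/backup
```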
Working with Environments
Imagine you have one OpenShift cluster and would like to create two or more environments inside it, but also separate them, force them onto specific nodes, or use specific inbound routers. All of this can be achieved using labels, IngressControllers, and so on. The following article will guide you through setting up dedicated compute nodes for infrastructure, development, and test environments, as well as creating IngressControllers that are bound to the appropriate nodes.
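As a minimal sketch of the idea (the node name, label key environment, and domain are made-up examples):

```
# Dedicate a node to the "dev" environment.
oc label node compute-1 environment=dev

# A dedicated IngressController bound to those nodes.
oc apply -f - <<'EOF'
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: dev
  namespace: openshift-ingress-operator
spec:
  domain: dev.apps.example.com
  nodePlacement:
    nodeSelector:
      matchLabels:
        environment: dev
EOF
```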
Advanced Cluster Security - Authentication
Red Hat Advanced Cluster Security (RHACS) Central is installed with one administrator user by default. Typically, customers request an integration with their existing Identity Provider(s) (IDP). RHACS offers different options for such an integration. In this article, two IDPs are configured as examples: first OpenShift Auth, and second Red Hat Single Sign-On (RHSSO), which is based on Keycloak.
Ansible Style Guide
You should always follow the Best Practices and the Ansible Lint rules defined in the Ansible documentation when developing playbooks.
Although very basic, the Best Practices document gives a few guidelines for building well-structured playbooks and roles. Its recommendations evolve with the project, so it is worth reviewing regularly, in particular the advice on how to organize Ansible content.
The Ansible Lint documentation describes the syntax rules that the tool checks when testing roles and playbooks; each rule is listed in its respective section of that document.
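Running the linter locally is straightforward; the playbook path below is just an example:

```
# Install and run ansible-lint against a playbook.
pip install ansible-lint
ansible-lint playbooks/site.yml
```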
Stumbling into Azure Part II: Setting up a private ARO cluster
In Part I of our blog post we covered setting up the required resources in Azure. Now we are finally going to set up a private cluster.
As a review from Part I, here is our planned setup, this time including the ARO cluster.
Automation Controller and LDAP Authentication
The following article quickly deploys, without much background information, an Identity Management server (based on FreeIPA) and connects it to an existing Automation Controller, so that LDAP-based authentication can be tested and verified.
Stumbling into Azure Part I: Building a site-to-site VPN tunnel for testing
So we want to play with ARO (Azure Red Hat OpenShift) private clusters. A private cluster is not reachable from the internet (surprise) and is only reachable via a VPN tunnel from other networks.
This blog post describes how we created a site-to-site VPN between a Hetzner dedicated server running multiple VMs via libvirt and Azure.
An upcoming blog post is going to cover the setup of the private ARO cluster.
Stumbling into Quay: Upgrading from 3.3 to 3.4 with the quay-operator
We had the task of answering various questions related to upgrading Red Hat Quay 3.3 to 3.4 and to 3.5 with the help of the quay-operator.
Thankfully (sic!) everything changed with regard to the Quay operator between Quay 3.3 and Quay 3.4.
So this is a brain dump of the things to consider.
Operator changes
With Quay 3.4 the operator was completely reworked, and it basically changed from opinionated to very opinionated. The upgrade works quite well, but you have to be aware of the following points:
Secure your secrets with Sealed Secrets
Working with a GitOps approach is a good way to keep all configurations and settings versioned and in sync in Git. Sensitive data, such as the password for a database connection, will quickly come up. Obviously, it is not a good idea to store clear-text strings in a, maybe even public, Git repository. Therefore, all sensitive information should be stored in a Secret object. The problem with Secrets in Kubernetes is that they are actually not encrypted; instead, strings are base64-encoded, which can just as easily be decoded. That's not good: it should not be possible to read secured data. Sealed Secrets will help here.
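A minimal sketch of the workflow, assuming the Sealed Secrets controller is installed in the cluster and kubeseal is available locally (names and values are examples):

```
# Create a Secret locally and encrypt it into a SealedSecret.
kubectl create secret generic db-credentials \
  --from-literal=password=changeme \
  --dry-run=client -o yaml \
  | kubeseal --format yaml > sealed-db-credentials.yaml

# The SealedSecret is safe to commit to Git; only the in-cluster
# controller holds the private key needed to decrypt it.
kubectl apply -f sealed-db-credentials.yaml
```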
Introduction
Pod scheduling is an internal process that determines the placement of new pods onto nodes within the cluster. It is probably one of the most important tasks in a Day-2 scenario and should be considered at a very early stage of a new cluster. OpenShift/Kubernetes already ships with a default scheduler that schedules pods as they get created across the cluster, without any manual steps.
However, there are scenarios where a more advanced approach is required, for example using a specific group of nodes for dedicated workloads, or making sure that certain applications do not run on the same nodes. Kubernetes provides different options:
Controlling placement with node selectors
Controlling placement with pod/node affinity/anti-affinity rules
Controlling placement with taints and tolerations
Controlling placement with topology spread constraints
This series will try to go into the details of the different options and explain with simple examples how to work with pod placement rules; a first taste of taints and tolerations follows below. It is not a replacement for the official documentation, so always check out the Kubernetes and/or OpenShift docs.
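As a small taste of the third option, tainting a node keeps ordinary pods away until they carry a matching toleration; the node name and key/value pair are examples:

```
# Reserve a node: only pods tolerating this taint may be scheduled there.
oc adm taint nodes infra-1 infra=reserved:NoSchedule
# Pods then need a matching toleration in their spec to land on infra-1.
```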
Node Affinity
Node Affinity allows you to place a pod on a specific group of nodes. For example, it is possible to run a pod only on nodes with a specific CPU or disktype. The disktype was used as an example for the nodeSelector, and yes, Node Affinity is conceptually similar to nodeSelector, but it allows a more granular configuration.
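A minimal sketch of a required node affinity rule, assuming nodes were labeled beforehand (for example with oc label node worker-1 disktype=ssd; pod name and image are examples):

```
# Pod that may only run on nodes labeled disktype=ssd.
oc apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: affinity-demo
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values:
                  - ssd
  containers:
    - name: web
      image: registry.access.redhat.com/ubi8/httpd-24
EOF
```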
Copyright © 2020 - 2026 Toni Schmidbauer & Thomas Jungbauer