SSL Certificate Management for OpenShift on AWS
Finally, after a long time on my backlog, I had some time to look into the Cert-Manager Operator and use this Operator to automatically issue new SSL certificates. This article shows, step by step, how to create a certificate request, use the resulting certificate for a Route, and access the service via your browser. I will focus on the technical part, using a given domain on AWS Route53.
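As a taste of what the article walks through, here is a minimal sketch of a cert-manager Certificate request; the names, namespace, and domain are placeholders and depend on your Route53 setup.

```yaml
# A hypothetical Certificate request handled by cert-manager.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-app-cert              # placeholder name
  namespace: my-app              # placeholder namespace
spec:
  secretName: my-app-cert-tls    # Secret where the issued certificate is stored
  dnsNames:
    - app.example.com            # placeholder domain hosted on Route53
  issuerRef:
    name: letsencrypt-prod       # assumed ClusterIssuer solving DNS-01 via Route53
    kind: ClusterIssuer
```

Once the certificate is issued, the generated TLS Secret can be referenced by a Route or Ingress.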
Using ServerSideApply with ArgoCD
"If it is not in GitOps, it does not exist" - However, managing objects only partially via GitOps has always been an issue, since ArgoCD wants to manage the whole object. For example, if you wanted to manage node labels via GitOps, you would need to put the whole Node object into ArgoCD. This is impractical, since the Node object is very complex and typically managed by the cluster. There were third-party solutions (like the patch operator) that helped with this issue.
With the Kubernetes feature Server-Side Apply, however, this problem is solved. Read on to see a working example of this feature.
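A minimal sketch of the idea, assuming ArgoCD 2.5 or later: the manifest describes only the fields we want to own (here, a single node label), and the sync-option annotation tells ArgoCD to apply it server-side instead of requiring the full object.

```yaml
# Partial Node manifest: only the environment label is managed by GitOps.
apiVersion: v1
kind: Node
metadata:
  name: worker-1                 # placeholder node name
  labels:
    environment: production      # the only field this manifest owns
  annotations:
    argocd.argoproj.io/sync-options: ServerSideApply=true
```

Alternatively, Server-Side Apply can be enabled for a whole Application by adding ServerSideApply=true to syncPolicy.syncOptions.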
Secrets Management - Vault on OpenShift
Sensitive information in OpenShift or Kubernetes is stored as a so-called Secret. The management of these Secrets is one of the most important questions when it comes to OpenShift. Secrets in Kubernetes are base64 encoded, and base64 is not an encryption format. Even if etcd is encrypted at rest, anybody can decode a base64 string that is stored in a Secret.
For example: the string Thomas encoded as base64 is VGhvbWFzCg==. This is simply masked plain text, and it is not secure to share such values, especially not on Git.
To make your CI/CD pipelines or GitOps process secure, you need a secure way to manage your Secrets. Thus, your Secret objects must be encrypted somehow. HashiCorp Vault is one option to achieve this requirement.
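To illustrate the problem, a minimal Secret manifest (the name is hypothetical): the value is merely encoded, and anyone with access to the manifest can decode it again.

```yaml
# A plain Kubernetes Secret: the value is base64 encoded, not encrypted.
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret          # hypothetical name
type: Opaque
data:
  username: VGhvbWFzCg==     # echo 'VGhvbWFzCg==' | base64 -d  ->  Thomas
```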
Automated ETCD Backup
Backing up etcd is one of the major day-2 tasks for a Kubernetes cluster. This article explains how to create a backup using an OpenShift CronJob.
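As a rough preview, a reduced sketch of such a CronJob; the namespace, schedule, and ServiceAccount are assumptions (the ServiceAccount must be allowed to run privileged pods), and the job calls the cluster-backup.sh script that ships on every control plane node.

```yaml
# Sketch of a nightly etcd backup job running on a control plane node.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: etcd-backup
  namespace: ocp-etcd-backup               # hypothetical namespace
spec:
  schedule: "0 2 * * *"                    # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          nodeSelector:
            node-role.kubernetes.io/master: ""
          tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
          restartPolicy: Never
          serviceAccountName: etcd-backup  # assumed privileged ServiceAccount
          containers:
            - name: backup
              image: registry.redhat.io/openshift4/ose-cli:latest
              securityContext:
                privileged: true           # required to chroot into the host
              command:
                - chroot
                - /host
                - /usr/local/bin/cluster-backup.sh
                - /home/core/assets/backup
              volumeMounts:
                - name: host
                  mountPath: /host
          volumes:
            - name: host
              hostPath:
                path: /
                type: Directory
```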
Working with Environments
Imagine you have one OpenShift cluster and would like to create two or more environments inside it, separate them from each other, and pin each environment to specific nodes or specific inbound routers. All of this can be achieved using labels, IngressControllers, and so on. The following article guides you through setting up dedicated compute nodes for infrastructure, development, and test environments, as well as creating IngressControllers that are bound to the appropriate nodes.
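For a first flavor of the setup, a sketch of a dedicated IngressController for a hypothetical dev environment; the domain and label values are assumptions, and the selected nodes must already carry the matching label.

```yaml
# Sketch: a second router that only runs on dev nodes and only serves dev Routes.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: dev
  namespace: openshift-ingress-operator
spec:
  domain: dev.apps.example.com      # placeholder wildcard domain
  nodePlacement:
    nodeSelector:
      matchLabels:
        environment: dev            # assumed label on the dev nodes
  routeSelector:
    matchLabels:
      environment: dev              # only Routes carrying this label are admitted
```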
Advanced Cluster Security - Authentication
Red Hat Advanced Cluster Security (RHACS) Central is installed with one administrator user by default. Typically, customers request an integration with their existing identity provider(s) (IDP). RHACS offers different options for such an integration. In this article, two IDPs are configured as examples: first OpenShift Auth, and second Red Hat Single Sign-On (RHSSO), which is based on Keycloak.
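Purely for illustration, an abbreviated sketch of how an OIDC provider such as RHSSO can be described with the RHACS declarative configuration feature; every endpoint, client, and secret below is a placeholder.

```yaml
# Hypothetical declarative auth provider definition for RHACS Central.
name: rhsso                                              # placeholder provider name
minimumRole: None
uiEndpoint: central.example.com                          # placeholder Central endpoint
oidc:
  issuer: https://keycloak.example.com/realms/myrealm    # placeholder issuer URL
  mode: auto
  clientID: rhacs                                        # placeholder client ID
  clientSecret: changeme                                 # placeholder secret
```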
Ansible Style Guide
You should always follow the Best Practices and Ansible Lint rules defined by the Ansible documentation when developing playbooks.
Although very basic, the Best Practices document gives a few guidelines for writing well-structured playbooks and roles. Its recommendations evolve with the project, so it is advisable to review it regularly, especially the section on how to organize content in Ansible.
The Ansible Lint documentation describes the syntax rules that this tool checks when testing roles and playbooks; each rule is explained in its respective section of that document.
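As a small illustration, a playbook sketch that satisfies common Ansible Lint rules: every play and task is named, modules are referenced by their fully qualified collection name (FQCN), and booleans are written as true/false.

```yaml
# Lint-friendly example playbook (host group and package are placeholders).
- name: Configure the web servers
  hosts: webservers
  become: true
  tasks:
    - name: Ensure httpd is installed
      ansible.builtin.package:
        name: httpd
        state: present

    - name: Ensure httpd is enabled and running
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true
```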
Automation Controller and LDAP Authentication
The following article shall, quickly and without extensive background information, deploy an Identity Management server (based on FreeIPA) and connect this IdM to an existing Automation Controller, so that LDAP-based authentication can be tested and verified.
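To give an idea of the result, illustrative values for the Controller's LDAP settings when pointing at a FreeIPA-based IdM, rendered here as YAML; all hostnames, DNs, and the bind account are placeholders.

```yaml
# Hypothetical LDAP settings for Automation Controller (FreeIPA-style subtrees).
AUTH_LDAP_SERVER_URI: ldaps://idm.example.com:636
AUTH_LDAP_BIND_DN: uid=svc-controller,cn=users,cn=accounts,dc=example,dc=com
AUTH_LDAP_USER_SEARCH:
  - cn=users,cn=accounts,dc=example,dc=com     # FreeIPA user subtree
  - SCOPE_SUBTREE
  - (uid=%(user)s)
AUTH_LDAP_GROUP_SEARCH:
  - cn=groups,cn=accounts,dc=example,dc=com    # FreeIPA group subtree
  - SCOPE_SUBTREE
  - (objectClass=posixGroup)
AUTH_LDAP_GROUP_TYPE: PosixGroupType
AUTH_LDAP_USER_ATTR_MAP:
  first_name: givenName
  last_name: sn
  email: mail
```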
Secure your secrets with Sealed Secrets
Working with a GitOps approach is a good way to keep all configurations and settings versioned and in sync on Git. Sensitive data, such as the password for a database connection, will quickly come up. Obviously, it is not a good idea to store clear-text strings in a (maybe even public) Git repository. Therefore, all sensitive information should be stored in a Secret object. The problem with Secrets in Kubernetes is that they are not actually encrypted; instead, strings are base64 encoded, which can just as easily be decoded. That is not good: it should not be possible to read secured data. Sealed Secrets will help here.
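For a first impression, a sketch of a SealedSecret as it would be produced by the kubeseal CLI; name and namespace are placeholders, the ciphertext is truncated, and only the controller running in the cluster can decrypt it back into a normal Secret.

```yaml
# Hypothetical SealedSecret: safe to commit, even to a public repository.
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials             # placeholder name
  namespace: my-app                # placeholder namespace
spec:
  encryptedData:
    password: AgBy3i...            # truncated placeholder ciphertext
  template:
    metadata:
      name: db-credentials         # name of the unsealed Secret
      namespace: my-app
```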
Using Descheduler
The Descheduler is a feature that has been GA since OpenShift 4.7. It can be used to evict pods from nodes based on specific strategies. An evicted pod is then scheduled (by the scheduler) on another node which is more suitable.
This feature can be used when (a minimal example follows the list):
nodes are under- or over-utilized
pod or node affinity rules, taints or labels have changed and are no longer valid for a running pod
nodes have failed
pods have been restarted too many times
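A minimal sketch of the KubeDescheduler custom resource provided by the Kube Descheduler Operator; the interval is an example value, and the profiles field assumes a recent operator version.

```yaml
# Sketch: run the descheduler every hour with one predefined profile.
apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  deschedulingIntervalSeconds: 3600    # evaluate eviction strategies hourly
  profiles:
    - LifecycleAndUtilization          # covers utilization and restart-based eviction
```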
Topology Spread Constraints
Topology spread constraints are a feature available since Kubernetes 1.19 (OpenShift 4.6) and another way to control where pods shall be started. They allow you to use failure domains, like zones or regions, or to define custom topology domains. The feature relies heavily on node labels, which are used to define the topology domains. It is a more granular approach than affinity and allows you to achieve higher availability.
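A minimal sketch, assuming the nodes carry the topology.kubernetes.io/zone label; the app label and image are placeholders.

```yaml
# Sketch: spread replicas of app=spread-demo evenly across zones.
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo
  labels:
    app: spread-demo
spec:
  topologySpreadConstraints:
    - maxSkew: 1                                  # max pod-count difference between zones
      topologyKey: topology.kubernetes.io/zone    # node label defining the topology domain
      whenUnsatisfiable: DoNotSchedule            # hard requirement instead of best effort
      labelSelector:
        matchLabels:
          app: spread-demo
  containers:
    - name: app
      image: registry.example.com/app:latest      # placeholder image
```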
Taints and Tolerations
While node affinity is a property of pods that attracts them to a set of nodes, taints are the exact opposite. Nodes can be configured with one or more taints, which mark the node in a way that it only accepts pods that tolerate these taints. The tolerations themselves are applied to pods, telling the scheduler to tolerate the taints and start the workload on a tainted node.
A common use case is to mark certain nodes as infrastructure nodes, where only specific pods are allowed to run, or to taint nodes with special hardware (e.g. GPUs).
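A minimal sketch, assuming a node was tainted with infra=reserved:NoSchedule (for example with oc adm taint nodes <node> infra=reserved:NoSchedule); only pods with the matching toleration, like the one below, may be scheduled there.

```yaml
# Sketch: this pod tolerates the hypothetical taint infra=reserved:NoSchedule.
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo
spec:
  tolerations:
    - key: infra               # must match the taint key
      operator: Equal
      value: reserved          # must match the taint value
      effect: NoSchedule       # must match the taint effect
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
```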
Pod Affinity/Anti-Affinity
While nodeSelector provides a very easy way to control where a pod shall be scheduled, the affinity/anti-affinity feature expands this configuration with more expressive rules, like logical AND operators, constraints against labels on other pods, or soft rules instead of hard requirements.
The feature comes with two types:
pod affinity/anti-affinity - allows constraints against labels on other pods rather than node labels (a sketch follows this list)
node affinity - allows pods to specify a group of nodes they can be placed on
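A minimal sketch of pod anti-affinity, keeping replicas that carry the hypothetical label app=web on separate nodes.

```yaml
# Sketch: never co-locate two app=web pods on the same node.
apiVersion: v1
kind: Pod
metadata:
  name: web-1
  labels:
    app: web
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard rule; use preferred... for a soft rule
        - labelSelector:
            matchLabels:
              app: web
          topologyKey: kubernetes.io/hostname           # topology domain = individual node
  containers:
    - name: web
      image: registry.example.com/web:latest            # placeholder image
```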
NodeSelector
One of the easiest ways to tell your Kubernetes cluster where to put certain pods is to use a nodeSelector specification. A nodeSelector defines a key-value pair; it is set inside the pod specification and must exist as a label on one or multiple nodes (or on a MachineSet or MachineConfig). Only if the selector matches a node's label is the pod allowed to be scheduled on that node.
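A minimal sketch; the environment=dev label and the image are placeholders, and the target node would be labeled with, for example, oc label node <node> environment=dev.

```yaml
# Sketch: this pod may only run on nodes labeled environment=dev.
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    environment: dev                            # must match a node label
  containers:
    - name: app
      image: registry.example.com/app:latest    # placeholder image
```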
Copyright © 2020 - 2026 Toni Schmidbauer & Thomas Jungbauer