"If it is not in GitOps, it does not exist" is a mantra I hear quite often and also try to practice at customer engagements. The idea is to have Git as the single source of truth for everything that happens inside the environment. Closely related, Everything as Code is a practice that treats every aspect of the system as code. Storing this code in Git provides a shared understanding, traceability and repeatability of changes.
While there are many articles about how to bring GitOps into the deployment process of applications, this one focuses instead on cluster configuration and the tasks system administrators usually have to perform.
It all begins with an OpenShift cluster. Such a cluster must be installed first, and while we will not discuss bootstrapping the whole cluster here (yes, it is even possible to automate the cluster deployment, for example with Advanced Cluster Management), we will simply assume that one cluster is up and running.
For our setup, an OpenShift 4.14 cluster is deployed, and we will use the repository OpenShift Cluster Configuration using GitOps to deploy our configuration onto this cluster. This repository shall act as the source of truth for any configuration. In the article Choosing the right Git repository structure I have explained the folder structure I usually use. As tooling, I mostly rely on Helm Charts.
The openshift-clusterconfig-gitops repository heavily uses the Helm Repository found at https://charts.stderr.at/
Deploy OpenShift-GitOps
The first thing we need to do is to deploy OpenShift-GitOps, which is based on the Argo CD project. OpenShift-GitOps ships as an Operator and is available to all OpenShift customers. The Operator deploys and configures Argo CD and provides several custom resources, for example to configure Argo CD Applications or ApplicationSets.
To automate the operator deployment the following shell script can be used: init_GitOps.sh.
This shell script is the only script that is executed manually. It installs and configures Argo CD; any other operation on the cluster must then be done using GitOps processes. I use it to quickly install a new demo cluster. There are alternatives and maybe better ways, but for my purposes it works pretty well.
Be sure that you are logged in to the required cluster:
oc whoami --show-server
Execute the init_GitOps.sh script:
./init_GitOps.sh
The script will deploy the Operator and configure/patch the Argo CD instance. In addition, it will create the so-called Application of Applications, which acts as an umbrella Application that automatically creates all other Argo CD Applications or ApplicationSets.
For now, the App of Apps is the only Argo CD Application that automatically synchronizes all changes found in Git. This is for safety purposes, so you can test the cluster configuration one step after another.
Of course, it is up to you whether you want to use the shell script. The Operator can also be installed manually, via Advanced Cluster Management, or using Platform Operators, which install the Operator during the cluster installation (however, this feature is currently (v4.15) Tech Preview).
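For reference, a manual installation via OLM essentially boils down to creating a Subscription object. The following is only a minimal sketch; the channel and namespace are assumptions and may differ depending on the OpenShift version:

# Minimal sketch of a manual OpenShift-GitOps Operator installation via a Subscription.
# Channel and namespace are assumptions and may differ per OpenShift release.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: latest
  installPlanApproval: Automatic
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace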
What will this script do?
I will not dissect the script line by line, but in general, the following will happen:
This first OpenShift-GitOps instance will be deployed with cluster-admin privileges, since we want to manage the whole cluster configuration. This Argo CD instance should not be used for application deployments; for those, deploy additional GitOps instances.
Waiting for Deployments to become ready
Deploy the Application of Applications that is responsible for automatically deploying a set of Applications or ApplicationSets (see [The Argo CD Object Manager Application])
❯ ./init_GitOps.sh
Starting Deployment
Deploying OpenShift GitOps Operator
Adding Helm Repo https://charts.stderr.at/
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /Users/tjungbau/openshift-aws/aws/auth/kubeconfig
"tjungbauer" has been added to your repositories
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /Users/tjungbau/openshift-aws/aws/auth/kubeconfig
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "sealed-secrets" chart repository
...Successfully got an update from the "tjungbauer" chart repository
...Successfully got an update from the "apache-airflow" chart repository
...Successfully got an update from the "hashicorp" chart repository
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /Users/tjungbau/openshift-aws/aws/auth/kubeconfig
Release "openshift-gitops-operator" has been upgraded. Happy Helming!
NAME: openshift-gitops-operator
LAST DEPLOYED: Mon Sep 26 13:22:33 2022
NAMESPACE: openshift-operators
STATUS: deployed
REVISION: 2
TEST SUITE: None
Give the gitops-operator some time to be installed. Waiting for 45 seconds...
Waiting for operator to start. Chcking every 10 seconds.
NAME READY UP-TO-DATE AVAILABLE AGE
gitops-operator-controller-manager 1/1 1 1 4d4h
Waiting for openshift-gitops namespace to be created. Checking every 10 seconds.
NAME STATUS AGE
openshift-gitops Active 4d4h
Waiting for deployments to start. Checking every 10 seconds.
NAME READY UP-TO-DATE AVAILABLE AGE
cluster 1/1 1 1 4d4h
Waiting for all pods to be created
Waiting for deployment cluster
deployment "cluster" successfully rolled out
Waiting for deployment kam
deployment "kam" successfully rolled out
Waiting for deployment openshift-gitops-applicationset-controller
deployment "openshift-gitops-applicationset-controller" successfully rolled out
Waiting for deployment openshift-gitops-redis
deployment "openshift-gitops-redis" successfully rolled out
Waiting for deployment openshift-gitops-repo-server
deployment "openshift-gitops-repo-server" successfully rolled out
Waiting for deployment openshift-gitops-server
deployment "openshift-gitops-server" successfully rolled out
GitOps Operator ready
Lets use our patched Argo CD CRD
argocd.argoproj.io/openshift-gitops unchanged
clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-0 unchanged
Waiting for deployment cluster
deployment "cluster" successfully rolled out
Waiting for deployment kam
deployment "kam" successfully rolled out
Waiting for deployment openshift-gitops-applicationset-controller
deployment "openshift-gitops-applicationset-controller" successfully rolled out
Waiting for deployment openshift-gitops-redis
deployment "openshift-gitops-redis" successfully rolled out
Waiting for deployment openshift-gitops-repo-server
deployment "openshift-gitops-repo-server" successfully rolled out
Waiting for deployment openshift-gitops-server
deployment "openshift-gitops-server" successfully rolled out
GitOps Operator ready... again
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /Users/tjungbau/openshift-aws/aws/auth/kubeconfig
Release "app-of-apps" has been upgraded. Happy Helming!
NAME: app-of-apps
LAST DEPLOYED: Mon Sep 26 13:23:59 2022
NAMESPACE: openshift-gitops
STATUS: deployed
REVISION: 2
TEST SUITE: None
Logging into Argo CD
At this point, we have GitOps and the "App of Apps" deployed.
Argo CD comes with a WebUI and a command line tool. The latter must be installed in your local environment. In this article, we will use the WebUI.
To access the WebUI, use the application menu in the top right corner of the OpenShift console.
Figure 1. Argo CD: WebUI Link
Use the button "Login via OpenShift".
Figure 2. Argo CD: Authentication
The Argo CD Resources Manager Application
The Application of Applications (App of Apps for short) is called Argo CD Resources Manager, and it is the only Argo CD Application that is deployed by the init script. This single Argo CD Application has the sole purpose of deploying other Argo CD objects, such as Applications, ApplicationSets and AppProjects.
Figure 3. Argo CD: App of Apps
It synchronizes everything that is found in the repository in the path:
base/argocd-resources-manager (main branch)
Whenever you would like to create a new Argo CD Application or ApplicationSet, it is supposed to be done via this App of Apps, or to be more exact, in the path mentioned above.
The App of Apps is the only Argo CD Application (at this moment) that has automatic synchronization enabled. Thus, any changes in the App of Apps will be propagated automatically as soon as GitOps syncs with Git.
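To illustrate, the essential parts of such an auto-synced Application could look like the following sketch (the repository URL and metadata are placeholders, not the exact manifest created by the init script):

# Sketch of an Argo CD Application with automatic sync, pointing to the
# argocd-resources-manager path. Repository URL and names are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-of-apps
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/openshift-clusterconfig-gitops.git   # placeholder URL
    targetRevision: main
    path: base/argocd-resources-manager
  destination:
    server: https://kubernetes.default.svc
    namespace: openshift-gitops
  syncPolicy:
    automated:
      prune: true
      selfHeal: true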
The Applications and ApplicationSets that currently come with the bootstrap repository include, for example:
Deployment of Advanced Cluster Security (RHACS)
Deployment of Advanced Cluster Management (RHACM)
Deployment of basic cluster configuration (i.e. etcd encryption, some UI tweaks …)
Deployment of Compliance Operator
and many more.
Check out the deployed Argo CD objects or the openshift-clusterconfig-gitops repository.
A deep dive into the argocd-resources-manager will be the topic of a different episode of this series.
Classic Kubernetes/OpenShift offers a feature called NetworkPolicy that allows users to control the traffic to and from their assigned Namespace. NetworkPolicies are designed to give project owners or tenants the ability to protect their own namespace. However, I have worked with customers where the cluster administrators or a dedicated (network) team need to enforce these policies.
Since the NetworkPolicy API is namespace-scoped, it is not possible to enforce policies across namespaces. The only solution was to create custom (project) admin and edit roles and remove the ability to create, modify or delete NetworkPolicy objects. Technically, this is possible and easily done, but it shifts the whole responsibility for network security to the cluster administrators.
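For context, a classic NetworkPolicy is always bound to a single namespace, for example a default deny-all ingress policy like this sketch (the namespace name is an example):

# Namespace-scoped default deny: blocks all ingress traffic to pods in "my-namespace".
# Namespace name is an example.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress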
Luckily, this is where AdminNetworkPolicy (ANP) and BaselineAdminNetworkPolicy (BANP) come into play.
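A minimal sketch of a cluster-scoped AdminNetworkPolicy could look like the following (the API is still v1alpha1, so field names may vary between releases; labels and priority are examples):

# Cluster-scoped rule denying ingress from namespaces labelled env=sandbox
# into namespaces labelled tenant=finance. Labels and priority are examples;
# the v1alpha1 API may differ slightly between OpenShift releases.
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: deny-sandbox-to-finance
spec:
  priority: 10
  subject:
    namespaces:
      matchLabels:
        tenant: finance
  ingress:
  - name: deny-from-sandbox
    action: Deny
    from:
    - namespaces:
        matchLabels:
          env: sandbox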
Lately, I have come across several issues where a given Helm Chart must be modified after it has been rendered by Argo CD. Argo CD runs helm template to render a Chart. Sometimes, especially when you work with Subcharts or when a specific setting is not yet supported by the Chart, you need to modify the output afterwards … you need to post-render the Chart.
In this very short article, I would like to demonstrate this on a real-life example I had to solve. I would like to inject annotations into a Route object so that the certificate can be injected. This is done by the cert-utils operator. For the post-rendering, the Argo CD repo-server pod is extended with a sidecar container that watches for the repositories and patches them if required.
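The end result of the post-rendering is, roughly, a Route that carries an annotation similar to the sketch below. The annotation key and the secret name are assumptions and should be verified against the cert-utils operator documentation:

# Example Route with an annotation that lets the cert-utils operator inject
# the certificate from a secret. Annotation key, host and secret name are assumptions.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
  annotations:
    cert-utils-operator.redhat-cop.io/certs-from-secret: my-app-tls
spec:
  host: my-app.apps.example.com
  to:
    kind: Service
    name: my-app
  tls:
    termination: edge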
The article SSL Certificate Management for OpenShift on AWS explains how to use the Cert-Manager Operator to request and install a new SSL Certificate. This time, I would like to leverage the GitOps approach, using the Helm Chart cert-manager I have prepared, to deploy the Operator and order new Certificates.
I will use an ACME Let's Encrypt issuer with a DNS challenge. My domain is hosted at AWS Route 53.
However, any other integration can be easily used.
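To give an idea of the target, a ClusterIssuer for Let's Encrypt with a Route 53 DNS-01 solver looks roughly like the following sketch (e-mail address, region, access key and credential secret are placeholders):

# ACME ClusterIssuer using a DNS-01 challenge against AWS Route 53.
# E-mail address, region, access key ID and credential secret are placeholders.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - dns01:
        route53:
          region: eu-central-1
          accessKeyID: AKIAEXAMPLE            # placeholder access key ID
          secretAccessKeySecretRef:
            name: route53-credentials         # placeholder secret holding the AWS secret key
            key: secret-access-key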
At some point during a GitOps journey, the question arises: how do you update a cluster? Nowadays it is very easy to update a cluster using the CLI or the WebUI, so why bother with GitOps in that case? The reason is simple: using GitOps you can be sure that all clusters are updated to the correct, required version, and the version of each cluster is also tracked in Git.
All you need is the channel you want to use and the desired cluster version. Optionally, you can define the exact image SHA. This might be required when you are operating in a restricted environment.
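In practice, this means keeping the desired state of the ClusterVersion resource in Git, for example as in this sketch (channel, version and image digest are placeholders; in a running cluster you patch the existing object named "version" rather than creating a new one):

# Desired cluster version managed via GitOps. Channel, version and image digest
# are placeholders; the image digest is typically only needed in restricted environments.
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  channel: stable-4.14
  desiredUpdate:
    version: 4.14.10
    # image: quay.io/openshift-release-dev/ocp-release@sha256:<digest>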
Argo CD or OpenShift GitOps uses Applications or ApplicationSets to define the relationship between a source (Git) and a cluster. Typically, this is a 1:1 link, which means one Application uses one source to compare against the cluster status. This can be a limitation. For example, if you are working with Helm Charts and a Helm repository, you do not want to re-build (or re-release) the whole chart just because you made a small change in the values file that is packaged with the chart. You want to separate the configuration of the chart from the Helm package.
The most common scenarios for multiple sources are (see: Argo CD documentation):
Your organization wants to use an external/public Helm chart
You want to override the Helm values with your own local values
You don’t want to clone the Helm chart locally as well because that would lead to duplication and you would need to monitor it manually for upstream changes.
This small article describes three different ways, each with a working example, and tries to cover the advantages and disadvantages of each. The comparison might be opinionated, but some approaches proved to be easier to use and manage.
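For illustration, a multi-source Application referencing an external chart and a separate values repository could look like this sketch (all URLs, chart name and versions are placeholders):

# Argo CD Application with two sources: a public Helm chart and a Git repository
# that only provides the values file. All URLs, names and versions are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-multisource
  namespace: openshift-gitops
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: example
  sources:
  - repoURL: https://charts.example.com        # external Helm repository (placeholder)
    chart: example-chart
    targetRevision: 1.2.3
    helm:
      valueFiles:
      - $values/clusters/example/values.yaml   # resolved from the "values" source below
  - repoURL: https://github.com/example/cluster-config.git
    targetRevision: main
    ref: values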