While playing around with Falco (worth another post) I had to force a
MachineConfig update even though the actual configuration of the machine
did not change.
This post documents the steps taken.
As this does not seem to be clearly documented anywhere, here it comes.
We want to force the rollout on a worker node, so remember the name of an old rendered worker config, in our case rendered-worker-5baefb5bb7ad1d69cd7a0c3dc52ef2f3.
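If the name is not at hand, the rendered configurations can be listed. A quick sketch (the pool name worker is just the default example):

$ oc get machineconfig | grep rendered-worker
# the rendered config currently assigned to the worker pool
$ oc get machineconfigpool worker -o jsonpath='{.status.configuration.name}{"\n"}'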
Currently, the desiredConfig and the currentConfig annotations should have the same value:
$ oc get node node1 -o jsonpath='{.metadata.annotations.machineconfiguration\.openshift\.io/desiredConfig}{"\n"}'
rendered-worker-a0f8f0d915ef01ba4a1ab3047b6c863d

$ oc get node node1 -o jsonpath='{.metadata.annotations.machineconfiguration\.openshift\.io/currentConfig}{"\n"}'
rendered-worker-a0f8f0d915ef01ba4a1ab3047b6c863d
Touch the file /run/machine-config-daemon-force on the node.
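One way to do this, sticking with node1 from above, is a debug pod on the node (a sketch, not the only option):

$ oc debug node/node1 -- chroot /host touch /run/machine-config-daemon-force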
Patch the node and set the annotation machineconfiguration.openshift.io/currentConfig to the old rendered config rendered-worker-5baefb5bb7ad1d69cd7a0c3dc52ef2f3, so that the machine-config daemon sees a difference between currentConfig and desiredConfig and rolls out the configuration again.
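A sketch of that patch, using the node and config names from above:

$ oc patch node node1 --type merge --patch \
  '{"metadata":{"annotations":{"machineconfiguration.openshift.io/currentConfig":"rendered-worker-5baefb5bb7ad1d69cd7a0c3dc52ef2f3"}}}'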
MinIO is a simple, S3-compatible object storage, built for high-performance and large-scale environments. It can be installed as an Operator on OpenShift. In addition to a command-line tool, it provides a WebUI where all settings can be configured, especially creating and configuring new buckets. Currently, this is not possible in a declarative, GitOps-friendly way. Therefore, I created the Helm chart minio configurator, which starts a Kubernetes Job that takes care of the configuration.
Honestly, when I say I have created it, the truth is that it is based on an existing MinIO chart by Bitnami that does much more than just set up a bucket. I took out the bucket configuration part, streamlined it a bit and added some new features that I required.
This article shall explain how to achieve this.
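To give a rough idea of how the chart is meant to be used, here is a purely illustrative install command; the chart path and the parameter names (buckets[0].name, buckets[0].policy) are placeholders and depend on the chart's actual values.yaml:

$ helm install minio-configurator ./charts/minio-configurator \
    --namespace minio \
    --set 'buckets[0].name=my-bucket' \
    --set 'buckets[0].policy=none'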
Today I want to demonstrate the deployment and configuration of Advanced Cluster Security (ACS) using a GitOps approach. The required Operator shall be installed and verified to be running, and then ACS shall be initialized. This initialization contains the deployment of several components:
Central - the UI and main component of ACS
SecuredClusters - installs the Scanner, Controller pods, etc.
Console link into OpenShift UI - to directly access the ACS Central UI
Job to create an initialization bundle to install the Secured Cluster
Job to configure authentication using OpenShift
Let’s start …
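Whether the Operator and the components came up can be checked at any time; a small sketch, assuming the usual default namespaces rhacs-operator for the Operator and stackrox for Central and the Secured Cluster components:

$ oc get csv -n rhacs-operator
$ oc get pods -n stackrox
$ oc get centrals,securedclusters -n stackrox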
In the previous articles, we discussed the Git repository folder structure and the configuration of the App-of-Apps. Now it is time to deploy our first configuration. One of the first things I usually deploy is the Compliance Operator. This Operator is recommended for any cluster and can be deployed without any additions to the Subscription.
In this article, I will describe how it is installed and how the Helm Chart is configured.
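For orientation, a plain Subscription for the Compliance Operator looks roughly like this (a sketch; the target namespace and an OperatorGroup are assumed to exist already, and the channel may differ between OpenShift versions):

$ oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: compliance-operator
  namespace: openshift-compliance
spec:
  channel: stable
  name: compliance-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF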
In the article Install GitOps to the cluster, OpenShift GitOps is deployed using a shell script. This should be the very first installation and the only deployment that is done manually on a cluster. This procedure automatically installs the so-called App-of-Apps named Argo CD Resources Manager, which is responsible for all further Argo CD Applications and ApplicationSets. If possible, no other configuration should be done manually.
This article will demonstrate how to configure the App-of-Apps in an easy and declarative way, mainly using ApplicationSets.
Data retention, or lifecycle configuration, for S3 buckets is done by the S3 provider directly. The provider keeps track of the objects, and files are automatically rotated after the requested time.
This article is a simple step-by-step guide to configuring such a lifecycle for OpenShift Data Foundation (ODF), where buckets are provided by Noobaa. Knowledge about ODF is assumed; however, similar steps can be reproduced with any S3-compliant storage operator.
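As a preview of where we are heading: since Noobaa exposes a standard S3 API, the lifecycle can be set with any S3 client. A sketch using the AWS CLI; bucket name, endpoint URL and retention period are examples:

$ aws s3api put-bucket-lifecycle-configuration \
    --bucket my-bucket \
    --endpoint-url https://s3-openshift-storage.apps.example.com \
    --lifecycle-configuration '{"Rules":[{"ID":"expire-after-30-days","Status":"Enabled","Filter":{"Prefix":""},"Expiration":{"Days":30}}]}'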