<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>YAUB Yet Another Useless Blog on TechBlog about OpenShift/Ansible/Satellite and much more</title><link>https://blog.stderr.at/</link><description>TechBlog about OpenShift/Ansible/Satellite and much more</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><copyright>Toni Schmidbauer &amp; Thomas Jungbauer</copyright><atom:link href="https://blog.stderr.at/index.xml" rel="self" type="application/rss+xml"/><item><title>Onboarding to Ansible Automation Platform with Configuration as Code</title><link>https://blog.stderr.at/ansible/2026/03/onboarding-to-ansible-automation-platform-with-configuration-as-code/</link><pubDate>Wed, 04 Mar 2026 00:00:00 +0000</pubDate><guid>https://blog.stderr.at/ansible/2026/03/onboarding-to-ansible-automation-platform-with-configuration-as-code/</guid><description>&lt;div class="paragraph"&gt;
&lt;p&gt;We had the honor of presenting at the 2nd Ansible Anwendertreffen in
Austria. The topic was the onboarding of application teams to the
Ansible Automation Platform.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;We created an extensive demo that:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Onboards a new tenant into an AAP organization.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Each tenant gets a configuration as code repository, &lt;a href="https://github.com/tosmi-ansible/template-org-config" target="_blank" rel="noopener"&gt;cloned from a template&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A push event to the config-as-code repository triggers an update of AAP objects.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Provides an &lt;a href="https://github.com/tosmi-ansible/template-example-project" target="_blank" rel="noopener"&gt;example repository&lt;/a&gt; for each tenant to get started.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The example repository provides a &lt;a href="https://github.com/tosmi-ansible/template-org-config/blob/main/playbooks/devenv.yaml" target="_blank" rel="noopener"&gt;webhook&lt;/a&gt; that creates an OpenShift Virtualization VM for testing code changes when a feature branch is created.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Contains a &lt;a href="https://github.com/tosmi-ansible/template-org-config/blob/main/playbooks/cac-diff.yaml" target="_blank" rel="noopener"&gt;playbook&lt;/a&gt; to display objects that are not currently managed by the tenant’s configuration-as-code repository.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The source code for the demo is available on &lt;a href="https://github.com/tosmi-ansible/aap-onboarding" target="_blank" rel="noopener"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The &lt;a href="https://github.com/tosmi-ansible/aap-onboarding/blob/main/README.md"&gt;README&lt;/a&gt; contains more details about the implementation, and the slide deck is also &lt;a href="https://github.com/tosmi-ansible/aap-onboarding/blob/main/docs/Ansible%20Anwender%20Treffen%20202602%20-%20Slides.pdf" target="_blank" rel="noopener"&gt;available on GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;</description></item><item><title>OKD Single node installation in a disconnected environment</title><link>https://blog.stderr.at/openshift-platform/infrastructure/2026-02-09-okd-sno-disconnected/</link><pubDate>Mon, 09 Feb 2026 00:00:00 +0000</pubDate><guid>https://blog.stderr.at/openshift-platform/infrastructure/2026-02-09-okd-sno-disconnected/</guid><description>&lt;div class="paragraph"&gt;
&lt;p&gt;Because of reasons we had to set up a disconnected single-node OKD &amp;#34;cluster&amp;#34;. This post is our brain dump: the OKD documentation sometimes refers to OpenShift image locations, so we had to read multiple sections to get everything working, and we also consulted the OpenShift documentation at &lt;a href="https://docs.redhat.com" class="bare"&gt;https://docs.redhat.com&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Updated on 2026-03-03: Fixed install-config.yaml and create manifest command
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;First we need to download the &lt;em&gt;oc&lt;/em&gt; command in the version we would like to install. OKD provides it on its release page:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;a href="https://github.com/okd-project/okd/releases/download/4.21.0-okd-scos.3/openshift-client-linux-amd64-rhel9-4.21.0-okd-scos.3.tar.gz" class="bare"&gt;https://github.com/okd-project/okd/releases/download/4.21.0-okd-scos.3/openshift-client-linux-amd64-rhel9-4.21.0-okd-scos.3.tar.gz&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To get an overview of releases available for OKD see &lt;a href="https://amd64.origin.releases.ci.openshift.org/" class="bare"&gt;https://amd64.origin.releases.ci.openshift.org/&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Another option is to extract the &lt;em&gt;oc&lt;/em&gt; command from the release image, if you have a version of &lt;em&gt;oc&lt;/em&gt; already installed:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-shell hljs" data-lang="shell"&gt;oc adm release extract --tools quay.io/okd/scos-release:4.21.0-okd-scos.3&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will also extract the &lt;em&gt;openshift-install&lt;/em&gt; tar.gz which we need later to generate the installation manifests and ignition configs.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Next we need the &lt;em&gt;oc-mirror&lt;/em&gt; plugin, which is used to mirror the release content to our local registry. We were only able to find the plugin on &lt;em&gt;console.redhat.com&lt;/em&gt;. Another option might be to compile the plugin from source code (&lt;a href="https://github.com/openshift/oc-mirror/" class="bare"&gt;https://github.com/openshift/oc-mirror/&lt;/a&gt;).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;We copied the &lt;em&gt;oc&lt;/em&gt; command and the &lt;em&gt;oc-mirror&lt;/em&gt; plugin to /usr/local/bin/ and made them executable. After that we ran &lt;em&gt;oc mirror --v2 --help&lt;/em&gt; to test the installation.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The upstream &lt;a href="https://github.com/openshift/oc-mirror/blob/main/docs/okd-mirror.md"&gt;oc-mirror&lt;/a&gt; plugin documentation was also helpful.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;As our private registry required authentication, we created a .dockerconfigjson file with the registry credentials:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-json hljs" data-lang="json"&gt;{
&amp;#34;auths&amp;#34;: {
&amp;#34;internal.registry&amp;#34;: {
&amp;#34;auth&amp;#34;: &amp;#34;&amp;lt;credentials base64 encoded&amp;gt;&amp;#34;,
&amp;#34;email&amp;#34;: &amp;#34;you@example.com&amp;#34;
}
}
}&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You can encode the credentials with:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-shell hljs" data-lang="shell"&gt;echo -n &amp;#34;username:password&amp;#34; | base64 -w0&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
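&lt;div class="paragraph"&gt;
&lt;p&gt;Putting both pieces together, here is a small sketch that writes a complete &lt;em&gt;docker-auth.json&lt;/em&gt;. Registry name, credentials and e-mail address are placeholders, replace them with your own values:&lt;/p&gt;
&lt;/div&gt;

```shell
# Sketch: generate docker-auth.json for oc-mirror.
# REGISTRY, the username:password pair and the e-mail are placeholders.
REGISTRY="internal.registry"
AUTH="$(echo -n "username:password" | base64 -w0)"
printf '{\n  "auths": {\n    "%s": {\n      "auth": "%s",\n      "email": "%s"\n    }\n  }\n}\n' \
  "$REGISTRY" "$AUTH" "you@example.com" > docker-auth.json
```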
&lt;div class="paragraph"&gt;
&lt;p&gt;For mirroring all required OKD images to our private registry we created an &lt;em&gt;ImageSetConfiguration&lt;/em&gt;:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v2alpha1
mirror:
platform:
channels:
- name: 4-scos-stable
minVersion: 4.21.0-okd-scos.3
maxVersion: 4.21.0-okd-scos.3
graph: false
operators:
- registry: quay.io/okderators/catalog-index:testing-4.20 &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
packages:
- name: aws-load-balancer-operator
- name: 3scale-operator
- name: node-observability-operator
additionalImages:
- name: registry.redhat.io/ubi8/ubi:latest
- name: registry.redhat.io/ubi9/ubi@sha256:20f695d2a91352d4eaa25107535126727b5945bff38ed36a3e59590f495046f0&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In the end we used a minimal configuration without operators and additional images, because mirroring the additional images did not work for us:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v2alpha1
mirror:
platform:
channels:
- name: 4-scos-stable &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
type: okd
minVersion: 4.21.0-okd-scos.3 &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
maxVersion: 4.21.0-okd-scos.3
graph: false
operators: []
additionalImages: []&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The hard part for us was finding the right values for &lt;em&gt;channels&lt;/em&gt; and
the min/max versions. Once again the OKD
&lt;a href="https://amd64.origin.releases.ci.openshift.org/"&gt;release status page&lt;/a&gt;
was helpful.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;For our experiments our private registry is based on &lt;a href="https://goharbor.io"&gt;Harbor&lt;/a&gt;, but any Docker v2 compatible registry should work. The only things required are a project within Harbor called &lt;em&gt;openshift&lt;/em&gt; and a user with permission to pull and push images in this project. The
project name might be configurable with &lt;em&gt;oc mirror&lt;/em&gt;, but this requires
further investigation.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;em&gt;oc mirror&lt;/em&gt; wants to verify release signatures, so we had to download a public
key and set &lt;em&gt;OCP_SIGNATURE_URL&lt;/em&gt;:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-shell hljs" data-lang="shell"&gt;curl -LO https://raw.githubusercontent.com/openshift/cluster-update-keys/master/keys/verifier-public-key-openshift-ci-4
export OCP_SIGNATURE_URL=&amp;#34;https://storage.googleapis.com/openshift-ci-release/releases/signatures/openshift/release/&amp;#34;
export OCP_SIGNATURE_VERIFICATION_PK=&amp;#34;verifier-public-key-openshift-ci-4&amp;#34;
# for debugging add --log-level debug, but we had intermittent errors with this option.
# after removing --log-level debug oc mirror ran without problems
oc mirror -c ImageSetConfiguration.yaml --authfile docker-auth.json --workspace file:///workspace docker://internal.registry --v2&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This creates an &lt;em&gt;ImageDigestMirrorSet&lt;/em&gt; custom resource in &lt;em&gt;/workspace/working-dir/cluster-resources/idms-oc-mirror.yaml&lt;/em&gt; and an &lt;em&gt;ImageTagMirrorSet&lt;/em&gt; custom resource in &lt;em&gt;/workspace/working-dir/cluster-resources/itms-oc-mirror.yaml&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
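&lt;div class="paragraph"&gt;
&lt;p&gt;For reference, the generated &lt;em&gt;idms-oc-mirror.yaml&lt;/em&gt; has roughly the following shape. This is an illustrative sketch, the exact mirror entries depend on your registry and the mirrored content:&lt;/p&gt;
&lt;/div&gt;

```yaml
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: idms-release-0
spec:
  imageDigestMirrors:
  - mirrors:
    - internal.registry/openshift/release
    source: quay.io/okd/scos-content
  - mirrors:
    - internal.registry/openshift/release-images
    source: quay.io/okd/scos-release
```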
&lt;div class="paragraph"&gt;
&lt;p&gt;The mirror entries from &lt;em&gt;idms-oc-mirror.yaml&lt;/em&gt; will be reused in
the &lt;em&gt;install-config.yaml&lt;/em&gt;. This is required to redirect image pulls done by the node from &lt;em&gt;quay.io&lt;/em&gt; to our private registry.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Next we need DNS records for our cluster:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;api.sno.internal&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;api-int.sno.internal&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;*.apps.sno.internal (wildcard DNS entry for applications deployed on the cluster)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
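&lt;div class="paragraph"&gt;
&lt;p&gt;If there is no corporate DNS server at hand, a minimal dnsmasq configuration can serve these records. The IP address is an assumption from our lab setup:&lt;/p&gt;
&lt;/div&gt;

```ini
# /etc/dnsmasq.d/okd-sno.conf
host-record=api.sno.internal,10.0.0.99
host-record=api-int.sno.internal,10.0.0.99
# wildcard record for application routes
address=/.apps.sno.internal/10.0.0.99
```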
&lt;div class="paragraph"&gt;
&lt;p&gt;Next we prepared the &lt;em&gt;install-config.yaml&lt;/em&gt; file required by the &lt;em&gt;openshift-install&lt;/em&gt; command.
We basically followed &lt;a href="https://docs.okd.io/latest/installing/installing_sno/install-sno-installing-sno.html"&gt;Installing a Single Node OpenShift Cluster&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: v1
baseDomain: internal
compute:
- hyperthreading: Enabled
name: worker
replicas: 0 &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
controlPlane:
hyperthreading: Enabled
name: master
replicas: 1 &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
metadata:
name: okd-sno
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
networkType: OVNKubernetes
serviceNetwork:
- 172.30.0.0/16
platform:
none: {}
fips: false
sshKey: &amp;#39;the key&amp;#39;
bootstrapInPlace: &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
installationDisk: /dev/vda
pullSecret: |
{
&amp;#34;auths&amp;#34;: {
&amp;#34;registry.internal&amp;#34;: {
&amp;#34;auth&amp;#34;: &amp;#34;&amp;lt;username:password base64 encoded&amp;gt;&amp;#34;,
&amp;#34;email&amp;#34;: &amp;#34;some@email&amp;#34;
}
}
}
additionalTrustBundle: | &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
-----BEGIN CERTIFICATE-----
ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
-----END CERTIFICATE-----
ImageDigestSources: &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
- mirrors:
- registry.internal/openshift/release
source: quay.io/okd/scos-content
- mirrors:
- registry.internal/openshift/release-images
source: quay.io/okd/scos-release&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;We set the number of workers to zero (0) because this is an SNO installation. Worker nodes could still be added later, even to an SNO cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;We set the number of master nodes to one (1)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;This tells the installer that we do not have a bootstrap node and it should create one large ignition config file&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The CA certificate of our registry. This is required if the registry uses a certificate signed by an internal CA&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;imageDigestSources configures CRI-O to redirect image pulls for quay.io to our private registry.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;It’s time to create the installation manifests and ignition configs. We created a directory &lt;em&gt;install&lt;/em&gt;, copied the &lt;em&gt;install-config.yaml&lt;/em&gt; into this directory and triggered the &lt;em&gt;openshift-install&lt;/em&gt; command:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-shell hljs" data-lang="shell"&gt;$ mkdir install
$ cp install-config.yaml install/
$ openshift-install --dir=install create single-node-ignition-config&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;We are now ready to download the installation ISO and to embed our ignition config into it. We also want
to set kernel boot arguments to configure the network interface with a static IP address.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-shell hljs" data-lang="shell"&gt;curl -L $( ./openshift-install coreos print-stream-json |jq -r &amp;#34;.architectures.x86_64.artifacts.metal.formats.iso.disk.location&amp;#34; ) -o fhcos-live.iso&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The next step is to modify the ISO. We basically followed the instructions in the OKD documentation:
&lt;a href="https://docs.okd.io/4.14/installing/installing_sno/install-sno-installing-sno.html#generating-the-install-iso-manually_install-sno-installing-sno-with-the-assisted-installer" class="bare"&gt;https://docs.okd.io/4.14/installing/installing_sno/install-sno-installing-sno.html#generating-the-install-iso-manually_install-sno-installing-sno-with-the-assisted-installer&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-shell hljs" data-lang="shell"&gt;alias coreos-installer=&amp;#39;podman run --privileged --pull always --rm -v /dev:/dev -v /run/udev:/run/udev -v $PWD:/data -w /data quay.io/coreos/coreos-installer:release&amp;#39;
coreos-installer iso ignition embed -fi install/bootstrap-in-place-for-live-iso.ign fcos-live.iso
coreos-installer iso kargs modify --append &amp;#39;ip=10.0.0.99::10.0.0.1:255.255.255.0:sno.internal:ens33:none:10.0.0.255&amp;#39; fcos-live.iso&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;We can take a look at the embedded ignition config and the kernel arguments with:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;coreos-installer iso ignition show fcos-live.iso
coreos-installer iso kargs show fcos-live.iso&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;For understanding the kernel boot arguments see &lt;a href="https://access.redhat.com/solutions/5499911" class="bare"&gt;https://access.redhat.com/solutions/5499911&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
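&lt;div class="paragraph"&gt;
&lt;p&gt;As a quick reference, the &lt;em&gt;ip=&lt;/em&gt; argument used above is built from colon-separated fields. The empty second field is the unused peer address, and &lt;em&gt;none&lt;/em&gt; disables address autoconfiguration:&lt;/p&gt;
&lt;/div&gt;

```shell
# Assemble the dracut ip= kernel argument from its individual fields.
# The values match the example above; adjust them for your network.
NODE_IP=10.0.0.99        # static IP of the node
GW=10.0.0.1              # default gateway
NETMASK=255.255.255.0
FQDN=sno.internal        # hostname assigned to the node
IFACE=ens33              # interface to configure
DNS=10.0.0.255           # trailing field, interpreted as DNS server
KARG="ip=${NODE_IP}::${GW}:${NETMASK}:${FQDN}:${IFACE}:none:${DNS}"
echo "$KARG"
```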
&lt;div class="paragraph"&gt;
&lt;p&gt;The final step is to boot the modified ISO on the target machine and wait for the installation to complete.&lt;/p&gt;
&lt;/div&gt;</description></item><item><title>OpenShift Virtualization Networking - The Overview</title><link>https://blog.stderr.at/openshift-platform/virtualization/2025-12-29-openshift-virtualization-networking/</link><pubDate>Mon, 29 Dec 2025 00:00:00 +0000</pubDate><guid>https://blog.stderr.at/openshift-platform/virtualization/2025-12-29-openshift-virtualization-networking/</guid><description>&lt;div class="paragraph"&gt;
&lt;p&gt;It’s time to dig into OpenShift Virtualization. You read that right: OpenShift Virtualization, based on &lt;strong&gt;kubevirt&lt;/strong&gt;, allows you to run Virtual Machines on top of OpenShift, next to Pods.
If you come from a pure Kubernetes background, OpenShift Virtualization can feel like stumbling into a different dimension. In the world of Pods, we rarely care about Layer 2, MAC addresses, or VLANs. The SDN (Software Defined Network) handles the magic and we are happy.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;But Virtual Machines are different…&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;They are needy creatures. They often refuse to live in the bubble of the default Pod network. They want to talk to the external Oracle database on the bare metal server next door, they need a static IP from the corporate range, or they need to be reachable via a specific VLAN.
Networking is a key part of the Virtualization story and we need to master it.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To achieve this, we need to understand two tools: &lt;strong&gt;NMState Operator&lt;/strong&gt; and &lt;strong&gt;Multus&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In this guide, I will try to discuss some of the basics for &amp;#34;Multihomed Nodes&amp;#34; and look at the three main ways to connect your VMs to the outside world.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_who_does_what_nmstate_vs_multus"&gt;Who does what: NMState vs. Multus&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Before we look at YAML, we need to clear up a common confusion. Who does what?&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Think of your OpenShift Node as a house.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;NMState (The Electrician)&lt;/strong&gt;: This operator works on the Node (Host) level. It drills holes in the walls, lays the physical cables, and installs the wall sockets. In technical terms: It configures Bridges, Bonds, and VLANs on the Linux OS of the worker nodes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Multus (The Extension Cord)&lt;/strong&gt;: This CNI plugin works on the VM/Pod level. It plugs the VM into the socket that NMState created. It allows a VM to have more than one network interface (eth0, eth1, etc.).&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You usually need both. First, NMState builds the bridge. Then, Multus connects the VM to it.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/virtualization/images/network-overview/nmstate_and_multus.png" alt="NMState and Multus in action"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_nmstate_operator_installation"&gt;NMState Operator Installation&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;While Multus is installed by default, NMState is not. You need to install the Operator first.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To install the NMState Operator, you can use the Web UI or the CLI. For the Web UI, you can use the following steps:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;In the OpenShift Web Console, navigate to Operators → OperatorHub (or Ecosystem → Software Catalog in version 4.20+).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the OperatorHub, search for &lt;strong&gt;NMState&lt;/strong&gt; and select the Operator from the list.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;Install&lt;/strong&gt; and confirm the installation with the &lt;strong&gt;Install&lt;/strong&gt; button on the details page.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After the Operator is installed, you can check the status of the Operator in the &lt;strong&gt;Installed Operators&lt;/strong&gt; view.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/virtualization/images/network-overview/nmState-Operator.png" alt="NMState Operator Installation"/&gt;
&lt;/div&gt;
&lt;/div&gt;
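&lt;div class="paragraph"&gt;
&lt;p&gt;Alternatively, for the CLI route, creating a namespace, an OperatorGroup and a Subscription achieves the same. The channel and catalog source below are assumptions based on the Red Hat operator catalog, verify them with &lt;em&gt;oc get packagemanifests kubernetes-nmstate-operator&lt;/em&gt;:&lt;/p&gt;
&lt;/div&gt;

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-nmstate
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-nmstate
  namespace: openshift-nmstate
spec:
  targetNamespaces:
  - openshift-nmstate
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubernetes-nmstate-operator
  namespace: openshift-nmstate
spec:
  channel: stable
  name: kubernetes-nmstate-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```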
&lt;div class="paragraph"&gt;
&lt;p&gt;Once done, you need to create an instance of NMState. Simply create the following resource:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: nmstate.io/v1
kind: NMState
metadata:
name: nmstate
spec:
probeConfiguration:
dns:
host: root-servers.net&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_nmstate_operator_resource_types"&gt;NMState Operator Resource Types&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The NMState Operator provides three custom resource types to manage node networking:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;NodeNetworkState (NNS)&lt;/strong&gt; → The &amp;#34;As-Is&amp;#34; State:&lt;/p&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;This is an automatically generated report of the current status. It effectively tells you: &amp;#34;On Server 1, I currently see Network Card A and B, and IP address X is configured.&amp;#34;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Purpose: To verify the current reality on the ground before you make changes.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;NodeNetworkConfigurationPolicy (NNCP)&lt;/strong&gt; → The &amp;#34;To-Be&amp;#34; State (The Blueprint):&lt;/p&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;This is the most critical resource. This resource is used to configure the network on the node. Here you tell the cluster that &amp;#34;I want a bridge named br1 to exist on all servers, and it must be connected to port ens4.&amp;#34;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The operator will try to configure this configuration across your nodes.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;NodeNetworkConfigurationEnactment (NNCE)&lt;/strong&gt; → The Result report:&lt;/p&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;This resource is created automatically after the NNCP (the blueprint) has been created.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;There is one enactment per node and policy, describing how the blueprint was applied on that particular node.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;It reports back on the execution: &amp;#34;Success! The bridge has been built,&amp;#34; or &amp;#34;Failure! Cable not found.&amp;#34;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;With NMState in place, we can now start to configure the network on the node.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_part_1_building_the_bridge_nmstate"&gt;Part 1: Building the Bridge (NMState)&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;By default, Virtual Machines (VMs) are connected to the default pod network of the cluster, like any other Pod. Over this network a VM can communicate with other resources inside the cluster, or with any resource reachable through the node network. Sometimes a VM needs a connection to a different network. In that case you must connect the VM to an additional network: the VM gets an additional network interface and is considered a multihomed VM.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To connect a VM to the physical network, we first need a bridge on the worker nodes. A bridge forwards packets between connected interfaces, similar to a network switch. We have two choices: Linux Bridge or OVS (Open vSwitch) Bridge.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Linux Bridge&lt;/strong&gt;: The sturdy, wooden bridge. Simple, robust, standard Linux kernel tech. Use this for 90% of your use cases (Static IPs, simple VLAN access).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;OVS Bridge&lt;/strong&gt;: The magical, floating bridge. Complex, programmable, supports SDN logic. Use this only if you need advanced tunneling or integration with OVN policies on the physical interface.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Let’s create a standard Linux Bridge called &lt;strong&gt;br1&lt;/strong&gt; on all our worker nodes, attached to physical interface &lt;strong&gt;ens4&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;We do this by applying a NodeNetworkConfigurationPolicy (NNCP).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: br1-policy
spec:
nodeSelector:
node-role.kubernetes.io/worker: &amp;#34;&amp;#34; &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
desiredState:
interfaces:
- name: br1 &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
description: Linux bridge linked to ens4 &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
type: linux-bridge &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
state: up &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
ipv4:
dhcp: true &lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;(6)&lt;/b&gt;
enabled: true
bridge: &lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;(7)&lt;/b&gt;
options:
stp:
enabled: false
port: &lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;(8)&lt;/b&gt;
- name: ens4&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Apply the policy to all worker nodes. If the nodeSelector is omitted, the policy is applied to all nodes.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Name of the bridge&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Description of the bridge&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Type of the bridge (linux-bridge)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;State of the bridge (up)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;6&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;DHCP is enabled for the bridge (true)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;7&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Options for the bridge&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;8&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Port of the bridge (ens4)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Setting an interface to absent or deleting an NNCP resource does not restore the previous configuration. A cluster administrator must define a policy with the previous configuration to restore settings.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
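&lt;div class="paragraph"&gt;
&lt;p&gt;Consequently, to remove &lt;strong&gt;br1&lt;/strong&gt; again you apply another policy that explicitly sets the interface to &lt;em&gt;absent&lt;/em&gt;. A sketch; note that this detaches ens4 from the bridge but does not restore its previous IP configuration:&lt;/p&gt;
&lt;/div&gt;

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-remove
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
    - name: br1
      type: linux-bridge
      state: absent
```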
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_part_2_defining_the_networkattachmentdefinition_nad"&gt;Part 2: Defining the NetworkAttachmentDefinition (NAD)&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Now that the bridge br1 exists on the nodes, we need to tell OpenShift how to use it. We do this with a NetworkAttachmentDefinition (NAD).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;There are three distinct &amp;#34;Types&amp;#34; of networks you can define in a NAD.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_1_the_linux_bridge_cni_plugin"&gt;1. The Linux Bridge (CNI-Plugin)&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Linux bridges and OVS bridges both connect VMs to additional networks. A Linux bridge is a simple, stable, and well-established solution. The OVS bridge, on the other hand, provides an advanced feature set for software-defined networking but is more challenging to troubleshoot.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Use Case: Direct Layer 2 connection to the physical network.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Behavior: The VM is effectively &amp;#34;on the wire&amp;#34; of your datacenter network. It can, for example, obtain an IP address via DHCP from your existing infrastructure.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
name: bridge-network
namespace: my-vm-namespace
spec:
config: &amp;#39;{
&amp;#34;cniVersion&amp;#34;: &amp;#34;0.3.1&amp;#34;,
&amp;#34;name&amp;#34;: &amp;#34;bridge-network&amp;#34;,
&amp;#34;type&amp;#34;: &amp;#34;bridge&amp;#34;, &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
&amp;#34;bridge&amp;#34;: &amp;#34;br1&amp;#34;, &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
&amp;#34;ipam&amp;#34;: {
&amp;#34;type&amp;#34;: &amp;#34;dhcp&amp;#34;
}
}&amp;#39;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Type of the network (bridge)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Name of the bridge (br1)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
type: &amp;#34;bridge&amp;#34; refers to the CNI plugin that utilizes the Linux Bridge we created earlier.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_a_note_on_ipam_ip_address_management"&gt;A note on IPAM (IP Address Management)&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You might have noticed the &amp;#34;ipam&amp;#34; section in the NAD examples. You have three main choices there:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;dhcp&lt;/strong&gt;: Passes DHCP requests to the physical network (requires an external DHCP server).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;static&lt;/strong&gt;: You manually define IPs in the VM config (hard work).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;whereabouts&lt;/strong&gt;: A &amp;#34;Cluster-wide DHCP&amp;#34; for private networks. Perfect for the OVN L2 Overlay scenario where no external DHCP exists.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
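As a sketch of the third option, a whereabouts `ipam` block inside a NAD&#39;s `config` string could look like this (the range and exclusions are placeholder values, not from the examples above):

```json
"ipam": {
  "type": "whereabouts",
  "range": "192.168.100.0/24",
  "exclude": ["192.168.100.1/32"]
}
```

Whereabouts then hands each attached pod or VM a unique address from the range, tracked cluster-wide, so no external DHCP server is needed.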
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_2_ovn_kubernetes_l2_overlay"&gt;2. OVN Kubernetes L2 Overlay&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;A switched (layer 2) topology network interconnects workloads through a cluster-wide logical switch.
This configuration allows East-West traffic only (packets between Pods within a cluster).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Use Case: You need a private network between your VMs (East-West traffic only), but you don’t want them to talk to the physical network.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Behavior: It creates a Geneve tunnel (similar to VXLAN) over the cluster network.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
name: private-overlay
namespace: my-vm-namespace
spec:
config: &amp;#39;{
&amp;#34;cniVersion&amp;#34;: &amp;#34;0.4.0&amp;#34;,
&amp;#34;name&amp;#34;: &amp;#34;private-overlay&amp;#34;,
&amp;#34;type&amp;#34;: &amp;#34;ovn-k8s-cni-overlay&amp;#34;,
&amp;#34;topology&amp;#34;: &amp;#34;layer2&amp;#34;, &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
&amp;#34;netAttachDefName&amp;#34;: &amp;#34;my-vm-namespace/private-overlay&amp;#34; &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
}&amp;#39;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Topology of the network (layer2)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;NetworkAttachmentDefinition name (my-vm-namespace/private-overlay)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;A pod with the annotation &lt;code&gt;k8s.v1.cni.cncf.io/networks: private-overlay&lt;/code&gt; will be connected to the private-overlay network.&lt;/p&gt;
&lt;/div&gt;
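As a minimal sketch (using the NAD name from the example above; the pod name and image are arbitrary), such a pod could look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: overlay-test                # arbitrary example name
  namespace: my-vm-namespace
  annotations:
    k8s.v1.cni.cncf.io/networks: private-overlay   # attach the secondary network
spec:
  containers:
    - name: shell
      image: registry.access.redhat.com/ubi9/ubi-minimal
      command: ["sleep", "infinity"]
```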
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_3_ovn_kubernetes_secondary_localnet"&gt;3. OVN Kubernetes Secondary Localnet&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;OVN-Kubernetes also supports a &lt;strong&gt;localnet&lt;/strong&gt; topology for secondary networks. This type creates a connection between a secondary network and a physical network, allowing VMs to communicate with destinations that are outside of the cluster.
You must map the secondary network to the OVS bridge on the node. This is done by using a NodeNetworkConfigurationPolicy resource.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Use Case: You want to connect to the physical network (like the Linux Bridge), but you want to leverage OVN features (like NetworkPolicies) on that traffic.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Behavior: Uses Open vSwitch mapping to connect the OVN logic to the physical port.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Let’s create the bridge mapping by applying the following NodeNetworkConfigurationPolicy resource:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: br0-ovs
spec:
nodeSelector:
node-role.kubernetes.io/worker: &amp;#34;&amp;#34; &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
desiredState:
interfaces:
- name: br0-ovs &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
description: OVS bridge with ens4 as a port
type: ovs-bridge &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
state: up
ipv4:
dhcp: true
enabled: true
bridge:
options:
stp: true
port:
- name: ens4 &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
ovn:
bridge-mappings:
- localnet: br0-network &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
bridge: br0-ovs &lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;(6)&lt;/b&gt;
state: present&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Apply to all worker nodes via the node selector. If the selector is omitted, the policy is applied to all nodes.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Name of the bridge&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Type of the bridge (ovs-bridge)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Port of the bridge (ens4)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Localnet network name (br0-network)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;6&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;OVS bridge name (br0-ovs)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
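To verify that the policy was rolled out, you can check the policy itself and the per-node enactments (standard kubernetes-nmstate resources):

```shell
# Overall policy status
oc get nncp br0-ovs
# Per-node enactments show whether each node applied the configuration
oc get nnce
```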
&lt;div class="paragraph"&gt;
&lt;p&gt;Now let’s create the NetworkAttachmentDefinition for the localnet network so that VMs can connect to it.
This NAD is created in the namespace of the VM.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
If the NAD is created in the &lt;strong&gt;default&lt;/strong&gt; namespace, the configuration will be available to all namespaces.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
name: ovn-localnet
namespace: my-vm-namespace
spec:
config: &amp;#39;{
&amp;#34;cniVersion&amp;#34;: &amp;#34;0.4.0&amp;#34;,
&amp;#34;name&amp;#34;: &amp;#34;br0-network&amp;#34;, &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
&amp;#34;type&amp;#34;: &amp;#34;ovn-k8s-cni-overlay&amp;#34;, &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
&amp;#34;topology&amp;#34;: &amp;#34;localnet&amp;#34;, &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
&amp;#34;netAttachDefName&amp;#34;: &amp;#34;my-vm-namespace/ovn-localnet&amp;#34; &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
}&amp;#39;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Network name; for the localnet topology this must match the localnet mapping name defined in the NNCP (br0-network)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Type of the network (ovn-k8s-cni-overlay)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Topology of the network (localnet)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Must match the metadata.namespace/metadata.name of this NetworkAttachmentDefinition (my-vm-namespace/ovn-localnet)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_part_3_connecting_the_vm"&gt;Part 3: Connecting the VM&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Finally, we have our &amp;#34;socket&amp;#34; (the NAD). Let’s plug the VM in. In your VirtualMachine manifest, you add the network to the spec.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This can be done via the Web Console or via the CLI.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Via the Web Console:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In the OpenShift Web Console, navigate to Virtualization → VirtualMachines.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click on the VM you want to connect to the network.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select the &amp;#34;Configuration&amp;#34; tab&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select &amp;#34;Network&amp;#34;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/virtualization/images/network-overview/VM-config.png?width=520px" alt="VM Configuration"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Click &amp;#34;Add network interface&amp;#34;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the popup, enter a name for the interface&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Keep the model as &amp;#34;virtio&amp;#34;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Under &amp;#34;Network&amp;#34;, select the NAD you created in the previous step.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/virtualization/images/network-overview/VM-additional-nic.png?width=520px" alt="VM Configure additional network interface"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Click &amp;#34;Save&amp;#34;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The VM now shows a pending configuration change: the new network interface only becomes active after a live migration (or a restart) of the VM.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/virtualization/images/network-overview/VM-Pending.png?width=640px" alt="VM Pending Configuration"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In the action menu you can select &amp;#34;Migrate&amp;#34; &amp;gt; &amp;#34;Compute&amp;#34; to start the migration process. After a while the new NIC will be active.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/virtualization/images/network-overview/VM-migrated.png?width=640px" alt="VM Migrated"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Via the CLI:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Here is a VM connected to the Linux Bridge NAD we created in Part 2:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: helloworld-vm
namespace: my-vm-namespace
spec:
template:
spec:
domain:
devices:
interfaces:
- name: default
masquerade: {} &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
- name: bridge-network
bridge: {} &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
networks:
- name: default
pod: {}
- name: bridge-network &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
multus:
networkName: bridge-network &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The standard Pod Network&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Our new connection&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The network name; it must match the interface name above&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The name of the NetworkAttachmentDefinition to attach&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
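Once the VM is running, you can confirm the additional interface in the VirtualMachineInstance status (a sketch; the jsonpath simply prints the interface names):

```shell
oc get vmi helloworld-vm -n my-vm-namespace \
  -o jsonpath='{.status.interfaces[*].name}'
```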
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>GitOps Catalog</title><link>https://blog.stderr.at/whats-new/2025-12-23-gitops-catalog/</link><pubDate>Mon, 22 Dec 2025 00:00:00 +0000</pubDate><guid>https://blog.stderr.at/whats-new/2025-12-23-gitops-catalog/</guid><description>&lt;div class="paragraph"&gt;
&lt;p&gt;The &lt;a href="https://blog.stderr.at/gitops-catalog/"&gt;GitOps Catalog&lt;/a&gt; page provides an interactive visualization of all available ArgoCD applications from the openshift-clusterconfig-gitops repository. Check out the page &lt;strong&gt;&lt;a href="https://blog.stderr.at/gitops-catalog/"&gt;GitOps Catalog&lt;/a&gt;&lt;/strong&gt; for more details.&lt;/p&gt;
&lt;/div&gt;</description></item><item><title>The Guide to OpenBao - Authentication Methods - Part 7</title><link>https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-03-09-openbao-part-7-authentication-methods/</link><pubDate>Mon, 09 Mar 2026 00:00:00 +0000</pubDate><guid>https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-03-09-openbao-part-7-authentication-methods/</guid><description>&lt;div class="paragraph"&gt;
&lt;p&gt;With OpenBao deployed and running, the next critical step is configuring authentication. Ultimately, you want to limit access to authorized people and workloads only. This article covers two common authentication methods: Kubernetes for pods and LDAP for enterprise directories (in a simplified example). There are many more methods, but we cannot cover them all in this article.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_introduction"&gt;Introduction&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Authentication in OpenBao verifies the identity of clients before granting access to secrets. OpenBao supports multiple authentication methods, each suited for different use cases. Among others it supports:&lt;/p&gt;
&lt;/div&gt;
&lt;table class="tableblock frame-all grid-all stretch"&gt;
&lt;colgroup&gt;
&lt;col style="width: 20%;"/&gt;
&lt;col style="width: 40%;"/&gt;
&lt;col style="width: 40%;"/&gt;
&lt;/colgroup&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th class="tableblock halign-left valign-top"&gt;Method&lt;/th&gt;
&lt;th class="tableblock halign-left valign-top"&gt;Best For&lt;/th&gt;
&lt;th class="tableblock halign-left valign-top"&gt;Key Features&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Kubernetes&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Pods running in K8s/OpenShift&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Service account tokens, automatic&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;OIDC&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Human users, SSO&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;OpenShift OAuth, Keycloak, Azure AD&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;LDAP&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Enterprise directories&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Active Directory, OpenLDAP&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;AppRole&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;CI/CD, automation&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Role ID + Secret ID&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Token&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Direct access, bootstrap&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Simple but less secure&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The full documentation of the authentication methods and the list of available methods (for example Radius) can be found here: &lt;a href="https://openbao.org/docs/auth/" target="_blank" rel="noopener"&gt;OpenBao Authentication Methods&lt;/a&gt;.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_authentication_workflow"&gt;Authentication Workflow&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The authentication workflow in OpenBao from a user perspective is as follows:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;A client wants to authenticate to OpenBao and provides credentials.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;OpenBao validates the credentials against the authentication method. For example: LDAP&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If validation succeeds, the authentication backend (e.g. LDAP) returns the required information about the client, such as its group memberships.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;OpenBao maps this result to the policies associated with the authentication method.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;OpenBao generates a token that is associated with the policies and returns it to the client.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The client can then use this token for further operations.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
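The steps above, sketched with the CLI (this assumes an LDAP method is already enabled at the default path auth/ldap; the username is a placeholder):

```shell
# Steps 1-3: provide credentials, which OpenBao validates against LDAP
bao login -method=ldap username=alice
# Steps 4-6: the returned token carries the mapped policies; inspect it
bao token lookup
```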
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_prerequisites_for_authentication_methods"&gt;Prerequisites for Authentication Methods&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Before configuring authentication:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;OpenBao is deployed and unsealed (Parts 2-6)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You have admin access to OpenBao (root token)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Access to the required authentication backend:&lt;/p&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;For Kubernetes auth: Access to the OpenShift cluster&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;For LDAP: Access to the LDAP server&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;etc.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Set up your environment:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You can set up your environment by using either the &lt;strong&gt;CLI&lt;/strong&gt; or the &lt;strong&gt;UI&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_using_the_cli"&gt;Using the CLI&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You can use the CLI to interact with OpenBao. In this example we forward the OpenBao service port to the local machine and set the environment variable &lt;code&gt;BAO_ADDR&lt;/code&gt; to the local address. With &lt;strong&gt;bao login&lt;/strong&gt; and the root token you can authenticate against OpenBao with full access.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Port forward (if needed)
oc port-forward svc/openbao 8200:8200 -n openbao &amp;amp;
# Set environment
export BAO_ADDR=&amp;#39;http://127.0.0.1:8200&amp;#39;
# You may need the SSL CA too.
# export BAO_CACERT=&amp;#34;$PWD/openbao-ca.crt&amp;#34;
# Login with root token
bao login
# List all authentication methods
bao auth list&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The last command returns a list of the available authentication methods. Currently there is only one authentication method enabled: &lt;strong&gt;token&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;Path Type Accessor Description Version
---- ---- -------- ----------- -------
token/ token auth_token_1020fa7b token based credentials n/a&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_using_the_ui"&gt;Using the UI&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The UI, if enabled, is accessible via the OpenShift Route (if running on OpenShift and if it was created in &lt;a href="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-13-openbao-part-3-openshift-deployment/"&gt;Part 3&lt;/a&gt;). The password/token is the &lt;strong&gt;root token&lt;/strong&gt; from the initialization.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/images/part7_openbao_ui_login_form.png" alt="OpenBao UI Login Form"/&gt;
&lt;/div&gt;
&lt;div class="title"&gt;Figure 1. OpenBao UI Login Form&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The UI has the advantage of offering more visibility into the configuration and the available options.
For example, only the &lt;strong&gt;token&lt;/strong&gt; authentication method is enabled at the beginning; it is required for the root token authentication.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/images/part7_openbao_ui_authentication_methods.png?width=840px" alt="OpenBao UI Authentication Methods"/&gt;
&lt;/div&gt;
&lt;div class="title"&gt;Figure 2. OpenBao UI Authentication Methods&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To see which additional authentication methods are available, click the &lt;strong&gt;Enable new method&lt;/strong&gt; link and enable the ones you need.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/images/part7_openbao_ui_configuring_new_authentication_methods.png" alt="OpenBao UI Configuring New Authentication Method"/&gt;
&lt;/div&gt;
&lt;div class="title"&gt;Figure 3. OpenBao UI Configuring New Authentication Method&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_understanding_policies"&gt;Understanding Policies&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Before you can actually use an authentication method, it is important to understand what &lt;strong&gt;policies&lt;/strong&gt; do. A policy is a way to declaratively define what (which paths) authenticated users can access or not. All paths are &lt;strong&gt;denied by default&lt;/strong&gt;. A policy is mapped to an authentication method. For example, when we look at LDAP, we could configure something like this:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;em&gt;Every member of group &amp;#34;dev&amp;#34; is mapped to a policy named &amp;#34;dev-policy&amp;#34;. The policy then allows the user to read the secrets under the path &amp;#34;secret/data/dev/*&amp;#34;.&lt;/em&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_basic_policy_structure"&gt;Basic Policy Structure&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;A policy is a set of &lt;strong&gt;path rules&lt;/strong&gt;. Each rule says: “for this path (or path prefix), the bearer of this policy may use these &lt;strong&gt;capabilities&lt;/strong&gt;.” Paths are tied to OpenBao’s internal API: for example, secrets in the KV v2 engine live under &lt;code&gt;secret/data/&amp;lt;mount-path&amp;gt;/&amp;lt;key&amp;gt;&lt;/code&gt;, so a path like &lt;code&gt;secret/data/myapp/*&lt;/code&gt; means “any key under the &lt;code&gt;myapp&lt;/code&gt; prefix in the KV v2 store mounted at &lt;code&gt;secret/&lt;/code&gt;.”&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The example below does two things that are typical for an application policy:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Secrets access:&lt;/strong&gt; It allows &lt;strong&gt;read&lt;/strong&gt; and &lt;strong&gt;list&lt;/strong&gt; on &lt;code&gt;secret/data/myapp/*&lt;/code&gt;.&lt;/p&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;read&lt;/strong&gt; lets the client fetch the value of a secret at a path (e.g. &lt;code&gt;secret/data/myapp/database&lt;/code&gt;).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;list&lt;/strong&gt; is required to list keys under a path (e.g. to discover &lt;code&gt;myapp/database&lt;/code&gt;, &lt;code&gt;myapp/api-key&lt;/code&gt;); without it, the client would need to know every path in advance.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Token renewal:&lt;/strong&gt; It allows &lt;strong&gt;update&lt;/strong&gt; on &lt;code&gt;auth/token/renew-self&lt;/code&gt;. Tokens often have a limited lifetime; this path lets the holder extend their own token without needing the root token or extra permissions. Including it avoids applications losing access when the token expires.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Policies can be written in JSON or HCL.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-hcl hljs" data-lang="hcl"&gt;# Example policy: myapp-read-policy.hcl
# Allow reading secrets from specific path (KV v2 engine at mount &amp;#34;secret/&amp;#34;)
path &amp;#34;secret/data/myapp/*&amp;#34; {
capabilities = [&amp;#34;read&amp;#34;, &amp;#34;list&amp;#34;]
}
# Allow renewing own token so the app can extend its token before expiry
path &amp;#34;auth/token/renew-self&amp;#34; {
capabilities = [&amp;#34;update&amp;#34;]
}&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The path &lt;code&gt;secret/data/myapp/*&lt;/code&gt; assumes you have a KV v2 secrets engine mounted at &lt;code&gt;secret/&lt;/code&gt; and will store application secrets under keys like &lt;code&gt;myapp/database&lt;/code&gt;, &lt;code&gt;myapp/api-key&lt;/code&gt;, etc. Adjust the mount path and prefix to match your setup.&lt;/p&gt;
&lt;/div&gt;
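Once the policy file is ready, registering it is a single CLI call (the policy name myapp-read is our choice here):

```shell
# Upload the policy under the name "myapp-read"
bao policy write myapp-read myapp-read-policy.hcl
# Verify what was stored
bao policy read myapp-read
```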
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The use of globs (*) may result in surprising or unexpected behavior. Use them with caution.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_policy_capabilities"&gt;Policy Capabilities&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Each path defined in a policy must have at least one capability. Capabilities control which operations are allowed or denied on that path. The following capabilities are available:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;create&lt;/code&gt; - Create new data at the given path&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;read&lt;/code&gt; - Read data&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;update&lt;/code&gt; - Update existing data&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;patch&lt;/code&gt; - Patch existing data (Partial update)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;delete&lt;/code&gt; - Delete data&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;list&lt;/code&gt; - List keys at a path&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;scan&lt;/code&gt; - Scan or browse the path for keys&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;sudo&lt;/code&gt; - Access paths that are &lt;strong&gt;root-protected&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;deny&lt;/code&gt; - Explicitly deny (overrides other grants)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
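&lt;div class="paragraph"&gt;
&lt;p&gt;To see which of these capabilities a token actually has on a given path, you can ask OpenBao directly. A minimal sketch, assuming you are logged in; the path is just an example:&lt;/p&gt;
&lt;/div&gt;

```shell
# Capabilities of the current token on an example path
bao token capabilities secret/data/myapp/database
# Capabilities of a specific token: pass the token before the path
# bao token capabilities TOKEN secret/data/myapp/database
```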
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_default_policies"&gt;Default Policies&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;When OpenBao is deployed for the first time, no authentication methods are enabled except &lt;strong&gt;token&lt;/strong&gt;, which is required for root access itself. To allow the administrator to log in with the root token, there is a built-in policy called &lt;strong&gt;root&lt;/strong&gt;. It is a catch-all policy that allows the administrator to access everything.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In addition to the root policy, there is a second built-in policy called &lt;strong&gt;default&lt;/strong&gt;. It is attached to all authenticated tokens and allows them, for example, to look up their own properties and to renew or revoke their own token.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Let’s look at the default policies in the CLI:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
A quick reminder of how to log in to OpenBao using the CLI when it runs on Kubernetes: first open a port-forward, then set the variable &lt;code&gt;BAO_ADDR&lt;/code&gt; to the local address (for HTTPS you may also need to point &lt;code&gt;BAO_CACERT&lt;/code&gt; at the CA certificate), and finally log in with the root token.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;You need to have the CA certificate in the current directory, which you might fetch with:&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get secret openbao-ca-secret -n openbao -o jsonpath=&amp;#39;{.data.ca\.crt}&amp;#39; | base64 -d &amp;gt; openbao-ca.crt&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Port forward, set the environment variables and login with the root token:&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Port forward (if needed)
oc port-forward svc/openbao 8200:8200 -n openbao &amp;amp;
# Set environment
export BAO_ADDR=&amp;#39;https://127.0.0.1:8200&amp;#39;
# For HTTPS, point BAO_CACERT at the CA certificate fetched above
export BAO_CACERT=&amp;#34;$PWD/openbao-ca.crt&amp;#34;
# Login with root token
bao login&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now to list and fetch the current policies you can use the following commands:&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# List policies
bao policy list
# Returns something like this:
#default
#root
# Read a policy
bao policy read default&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The second command returns the policy content:&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Allow tokens to look up their own properties
path &amp;#34;auth/token/lookup-self&amp;#34; {
capabilities = [&amp;#34;read&amp;#34;]
}
# Allow tokens to renew themselves
path &amp;#34;auth/token/renew-self&amp;#34; {
capabilities = [&amp;#34;update&amp;#34;]
}
# Allow tokens to revoke themselves
path &amp;#34;auth/token/revoke-self&amp;#34; {
capabilities = [&amp;#34;update&amp;#34;]
}
# Allow a token to look up its own capabilities on a path
path &amp;#34;sys/capabilities-self&amp;#34; {
[...]&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Again you will see the same policies in the UI. You can also manage them or create new ones using the UI.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/images/part7_openbao_ui_list_policies.png" alt="OpenBao UI List Policies"/&gt;
&lt;/div&gt;
&lt;div class="title"&gt;Figure 4. OpenBao UI Listing Policies&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_creating_policies_using_the_cli"&gt;Creating Policies using the CLI&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Creating a policy using the CLI is straightforward. You can create a policy from a file or inline; the syntax is either JSON or HCL.
The example below creates a policy called &amp;#34;myapp-read&amp;#34; from a file called &amp;#34;myapp-read-policy.hcl&amp;#34;. It allows reading and listing secrets under the path &amp;#34;secret/data/myapp/*&amp;#34;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Create a policy from file
bao policy write myapp-read myapp-read-policy.hcl
# Or inline
bao policy write myapp-read - &amp;lt;&amp;lt;EOF
path &amp;#34;secret/data/myapp/*&amp;#34; {
  capabilities = [&amp;#34;read&amp;#34;, &amp;#34;list&amp;#34;]
}
EOF
# List policies
bao policy list
# Read a policy
bao policy read myapp-read&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_kubernetes_authentication_method"&gt;Kubernetes Authentication Method&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;As we work a lot with OpenShift/Kubernetes, this is the first authentication method we will try.
The Kubernetes auth method authenticates clients against OpenBao using Kubernetes service account tokens.
This makes it the natural choice for pods that need to access secrets.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Let us try a simple example.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_1_enable_kubernetes_auth"&gt;Step 1: Enable Kubernetes Auth&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Assuming you are still logged into OpenBao, you can enable the kubernetes auth method with the following command:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;bao auth enable --description=&amp;#34;Authentication method for Kubernetes/OpenShift cluster&amp;#34; kubernetes&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This enables the authentication method at the default path &lt;code&gt;auth/kubernetes&lt;/code&gt; and sets a description. The method still needs to be configured.&lt;/p&gt;
&lt;/div&gt;
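&lt;div class="paragraph"&gt;
&lt;p&gt;You can verify that the method is now active by listing the enabled auth methods; a quick sketch, assuming you are still logged in:&lt;/p&gt;
&lt;/div&gt;

```shell
# List enabled auth methods; kubernetes/ should now appear next to token/
bao auth list
```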
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_2_configure_kubernetes_auth"&gt;Step 2: Configure Kubernetes Auth&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To configure the kubernetes auth method, you need to get the Kubernetes API server address. You can get it with the following command (assuming you are logged into the OpenShift cluster):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The command &lt;code&gt;oc&lt;/code&gt; can be replaced by &lt;code&gt;kubectl&lt;/code&gt; if you are not using an OpenShift cluster.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Get the Kubernetes API server address
KUBERNETES_HOST=$(oc whoami --show-server) &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
# Configure the auth method
bao write auth/kubernetes/config \ &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
  kubernetes_host=&amp;#34;$KUBERNETES_HOST&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Get the Kubernetes API server address. This returns something like &lt;a href="https://api.cluster.example.com:6443" class="bare"&gt;https://api.cluster.example.com:6443&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Configure the auth method with the Kubernetes API server address.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
When running inside Kubernetes, OpenBao automatically uses the pod’s service account to communicate with the Kubernetes API.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
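&lt;div class="paragraph"&gt;
&lt;p&gt;To confirm the configuration was written as expected, you can read it back; a sketch, assuming your token has read access on the auth config path:&lt;/p&gt;
&lt;/div&gt;

```shell
# Read back the Kubernetes auth configuration
bao read auth/kubernetes/config
```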
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_3_create_a_policy_for_the_application"&gt;Step 3: Create a Policy for the Application&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Now we need to associate a policy with the authentication method. We could assign the default policy, but we want to grant the application access to the paths &lt;strong&gt;secret/data/expense/database&lt;/strong&gt; and &lt;strong&gt;secret/data/expense/config&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Create policy for an example application
bao policy write expense-app - &amp;lt;&amp;lt;EOF
# Read database credentials
path &amp;#34;secret/data/expense/database&amp;#34; {
  capabilities = [&amp;#34;read&amp;#34;]
}

# Read application config
path &amp;#34;secret/data/expense/config&amp;#34; {
  capabilities = [&amp;#34;read&amp;#34;, &amp;#34;list&amp;#34;]
}

# Allow token renewal
path &amp;#34;auth/token/renew-self&amp;#34; {
  capabilities = [&amp;#34;update&amp;#34;]
}
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_4_create_a_kubernetes_auth_role"&gt;Step 4: Create a Kubernetes Auth Role&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;As the next step we create a role that maps the policy to the authentication method. The role restricts access to the service account &lt;strong&gt;expense-app&lt;/strong&gt; in the namespace &lt;strong&gt;expense&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Create a role that maps Kubernetes service accounts to OpenBao policies
bao write auth/kubernetes/role/expense-app \
  bound_service_account_names=expense-app \ &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
  bound_service_account_namespaces=expense \ &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
  policies=expense-app \ &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
  ttl=1h \ &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
  max_ttl=24h &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;bound_service_account_names&lt;/code&gt;: K8s service account name(s)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;bound_service_account_namespaces&lt;/code&gt;: K8s namespace(s)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;policies&lt;/code&gt;: OpenBao policies to attach&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ttl&lt;/code&gt;: Token time-to-live&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;max_ttl&lt;/code&gt;: Maximum token lifetime&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
All of the configuration above, created using the CLI, can also be done using the UI of OpenBao.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
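&lt;div class="paragraph"&gt;
&lt;p&gt;It is worth reading the role back to verify the service account bindings and TTLs before moving on; a sketch, assuming you are still logged in:&lt;/p&gt;
&lt;/div&gt;

```shell
# Inspect the role to verify the bound service account, namespace, and TTLs
bao read auth/kubernetes/role/expense-app
```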
&lt;div class="sect3"&gt;
&lt;h4 id="_ensure_the_kv_v2_secrets_engine_is_mounted_at_secret"&gt;Ensure the KV v2 secrets engine is mounted at &lt;code&gt;secret/&lt;/code&gt;&lt;/h4&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
We will discuss secret engines in more detail in the next part of the guide.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The policy and demo commands in this guide use the path &lt;code&gt;secret/data/expense/database&lt;/code&gt;, which assumes a KV v2 secrets engine mounted at &lt;code&gt;secret/&lt;/code&gt;. To check and enable it:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Check existing mounts:&lt;/strong&gt; List enabled secrets engines and look for &lt;code&gt;secret/&lt;/code&gt; with type &lt;code&gt;kv&lt;/code&gt; (version 2):&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# List all secrets engine mounts (requires a token with read on sys/mounts)
bao secrets list&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Look for an entry like &lt;code&gt;secret/&lt;/code&gt; with type &lt;code&gt;kv&lt;/code&gt; or &lt;code&gt;kv-v2&lt;/code&gt;. If you see &lt;code&gt;secret/&lt;/code&gt; and it is KV v2, you can skip to Step 5.&lt;/p&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Enable KV v2 at &lt;code&gt;secret/&lt;/code&gt; if missing:&lt;/strong&gt; If &lt;code&gt;secret/&lt;/code&gt; is not listed, or you want to use a fresh mount, enable the KV v2 engine at the path &lt;code&gt;secret&lt;/code&gt;:&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Enable KV v2 secrets engine at path &amp;#34;secret&amp;#34; (requires root or policy with sys/mounts capability)
bao secrets enable -path=secret -version=2 kv&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;If &lt;code&gt;secret/&lt;/code&gt; already exists as KV v1, you cannot change it in place to v2; use a different path (e.g. &lt;code&gt;secretv2/&lt;/code&gt;) and update the policy and demo paths accordingly (&lt;code&gt;secretv2/data/expense/database&lt;/code&gt;, etc.).&lt;/p&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_5_store_the_database_secret_in_openbao_one_time"&gt;Step 5: Store the database secret in OpenBao (one-time)&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Before the application can read it, store the database password (and any other keys) at the path the policy allows. Run this once from a machine that has OpenBao access (i.e. the &lt;code&gt;bao&lt;/code&gt; CLI and a token with write access).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock important"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-important" title="Important"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
You must use a token that has &lt;strong&gt;create&lt;/strong&gt; and &lt;strong&gt;update&lt;/strong&gt; capability on the KV path. The &lt;code&gt;expense-app&lt;/code&gt; policy is read-only (for the application). Use the &lt;strong&gt;root token&lt;/strong&gt; from OpenBao initialisation, or log in with &lt;code&gt;bao login&lt;/code&gt; and use a token that has a policy granting &lt;code&gt;create&lt;/code&gt; and &lt;code&gt;update&lt;/code&gt; on &lt;code&gt;secret/data/expense/*&lt;/code&gt;. If you use a read-only token (e.g. one that only has the expense-app policy), you will get a 403 &amp;#34;preflight capability check returned 403&amp;#34;.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Store the database secret at secret/data/expense/database (KV v2 path)
bao kv put secret/expense/database password=&amp;#34;my-super-secret-db-password&amp;#34; username=&amp;#34;expense_db_user&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This creates (or overwrites) the secret at &lt;code&gt;secret/data/expense/database&lt;/code&gt;. The policy grants the &lt;code&gt;expense-app&lt;/code&gt; role &lt;strong&gt;read&lt;/strong&gt; on this path only; the application cannot create or update it.&lt;/p&gt;
&lt;/div&gt;
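&lt;div class="paragraph"&gt;
&lt;p&gt;To double-check the secret landed where the policy expects it, read it back with the CLI (which, as noted, omits the &lt;code&gt;data/&lt;/code&gt; segment); a sketch:&lt;/p&gt;
&lt;/div&gt;

```shell
# Read the secret back (the CLI path omits the data/ segment)
bao kv get secret/expense/database
# Or fetch a single field
bao kv get -field=password secret/expense/database
```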
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_6_create_the_expense_namespace_and_demo_application"&gt;Step 6: Create the expense namespace and demo application&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Apply the following manifests to create the namespace, service account, optional long-lived token (Kubernetes 1.24+), and a demo deployment that authenticates to OpenBao and fetches the database secret. Adjust the OpenBao URL if your OpenBao is in another namespace or uses HTTPS (e.g. &lt;code&gt;&lt;a href="https://openbao.openbao.svc:8200" class="bare"&gt;https://openbao.openbao.svc:8200&lt;/a&gt;&lt;/code&gt;).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Remember: We have created a policy that allows the service account with the name &lt;strong&gt;expense-app&lt;/strong&gt; in the namespace &lt;strong&gt;expense&lt;/strong&gt; to read the secret at &lt;strong&gt;secret/data/expense/database&lt;/strong&gt;. If you want to use different paths or namespaces, you need to adjust the policy accordingly.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;# full-demo-expense-app.yaml
# 1. Namespace
apiVersion: v1
kind: Namespace &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
metadata:
name: expense
---
# 2. Service account used by the demo app to authenticate to OpenBao
apiVersion: v1
kind: ServiceAccount &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
metadata:
name: expense-app
namespace: expense
---
# 3. Long-lived token for Kubernetes 1.24+ (optional; allows pods to use the SA token)
apiVersion: v1
kind: Secret &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
metadata:
name: expense-app-token
namespace: expense
annotations:
kubernetes.io/service-account.name: expense-app
type: kubernetes.io/service-account-token
---
# 4. Demo deployment: authenticates to OpenBao and fetches secret/data/expense/database
apiVersion: apps/v1
kind: Deployment
metadata:
name: expense-demo
namespace: expense
labels:
app: expense-demo
spec:
replicas: 1
selector:
matchLabels:
app: expense-demo
template:
metadata:
labels:
app: expense-demo
spec:
serviceAccountName: expense-app
containers:
- name: demo
image: registry.access.redhat.com/ubi9/ubi-minimal:latest
command:
- /bin/sh
- -c
- |
echo &amp;#34;=== Installing curl ===&amp;#34;
microdnf install -y curl 2&amp;gt;/dev/null || true
echo &amp;#34;=== Fetching database secret from OpenBao ===&amp;#34;
JWT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
OPENBAO_URL=&amp;#34;${OPENBAO_URL:-http://openbao.openbao:8200}&amp;#34;
RESP=$(curl -s -k -X POST -H &amp;#34;Content-Type: application/json&amp;#34; \ &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
-d &amp;#34;{\&amp;#34;jwt\&amp;#34;: \&amp;#34;$JWT\&amp;#34;, \&amp;#34;role\&amp;#34;: \&amp;#34;expense-app\&amp;#34;}&amp;#34; \
&amp;#34;$OPENBAO_URL/v1/auth/kubernetes/login&amp;#34;)
TOKEN=$(echo &amp;#34;$RESP&amp;#34; | sed -n &amp;#39;s/.*&amp;#34;client_token&amp;#34;:&amp;#34;\([^&amp;#34;]*\)&amp;#34;.*/\1/p&amp;#39;)
if [ -z &amp;#34;$TOKEN&amp;#34; ]; then
echo &amp;#34;ERROR: Failed to get OpenBao token. Check OpenBao URL ($OPENBAO_URL), role expense-app, and network.&amp;#34;
echo &amp;#34;Response: $RESP&amp;#34;
exit 1
fi
echo &amp;#34;OpenBao token obtained. Reading secret/data/expense/database ...&amp;#34;
SECRET=$(curl -s -k -H &amp;#34;X-Vault-Token: $TOKEN&amp;#34; &amp;#34;$OPENBAO_URL/v1/secret/data/expense/database&amp;#34;) &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
echo &amp;#34;$SECRET&amp;#34;
PASSWORD=$(echo &amp;#34;$SECRET&amp;#34; | sed -n &amp;#39;s/.*&amp;#34;password&amp;#34;:&amp;#34;\([^&amp;#34;]*\)&amp;#34;.*/\1/p&amp;#39;)
echo &amp;#34;Database password (data.data.password): ${PASSWORD:-&amp;lt;not set&amp;gt;}&amp;#34;
echo &amp;#34;=== Demo complete. Pod stays running; exec in to run more curl commands. ===&amp;#34;
exec sleep infinity
env:
# Override if OpenBao is in another namespace or uses HTTPS
- name: OPENBAO_URL
value: &amp;#34;https://openbao.openbao:8200&amp;#34;
resources:
requests:
memory: &amp;#34;64Mi&amp;#34;
cpu: &amp;#34;10m&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Create the namespace&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Create the service account&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Create the secret (long-lived token for Kubernetes 1.24+)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Authenticate to OpenBao and get a client token (using the Kubernetes JWT)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Fetch the database secret&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The examples above use &lt;strong&gt;curl -k&lt;/strong&gt; to bypass certificate verification. For production, you should adjust this and mount the CA certificate in the pod (see the note earlier in this section on using the &lt;code&gt;openbao-ca&lt;/code&gt; Secret).
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Apply the manifests and wait for the pod to be ready:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc apply -f full-demo-expense-app.yaml
# Wait for the demo pod to be running
oc -n expense rollout status deployment/expense-demo
# View the log: you should see the OpenBao token response and the fetched secret (including the password)
oc -n expense logs -l app=expense-demo -f&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The log output shows the application authenticating with the Kubernetes JWT, receiving an OpenBao token, and reading the secret at &lt;code&gt;secret/data/expense/database&lt;/code&gt;. The field &lt;code&gt;data.data.password&lt;/code&gt; is the database password you stored in Step 5.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_7_test_kubernetes_authentication_manually"&gt;Step 7: Test Kubernetes authentication manually&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;From any pod that uses the same service account (e.g. the demo pod above), you can run the same steps by hand:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# From within a pod using the service account (e.g. oc exec -it deployment/expense-demo -n expense -- /bin/sh)
JWT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# 1. Authenticate to OpenBao and get a client token (sed works on UBI minimal; use jq -r .auth.client_token if jq is available)
LOGIN=$(curl -s -k --request POST \
  --data &amp;#34;{\&amp;#34;jwt\&amp;#34;: \&amp;#34;$JWT\&amp;#34;, \&amp;#34;role\&amp;#34;: \&amp;#34;expense-app\&amp;#34;}&amp;#34; \
  https://openbao.openbao:8200/v1/auth/kubernetes/login)
TOKEN=$(echo &amp;#34;$LOGIN&amp;#34; | sed -n &amp;#39;s/.*&amp;#34;client_token&amp;#34;:&amp;#34;\([^&amp;#34;]*\)&amp;#34;.*/\1/p&amp;#39;)
# 2. Fetch the database secret (same path the policy allows)
curl -s -k -H &amp;#34;X-Vault-Token: $TOKEN&amp;#34; https://openbao.openbao:8200/v1/secret/data/expense/database&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
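&lt;div class="paragraph"&gt;
&lt;p&gt;The &lt;code&gt;sed&lt;/code&gt; expression used above to pull values out of the JSON responses deserves a closer look: it assumes the compact JSON OpenBao emits, with no whitespace around the colon. A self-contained sketch with a hypothetical sample response:&lt;/p&gt;
&lt;/div&gt;

```shell
# Hypothetical compact login response, shaped like the OpenBao login API output
RESP='{"auth":{"client_token":"s.example123","policies":["default","expense-app"]}}'
# Capture everything between the quotes that follow "client_token":
TOKEN=$(echo "$RESP" | sed -n 's/.*"client_token":"\([^"]*\)".*/\1/p')
echo "$TOKEN"
# With jq installed, a more robust equivalent is: jq -r .auth.client_token
```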
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_summary_kubernetes_auth_method"&gt;Summary: Kubernetes auth method&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Quite a few steps were involved in configuring the Kubernetes auth method. Let us summarize them:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;In OpenBao:&lt;/strong&gt; Enabled the Kubernetes auth method, configured it with the cluster API server address, created the &lt;code&gt;expense-app&lt;/code&gt; policy (read on &lt;code&gt;secret/data/expense/database&lt;/code&gt; and &lt;code&gt;secret/data/expense/config&lt;/code&gt;, update on &lt;code&gt;auth/token/renew-self&lt;/code&gt;), and created the &lt;code&gt;expense-app&lt;/code&gt; role that binds the Kubernetes service account &lt;code&gt;expense-app&lt;/code&gt; in namespace &lt;code&gt;expense&lt;/code&gt; to that policy.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Secrets engine:&lt;/strong&gt; Ensured the KV v2 engine is mounted at &lt;code&gt;secret/&lt;/code&gt; and stored the database secret at &lt;code&gt;secret/expense/database&lt;/code&gt; (one-time, using the root token).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;In the cluster:&lt;/strong&gt; Created the &lt;code&gt;expense&lt;/code&gt; namespace, the &lt;code&gt;expense-app&lt;/code&gt; service account, an optional long-lived token Secret (for Kubernetes 1.24+), and the &lt;code&gt;expense-demo&lt;/code&gt; deployment that uses that service account to authenticate to OpenBao and fetch the secret. For HTTPS, created the &lt;code&gt;openbao-ca&lt;/code&gt; Secret in &lt;code&gt;expense&lt;/code&gt; with the OpenBao server CA.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Verification:&lt;/strong&gt; Applied the manifests, confirmed the demo pod starts and logs show a successful login and the fetched secret (including the database password). Optionally ran the same login and read steps manually via &lt;code&gt;oc exec&lt;/code&gt; into the pod.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;hr/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_ldap_authentication_method"&gt;LDAP Authentication Method&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;LDAP auth integrates with enterprise directories like Active Directory.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
This is a very simplified example. As a prerequisite you need to have an LDAP server and a service account with the necessary permissions to read the users and groups. This is not part of this guide.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_1_enable_ldap_auth"&gt;Step 1: Enable LDAP Auth&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To enable LDAP authentication, run the following command:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;bao auth enable ldap&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_2_configure_ldap_active_directory_example"&gt;Step 2: Configure LDAP (Active Directory Example)&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In this step you configure the LDAP server address, the user and group search settings, and the bind credentials.
The example below targets an Active Directory: the server URL is &lt;strong&gt;ldaps://ad.example.com:636&lt;/strong&gt;, the user attribute is &lt;strong&gt;sAMAccountName&lt;/strong&gt;, users are searched under &lt;strong&gt;OU=Users,DC=example,DC=com&lt;/strong&gt; and groups under &lt;strong&gt;OU=Groups,DC=example,DC=com&lt;/strong&gt;, the group attribute is &lt;strong&gt;cn&lt;/strong&gt;, and the group filter &lt;strong&gt;(&amp;amp;(objectClass=group)(member:1.2.840.113556.1.4.1941:={{.UserDN}}))&lt;/strong&gt; resolves nested group membership. OpenBao binds as &lt;strong&gt;CN=openbao-svc,OU=ServiceAccounts,DC=example,DC=com&lt;/strong&gt; with the given bind password to perform these searches.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;bao write auth/ldap/config \
url=&amp;#34;ldaps://ad.example.com:636&amp;#34; \
userattr=&amp;#34;sAMAccountName&amp;#34; \
userdn=&amp;#34;OU=Users,DC=example,DC=com&amp;#34; \
groupdn=&amp;#34;OU=Groups,DC=example,DC=com&amp;#34; \
groupattr=&amp;#34;cn&amp;#34; \
groupfilter=&amp;#34;(&amp;amp;(objectClass=group)(member:1.2.840.113556.1.4.1941:={{.UserDN}}))&amp;#34; \
binddn=&amp;#34;CN=openbao-svc,OU=ServiceAccounts,DC=example,DC=com&amp;#34; \
bindpass=&amp;#34;service-account-password&amp;#34; \
certificate=@/path/to/ldap-ca.pem \
insecure_tls=false \
starttls=false&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
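&lt;div class="paragraph"&gt;
&lt;p&gt;You can read the configuration back to verify it; the bind password is write-only and is not included in the output:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Verify the LDAP configuration (bindpass is not returned)
bao read auth/ldap/config&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;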
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_3_create_the_policies_used_by_the_group_mapping"&gt;Step 3: Create the policies used by the group mapping&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Before mapping LDAP groups to policy names, create those policies. Below are example definitions: &lt;strong&gt;admin-policy&lt;/strong&gt; grants read/list/update on secrets (e.g. for operators), and &lt;strong&gt;user-policy&lt;/strong&gt; grants read-only access to a limited path.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;strong&gt;admin-policy&lt;/strong&gt; — for members of &lt;strong&gt;openbao-admins&lt;/strong&gt; (e.g. operators who may read and update secrets)&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;strong&gt;user-policy&lt;/strong&gt; — for members of &lt;strong&gt;openbao-users&lt;/strong&gt; (e.g. developers with read-only access to a subset of secrets)&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Create them with the CLI (run before the group mapping in Step 4):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Create admin-policy (e.g. from file admin-policy.hcl)
bao policy write admin-policy - &amp;lt;&amp;lt;EOF
path &amp;#34;secret/data/*&amp;#34; {
capabilities = [&amp;#34;read&amp;#34;, &amp;#34;list&amp;#34;, &amp;#34;update&amp;#34;, &amp;#34;create&amp;#34;, &amp;#34;delete&amp;#34;]
}
path &amp;#34;secret/metadata/*&amp;#34; {
capabilities = [&amp;#34;read&amp;#34;, &amp;#34;list&amp;#34;]
}
path &amp;#34;auth/token/renew-self&amp;#34; {
capabilities = [&amp;#34;update&amp;#34;]
}
EOF
# Create user-policy
bao policy write user-policy - &amp;lt;&amp;lt;EOF
path &amp;#34;secret/data/myapp/*&amp;#34; {
capabilities = [&amp;#34;read&amp;#34;, &amp;#34;list&amp;#34;]
}
path &amp;#34;auth/token/renew-self&amp;#34; {
capabilities = [&amp;#34;update&amp;#34;]
}
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Adjust paths (e.g. &lt;code&gt;secret/data/myapp/&lt;/code&gt;) to match your KV v2 mount and the paths you want each role to access. The &lt;strong&gt;default&lt;/strong&gt; policy is built-in and grants token lookup/renew/revoke-self. Attaching it in addition to &lt;strong&gt;admin-policy&lt;/strong&gt; or &lt;strong&gt;user-policy&lt;/strong&gt; is a typical pattern.&lt;/p&gt;
&lt;/div&gt;
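&lt;div class="paragraph"&gt;
&lt;p&gt;To verify that both policies exist and contain what you expect:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# List all policies and inspect the two new ones
bao policy list
bao policy read admin-policy
bao policy read user-policy&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;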
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_4_map_ldap_groups_to_policies"&gt;Step 4: Map LDAP groups to policies&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Now map the LDAP groups to the policy names. The example below maps the LDAP group &lt;strong&gt;openbao-admins&lt;/strong&gt; to the policies &lt;strong&gt;admin-policy&lt;/strong&gt; and &lt;strong&gt;default&lt;/strong&gt; and the LDAP group &lt;strong&gt;openbao-users&lt;/strong&gt; to the policies &lt;strong&gt;user-policy&lt;/strong&gt; and &lt;strong&gt;default&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Map an LDAP group to policies
bao write auth/ldap/groups/openbao-admins \
policies=&amp;#34;admin-policy,default&amp;#34;
bao write auth/ldap/groups/openbao-users \
policies=&amp;#34;user-policy,default&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
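&lt;div class="paragraph"&gt;
&lt;p&gt;You can read the mappings back to confirm they were stored correctly:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Verify the group-to-policy mappings
bao read auth/ldap/groups/openbao-admins
bao read auth/ldap/groups/openbao-users&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;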
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_5_test_ldap_authentication"&gt;Step 5: Test LDAP authentication&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Assuming you have a user &lt;strong&gt;jdoe&lt;/strong&gt; in the LDAP group &lt;strong&gt;openbao-users&lt;/strong&gt;, you can test the LDAP authentication with the following commands:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Login with LDAP credentials
bao login -method=ldap username=jdoe
# Or via API
curl --request POST \
--data &amp;#39;{&amp;#34;password&amp;#34;: &amp;#34;user-password&amp;#34;}&amp;#39; \
$BAO_ADDR/v1/auth/ldap/login/jdoe&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_what_jdoe_can_and_cannot_do_user_policy"&gt;What jdoe can and cannot do (user-policy)&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;jdoe holds &lt;strong&gt;user-policy&lt;/strong&gt; (and &lt;strong&gt;default&lt;/strong&gt;), so their token has only the capabilities defined there.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;What jdoe can do:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Get a secret under &lt;code&gt;secret/data/myapp/*&lt;/code&gt;: after logging in, jdoe can read and list secrets in that path. For example:&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# After bao login -method=ldap username=jdoe (and entering the password)
bao kv get secret/myapp/database
# Or via API (use the client_token from the login response)
curl -s -H &amp;#34;X-Vault-Token: $TOKEN&amp;#34; $BAO_ADDR/v1/secret/data/myapp/database&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;List keys under &lt;code&gt;secret/data/myapp/&lt;/code&gt; to discover available secrets.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Renew or revoke their own token (from the &lt;strong&gt;default&lt;/strong&gt; policy) and look up their own token properties.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;What jdoe cannot do:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Read secrets outside &lt;code&gt;myapp&lt;/code&gt; — e.g. &lt;code&gt;secret/data/expense/database&lt;/code&gt; or &lt;code&gt;secret/data/other-app/*&lt;/code&gt; returns permission denied.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create, update, or delete any secret — &lt;strong&gt;user-policy&lt;/strong&gt; grants only &lt;code&gt;read&lt;/code&gt; and &lt;code&gt;list&lt;/code&gt; on &lt;code&gt;secret/data/myapp/*&lt;/code&gt;, so write operations are denied.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Access admin or system paths — no access to &lt;code&gt;sys/*&lt;/code&gt;, other auth methods, or policies. Only the paths explicitly granted in &lt;strong&gt;user-policy&lt;/strong&gt; and &lt;strong&gt;default&lt;/strong&gt; are allowed.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
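&lt;div class="paragraph"&gt;
&lt;p&gt;You can check these expectations without provoking permission-denied errors by asking OpenBao which capabilities the current token has on a given path. A sketch, run while logged in as jdoe:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Should report read and list for the allowed path
bao token capabilities secret/data/myapp/database
# Should report deny for a path outside user-policy
bao token capabilities secret/data/other-app/database&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;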
&lt;hr/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_common_practices_for_authentication"&gt;Common Practices for Authentication&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Use the principle of least privilege&lt;/strong&gt; - Do not grant more permissions than necessary.&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-hcl hljs" data-lang="hcl"&gt;# Bad: Too broad
path &amp;#34;secret/*&amp;#34; {
capabilities = [&amp;#34;read&amp;#34;, &amp;#34;list&amp;#34;]
}
# Good: Specific paths
path &amp;#34;secret/data/myapp/database&amp;#34; {
capabilities = [&amp;#34;read&amp;#34;]
}&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Set Appropriate Token TTLs&lt;/strong&gt; - Set the appropriate TTLs for different use cases.&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Short-lived for automated systems
bao write auth/kubernetes/role/batch-job \
ttl=5m \
max_ttl=15m
# Longer for interactive users
bao write auth/oidc/role/human-user \
ttl=8h \
max_ttl=24h&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Enable Audit Logging&lt;/strong&gt; - Enable audit logging to track authentication attempts and other activities.&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Enable file audit
bao audit enable file file_path=/var/log/openbao/audit.log
# All authentication attempts are logged&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Regularly Rotate Credentials&lt;/strong&gt; - Rotate the credentials periodically to reduce the risk of compromise.&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Rotate AppRole Secret IDs periodically
bao write -f auth/approle/role/cicd-role/secret-id
# Revoke old tokens
bao token revoke &amp;lt;old-token&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_common_authentication_patterns"&gt;Common Authentication Patterns&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Pattern 1: Application Pods&lt;/strong&gt;&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-none hljs"&gt;Pod → Service Account Token → Kubernetes Auth → Policy → Secret&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Pattern 2: Human Users&lt;/strong&gt;&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-none hljs"&gt;User → LDAP/OIDC Login → Group Mapping → Policy → Secret&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Pattern 3: CI/CD Pipeline&lt;/strong&gt;&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-none hljs"&gt;Pipeline → AppRole (Role ID + Secret ID) → Policy → Secret&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_what_is_coming_next"&gt;What is Coming Next?&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In Part 8, we will explore secrets engines, such as:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;KV (Key-Value) for static secrets&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In upcoming parts, I will try to cover common practices for OpenBao operation and maintenance. If there is time and an opportunity to test other authentication methods, I will do so.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Proper authentication configuration is crucial for OpenBao security. Key takeaways:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Use &lt;strong&gt;Kubernetes auth&lt;/strong&gt; for pods&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use &lt;strong&gt;LDAP/OIDC&lt;/strong&gt; for human users (SSO)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use &lt;strong&gt;AppRole&lt;/strong&gt; for automation&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create &lt;strong&gt;specific policies&lt;/strong&gt; with least privilege&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set &lt;strong&gt;appropriate TTLs&lt;/strong&gt; for different use cases&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Enable audit logging&lt;/strong&gt; for compliance&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_resources"&gt;Resources&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org/docs/auth/kubernetes" target="_blank" rel="noopener"&gt;Kubernetes Auth Method&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org/docs/auth/jwt" target="_blank" rel="noopener"&gt;JWT/OIDC Auth Method&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org/docs/auth/ldap" target="_blank" rel="noopener"&gt;LDAP Auth Method&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org/docs/auth/approle" target="_blank" rel="noopener"&gt;AppRole Auth Method&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org/docs/concepts/policies" target="_blank" rel="noopener"&gt;Policies Documentation&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>The Guide to OpenBao - Initialisation, Unsealing, and Auto-Unseal - Part 6</title><link>https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-03-05-openbao-part-6-auto-unsealing/</link><pubDate>Thu, 05 Mar 2026 00:00:00 +0000</pubDate><guid>https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-03-05-openbao-part-6-auto-unsealing/</guid><description>&lt;div class="paragraph"&gt;
&lt;p&gt;After deploying OpenBao via GitOps (&lt;a href="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-20-openbao-part-5-gitops-deployment/"&gt;Part 5&lt;/a&gt;), OpenBao must be initialised and then unsealed before it becomes functional. You usually do not want to unseal manually, since that does not scale, especially in larger production environments. This article explains how to handle initialisation and unsealing, and the options for configuring an &lt;strong&gt;auto-unseal&lt;/strong&gt; process so that OpenBao unseals itself on every restart without manual key entry.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_what_is_happening"&gt;What is happening?&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;OpenBao starts in a &lt;strong&gt;sealed&lt;/strong&gt; state. You must:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;Initialise (once): run &lt;code&gt;bao operator init&lt;/code&gt; to generate unseal keys (or recovery keys when using auto-unseal).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Unseal (after each restart): provide a threshold of unseal keys so each node can decrypt the root key. In the previous articles, we used a threshold of 3 (out of 5). This means that you need to run &lt;strong&gt;bao operator unseal&lt;/strong&gt; three times with different keys on every node after every start before the service becomes available.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
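&lt;div class="paragraph"&gt;
&lt;p&gt;At any point you can check whether a node is initialised and sealed; &lt;strong&gt;bao status&lt;/strong&gt; reports both flags and is safe to run against a sealed node:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Check the initialisation and seal state of the current node
bao status
# In scripts, the exit code can be used: 0 = unsealed, 2 = sealed&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;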
&lt;div class="paragraph"&gt;
&lt;p&gt;Unseal workflow (assuming a threshold of 3 and 5 nodes and initialisation already happened):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-mermaid {align=" center"="" zoom="true" }="" hljs"="" data-lang="mermaid {align=" center"="" zoom="true" }"=""&gt;---
title: &amp;#34;Unseal Workflow&amp;#34;
config:
theme: &amp;#39;dark&amp;#39;
---
flowchart LR
A[&amp;#34;openbao-0 starts → 3× unseal using different keys&amp;#34;]
A --&amp;gt; B[&amp;#34;openbao-1 starts → 3× unseal using different keys&amp;#34;]
B --&amp;gt; C[&amp;#34;openbao-2 starts → 3× unseal using different keys&amp;#34;]
C --&amp;gt; D[OpenBao is ready]&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
This is a chicken-and-egg problem: each node needs the unseal keys before it can serve requests, but the keys have to come from somewhere outside the sealed service.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Manual unsealing does not scale very well; for production, &lt;strong&gt;auto-unseal&lt;/strong&gt; is recommended.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This article covers:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Practical approaches to initialisation and unsealing (manual, Jobs, init containers).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Potential auto-unseal options: with AWS KMS, static key, and transit seal.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_handling_initialisation_and_unsealing"&gt;Handling Initialisation and Unsealing&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The challenge with GitOps is that initialisation and unsealing are one-off or stateful operations. Below are several approaches. The problem is always the same: where do you start? You must have the unseal keys available to start the process. Do you keep them locally, in a Kubernetes Secret, or in an external Key Management System (KMS)? The most robust option for production, in my opinion, is auto-unseal with a KMS (see &lt;a href="#_approach_4_auto_unseal_with_kms_recommended_for_production"&gt;Approach 4: Auto-Unseal with KMS&lt;/a&gt; and the following sections). Because the keys live outside the cluster, they are not exposed to the risk of being compromised along with it.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Let’s have a look at the different options.&lt;/p&gt;
&lt;/div&gt;
&lt;hr/&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_approach_1_manual_initialisation_simplest"&gt;Approach 1: Manual Initialisation (simplest)&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;While manual initialisation and unsealing is the simplest approach, it is not recommended for production. Still, I would like to mention it here for completeness and provide a simple loop that unseals the OpenBao service (on Kubernetes/OpenShift).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Manual unsealing is still a valid option when you have a small cluster and assume that it is not required often.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;After Argo CD deploys OpenBao, the first pod, openbao-0, will not become ready until it is initialised and unsealed. Once done, the next pod, openbao-1, will try to start and wait until it is unsealed, and then the third pod, and so on.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The following simplified script goes through the pods and unseals them one by one. It initialises the first pod and stores the unseal keys in the file &lt;strong&gt;openbao-init.json&lt;/strong&gt; on your local machine. From there it takes three different keys and unseals the OpenBao service.
Since it takes a while until the other pods have pulled the image and started, the script waits 30 seconds between pods. This may or may not be enough, depending on your network and cluster speed:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The CLI tool &lt;strong&gt;oc&lt;/strong&gt; can be replaced by &lt;strong&gt;kubectl&lt;/strong&gt; in case you are using a different cluster. In addition, you will need the jq command line tool to parse the JSON file.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
If your OpenBao was initialised previously, you can remove the first three commands and start with the unseal commands.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Initialise the first pod, assuming the Pod openbao-0 is already running.
# You can skip this if OpenBao was initialised previously.
echo &amp;#34;Initialising OpenBao on openbao-0 (keys will be saved to openbao-init.json)...&amp;#34;
oc exec -it openbao-0 -n openbao -- bao operator init \
-key-shares=5 -key-threshold=3 -format=json &amp;gt; openbao-init.json &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
echo &amp;#34;Init complete. Unseal keys and root token are in openbao-init.json.&amp;#34;
# Unseal each pod (three times each with different keys)
echo &amp;#34;Unsealing pods openbao-0, openbao-1, openbao-2 (3 key steps each)...&amp;#34;
for i in 0 1 2; do
sleep_timer=30
SLEEPER_TMP=1
echo &amp;#34;Unsealing openbao-$i...&amp;#34;
for j in 1 2 3; do &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
KEY=$(cat openbao-init.json | jq -r &amp;#34;.unseal_keys_b64[$((j-1))]&amp;#34;)
oc exec -it openbao-$i -n openbao -- bao operator unseal $KEY
done
echo &amp;#34;openbao-$i unsealed. Waiting ${sleep_timer}s for next pod to be ready...&amp;#34;
while [[ $SLEEPER_TMP -le &amp;#34;$sleep_timer&amp;#34; ]]; do &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
if (( SLEEPER_TMP % 10 == 0 )); then
echo -n &amp;#34;$SLEEPER_TMP&amp;#34;
else
echo -n &amp;#34;.&amp;#34;
fi
sleep 1
SLEEPER_TMP=$((SLEEPER_TMP + 1))
done
echo &amp;#34;&amp;#34;
done
echo &amp;#34;All pods unsealed. Store init data securely (e.g. in a password manager).&amp;#34;
# Store init data securely (e.g. in a password manager)&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Initialise the first pod, assuming the Pod openbao-0 is already running. This will store the keys (including the root token) in the file &lt;strong&gt;openbao-init.json&lt;/strong&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Unseal each pod (three times each with different keys). The keys are taken from the &lt;strong&gt;openbao-init.json&lt;/strong&gt; file.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Wait 30 seconds between each pod to give the other pods time to start and pull the image …​ super-fancy sleep timer.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The unseal keys as well as the root token are stored in the &lt;strong&gt;openbao-init.json&lt;/strong&gt; file. This file is stored locally on your machine. Make sure to store it securely and do not share it with anyone.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
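&lt;div class="paragraph"&gt;
&lt;p&gt;After the script finishes, it is worth verifying that all pods report ready and that OpenBao is unsealed, for example:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# All pods should be Running and Ready
oc get pods -n openbao
# Sealed should be false
oc exec openbao-0 -n openbao -- bao status&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;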
&lt;hr/&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_approach_2_kubernetes_job_for_initialisation"&gt;Approach 2: Kubernetes Job for initialisation&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;A Job can run after OpenBao is deployed, initialise it if needed, and store the init output (unseal keys and root token) in a Kubernetes Secret. You can then unseal manually or with a second Job that reads from that Secret. This section assumes the init output is stored in a Kubernetes Secret and explains how to create that Secret from the Job and how to use it.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_how_the_secret_is_created"&gt;How the Secret is created&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The init Job runs &lt;code&gt;bao operator init&lt;/code&gt; and then creates the Secret using &lt;code&gt;oc&lt;/code&gt; (or &lt;code&gt;kubectl&lt;/code&gt;). The Secret name and key used here are &lt;code&gt;openbao-init-data&lt;/code&gt; and &lt;code&gt;init.json&lt;/code&gt; (the file contains the JSON output of &lt;code&gt;bao operator init&lt;/code&gt;, including &lt;code&gt;unseal_keys_b64&lt;/code&gt; and &lt;code&gt;root_token&lt;/code&gt;). One advantage of this approach is that you can integrate it into your GitOps pipeline. However, we can argue whether it is a good idea to store the unseal keys and the root token as a Secret in the same cluster where OpenBao is running.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;strong&gt;Example structure of &lt;code&gt;init.json&lt;/code&gt;&lt;/strong&gt;:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-json hljs" data-lang="json"&gt;{
&amp;#34;unseal_keys_b64&amp;#34;: [
&amp;#34;key1&amp;#34;,
&amp;#34;key2&amp;#34;,
&amp;#34;key3&amp;#34;,
&amp;#34;key4&amp;#34;,
&amp;#34;key5&amp;#34;
],
&amp;#34;unseal_keys_hex&amp;#34;: [
&amp;#34;hex1&amp;#34;,
&amp;#34;hex2&amp;#34;,
&amp;#34;hex3&amp;#34;,
&amp;#34;hex4&amp;#34;,
&amp;#34;hex5&amp;#34;
],
&amp;#34;unseal_shares&amp;#34;: 5,
&amp;#34;unseal_threshold&amp;#34;: 3,
&amp;#34;recovery_keys_b64&amp;#34;: null,
&amp;#34;recovery_keys_hex&amp;#34;: null,
&amp;#34;recovery_keys_shares&amp;#34;: 0,
&amp;#34;recovery_keys_threshold&amp;#34;: 0,
&amp;#34;root_token&amp;#34;: &amp;#34;root.token&amp;#34;
}&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You need at least &lt;code&gt;unseal_threshold&lt;/code&gt; keys (e.g. 3) from &lt;code&gt;unseal_keys_b64&lt;/code&gt; to unseal each node. Store this file securely; anyone with access can unseal OpenBao and the &lt;code&gt;root_token&lt;/code&gt; has full admin access.&lt;/p&gt;
&lt;/div&gt;
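&lt;div class="paragraph"&gt;
&lt;p&gt;For illustration, a small shell sketch that extracts the threshold number of unseal keys and the root token from such a file with &lt;strong&gt;jq&lt;/strong&gt; (the file name &lt;strong&gt;init.json&lt;/strong&gt; is an assumption):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Print the first &amp;lt;threshold&amp;gt; unseal keys, then the root token
THRESHOLD=$(jq -r &amp;#39;.unseal_threshold&amp;#39; init.json)
for n in $(seq 0 $((THRESHOLD - 1))); do
  jq -r &amp;#34;.unseal_keys_b64[$n]&amp;#34; init.json
done
jq -r &amp;#39;.root_token&amp;#39; init.json&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;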
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_rbac_configuration"&gt;RBAC Configuration&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The Job’s service account (&lt;code&gt;openbao-init&lt;/code&gt;) must be allowed to create (and optionally get/update) Secrets in the &lt;code&gt;openbao&lt;/code&gt; namespace. Create a Role and RoleBinding:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: v1
kind: ServiceAccount
metadata:
name: openbao-init &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
namespace: openbao
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: openbao-init-secret-writer
namespace: openbao
rules: &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
- apiGroups: [&amp;#34;&amp;#34;]
resources: [&amp;#34;secrets&amp;#34;]
verbs: [&amp;#34;create&amp;#34;, &amp;#34;get&amp;#34;, &amp;#34;update&amp;#34;, &amp;#34;patch&amp;#34;]
- apiGroups: [&amp;#34;&amp;#34;] &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
resources: [&amp;#34;pods&amp;#34;]
verbs: [&amp;#34;get&amp;#34;, &amp;#34;list&amp;#34;]
- apiGroups: [&amp;#34;&amp;#34;] &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
resources: [&amp;#34;pods/exec&amp;#34;]
verbs: [&amp;#34;create&amp;#34;]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
metadata:
name: openbao-init-secret-writer
namespace: openbao
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: openbao-init-secret-writer
subjects:
- kind: ServiceAccount
name: openbao-init
namespace: openbao&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The ServiceAccount is used by the Job(s) to create the Secret and unseal the OpenBao pods.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The Role is used by the Job to create the Secret. The rules might be extended to include the permission to execute into the pods to unseal the OpenBao service (see below).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Permissions to get and list the Pods, required for the unseal process later&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Permissions to execute into the Pods, required for the unseal process later&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The RoleBinding is used by the Job to create the Secret.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The Job must also be able to reach the OpenBao API (e.g. &lt;code&gt;openbao.openbao.svc:8200&lt;/code&gt;). This is usually possible when running in the same namespace as the OpenBao service.&lt;/p&gt;
&lt;/div&gt;
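&lt;div class="paragraph"&gt;
&lt;p&gt;Before running the Job, you can verify that the RBAC setup grants the required permissions, for example with &lt;strong&gt;oc auth can-i&lt;/strong&gt;:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Both commands should print &amp;#34;yes&amp;#34;
oc auth can-i create secrets -n openbao --as=system:serviceaccount:openbao:openbao-init
oc auth can-i list pods -n openbao --as=system:serviceaccount:openbao:openbao-init&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;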
&lt;div class="paragraph"&gt;
&lt;p&gt;The example below shows a working Job manifest that performs the initialisation and creates the Secret in the &lt;code&gt;openbao&lt;/code&gt; namespace. The manifest is already prepared for Argo CD with useful annotations.
It runs an init container that initialises the OpenBao service via the OpenBao CLI and writes the init.json file to a shared volume, and a main container that creates the Secret from that file via kubectl.
In addition, the Secret containing the CA certificate is mounted as a volume and the BAO_CACERT environment variable is set to the path of the CA certificate.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_init_job_that_creates_the_secret"&gt;Init Job that creates the Secret&lt;/h4&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: batch/v1
kind: Job
metadata:
name: openbao-init
namespace: openbao
annotations: &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
argocd.argoproj.io/sync-wave: &amp;#34;30&amp;#34;
argocd.argoproj.io/hook: PostSync
argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
template:
spec:
serviceAccountName: openbao-init &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
initContainers: &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
- name: openbao-init
image: ghcr.io/openbao/openbao:latest
command: &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
- /bin/sh
- -c
- |
if [ -f /etc/openbao-ca/ca.crt ]; then
export BAO_CACERT=/etc/openbao-ca/ca.crt
export BAO_ADDR=https://openbao:8200
echo &amp;#34;Using TLS CA from /etc/openbao-ca/ca.crt&amp;#34;
else
export BAO_ADDR=http://openbao:8200
echo &amp;#34;Using plain HTTP&amp;#34;
fi
until bao status 2&amp;gt;&amp;amp;1 | grep -q &amp;#34;Initialized&amp;#34;; do
echo &amp;#34;Waiting for OpenBao...&amp;#34;
sleep 5
done
if bao status | grep -q &amp;#34;Initialized.*false&amp;#34;; then
echo &amp;#34;Initialising OpenBao...&amp;#34;
bao operator init -key-shares=5 -key-threshold=3 \
-format=json &amp;gt; /shared/init.json
echo &amp;#34;Init complete; init.json written to shared volume.&amp;#34;
else
echo &amp;#34;OpenBao already initialised (skipping init).&amp;#34;
fi
volumeMounts:
- name: openbao-ca
mountPath: /etc/openbao-ca
readOnly: true
- name: shared
mountPath: /shared
containers: &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
- name: create-secret
image: bitnami/kubectl:latest
command:
- /bin/sh
- -c
- |
if [ -f /shared/init.json ]; then
echo &amp;#34;Creating Secret openbao-init-data from init.json...&amp;#34;
kubectl create secret generic openbao-init-data \
--from-file=init.json=/shared/init.json \
-n openbao --dry-run=client -o yaml | kubectl apply -f -
echo &amp;#34;Initialisation complete!&amp;#34;
else
echo &amp;#34;No init.json (OpenBao already initialised or init skipped).&amp;#34;
fi
volumeMounts:
- name: shared
mountPath: /shared
readOnly: true
volumes:
- name: openbao-ca
secret:
secretName: openbao-ca-secret
optional: true
- name: shared &lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;(6)&lt;/b&gt;
emptyDir: {}
restartPolicy: Never
backoffLimit: 3&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The annotations are used by Argo CD to trigger the Job after the OpenBao deployment.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The ServiceAccount the Job runs under. It must be bound to the Role that allows creating the Secret.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The init container runs the OpenBao CLI (wait for API, then operator init) and writes init.json to a shared volume.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The script waits for the OpenBao API to respond, then runs &lt;code&gt;bao operator init&lt;/code&gt; only if the service is not yet initialised.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The main container runs kubectl to create the Secret from that file. This avoids needing a single image that bundles both &lt;code&gt;bao&lt;/code&gt; and &lt;code&gt;kubectl&lt;/code&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;6&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The shared &lt;code&gt;emptyDir&lt;/code&gt; volume passes the init.json file from the init container to the main container.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
As a result, the Secret &lt;code&gt;openbao-init-data&lt;/code&gt; is created in the &lt;code&gt;openbao&lt;/code&gt; namespace. It contains the unseal keys and the root token.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
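&lt;div class="paragraph"&gt;
&lt;p&gt;For a first login, an operator can extract the root token from this data. The sketch below works on a local copy of &lt;code&gt;init.json&lt;/code&gt; with &lt;code&gt;jq&lt;/code&gt; (the sample values are illustrative, not real keys); against a cluster, dump the file first with &lt;code&gt;kubectl get secret openbao-init-data -n openbao -o jsonpath=&#39;{.data.init\.json}&#39; | base64 -d&lt;/code&gt;:&lt;/p&gt;
&lt;/div&gt;

```shell
# Illustrative init.json in the same shape as `bao operator init -format=json`
# (hypothetical sample values, not real keys).
cat > /tmp/init.json <<'EOF'
{
  "unseal_keys_b64": ["keyA", "keyB", "keyC", "keyD", "keyE"],
  "unseal_shares": 5,
  "unseal_threshold": 3,
  "root_token": "s.example-root-token"
}
EOF

# Extract the root token and the first three unseal keys (the threshold).
jq -r '.root_token' /tmp/init.json
jq -r '.unseal_keys_b64[0:3][]' /tmp/init.json
```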
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_using_the_secret_in_a_job_unseal"&gt;Using the Secret in a Job (unseal)&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;A second Job can use the Secret &lt;code&gt;openbao-init-data&lt;/code&gt; that was created by the init Job to unseal each OpenBao pod. A lot is happening here:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;The init container extracts the unseal keys from the Secret using &lt;code&gt;jq&lt;/code&gt; and writes them to a shared volume.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The main container reads the unseal keys from the shared volume and unseals the first pod.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The process then waits 30 seconds to give the next pod time to pull the image and start.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;This is repeated for each subsequent pod until all pods are unsealed.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;First of all, the unseal Job needs permission to &lt;code&gt;exec&lt;/code&gt; into the OpenBao pods. Add a Role that grants the Job’s service account &lt;code&gt;create&lt;/code&gt; on &lt;code&gt;pods/exec&lt;/code&gt; in the &lt;code&gt;openbao&lt;/code&gt; namespace.
To make it easier, we can extend the Role &lt;code&gt;openbao-init-secret-writer&lt;/code&gt; that was created in the init Job.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
These rules were already applied with the configuration for the init Job, but we will repeat them here for completeness.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;rules: &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
- apiGroups: [&amp;#34;&amp;#34;]
resources: [&amp;#34;pods&amp;#34;]
verbs: [&amp;#34;get&amp;#34;, &amp;#34;list&amp;#34;]
- apiGroups: [&amp;#34;&amp;#34;]
resources: [&amp;#34;pods/exec&amp;#34;]
verbs: [&amp;#34;create&amp;#34;]&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The role &lt;code&gt;openbao-init-secret-writer&lt;/code&gt; must be extended to include the permission to execute into the pods to unseal the OpenBao service.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
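&lt;div class="paragraph"&gt;
&lt;p&gt;For completeness, these rules sit in a Role bound to the Job&#39;s ServiceAccount. The sketch below uses the Role name &lt;code&gt;openbao-init-secret-writer&lt;/code&gt; and the ServiceAccount &lt;code&gt;openbao-init&lt;/code&gt; from this article; the &lt;code&gt;secrets&lt;/code&gt; rule is an assumption based on what the init Job needs and may differ from your existing manifest.&lt;/p&gt;
&lt;/div&gt;

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: openbao-init-secret-writer
  namespace: openbao
rules:
  # Assumed from the init Job: create/apply the init Secret
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "patch"]
  # From this article: list pods and exec into them for the unseal Job
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: openbao-init-secret-writer
  namespace: openbao
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: openbao-init-secret-writer
subjects:
  - kind: ServiceAccount
    name: openbao-init
    namespace: openbao
```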
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: batch/v1
kind: Job
metadata:
name: openbao-unseal
namespace: openbao
annotations:
argocd.argoproj.io/sync-wave: &amp;#34;35&amp;#34;
argocd.argoproj.io/hook: PostSync
argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
template:
spec:
serviceAccountName: openbao-init &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
initContainers:
- name: extract-keys
image: quay.io/codefreshplugins/curl-jq
command:
- /bin/sh
- -c
- |
if [ ! -f /secrets/init.json ]; then
echo &amp;#34;Secret not found; run init Job first.&amp;#34;
exit 1
fi
for i in 0 1 2; do
jq -r &amp;#34;.unseal_keys_b64[$i]&amp;#34; /secrets/init.json | tr -d &amp;#39;\n&amp;#39; &amp;gt; &amp;#34;/shared/key$i&amp;#34; &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
done
echo &amp;#34;Unseal keys extracted to shared volume.&amp;#34;
volumeMounts:
- name: init-secret
mountPath: /secrets
readOnly: true
- name: shared
mountPath: /shared
containers: &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
- name: unseal
image: bitnami/kubectl:latest
command: &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
- /bin/sh
- -c
- |
if [ -f /etc/openbao-ca/ca.crt ]; then
export BAO_CACERT=/etc/openbao-ca/ca.crt
export BAO_ADDR=https://openbao:8200
echo &amp;#34;Using TLS CA from /etc/openbao-ca/ca.crt&amp;#34;
else
export BAO_ADDR=http://openbao:8200
echo &amp;#34;Using plain HTTP&amp;#34;
fi
sleep_timer=30
if [ ! -f /shared/key0 ]; then
echo &amp;#34;Keys not found; extract-keys init container may have failed.&amp;#34;
exit 1
fi
echo &amp;#34;Unsealing pods openbao-0, openbao-1, openbao-2 (3 key steps each), ${sleep_timer}s delay between pods...&amp;#34;
for pod in openbao-0 openbao-1 openbao-2; do
echo &amp;#34;Unsealing $pod...&amp;#34;
for i in 0 1 2; do
key=$(cat &amp;#34;/shared/key$i&amp;#34; | tr -d &amp;#39;\n&amp;#39;)
kubectl exec -n openbao &amp;#34;$pod&amp;#34; -- sh -c &amp;#39;bao operator unseal &amp;#34;$1&amp;#34;&amp;#39; _ &amp;#34;$key&amp;#34; || true
done
echo &amp;#34;$pod unsealed.&amp;#34;
if [ &amp;#34;$pod&amp;#34; != &amp;#34;openbao-2&amp;#34; ]; then
echo &amp;#34;Waiting ${sleep_timer}s for next pod to be ready...&amp;#34;
SLEEPER_TMP=1
while [ &amp;#34;$SLEEPER_TMP&amp;#34; -le &amp;#34;$sleep_timer&amp;#34; ]; do
if [ $(( SLEEPER_TMP % 10 )) -eq 0 ]; then
echo -n &amp;#34;$SLEEPER_TMP&amp;#34;
else
echo -n &amp;#34;.&amp;#34;
fi
sleep 1
SLEEPER_TMP=$(( SLEEPER_TMP + 1 ))
done
echo &amp;#34;&amp;#34;
fi
done
echo &amp;#34;Unseal complete.&amp;#34;
volumeMounts:
- name: shared
mountPath: /shared
readOnly: true
- name: openbao-ca
mountPath: /etc/openbao-ca
readOnly: true
volumes: &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
- name: init-secret
secret:
secretName: openbao-init-data
- name: shared
emptyDir: {}
- name: openbao-ca
secret:
secretName: openbao-ca-secret
optional: true
restartPolicy: Never
backoffLimit: 2&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The ServiceAccount is used by the Job to unseal the OpenBao pods. Here we are using the same ServiceAccount as the one used to initialise the OpenBao service.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The init container extracts unseal keys from init.json using jq and writes them to a shared volume.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The main container runs kubectl to exec into each OpenBao pod and run &lt;code&gt;bao operator unseal&lt;/code&gt; (the OpenBao image is used inside the target pods).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The script loops over the pods and applies three unseal keys to each one via &lt;code&gt;kubectl exec&lt;/code&gt;, waiting between pods.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The volumes that are mounted: init-secret (contains the unseal keys), shared (contains the extracted unseal keys, shared between containers), openbao-ca (contains the CA certificate).&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Both Jobs are using the same ServiceAccount and therefore the same RBAC rules. It is also possible to use a different ServiceAccount for the unseal Job.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Store init data securely. Keeping unseal keys in a cluster Secret is convenient but less secure than auto-unseal or an external secrets manager. Restrict access to the &lt;code&gt;openbao-init-data&lt;/code&gt; Secret (e.g. RBAC and network policies) and consider rotating or moving keys to a more secure store after bootstrap.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
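&lt;div class="paragraph"&gt;
&lt;p&gt;One way to restrict access is a dedicated read Role that names the Secret explicitly. The sketch below is an assumption (the Role name &lt;code&gt;openbao-init-data-reader&lt;/code&gt; is made up); bind it only to the identities that genuinely need the unseal keys.&lt;/p&gt;
&lt;/div&gt;

```yaml
# Sketch: restrict read access to the init Secret to specific identities.
# Only openbao-init-data and the openbao namespace come from this article;
# the Role name is hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: openbao-init-data-reader
  namespace: openbao
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["openbao-init-data"]
    verbs: ["get"]
```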
&lt;hr/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_approach_3_auto_unseal_with_key_management_service_kms_recommended_for_production"&gt;Approach 3: Auto-unseal with Key Management Service KMS (recommended for production)&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Configure a KMS (e.g. AWS KMS, Azure Key Vault, GCP Cloud KMS, Transit etc.) so that OpenBao unseals itself on every restart. You initialise &lt;strong&gt;once&lt;/strong&gt; and thereafter no manual unseal step is needed.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The rest of this article outlines the different options; however, a full working example is shown only for AWS KMS.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_auto_unseal_options_overview"&gt;Auto-unseal options overview&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Without auto-unseal, you must run &lt;code&gt;bao operator unseal&lt;/code&gt; with a threshold of keys after each restart. With auto-unseal, OpenBao uses a seal backend to protect the root key and unseals itself on startup.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Supported options:&lt;/p&gt;
&lt;/div&gt;
&lt;table class="tableblock frame-all grid-all stretch"&gt;
&lt;colgroup&gt;
&lt;col style="width: 25%;"/&gt;
&lt;col style="width: 25%;"/&gt;
&lt;col style="width: 50%;"/&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Option&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Use case&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Notes&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;1. AWS KMS&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;AWS (EKS, OpenShift on AWS)&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;IRSA or IAM user; no keys in cluster with IRSA.&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;2. Azure Key Vault&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Azure (AKS, OpenShift on Azure)&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Service principal or managed identity.&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;3. Google Cloud KMS&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;GCP (GKE)&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Service account key or Workload Identity.&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;5. Transit&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Existing Vault/OpenBao as root of trust; or a &lt;strong&gt;dedicated seal-only OpenBao&lt;/strong&gt; that exists only to unseal production&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;External transit engine; token in a Secret.&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_why_use_openbao_when_the_seal_is_in_kms_why_not_keep_all_secrets_in_kms_and_skip_openbao"&gt;Why use OpenBao when the seal is in KMS? Why not keep all secrets in KMS and skip OpenBao?&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;When using a cloud KMS (or Key Vault) for auto-unseal, a natural question is: why not store and retrieve &lt;strong&gt;all&lt;/strong&gt; secrets from the KMS and run without OpenBao at all? OpenBao remains useful for operators and applications for several reasons:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;KMS is for keys, not a full secrets platform.&lt;/strong&gt; AWS KMS, Azure Key Vault keys, and GCP Cloud KMS are built to encrypt and decrypt small blobs (e.g. data encryption keys, or OpenBao’s seal blob). They are not a general-purpose secrets store with versioning, path-based access, dynamic credentials, and per-request audit. OpenBao provides that layer: applications request secrets by path, get short-lived tokens, and you get a single place to define who can read what.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Dynamic secrets.&lt;/strong&gt; OpenBao can generate short-lived credentials on demand (e.g. database users, cloud IAM roles, PKI certificates). The KMS does not create or rotate such credentials; it only holds keys. If you need “give this pod a DB password that expires in 1 hour,” that is OpenBao’s domain, not the KMS.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Fine-grained, identity-aware access.&lt;/strong&gt; OpenBao supports auth methods (Kubernetes, OIDC, AppRole, LDAP) and path-based policies. You can say “this service account may read only &lt;code&gt;secret/data/myapp/*&lt;/code&gt;.” The KMS has IAM or Key Vault RBAC, but not the same model of “one API, many identities, many paths” that applications and operators use day to day.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Audit and compliance.&lt;/strong&gt; OpenBao logs every secret read and write with identity and path. That gives a clear audit trail of who accessed which secret and when. KMS audit logs key usage, which is different from “which application read which secret at what time” in a unified way.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Abstraction and portability.&lt;/strong&gt; Applications talk to OpenBao’s API. You can move the seal or the storage backend (or even the cloud) without changing how applications request secrets. If you put everything directly in a cloud KMS, you tie every app to that vendor’s API and limits.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Encryption as a service (Transit).&lt;/strong&gt; OpenBao’s Transit secrets engine lets applications encrypt data with a key without ever seeing the key—useful for application-level encryption and key rotation. KMS can do something similar, but OpenBao integrates that with the same auth and audit as the rest of your secrets.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
In short: the KMS is the &lt;strong&gt;root of trust&lt;/strong&gt; for the seal (so OpenBao can unseal itself). OpenBao is the &lt;strong&gt;secrets management platform&lt;/strong&gt; that applications and operators use for storing, retrieving, generating, and auditing secrets. Both layers complement each other.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;General flow (same for all options):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;Set up the seal backend (KMS key, Key Vault, static key, or transit server).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add the appropriate &lt;code&gt;seal&lt;/code&gt; stanza to the OpenBao configuration (e.g. in Helm &lt;code&gt;server.ha.raft.config&lt;/code&gt;).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Deploy OpenBao; pods start and remain sealed until initialised.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run one-time initialisation: &lt;code&gt;bao operator init -recovery-shares=5 -recovery-threshold=3&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;From then on, restarts are automatically unsealed.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
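&lt;div class="paragraph"&gt;
&lt;p&gt;After a restart you can verify step 5 with &lt;code&gt;bao status&lt;/code&gt;. The snippet below parses a sample &lt;code&gt;-format=json&lt;/code&gt; output with &lt;code&gt;jq&lt;/code&gt; (the sample document is illustrative; run the real command against your deployment):&lt;/p&gt;
&lt;/div&gt;

```shell
# Illustrative `bao status -format=json` output after auto-unseal
# (sample values; field names match the real output shape).
cat > /tmp/status.json <<'EOF'
{
  "type": "awskms",
  "initialized": true,
  "sealed": false,
  "recovery_seal": true
}
EOF

# A healthy auto-unsealed node is initialized and not sealed.
if [ "$(jq -r '.sealed' /tmp/status.json)" = "false" ]; then
  echo "unsealed"
else
  echo "still sealed"
fi
```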
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_option_aws_kms"&gt;Option: AWS KMS&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Best for: EKS, OpenShift on AWS (ROSA), or any Kubernetes on AWS.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect4"&gt;
&lt;h5 id="_create_a_kms_key"&gt;Create a KMS key&lt;/h5&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Create a &lt;strong&gt;symmetric&lt;/strong&gt; KMS key used only for the OpenBao seal (do not use for application data).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;AWS WebUI:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;search for: KMS → Create key → Symmetric, Encrypt/Decrypt&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Optionally set an alias (e.g. &lt;code&gt;openbao-unseal&lt;/code&gt;).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;OPTIONAL: Define key administrative permissions&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;OPTIONAL: Define key usage permissions&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Edit key policy&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;AWS CLI:&lt;/strong&gt; (requires the AWS CLI to be installed and configured)&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;aws kms create-key \
--description &amp;#34;OpenBao auto-unseal key&amp;#34; \
--key-usage ENCRYPT_DECRYPT
aws kms create-alias \
--alias-name openbao-unseal \
--target-key-id &amp;lt;key-id-from-above&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Note the &lt;strong&gt;Key ID&lt;/strong&gt; or &lt;strong&gt;Alias&lt;/strong&gt; and the &lt;strong&gt;region&lt;/strong&gt; (e.g. &lt;code&gt;us-west-1&lt;/code&gt;) from the output of the command.&lt;/p&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
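&lt;div class="paragraph"&gt;
&lt;p&gt;When scripting the key creation, the key ID can be captured from the &lt;code&gt;create-key&lt;/code&gt; response. A minimal sketch with &lt;code&gt;jq&lt;/code&gt; over an illustrative response (placeholder IDs, not a real account); with the AWS CLI alone, &lt;code&gt;--query KeyMetadata.KeyId --output text&lt;/code&gt; achieves the same:&lt;/p&gt;
&lt;/div&gt;

```shell
# Illustrative `aws kms create-key` response (placeholder IDs, not real).
cat > /tmp/create-key.json <<'EOF'
{
  "KeyMetadata": {
    "KeyId": "1234abcd-12ab-34cd-56ef-1234567890ab",
    "Arn": "arn:aws:kms:us-west-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
  }
}
EOF

# Capture the key ID for the subsequent create-alias call.
KEY_ID=$(jq -r '.KeyMetadata.KeyId' /tmp/create-key.json)
echo "$KEY_ID"
```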
&lt;/div&gt;
&lt;div class="sect4"&gt;
&lt;h5 id="_key_policy_for_openbao"&gt;Key policy for OpenBao&lt;/h5&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The OpenBao process needs permission to use the key. Create an IAM policy. In the UI you can configure this directly when creating the key. When using the CLI, create a &lt;code&gt;policy.json&lt;/code&gt; file with the following content:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-json hljs" data-lang="json"&gt;{
&amp;#34;Version&amp;#34;: &amp;#34;2012-10-17&amp;#34;,
&amp;#34;Statement&amp;#34;: [
{
&amp;#34;Effect&amp;#34;: &amp;#34;Allow&amp;#34;,
&amp;#34;Action&amp;#34;: [
&amp;#34;kms:Encrypt&amp;#34;,
&amp;#34;kms:Decrypt&amp;#34;,
&amp;#34;kms:DescribeKey&amp;#34;
],
&amp;#34;Resource&amp;#34;: &amp;#34;arn:aws:kms:&amp;lt;REGION&amp;gt;:&amp;lt;ACCOUNT_ID&amp;gt;:key/&amp;lt;KEY_ID&amp;gt;&amp;#34; &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
}
]
}&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Replace &lt;code&gt;&amp;lt;REGION&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;ACCOUNT_ID&amp;gt;&lt;/code&gt;, and &lt;code&gt;&amp;lt;KEY_ID&amp;gt;&lt;/code&gt; with your values (or use the key ARN).&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
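&lt;div class="paragraph"&gt;
&lt;p&gt;When using the CLI, the placeholder substitution can be scripted and the result validated before creating the IAM policy. A minimal sketch with &lt;code&gt;sed&lt;/code&gt; and &lt;code&gt;jq&lt;/code&gt; (the region, account ID, and key ID below are placeholders, not real values):&lt;/p&gt;
&lt;/div&gt;

```shell
# Placeholder values; substitute your own.
REGION=us-west-1 ACCOUNT_ID=111122223333 KEY_ID=1234abcd-12ab-34cd-56ef-1234567890ab

# Policy template with the same placeholders as in the article.
cat > /tmp/policy-template.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["kms:Encrypt", "kms:Decrypt", "kms:DescribeKey"],
      "Resource": "arn:aws:kms:<REGION>:<ACCOUNT_ID>:key/<KEY_ID>"
    }
  ]
}
EOF

# Fill in the placeholders and validate the result; jq -e exits non-zero
# if the JSON is malformed or the path is missing.
sed -e "s/<REGION>/$REGION/" -e "s/<ACCOUNT_ID>/$ACCOUNT_ID/" -e "s/<KEY_ID>/$KEY_ID/" \
  /tmp/policy-template.json > /tmp/openbao-seal-policy.json
jq -e '.Statement[0].Resource' /tmp/openbao-seal-policy.json
```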
&lt;/div&gt;
&lt;div class="sect4"&gt;
&lt;h5 id="_create_an_iam_user_for_the_seal"&gt;Create an IAM user for the seal&lt;/h5&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;When using static credentials (e.g. on OpenShift or when IRSA is not available), create a dedicated IAM user that has permission only to use the KMS key above. Use that user’s access keys in the Kubernetes Secret.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;AWS WebUI:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;search for: IAM → Users → Create user&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;User name: e.g. &lt;code&gt;openbao-seal&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;DO NOT select &amp;#34;Provide user access to the AWS Management Console&amp;#34; (programmatic access only)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create user&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Open the created user → Permissions → Add permissions → Create inline policy (or Attach policies → Create policy)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the policy editor, choose JSON and paste the policy from &lt;a href="#_key_policy_for_openbao"&gt;Key policy for OpenBao&lt;/a&gt;, replacing &lt;code&gt;&amp;lt;REGION&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;ACCOUNT_ID&amp;gt;&lt;/code&gt;, and &lt;code&gt;&amp;lt;KEY_ID&amp;gt;&lt;/code&gt; with your KMS key ARN or alias&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Name the policy (e.g. &lt;code&gt;OpenBaoSealKMS&lt;/code&gt;) and attach it to the user&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Security credentials → Create access key → Application running outside AWS (or &amp;#34;Third-party service&amp;#34;) → Create access key&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Save the Access key ID and Secret access key; you will put them in the Kubernetes Secret &lt;code&gt;openbao-aws-credentials&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;AWS CLI:&lt;/strong&gt; (requires the AWS CLI to be installed and configured)&lt;/p&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Create a policy file (e.g. &lt;code&gt;openbao-seal-policy.json&lt;/code&gt;) with the same JSON as in &lt;a href="#_key_policy_for_openbao"&gt;Key policy for OpenBao&lt;/a&gt;, then create the user, attach the policy, and create access keys:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Replace REGION, ACCOUNT_ID, and KEY_ID in the policy (or use alias: arn:aws:kms:REGION:ACCOUNT_ID:alias/openbao-unseal)
aws iam create-policy \
--policy-name OpenBaoSealKMS \
--policy-document file://openbao-seal-policy.json
aws iam create-user --user-name openbao-seal
aws iam attach-user-policy \
--user-name openbao-seal \
--policy-arn arn:aws:iam::ACCOUNT_ID:policy/OpenBaoSealKMS
aws iam create-access-key --user-name openbao-seal&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The last command returns &lt;code&gt;AccessKeyId&lt;/code&gt; and &lt;code&gt;SecretAccessKey&lt;/code&gt;; store them in the Secret &lt;code&gt;openbao-aws-credentials&lt;/code&gt;, together with the key &lt;code&gt;AWS_REGION&lt;/code&gt; set to your region (e.g. &lt;code&gt;us-west-1&lt;/code&gt;).
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect4"&gt;
&lt;h5 id="_create_the_secret"&gt;Create the Secret&lt;/h5&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Create the Secret &lt;code&gt;openbao-aws-credentials&lt;/code&gt; in the &lt;code&gt;openbao&lt;/code&gt; namespace. It contains the access key, secret access key, and region.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: v1
kind: Secret
metadata:
name: openbao-aws-credentials
namespace: openbao
type: Opaque
stringData: &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
AWS_ACCESS_KEY_ID: &amp;lt;your-access-key&amp;gt;
AWS_SECRET_ACCESS_KEY: &amp;lt;your-secret-key&amp;gt;
AWS_REGION: &amp;lt;your-region&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Replace &lt;code&gt;&amp;lt;your-access-key&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;your-secret-key&amp;gt;&lt;/code&gt;, and &lt;code&gt;&amp;lt;your-region&amp;gt;&lt;/code&gt; with the values from the previous steps.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect4"&gt;
&lt;h5 id="_helm_argo_cd_configuration_aws"&gt;Helm / Argo CD configuration (AWS)&lt;/h5&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Add the &lt;code&gt;seal &amp;#34;awskms&amp;#34;&lt;/code&gt; block inside the same HCL config that contains the &lt;code&gt;listener&lt;/code&gt; and &lt;code&gt;storage &amp;#34;raft&amp;#34;&lt;/code&gt; stanzas. In Helm values this is typically under &lt;code&gt;openbao.server.ha.raft.config&lt;/code&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Be sure to update the region and key ID.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The values file we used for Argo CD can be found at: &lt;a href="https://github.com/tjungbauer/openshift-clusterconfig-gitops/blob/main/clusters/management-cluster/openbao/values.yaml" target="_blank" rel="noopener"&gt;OpenBao Argo CD values file&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;openbao:
server:
extraSecretEnvironmentVars: &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
- envName: AWS_ACCESS_KEY_ID
secretName: openbao-aws-credentials
secretKey: AWS_ACCESS_KEY_ID
- envName: AWS_SECRET_ACCESS_KEY
secretName: openbao-aws-credentials
secretKey: AWS_SECRET_ACCESS_KEY
- envName: AWS_REGION
secretName: openbao-aws-credentials
secretKey: AWS_REGION
ha:
raft:
config: |
ui = true
seal &amp;#34;awskms&amp;#34; { &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
region = &amp;#34;us-east-1&amp;#34;
kms_key_id = &amp;#34;alias/openbao-unseal&amp;#34;
}
listener &amp;#34;tcp&amp;#34; {
tls_disable = 0
address = &amp;#34;[::]:8200&amp;#34;
cluster_address = &amp;#34;[::]:8201&amp;#34;
tls_cert_file = &amp;#34;/openbao/tls/openbao-server-tls/tls.crt&amp;#34;
tls_key_file = &amp;#34;/openbao/tls/openbao-server-tls/tls.key&amp;#34;
tls_min_version = &amp;#34;tls12&amp;#34;
}
storage &amp;#34;raft&amp;#34; {
path = &amp;#34;/openbao/data&amp;#34;
}
service_registration &amp;#34;kubernetes&amp;#34; {}&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;If you use static credentials (e.g. for OpenShift), inject them via a Secret.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;This seal block must be added to the existing configuration. Be sure to update the region and key ID to match your environment.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
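&lt;div class="paragraph"&gt;
&lt;p&gt;For completeness, the KMS key and the alias referenced in the seal block could be created roughly like this (the alias name matches the values file above; the key ID placeholder must be replaced with the ID returned by the first command):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Create a symmetric KMS key for auto-unseal
aws kms create-key --description &amp;#34;OpenBao auto-unseal&amp;#34;

# Create the alias referenced as kms_key_id in the seal block
aws kms create-alias --alias-name alias/openbao-unseal --target-key-id &amp;lt;key-id&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;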
&lt;div class="admonitionblock important"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-important" title="Important"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The &lt;code&gt;seal &amp;#34;awskms&amp;#34;&lt;/code&gt; block alone is not enough. OpenBao must have AWS credentials available at runtime. If you see an error such as &lt;code&gt;NoCredentialProviders: no valid providers in chain&lt;/code&gt; or &lt;code&gt;error fetching AWS KMS wrapping key information&lt;/code&gt;, the pod has no credentials. You must either inject static credentials from a Secret (see below) or use IRSA so the pod can assume an IAM role.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
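&lt;div class="paragraph"&gt;
&lt;p&gt;As a sketch, the Secret referenced by &lt;code&gt;extraSecretEnvironmentVars&lt;/code&gt; could be created manually like this (names taken from the values file; replace the placeholder values with real credentials):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc create secret generic openbao-aws-credentials \
  --namespace openbao \
  --from-literal=AWS_ACCESS_KEY_ID=&amp;lt;access-key-id&amp;gt; \
  --from-literal=AWS_SECRET_ACCESS_KEY=&amp;lt;secret-access-key&amp;gt; \
  --from-literal=AWS_REGION=us-east-1&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;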
&lt;hr/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_approach_4_init_container_with_external_secret_store"&gt;Approach 4: Init container with external secret store&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Another option is to use an init container that retrieves secrets from an external source. Two use cases can be considered here:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Shamir unseal:&lt;/strong&gt; The init container fetches the unseal keys from an external secret manager and then runs &lt;code&gt;bao operator unseal&lt;/code&gt; (or writes keys to a file used by a sidecar/unseal script). This is the classic “external secret store” idea (e.g. using AWS Secrets Manager, HashiCorp Vault, another Kubernetes cluster).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Transit seal token:&lt;/strong&gt; The init container fetches the &lt;strong&gt;token&lt;/strong&gt; that production OpenBao uses to call the dedicated unseal service (see &lt;a href="#_dedicated_unseal_service_seal_only_openbao"&gt;Dedicated unseal service (seal-only OpenBao)&lt;/a&gt;). That token is written to a file or shared volume so the main OpenBao process can use it in &lt;code&gt;seal &amp;#34;transit&amp;#34;&lt;/code&gt;. The token never lives in a Kubernetes Secret in Git or in plain YAML; it is fetched at pod start from the external store.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Below is a generic placeholder for the init container.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_option_init_container_example"&gt;Option: init Container example&lt;/h4&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The following options are generic and somewhat theoretical: unlike the other approaches, I have not yet been able to test them in a real environment. This may be covered in a future article.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Generic placeholder (implement according to your secret store):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;# Additional values for the Helm chart
server:
extraInitContainers:
- name: fetch-secrets
image: registry.access.redhat.com/ubi9/ubi-minimal:latest
command:
- /bin/sh
- -c
- |
# Fetch unseal keys OR transit token from external secret manager
echo &amp;#34;Waiting for OpenBao to be ready...&amp;#34;
sleep 30
# Your fetch logic here (e.g. aws secretsmanager get-secret-value, vault kv get, curl to API) &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
env:
- name: EXTERNAL_SECRET_ENDPOINT &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
value: &amp;#34;https://your-external-secret-store.example.com&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Replace with your fetch logic.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Replace with your external secret store endpoint.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
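&lt;div class="paragraph"&gt;
&lt;p&gt;As one possible (untested) implementation of the fetch logic, the init container could read the unseal material from AWS Secrets Manager; the secret name and target path below are assumptions:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Fetch the unseal keys (or transit token) and write them to a shared volume
aws secretsmanager get-secret-value \
  --secret-id openbao/unseal-keys \
  --query SecretString --output text &amp;gt; /shared/unseal-keys.json&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;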
&lt;hr/&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_option_transit_seal"&gt;Option: Transit seal&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;When you already have a Vault or OpenBao cluster and want it to act as the root of trust (e.g. central Vault encrypts the seal key), you can use this option.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To prepare everything, you need to do the following:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Enable the &lt;strong&gt;Transit&lt;/strong&gt; secrets engine on the external Vault/OpenBao and create a key (e.g. &lt;code&gt;openbao-unseal&lt;/code&gt;).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Grant the token used by this OpenBao permission to encrypt/decrypt with that key.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Store the token in a Kubernetes Secret and inject it via &lt;code&gt;extraSecretEnvironmentVars&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
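&lt;div class="paragraph"&gt;
&lt;p&gt;The preparation on the external Vault/OpenBao could look roughly like this (the key and policy names are examples):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# On the external Vault/OpenBao: enable Transit and create the unseal key
bao secrets enable transit
bao write -f transit/keys/openbao-unseal

# Policy that only allows encrypt/decrypt with that key
bao policy write openbao-unseal - &amp;lt;&amp;lt;EOF
path &amp;#34;transit/encrypt/openbao-unseal&amp;#34; {
  capabilities = [&amp;#34;update&amp;#34;]
}
path &amp;#34;transit/decrypt/openbao-unseal&amp;#34; {
  capabilities = [&amp;#34;update&amp;#34;]
}
EOF

# Periodic orphan token for the production cluster, limited to that policy
bao token create -policy=openbao-unseal -orphan -period=24h&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;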
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Do not put the token in plain text in the config file. Provide it at runtime instead, for example via an environment variable injected from a Secret.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The following HCL configuration is used to configure the OpenBao cluster to use the transit seal.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-hcl hljs" data-lang="hcl"&gt;seal &amp;#34;transit&amp;#34; {
address = &amp;#34;https://external-vault.example.com:8200&amp;#34;
  # Token is provided at runtime, e.g. via the BAO_TOKEN environment variable
disable_renewal = &amp;#34;false&amp;#34;
key_name = &amp;#34;openbao-unseal&amp;#34;
mount_path = &amp;#34;transit/&amp;#34;
}&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;See the &lt;a href="https://openbao.org/docs/configuration/seal/transit/" target="_blank" rel="noopener"&gt;OpenBao transit seal&lt;/a&gt; documentation for further details.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect4"&gt;
&lt;h5 id="_dedicated_unseal_service_seal_only_openbao"&gt;Dedicated unseal service (seal-only OpenBao)&lt;/h5&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;A common and recommended pattern is to run a &lt;strong&gt;separate, standalone OpenBao instance&lt;/strong&gt; whose &lt;strong&gt;only&lt;/strong&gt; role is to provide the seal for your production OpenBao. In other words: the external OpenBao holds the unseal keys—via its Transit secrets engine—and the production OpenBao cluster uses the transit seal to auto-unseal by calling that external instance. No application secrets or other workloads run on the dedicated instance; it exists solely to unseal the production OpenBao.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;strong&gt;Why use a dedicated unseal service?&lt;/strong&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Separation of concerns:&lt;/strong&gt; The root of trust for unsealing lives outside the production cluster. If the production OpenBao is compromised or rebuilt, the seal keys remain in the dedicated instance.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Simpler operations:&lt;/strong&gt; You unseal and manage only one small, locked-down OpenBao (the seal service). The production OpenBao unseals itself automatically via transit.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;No cloud KMS required:&lt;/strong&gt; On-premise or air-gapped environments can use this pattern instead of AWS KMS, Azure Key Vault, or GCP Cloud KMS.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Clear trust boundary:&lt;/strong&gt; The dedicated instance can be hardened, network-isolated, and backed up independently.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;strong&gt;Architecture (high level):&lt;/strong&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Dedicated unseal service:&lt;/strong&gt; A standalone OpenBao (single node is often enough) with the Transit secrets engine enabled and a dedicated transit key (e.g. &lt;code&gt;production-openbao-unseal&lt;/code&gt;). This instance is initialised and unsealed once; you keep its unseal keys and root token very secure. It does not store application secrets.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Production OpenBao:&lt;/strong&gt; Your HA OpenBao cluster (e.g. in Kubernetes) is configured with &lt;code&gt;seal &amp;#34;transit&amp;#34;&lt;/code&gt; pointing at the dedicated service URL and a token that has permission only to use that transit key. On startup, each production node asks the dedicated OpenBao to decrypt its seal blob and thus unseals automatically.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_migration_from_shamir_to_auto_unseal"&gt;Migration from Shamir to auto-unseal&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You might wonder if you can migrate from Shamir to auto-unseal. The answer is yes, you can.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;If OpenBao was previously initialised with &lt;strong&gt;Shamir&lt;/strong&gt; unseal keys and you want to switch to any auto-unseal backend, you can do the following:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;Plan a short maintenance window. With Raft HA, nodes can be restarted one at a time.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add the appropriate &lt;code&gt;seal &amp;#34;…​&amp;#34;&lt;/code&gt; block to the config and deploy (e.g. via Argo CD).&lt;/p&gt;
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Do NOT re-run &lt;code&gt;bao operator init&lt;/code&gt;; the cluster is already initialised.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Unseal the &lt;strong&gt;leader&lt;/strong&gt; with the existing Shamir unseal keys.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run &lt;strong&gt;seal migration&lt;/strong&gt; on the leader:&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;bao operator unseal -migrate&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Restart the leader. It should auto-unseal via the new seal.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Update and restart standby nodes. They should auto-unseal as well.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
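&lt;div class="paragraph"&gt;
&lt;p&gt;As a sketch, steps 3 and 4 on the leader boil down to the following (the migrate command must be repeated until the Shamir key threshold is reached):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Run once per required key share; each call prompts for one Shamir key
bao operator unseal -migrate

# After the restart, confirm the new seal type and that the node is unsealed
bao status&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;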
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_resources"&gt;Resources&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org/docs/concepts/seal/" target="_blank" rel="noopener"&gt;OpenBao seal concepts&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org/docs/platform/k8s/" target="_blank" rel="noopener"&gt;OpenBao on Kubernetes&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org/docs/configuration/seal/awskms/" target="_blank" rel="noopener"&gt;AWS KMS seal&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org/docs/configuration/seal/azurekeyvault/" target="_blank" rel="noopener"&gt;Azure Key Vault seal&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org/docs/configuration/seal/gcpckms/" target="_blank" rel="noopener"&gt;Google Cloud KMS seal&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org/docs/configuration/seal/static/" target="_blank" rel="noopener"&gt;Static seal&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org/docs/configuration/seal/transit/" target="_blank" rel="noopener"&gt;Transit seal&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>The Guide to OpenBao - GitOps Deployment with Argo CD - Part 5</title><link>https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-20-openbao-part-5-gitops-deployment/</link><pubDate>Fri, 20 Feb 2026 00:00:00 +0000</pubDate><guid>https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-20-openbao-part-5-gitops-deployment/</guid><description>&lt;div class="paragraph"&gt;
&lt;p&gt;Following the GitOps mantra &amp;#34;If it is not in Git, it does not exist&amp;#34;, this article demonstrates how to deploy and manage OpenBao using Argo CD. This approach provides version control, audit trails, and declarative management for your secret management infrastructure.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_introduction"&gt;Introduction&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Deploying OpenBao via GitOps offers significant advantages:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Version Control&lt;/strong&gt;: All configuration changes are tracked in Git&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Audit Trail&lt;/strong&gt;: Who changed what, when, and why&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Declarative&lt;/strong&gt;: Desired state is defined, not imperative commands&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Reproducible&lt;/strong&gt;: Same deployment process across environments&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Self-healing&lt;/strong&gt;: Argo CD ensures actual state matches desired state&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;However, there are challenges specific to secret management:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Initial unsealing requires manual intervention or automation&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Root tokens and unseal keys must be handled carefully&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Chicken-and-egg problem: How to store OpenBao secrets before OpenBao exists?&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
This article focuses on deploying the Argo CD Applications for OpenBao. The problem of automatic unsealing will be addressed in the next article.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_prerequisites"&gt;Prerequisites&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Before you begin, ensure you have:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;OpenShift GitOps (Argo CD) installed and configured&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A Git repository for your cluster configuration&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Understanding of the App-of-Apps pattern (see &lt;a href="https://blog.stderr.at/gitopscollection/2024-04-02-configure_app_of_apps/"&gt;Configure App-of-Apps&lt;/a&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The official OpenBao Helm chart from &lt;a href="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-13-openbao-part-3-openshift-deployment/"&gt;Part 3&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_the_wrapper_helm_chart"&gt;The (Wrapper) Helm Chart&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The official OpenBao Helm Chart works well for deploying OpenBao itself. However, additional settings such as certificates are not covered there. Therefore, I created a (wrapper) Helm Chart that includes the official OpenBao Helm Chart and a &lt;a href="https://github.com/tjungbauer/helm-charts/tree/main/charts/cert-manager" target="_blank" rel="noopener"&gt;Cert-Manager Helm Chart&lt;/a&gt; I created a while ago, and that allows you to add additional objects in the templates folder.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;I will follow the same setup as discussed in &lt;a href="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-17-openbao-part-4-enabling-tls"&gt;Part 4&lt;/a&gt;. This means:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;Create Issuer for Cert-Manager&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create CA Certificate for OpenBao&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create Certificate for OpenBao Server&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create Certificate for OpenBao Agent Injector&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install OpenBao Helm Chart&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;However, there is &lt;strong&gt;one significant issue&lt;/strong&gt; with this approach: we need the certificate details (especially the CA certificate) before we can install the OpenBao Helm Chart, since the certificate must be referenced in the values file. This is a chicken-and-egg problem—particularly if you use self-signed CA certificates.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Since you typically create the CA certificate first and only once (or have it already), a separate Application for the CA certificate is a good approach. Once this is synchronised, the OpenBao Application can be updated and deployed.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This means we will need two Helm Charts: one for the CA certificate and one for OpenBao itself.&lt;/p&gt;
&lt;/div&gt;
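&lt;div class="paragraph"&gt;
&lt;p&gt;For illustration, the two charts can then be wired up as two separate Argo CD Applications. A minimal sketch for the CA chart (repository URL, project and Argo CD namespace are placeholders; the path mirrors the example repository):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: openbao-ca-certificate
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/&amp;lt;your-org&amp;gt;/openshift-clusterconfig-gitops.git
    path: clusters/management-cluster/openbao-ca-certificate
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: openbao
  syncPolicy:
    automated:
      prune: true
      selfHeal: true&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;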
&lt;div class="sect2"&gt;
&lt;h3 id="_helm_chart_for_the_ca_certificate"&gt;Helm Chart for the CA Certificate&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Create a new Helm Chart for the CA certificate. This will use the &lt;a href="https://github.com/tjungbauer/helm-charts/tree/main/charts/cert-manager" target="_blank" rel="noopener"&gt;Cert-Manager Helm Chart&lt;/a&gt; as a dependency and will create the following:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Issuer for Cert-Manager&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;CA certificate, requested from the Issuer&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;openbao&lt;/strong&gt; namespace&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The example I am using can be found at &lt;a href="https://github.com/tjungbauer/openshift-clusterconfig-gitops/tree/main/clusters/management-cluster/openbao-ca-certificate" target="_blank" rel="noopener"&gt;GitOps openbao-ca-certificate&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Please check the &lt;a href="https://blog.stderr.at/gitopscollection/2023-12-28-gitops-repostructure/" target="_blank" rel="noopener"&gt;GitOps Repository Structure&lt;/a&gt; and subsequent articles for more information on how to structure your Git repository.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In the &lt;strong&gt;Chart.yaml&lt;/strong&gt; file you can see two dependencies:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;dependencies:
- name: tpl
version: ~1.0.0
repository: https://charts.stderr.at/
- name: cert-manager
version: ~2.0.3 &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
repository: https://charts.stderr.at/
condition: cert-manager.enabled&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The Cert-Manager Helm Chart version must be at least 2.0.3 to support the namespace parameter for the Issuer.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The &lt;strong&gt;tpl&lt;/strong&gt; dependency is a library that contains templates for the namespace and other shared components (I keep this library in my &lt;a href="https://blog.stderr.at/helm-charts/" target="_blank" rel="noopener"&gt;Helm Charts&lt;/a&gt; repository). The &lt;strong&gt;cert-manager&lt;/strong&gt; dependency is the Cert-Manager Helm Chart.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The &lt;strong&gt;values.yaml&lt;/strong&gt; file below contains the configuration for the Cert-Manager:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
All settings that are passed to a subchart (cert-manager) must be prefixed with the name of the subchart. In this case, &lt;code&gt;cert-manager.&lt;/code&gt;.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;namespace:
create: true &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
name: &amp;#34;openbao&amp;#34;
description: &amp;#34;OpenBao Namespace&amp;#34;
displayName: &amp;#34;OpenBao Namespace&amp;#34;
additionalLabels:
pod-security.kubernetes.io/audit: restricted
pod-security.kubernetes.io/audit-version: latest
pod-security.kubernetes.io/warn: restricted
pod-security.kubernetes.io/warn-version: latest
cert-manager: &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
enabled: true &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
issuer:
# Name of issuer
- name: openbao-selfsigned &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
# -- Syncwave to create this issuer
syncwave: 5 &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
# -- Type can be either ClusterIssuer or Issuer
type: Issuer
# -- Enable this issuer.
# @default -- false
enabled: true
# -- Namespace for Issuer (ignored for ClusterIssuer). Defaults to the default namespace when not set.
namespace: openbao &lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;(6)&lt;/b&gt;
# -- Create a selfSigned issuer. The SelfSigned issuer doesn&amp;#39;t represent a certificate authority as such, but instead denotes that certificates will &amp;#34;sign themselves&amp;#34; using a given private key.
selfSigned: true &lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;(7)&lt;/b&gt;
certificates: &lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;(8)&lt;/b&gt;
enabled: true
# List of certificates
certificate:
- name: openbao-ca
enabled: true
namespace: openbao
syncwave: &amp;#34;10&amp;#34;
secretName: openbao-ca-secret
duration: 87660h
dnsNames:
- openbao-ca
privateKey:
algorithm: ECDSA
size: 256
rotationPolicy: Always
isCA: true
# Reference to the issuer that shall be used.
issuerRef:
name: openbao-selfsigned
kind: Issuer&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Create the &lt;strong&gt;openbao&lt;/strong&gt; namespace with various settings.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Enable the Cert-Manager subchart. All settings below will be passed to the Cert-Manager subchart.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Enables Cert-Manager.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Name of the issuer.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Syncwave for this issuer; it must be lower than the syncwave of the certificate.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;6&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Namespace for the issuer; supported since version 2.0.3 of the Cert-Manager Helm Chart.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;7&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Creates a self-signed issuer.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;8&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The certificate section and all of its settings. See &lt;a href="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-17-openbao-part-4-enabling-tls/#_step_1_certificate_authority_ca_for_openbao"&gt;Part 4&lt;/a&gt; for more information.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
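&lt;div class="paragraph"&gt;
&lt;p&gt;Once Argo CD has synced the chart, the result can be verified with standard commands (a sketch; resource names are taken from the values above):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Issuer and Certificate should both report Ready
oc get issuer,certificate -n openbao

# Inspect the generated CA certificate
oc get secret openbao-ca-secret -n openbao -o jsonpath=&amp;#39;{.data.tls\.crt}&amp;#39; \
  | base64 -d | openssl x509 -noout -subject -enddate&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;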
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
This chart creates the &lt;strong&gt;openbao&lt;/strong&gt; namespace, which is required for the OpenBao deployment.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_helm_chart_for_the_openbao_deployment"&gt;Helm Chart for the OpenBao Deployment&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Create a new Helm Chart for the OpenBao deployment. This will use the official &lt;a href="https://github.com/openbao/openbao-helm" target="_blank" rel="noopener"&gt;OpenBao Helm chart&lt;/a&gt; and the &lt;a href="https://github.com/tjungbauer/helm-charts/tree/main/charts/cert-manager" target="_blank" rel="noopener"&gt;Cert-Manager Helm Chart&lt;/a&gt; as dependencies and will create the following:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;OpenBao deployment (including Route)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Issuer for the OpenBao server using the CA certificate&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Required OpenBao certificates based on the CA certificate&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;I have added the full values.yaml file below. Please check the &lt;a href="https://github.com/tjungbauer/openshift-clusterconfig-gitops/blob/main/clusters/management-cluster/openbao/values.yaml" target="_blank" rel="noopener"&gt;GitOps openbao/values.yaml&lt;/a&gt; to fetch the full chart.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Besides the two certificates added here in the values.yaml file, I made two further changes: &lt;code&gt;fullnameOverride&lt;/code&gt; and &lt;code&gt;nameOverride&lt;/code&gt; are both set to &lt;strong&gt;openbao&lt;/strong&gt;. This ensures predictable resource names when the chart is deployed via Argo CD.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
This values file contains the CA certificate in plain text. Whether that is acceptable is debatable—it is a public certificate, and here it is self-signed and used only in my demo environment.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;# Full values.yaml file for the OpenBao deployment
########################################################
# Cert-Manager
########################################################
cert-manager:
enabled: true
issuer:
# Name of issuer
- name: openbao-ca-issuer &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
# -- Syncwave to create this issuer
syncwave: &amp;#39;-1&amp;#39;
# -- Type can be either ClusterIssuer or Issuer
type: Issuer
# -- Enable this issuer.
# @default -- false
enabled: true
# -- Namespace for Issuer (ignored for ClusterIssuer). Defaults to the default namespace when not set.
namespace: openbao
ca:
secretName: openbao-ca-secret
certificates:
enabled: true
# List of certificates
certificate:
########################################################
# OpenBao Server TLS Certificate
########################################################
- name: openbao-server-tls &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
enabled: true
namespace: openbao
syncwave: &amp;#34;0&amp;#34;
secretName: openbao-server-tls
duration: 8760h
renewBefore: 720h
dnsNames:
- openbao.openbao.svc
- openbao.apps.ocp.aws.ispworld.at # Route host (adjust to your domain)
- openbao
- openbao.openbao
- openbao.openbao.svc
- openbao.openbao.svc.cluster.local
- openbao-internal
- openbao-internal.openbao
- openbao-internal.openbao.svc
- openbao-internal.openbao.svc.cluster.local
- openbao-0.openbao-internal
- openbao-0.openbao-internal.openbao
- openbao-0.openbao-internal.openbao.svc
- openbao-0.openbao-internal.openbao.svc.cluster.local
- openbao-1.openbao-internal
- openbao-1.openbao-internal.openbao
- openbao-1.openbao-internal.openbao.svc
- openbao-2.openbao-internal
- openbao-2.openbao-internal.openbao
- openbao-2.openbao-internal.openbao.svc
ipAddresses:
- 127.0.0.1
- &amp;#34;::1&amp;#34;
# Reference to the issuer that shall be used.
issuerRef:
name: openbao-ca-issuer
kind: Issuer
########################################################
# Injector TLS Certificate
########################################################
- name: injector-certificate &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
enabled: true
namespace: openbao
syncwave: &amp;#34;0&amp;#34;
secretName: injector-tls
duration: 24h
renewBefore: 144m
dnsNames:
- openbao-agent-injector-svc
- openbao-agent-injector-svc.openbao
- openbao-agent-injector-svc.openbao.svc
- openbao-agent-injector-svc.openbao.svc.cluster.local
# Reference to the issuer that shall be used.
issuerRef:
name: openbao-ca-issuer
kind: Issuer
########################################################
# OpenBao Deployment
########################################################
openbao: &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
# Override the full name of the deployment via Argo CD
fullnameOverride: openbao
nameOverride: openbao
global:
# Enable OpenShift-specific settings
openshift: true
# -- The namespace to deploy to. Defaults to the `helm` installation namespace.
namespace: openbao
# Required when TLS is enabled: tells the chart to use HTTPS for readiness/liveness
# probes and for in-pod API_ADDR (127.0.0.1:8200). Otherwise you get &amp;#34;client sent
# an HTTP request to an HTTPS server&amp;#34; from the probes.
tlsDisable: false
server:
extraEnvironmentVars:
BAO_CACERT: /openbao/tls/openbao-server-tls/ca.crt
# High Availability configuration
ha:
enabled: true
replicas: 3
# Raft storage configuration
raft:
enabled: true
setNodeId: true
config: |
ui = true
listener &amp;#34;tcp&amp;#34; {
tls_disable = 0
address = &amp;#34;[::]:8200&amp;#34;
cluster_address = &amp;#34;[::]:8201&amp;#34;
tls_cert_file = &amp;#34;/openbao/tls/openbao-server-tls/tls.crt&amp;#34;
tls_key_file = &amp;#34;/openbao/tls/openbao-server-tls/tls.key&amp;#34;
tls_min_version = &amp;#34;tls12&amp;#34;
telemetry {
unauthenticated_metrics_access = &amp;#34;true&amp;#34;
}
}
storage &amp;#34;raft&amp;#34; {
path = &amp;#34;/openbao/data&amp;#34;
retry_join {
leader_api_addr = &amp;#34;https://openbao-0.openbao-internal:8200&amp;#34;
leader_tls_servername = &amp;#34;openbao-0.openbao-internal&amp;#34;
leader_ca_cert_file = &amp;#34;/openbao/tls/openbao-server-tls/ca.crt&amp;#34;
}
retry_join {
leader_api_addr = &amp;#34;https://openbao-1.openbao-internal:8200&amp;#34;
leader_tls_servername = &amp;#34;openbao-1.openbao-internal&amp;#34;
leader_ca_cert_file = &amp;#34;/openbao/tls/openbao-server-tls/ca.crt&amp;#34;
}
retry_join {
leader_api_addr = &amp;#34;https://openbao-2.openbao-internal:8200&amp;#34;
leader_tls_servername = &amp;#34;openbao-2.openbao-internal&amp;#34;
leader_ca_cert_file = &amp;#34;/openbao/tls/openbao-server-tls/ca.crt&amp;#34;
}
}
service_registration &amp;#34;kubernetes&amp;#34; {}
telemetry {
prometheus_retention_time = &amp;#34;30s&amp;#34;
disable_hostname = true
}
route:
enabled: true
host: openbao.apps.ocp.aws.ispworld.at
tls:
# Route terminates client TLS; backend can use reencrypt or passthrough
termination: reencrypt
insecureEdgeTerminationPolicy: Redirect
destinationCACertificate: |
-----BEGIN CERTIFICATE-----
MIIBWzCCAQCgAwIBAgIQNdbg4KIu9oi6dDClE8drmjAKBggqhkjOPQQDAjAAMB4X
DTI2MDIxODA5NDIxNFoXDTM2MDIxODIxNDIxNFowADBZMBMGByqGSM49AgEGCCqG
SM49AwEHA0IABIeYw35/kEHvyctLtOA5xMlyQNxUtXtfBbZMUfPh6AN5MFjIGuNS
cn07a3EpSpfY6/3DaPpu+4wYNFlc+/qDNYajXDBaMA4GA1UdDwEB/wQEAwICpDAP
BgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBQgyRk5Vv9K0VULcCgga5mRg4O9kTAY
BgNVHREBAf8EDjAMggpvcGVuYmFvLWNhMAoGCCqGSM49BAMCA0kAMEYCIQCwQ7lZ
Q0jzUjJFzpTGkQjU2+OB159LIQMSbSQ7dz8nVQIhAIDa7f87tjQxDxbJio+/vJx2
awFaWnueGOOQpvwCcV/+
-----END CERTIFICATE-----
extraVolumes:
- type: secret
name: openbao-server-tls
path: /openbao/tls
readOnly: true
extraVolumeMounts:
- name: openbao-server-tls
mountPath: /openbao/tls
readOnly: true
# Resource requests and limits
resources:
requests:
memory: 256Mi
cpu: 250m
limits:
memory: 1Gi
cpu: 1000m
# Persistent volume for data
dataStorage:
enabled: true
size: 10Gi
# storageClass: &amp;#34;gp3-csi&amp;#34;
# Injector configuration
injector:
enabled: true
replicas: 2 # HA for the injector too
certs:
secretName: injector-tls
# For a private CA: set caBundle to the CA cert (PEM) so the Kubernetes API server trusts the injector webhook. E.g. oc get secret openbao-ca-secret -n openbao -o jsonpath=&amp;#39;{.data.ca\.crt}&amp;#39; | base64 -d
caBundle: &amp;#34;LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJXekNDQVFDZ0F3SUJBZ0lRTmRiZzRLSXU5b2k2ZERDbEU4ZHJtakFLQmdncWhrak9QUVFEQWpBQU1CNFgKRFRJMk1ESXhPREE1TkRJeE5Gb1hEVE0yTURJeE9ESXhOREl4TkZvd0FEQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxRwpTTTQ5QXdFSEEwSUFCSWVZdzM1L2tFSHZ5Y3RMdE9BNXhNbHlRTnhVdFh0ZkJiWk1VZlBoNkFONU1GaklHdU5TCmNuMDdhM0VwU3BmWTYvM0RhUHB1KzR3WU5GbGMrL3FETllhalhEQmFNQTRHQTFVZER3RUIvd1FFQXdJQ3BEQVAKQmdOVkhSTUJBZjhFQlRBREFRSC9NQjBHQTFVZERnUVdCQlFneVJrNVZ2OUswVlVMY0NnZ2E1bVJnNE85a1RBWQpCZ05WSFJFQkFmOEVEakFNZ2dwdmNHVnVZbUZ2TFdOaE1Bb0dDQ3FHU000OUJBTUNBMGtBTUVZQ0lRQ3dRN2xaClEwanpVakpGenBUR2tRalUyK09CMTU5TElRTVNiU1E3ZHo4blZRSWhBSURhN2Y4N3RqUXhEeGJKaW8rL3ZKeDIKYXdGYVdudWVHT09RcHZ3Q2NWLysKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=&amp;#34;
certName: tls.crt
keyName: tls.key
# UI configuration
ui:
enabled: true&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Issuer for OpenBao. The syncwave is set to &amp;#39;-1&amp;#39; to create the issuer before the certificate request and the OpenBao deployment.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Certificate for the OpenBao server, with all DNS names and IP addresses used to reach it. The syncwave is set to &amp;#39;0&amp;#39; so the certificate is created after the issuer.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Certificate for the OpenBao Agent Injector, with all DNS names used to reach it. The syncwave is set to &amp;#39;0&amp;#39; so the certificate is created after the issuer.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;OpenBao deployment configuration; the two overrides fullnameOverride and nameOverride are both set to &lt;strong&gt;openbao&lt;/strong&gt;. See &lt;a href="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-17-openbao-part-4-enabling-tls/#_step_5_helm_values_for_server_tls"&gt;Part 4&lt;/a&gt; for more information.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_creating_argo_cd_applications"&gt;Creating Argo CD Applications&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Now that the Helm charts are ready, we need to create the Argo CD Applications for the CA certificate and the OpenBao deployment.
If you have been following our blog, you will know that these Application resources are created automatically whenever something is added to the folder clusters/management-cluster :)
This is done by leveraging ApplicationSet resources and is fully described in the &lt;a href="https://blog.stderr.at/gitopscollection/2023-12-28-gitops-repostructure/" target="_blank" rel="noopener"&gt;GitOps Repository Structure&lt;/a&gt; and subsequent articles.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;For the sake of simplicity, I will show the created Application resources below. The setup is the same for both; they simply target a different path in the Git repository.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;# Full Application resource for the OpenBao CA Certificate
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: in-cluster-openbao-ca-certificate
namespace: openshift-gitops
spec:
destination:
name: in-cluster &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
namespace: default &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
info:
- name: Description
value: ApplicationSet that Deploys on Management Cluster Configuration (using Git Generator)
project: in-cluster &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
source:
path: clusters/management-cluster/openbao-ca-certificate &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
repoURL: &amp;#39;https://github.com/tjungbauer/openshift-clusterconfig-gitops&amp;#39; &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
targetRevision: main
syncPolicy:
retry:
backoff:
duration: 5s
factor: 2
maxDuration: 3m
limit: 5
---
# Full Application resource for the OpenBao deployment
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: in-cluster-openbao
namespace: openshift-gitops
spec:
destination:
name: in-cluster &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
namespace: openbao &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
info:
- name: Description
value: ApplicationSet that Deploys on Management Cluster Configuration (using Git Generator)
project: in-cluster &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
source:
path: clusters/management-cluster/openbao &lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;(6)&lt;/b&gt;
repoURL: &amp;#39;https://github.com/tjungbauer/openshift-clusterconfig-gitops&amp;#39; &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
targetRevision: main
syncPolicy:
retry:
backoff:
duration: 5s
factor: 2
maxDuration: 3m
limit: 5&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Target cluster; here, the local cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Target namespace on the destination cluster; for the second Application, this is where the OpenBao deployment will be installed.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Argo CD Project (must exist)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Path to the Git repository for the CA certificate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;URL of the Git repository&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;6&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Path to the OpenBao deployment&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will create the following Applications:
- in-cluster-openbao-ca-certificate
- in-cluster-openbao&lt;/p&gt;
&lt;/div&gt;
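&lt;div class="paragraph"&gt;
&lt;p&gt;To double-check the result, you can list the generated Application resources with the oc CLI. This is a small sketch; it assumes you are logged in to the cluster where the openshift-gitops instance is running:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-shell hljs" data-lang="shell"&gt;# List the Applications created by the ApplicationSet
oc get applications.argoproj.io -n openshift-gitops | grep openbao&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;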
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The ApplicationSet adds the prefix &lt;strong&gt;in-cluster-&lt;/strong&gt; to each Application name so that they remain unique in Argo CD.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/images/part5_openbao_argocd.png" alt="OpenBao Argo CD Applications"/&gt;
&lt;/div&gt;
&lt;div class="title"&gt;Figure 1. Argo CD: OpenBao Argo CD Applications&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The first Application to synchronise is &lt;strong&gt;in-cluster-openbao-ca-certificate&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;It creates the following:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Namespace&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;CA Issuer&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;CA Certificate&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Then synchronise &lt;strong&gt;in-cluster-openbao&lt;/strong&gt;. It creates:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;OpenBao deployment&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;OpenBao Agent Injector&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;OpenBao Server TLS Certificate&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;OpenBao Agent Injector TLS Certificate&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_openbao_is_runningwhat_next"&gt;OpenBao is running—what next?&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Deploying OpenBao via GitOps gives you version control and declarative management for your secret management infrastructure.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Whilst OpenBao is running and managed by Argo CD, the next step is to configure it: you will need to handle initialisation (for a new cluster) and unsealing (see &lt;a href="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-13-openbao-part-3-openshift-deployment/"&gt;Part 3&lt;/a&gt; and &lt;a href="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-17-openbao-part-4-enabling-tls/"&gt;Part 4&lt;/a&gt;). That manual approach does not scale. In the next article, I will discuss ways to automate initialisation and unsealing.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Key takeaways:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Use sync waves to control deployment order&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Consider auto-unseal for production (Part 6)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Store initialisation data securely outside Git&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_resources"&gt;Resources&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://blog.stderr.at/gitopscollection/2024-04-02-configure_app_of_apps/" target="_blank" rel="noopener"&gt;Configure App-of-Apps&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://argo-cd.readthedocs.io/en/stable/user-guide/helm/" target="_blank" rel="noopener"&gt;Argo CD Helm Support&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://argo-cd.readthedocs.io/en/stable/user-guide/sync-waves/" target="_blank" rel="noopener"&gt;Argo CD Sync Waves&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>The Guide to OpenBao - Enabling TLS on OpenShift - Part 4</title><link>https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-17-openbao-part-4-enabling-tls/</link><pubDate>Tue, 17 Feb 2026 00:00:00 +0000</pubDate><guid>https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-17-openbao-part-4-enabling-tls/</guid><description>&lt;div class="paragraph"&gt;
&lt;p&gt;In &lt;a href="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-13-openbao-part-3-openshift-deployment/"&gt;Part 3&lt;/a&gt; we deployed OpenBao on OpenShift in HA mode with TLS disabled: the OpenShift Route terminates TLS at the edge, and traffic from the Route to the pods is plain HTTP. While this is fine for quick tests, a production-ready deployment should encrypt the entire path. This article explains why and how to enable TLS end-to-end using the cert-manager operator, what to consider, and the exact steps to achieve it.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_introduction"&gt;Introduction&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Part 3 uses &lt;code&gt;tls_disable = 1&lt;/code&gt; in the OpenBao listener and relies on the OpenShift Route for TLS. That gives encryption between the client and the Route, but &lt;strong&gt;not&lt;/strong&gt; between the Route and the OpenBao pods or between pods (e.g. Raft). Enabling TLS on OpenBao itself adds encryption in transit everywhere and aligns with defense-in-depth and compliance requirements. After all, we are talking about a secrets management system here and should not compromise security.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This part assumes:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;OpenBao is already deployed as in &lt;a href="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-13-openbao-part-3-openshift-deployment/"&gt;Part 3&lt;/a&gt; (HA with Raft, Agent Injector enabled).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The &lt;strong&gt;cert-manager operator&lt;/strong&gt; is installed and configured on your OpenShift cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The article &lt;a href="https://blog.stderr.at/openshift-platform/security/certificates/ssl-certificate-management/"&gt;SSL Certificate Management for OpenShift&lt;/a&gt; describes the setup and usage of the cert-manager operator as an example.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_why_enable_tls"&gt;Why Enable TLS?&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Enabling TLS for OpenBao makes sense for several reasons:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Encryption in transit&lt;/strong&gt;: Traffic between the Route and the pods, and between OpenBao peers (Raft), is encrypted. Secrets and tokens are never sent in plain text on the cluster network.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Defense in depth&lt;/strong&gt;: Even if the Route or network is misconfigured, backend traffic remains protected.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Compliance&lt;/strong&gt;: Many standards (e.g. PCI-DSS, SOC 2) require encryption in transit for sensitive data; TLS to the application (OpenBao) helps satisfy this.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Agent Injector&lt;/strong&gt;: The injector webhook is called by the Kubernetes API server. Using TLS for the webhook (with a valid certificate) is required for production and when running multiple replicas.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Consistency&lt;/strong&gt;: Using HTTPS everywhere simplifies client configuration and avoids mixing HTTP/HTTPS in the same environment.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_what_must_be_considered"&gt;What Must Be Considered?&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Two TLS contexts&lt;/strong&gt;: Each needs its own certificate and configuration.&lt;/p&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;OpenBao server (API and Raft)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Agent Injector (mutating webhook)&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Certificate SANs&lt;/strong&gt;: Server certificates must include &lt;strong&gt;all names&lt;/strong&gt; used to reach OpenBao: Route host, internal service names (e.g. openbao.openbao.svc, openbao-0.openbao-internal.openbao.svc), 127.0.0.1 and ::1 for in-pod traffic, and any external DNS you use. The injector certificate must match the injector Service DNS name (e.g. openbao-agent-injector-svc.openbao.svc).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;cert-manager&lt;/strong&gt;: Using cert-manager gives automatic issuance and renewal. You need a ClusterIssuer (or an Issuer) in the OpenBao namespace.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
In this example, we will use a self-signed CA for the OpenBao server and the Agent Injector. In a production environment, you should use a trusted CA.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;OpenShift Route&lt;/strong&gt;: With backend TLS enabled, you can keep the Route in reencrypt mode: the Route terminates TLS from the client and opens a new TLS connection to the pod. Alternatively, use passthrough if you want end-to-end TLS without re-encryption at the Route.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Raft join&lt;/strong&gt;: After switching to TLS, retry_join and cluster addresses must use https:// and the correct hostnames. Existing unseal keys and root token are unchanged; only the transport is different.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Clients&lt;/strong&gt;: CLI and applications must use https:// for BAO_ADDR and, if you use a private CA, BAO_CACERT (or the system trust store) so that the client trusts the server certificate.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
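&lt;div class="paragraph"&gt;
&lt;p&gt;The client configuration can be sketched as follows. This assumes the example Route host used in this article and that the CA is exported from the secret openbao-ca-secret, which is created in Step 1 below; adjust names and host to your environment:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-shell hljs" data-lang="shell"&gt;# Point the CLI at the Route (adjust the host to your domain)
export BAO_ADDR=https://openbao.apps.cluster.example.com
# Export the private CA so the client trusts the server certificate
oc get secret openbao-ca-secret -n openbao -o jsonpath=&amp;#39;{.data.ca\.crt}&amp;#39; | base64 -d &gt; openbao-ca.crt
export BAO_CACERT=$PWD/openbao-ca.crt
# Should now connect via HTTPS without certificate errors
bao status&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;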
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_prerequisites"&gt;Prerequisites&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;OpenBao HA deployment from &lt;a href="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-13-openbao-part-3-openshift-deployment/"&gt;Part 3&lt;/a&gt; (namespace &lt;code&gt;openbao&lt;/code&gt;, Helm release &lt;code&gt;openbao&lt;/code&gt;).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;cert-manager operator installed on OpenShift.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Sufficient rights to create Issuers, Certificates, and Secrets in the &lt;code&gt;openbao&lt;/code&gt; namespace.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_overview_of_steps"&gt;Overview of Steps&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;Create a &lt;strong&gt;Certificate Authority (CA)&lt;/strong&gt; in the &lt;code&gt;openbao&lt;/code&gt; namespace (or use an existing ClusterIssuer).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Issue a &lt;strong&gt;Certificate for the OpenBao server&lt;/strong&gt; (API + Raft) and store it in a Secret.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Issue a &lt;strong&gt;Certificate for the Agent Injector&lt;/strong&gt; and reference it in the Helm values.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Update &lt;strong&gt;Helm values&lt;/strong&gt; to mount the server cert and CA, and configure the listener and Raft for TLS.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Upgrade&lt;/strong&gt; the Helm release; initialize and unseal openbao-0 (new cluster) or re-unseal (existing).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Verify&lt;/strong&gt; access via HTTPS and configure clients (&lt;code&gt;BAO_ADDR&lt;/code&gt;, &lt;code&gt;BAO_CACERT&lt;/code&gt;).&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_step_1_certificate_authority_ca_for_openbao"&gt;Step 1: Certificate Authority (CA) for OpenBao&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;If you already have a ClusterIssuer (e.g. Let’s Encrypt or an enterprise CA), you can use it for the server and injector certificates and skip this step. For a self-signed CA in the OpenBao namespace (typical for internal cluster TLS), create the following.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
This self-signed CA is only for testing purposes. In a production environment, you should use a trusted CA, preferably your own.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;Create a self-signed CA Issuer in the openbao namespace:&lt;/p&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
You might already have your own CA or a ClusterIssuer. In that case, you can use it instead of creating a new one.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: openbao-selfsigned
namespace: openbao &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
spec:
selfSigned: {} &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The namespace where the OpenBao deployment is running.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The self-signed CA Issuer.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a self-signed CA Certificate in the openbao namespace:&lt;/p&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This requests the CA certificate from the self-signed Issuer created above. Because the issuer is self-signed, cert-manager issues the certificate fully automatically and stores it in the secret openbao-ca-secret, where it is available almost immediately.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: openbao-ca
namespace: openbao &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
spec:
isCA: true &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
commonName: OpenBao CA
secretName: openbao-ca-secret &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
duration: 87660h &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
privateKey: &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
algorithm: ECDSA
size: 256
rotationPolicy: Always &lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;(6)&lt;/b&gt;
issuerRef:
name: openbao-selfsigned &lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;(7)&lt;/b&gt;
kind: Issuer &lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;(8)&lt;/b&gt;
group: cert-manager.io&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The namespace where the OpenBao deployment is running.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The certificate is a CA certificate.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The name of the secret where the certificate and key will be stored.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The duration of the certificate. In this case 10 years.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The private key algorithm and size.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;6&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The rotation policy of the private key.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;7&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The issuer of the certificate.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;8&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The kind of the issuer. Can be Issuer or ClusterIssuer.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_step_2_create_an_issuer_for_the_openbao_server_and_agent_injector"&gt;Step 2: Create an Issuer for the OpenBao Server and Agent Injector&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Create an Issuer for the OpenBao Server and Agent Injector. This Issuer will reference the CA certificate and key stored in the secret openbao-ca-secret.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: openbao-ca-issuer
namespace: openbao
spec:
ca:
secretName: openbao-ca-secret&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The &lt;code&gt;secretName&lt;/code&gt; references the Secret created in Step 1, which holds the CA certificate and key.&lt;/p&gt;
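&lt;div class="paragraph"&gt;
&lt;p&gt;To verify that cert-manager accepted the Issuer, you can wait for its Ready condition. A small sketch, using the names from this article:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-shell hljs" data-lang="shell"&gt;# Wait until the Issuer reports Ready=True
oc wait issuer/openbao-ca-issuer -n openbao --for=condition=Ready --timeout=60s&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;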
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_step_3_certificate_for_the_openbao_server"&gt;Step 3: Certificate for the OpenBao Server&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Now it is time to create the certificate for the OpenBao server. The server certificate must include &lt;strong&gt;every hostname&lt;/strong&gt; used to reach OpenBao: the Route host, the headless service, and each Raft member. Adjust the dnsNames and ipAddresses to match your cluster and Route.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Create the following Certificate object:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: openbao-server-tls
namespace: openbao
spec:
secretName: openbao-server-tls &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
duration: 8760h &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
renewBefore: 720h &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
commonName: openbao.openbao.svc &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
dnsNames:
- openbao.apps.cluster.example.com # Route host (adjust to your domain) &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
- openbao &lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;(6)&lt;/b&gt;
- openbao.openbao
- openbao.openbao.svc
- openbao.openbao.svc.cluster.local
- openbao-internal
- openbao-internal.openbao
- openbao-internal.openbao.svc
- openbao-internal.openbao.svc.cluster.local
- openbao-0.openbao-internal
- openbao-0.openbao-internal.openbao
- openbao-0.openbao-internal.openbao.svc
- openbao-0.openbao-internal.openbao.svc.cluster.local
- openbao-1.openbao-internal
- openbao-1.openbao-internal.openbao
- openbao-1.openbao-internal.openbao.svc
- openbao-2.openbao-internal
- openbao-2.openbao-internal.openbao
- openbao-2.openbao-internal.openbao.svc
ipAddresses: &lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;(7)&lt;/b&gt;
- 127.0.0.1
- &amp;#34;::1&amp;#34;
issuerRef:
name: openbao-ca-issuer &lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;(8)&lt;/b&gt;
kind: Issuer
group: cert-manager.io&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The name of the secret where the certificate and key will be stored.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The duration of the certificate. In this case 1 year.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The duration before the certificate is renewed. In this case 30 days.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The common name of the certificate. This is the service name of the OpenBao server.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The Route host. Adjust to your domain.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;6&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The in-cluster service DNS names: the openbao Service, the openbao-internal headless Service, and each Raft member pod. All of them must be included.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;7&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Required for in-pod traffic: Readiness/liveness probes and local bao commands (e.g. raft join, operator unseal) connect to 127.0.0.1:8200. Without these IP SANs you get &amp;#34;tls: bad certificate&amp;#34; or &amp;#34;x509: cannot validate certificate for 127.0.0.1 because it doesn’t contain any IP SANs&amp;#34;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;8&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The issuer of the certificate; here the openbao-ca-issuer Issuer created earlier.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
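&lt;div class="paragraph"&gt;
&lt;p&gt;To apply the manifest and check the result, a short verification can look like this (the filename openbao-server-cert.yaml is only an assumption, pick any name):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc apply -f openbao-server-cert.yaml
oc wait --for=condition=Ready certificate/openbao-server-tls -n openbao --timeout=120s
# Inspect the SANs of the issued certificate (requires OpenSSL 1.1.1+)
oc get secret openbao-server-tls -n openbao -o jsonpath=&amp;#39;{.data.tls\.crt}&amp;#39; | base64 -d | openssl x509 -noout -ext subjectAltName&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;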
&lt;div class="paragraph"&gt;
&lt;p&gt;After a few moments the certificate will be ready and the secret will be created. cert-manager stores the signed certificate and key in the Secret openbao-server-tls under the keys tls.crt and tls.key (plus ca.crt with the issuing CA). The Helm chart can mount this secret for the OpenBao listener.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_step_4_certificate_for_the_agent_injector"&gt;Step 4: Certificate for the Agent Injector&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The Agent Injector runs as a webhook; the Kubernetes API server calls it over TLS. The certificate must match the Service DNS name of the injector. Create a Certificate that references the same CA Issuer. In this example we use a short-lived certificate valid for 24 hours, renewed when 10% of the validity period remains.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Save as openbao-injector-cert.yaml:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: injector-certificate
namespace: openbao
spec:
secretName: injector-tls &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
duration: 24h &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
renewBefore: 144m
commonName: Agent Inject Cert &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
dnsNames:
- openbao-agent-injector-svc &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
- openbao-agent-injector-svc.openbao
- openbao-agent-injector-svc.openbao.svc
- openbao-agent-injector-svc.openbao.svc.cluster.local
issuerRef:
name: openbao-ca-issuer
kind: Issuer
group: cert-manager.io&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The name of the secret where the certificate and key will be stored.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The duration of the certificate, in this case 24 hours. Renewal starts 144 minutes (10% of the validity period) before expiry.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The common name of the certificate. It is informational here; the API server validates the webhook against the DNS names below.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The injector service name is defined by the Helm chart. If you override the injector service name, adjust dnsNames accordingly.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
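&lt;div class="paragraph"&gt;
&lt;p&gt;Apply the manifest and wait until cert-manager reports the certificate as Ready:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc apply -f openbao-injector-cert.yaml
oc wait --for=condition=Ready certificate/injector-certificate -n openbao --timeout=120s&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;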
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_step_5_helm_values_for_server_tls"&gt;Step 5: Helm Values for Server TLS&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;With all the certificates created, we can update the Helm values so that OpenBao uses the server certificate and listens with TLS. You need to:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Mount the Secret openbao-server-tls into the OpenBao pods.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set the listener to use tls_cert_file and tls_key_file and disable tls_disable.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Switch Raft retry_join and cluster addresses to https://.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add the environment variable BAO_CACERT to the OpenBao pods.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Before we start, we need to export the CA certificate from the secret openbao-ca-secret and save its value:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get secret openbao-ca-secret -n openbao -o jsonpath=&amp;#39;{.data.ca\.crt}&amp;#39; | base64 -d&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will return the certificate like this:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Save this, you will need it in the next step.&lt;/p&gt;
&lt;/div&gt;
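&lt;div class="paragraph"&gt;
&lt;p&gt;It is convenient to keep both forms at hand: the decoded PEM (for BAO_CACERT and the Route destinationCACertificate) and the raw base64 value from the secret (for the injector caBundle):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Decoded PEM, saved to a file
oc get secret openbao-ca-secret -n openbao -o jsonpath=&amp;#39;{.data.ca\.crt}&amp;#39; | base64 -d &amp;gt; openbao-ca.crt
# Raw base64 value as stored in the secret (used for injector.certs.caBundle)
oc get secret openbao-ca-secret -n openbao -o jsonpath=&amp;#39;{.data.ca\.crt}&amp;#39;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;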
&lt;div class="paragraph"&gt;
&lt;p&gt;Now we need to create the Helm values file. This time we will enable TLS. Refer to Part 3 of this series to see what the initial values file looks like.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Create or update openbao-ha-values-tls.yaml (building on your Part 3 values):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;global:
# Enable OpenShift-specific settings
openshift: true
# Required when TLS is enabled: tells the chart to use HTTPS for readiness/liveness
# probes and for in-pod API_ADDR (127.0.0.1:8200). Otherwise you get &amp;#34;client sent
# an HTTP request to an HTTPS server&amp;#34; from the probes.
tlsDisable: false &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
server:
extraEnvironmentVars:
BAO_CACERT: /openbao/tls/openbao-server-tls/ca.crt &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
# High Availability configuration
ha:
enabled: true
replicas: 3
# Raft storage configuration
raft:
enabled: true
setNodeId: true
config: |
ui = true
listener &amp;#34;tcp&amp;#34; {
tls_disable = 0 &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
address = &amp;#34;[::]:8200&amp;#34;
cluster_address = &amp;#34;[::]:8201&amp;#34;
tls_cert_file = &amp;#34;/openbao/tls/openbao-server-tls/tls.crt&amp;#34; &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
tls_key_file = &amp;#34;/openbao/tls/openbao-server-tls/tls.key&amp;#34; &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
tls_min_version = &amp;#34;tls12&amp;#34; &lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;(6)&lt;/b&gt;
telemetry {
unauthenticated_metrics_access = &amp;#34;true&amp;#34;
}
}
storage &amp;#34;raft&amp;#34; {
path = &amp;#34;/openbao/data&amp;#34;
retry_join {
leader_api_addr = &amp;#34;https://openbao-0.openbao-internal:8200&amp;#34; &lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;(7)&lt;/b&gt;
leader_tls_servername = &amp;#34;openbao-0.openbao-internal&amp;#34;
leader_ca_cert_file = &amp;#34;/openbao/tls/openbao-server-tls/ca.crt&amp;#34; &lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;(8)&lt;/b&gt;
}
retry_join {
leader_api_addr = &amp;#34;https://openbao-1.openbao-internal:8200&amp;#34;
leader_tls_servername = &amp;#34;openbao-1.openbao-internal&amp;#34;
leader_ca_cert_file = &amp;#34;/openbao/tls/openbao-server-tls/ca.crt&amp;#34;
}
retry_join {
leader_api_addr = &amp;#34;https://openbao-2.openbao-internal:8200&amp;#34;
leader_tls_servername = &amp;#34;openbao-2.openbao-internal&amp;#34;
leader_ca_cert_file = &amp;#34;/openbao/tls/openbao-server-tls/ca.crt&amp;#34;
}
}
service_registration &amp;#34;kubernetes&amp;#34; {}
telemetry {
prometheus_retention_time = &amp;#34;30s&amp;#34;
disable_hostname = true
}
route:
enabled: true
host: openbao.apps.cluster.example.com
tls: &lt;i class="conum" data-value="9"&gt;&lt;/i&gt;&lt;b&gt;(9)&lt;/b&gt;
# Route terminates client TLS; backend can use reencrypt or passthrough
termination: reencrypt
insecureEdgeTerminationPolicy: Redirect
destinationCACertificate: | &lt;i class="conum" data-value="10"&gt;&lt;/i&gt;&lt;b&gt;(10)&lt;/b&gt;
-----BEGIN CERTIFICATE-----
# CA Certificate
-----END CERTIFICATE-----
extraVolumes: &lt;i class="conum" data-value="11"&gt;&lt;/i&gt;&lt;b&gt;(11)&lt;/b&gt;
- type: secret
name: openbao-server-tls
path: /openbao/tls
readOnly: true
extraVolumeMounts: &lt;i class="conum" data-value="12"&gt;&lt;/i&gt;&lt;b&gt;(12)&lt;/b&gt;
- name: openbao-server-tls
mountPath: /openbao/tls
readOnly: true
# Resource requests and limits
resources:
requests:
memory: 256Mi
cpu: 250m
limits:
memory: 1Gi
cpu: 1000m
# Persistent volume for data
dataStorage:
enabled: true
size: 10Gi
# storageClass: &amp;#34;gp3-csi&amp;#34;
# Injector configuration
injector:
enabled: true
replicas: 2 # HA for the injector too
certs: &lt;i class="conum" data-value="13"&gt;&lt;/i&gt;&lt;b&gt;(13)&lt;/b&gt;
secretName: injector-tls
# For a private CA: set caBundle to the CA cert (PEM) so the Kubernetes API server trusts the injector webhook. E.g. oc get secret openbao-ca-secret -n openbao -o jsonpath=&amp;#39;{.data.ca\.crt}&amp;#39; | base64 -d
caBundle: &amp;#34;BASE64_ENCODED_CA_CERTIFICATE&amp;#34;
certName: tls.crt
keyName: tls.key
# UI configuration
ui:
enabled: true&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;global.tlsDisable&lt;/strong&gt;: Set to false when the server listener uses TLS. This makes the chart use HTTPS for readiness/liveness probes and for the in-pod API_ADDR env var. If you leave it true (default), probes and local clients will use HTTP and you will see &amp;#34;client sent an HTTP request to an HTTPS server&amp;#34;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The environment variable BAO_CACERT points to the CA certificate file. This is helpful when executing the bao command inside the container.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The listener tls_disable is set to 0 to enable TLS.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The listener tls_cert_file is set to the certificate file path.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The listener tls_key_file is set to the key file path.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;6&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The listener tls_min_version is set to tls12.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;7&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The leader API address is set to the HTTPS address.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;8&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The leader CA certificate file is set to the CA certificate file path.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="9"&gt;&lt;/i&gt;&lt;b&gt;9&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The Route tls termination is set to reencrypt.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="10"&gt;&lt;/i&gt;&lt;b&gt;10&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The destination CA certificate must contain the CA PEM exported earlier in this step. Be sure not to add any extra lines or spaces.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="11"&gt;&lt;/i&gt;&lt;b&gt;11&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;extraVolumes adds the Secret openbao-server-tls as a volume; the chart mounts each entry at path plus name, here /openbao/tls/openbao-server-tls/.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="12"&gt;&lt;/i&gt;&lt;b&gt;12&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;extraVolumeMounts mounts the named volume into the server container at /openbao/tls.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="13"&gt;&lt;/i&gt;&lt;b&gt;13&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The injector certs reference the injector-tls secret. The caBundle must be the base64-encoded CA PEM, i.e. the raw (undecoded) value of ca.crt from openbao-ca-secret.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The secret key names must match what cert-manager writes: tls.crt and tls.key. The OpenBao Helm chart mounts each entry in extraVolumes at path + name (e.g. with path: /openbao/tls and name: openbao-server-tls the secret is mounted at /openbao/tls/openbao-server-tls/). The listener tls_cert_file and tls_key_file must use that full path.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
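&lt;div class="paragraph"&gt;
&lt;p&gt;Once the pods run with the new values, the mount path can be verified from inside a pod; a cert-manager secret issued by a CA issuer typically contains ca.crt, tls.crt and tls.key:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc exec openbao-0 -n openbao -- ls /openbao/tls/openbao-server-tls/
# Expected: ca.crt  tls.crt  tls.key&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;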
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Be sure to change the Route host in the Helm values to the one you are using. In addition, make sure that the CA certificate is added correctly to the Route. Do not add any extra lines or spaces and do not forget the | after destinationCACertificate: and the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_step_6_upgrade_the_helm_release"&gt;Step 6: Upgrade the Helm Release&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Perform a Helm upgrade so the new volumes and configuration are applied. Pods will restart and pick up TLS; Raft will use HTTPS for join and replication.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;helm upgrade openbao openbao/openbao \
--namespace openbao \
--values openbao-ha-values-tls.yaml&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Watch the rollout. The first pod (openbao-0) will log &amp;#34;raft retry join initiated&amp;#34; and stay 0/1 Ready until it is initialized and unsealed (new cluster) or until it forms quorum (existing cluster).&lt;/p&gt;
&lt;/div&gt;
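&lt;div class="paragraph"&gt;
&lt;p&gt;Two terminals help when following the restart; avoid waiting on oc rollout status, because the pods stay 0/1 until they are unsealed:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get pods -n openbao -w
# In a second terminal, follow openbao-0 until &amp;#34;raft retry join initiated&amp;#34; appears
oc logs -f openbao-0 -n openbao&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;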
&lt;div class="sect2"&gt;
&lt;h3 id="_new_cluster_initialize_and_unseal_openbao_0"&gt;New cluster: initialize and unseal openbao-0&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;openbao-0 will not become Ready until it is initialized and unsealed. Fetch the certificate, use port-forward and talk to OpenBao over HTTPS with the CA:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# 1. Save the CA cert (for BAO_CACERT)
oc get secret openbao-ca-secret -n openbao -o jsonpath=&amp;#39;{.data.ca\.crt}&amp;#39; | base64 -d &amp;gt; openbao-ca.crt
# 2. Port-forward to openbao-0 (background)
oc port-forward openbao-0 8200:8200 -n openbao &amp;amp;
# 3. Use HTTPS and CA cert
export BAO_ADDR=&amp;#39;https://127.0.0.1:8200&amp;#39;
export BAO_CACERT=&amp;#34;$PWD/openbao-ca.crt&amp;#34;
# 4. Check status, then initialize (only once) and unseal 3 times
bao status
bao operator init -key-shares=5 -key-threshold=3 -format=json &amp;gt; openbao-init.json
bao operator unseal
bao operator unseal
bao operator unseal
bao status&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
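&lt;div class="paragraph"&gt;
&lt;p&gt;Instead of entering the three unseal keys interactively, they can be read from the init output. This is only a sketch assuming jq is installed; treat openbao-init.json as highly sensitive and move it to a safe place afterwards:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Use the first three of the five generated key shares
for i in 0 1 2; do
  bao operator unseal &amp;#34;$(jq -r &amp;#34;.unseal_keys_b64[$i]&amp;#34; openbao-init.json)&amp;#34;
done&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;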
&lt;div class="paragraph"&gt;
&lt;p&gt;After unsealing, openbao-0 becomes leader and goes 1/1 Ready. Then start openbao-1 and openbao-2, join them to the Raft cluster and unseal them as in Part 3 (using https://).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Perform the following steps for the other pods - join the raft cluster and unseal them (3 times):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc exec -ti openbao-1 -- bao operator raft join https://openbao-0.openbao-internal:8200
# 3 times with 3 different unseal keys
oc exec -ti openbao-1 -- bao operator unseal
Unseal Key (will be hidden):&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Repeat the same steps for openbao-2.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_existing_cluster_re_unseal_after_restart"&gt;Existing cluster: re-unseal after restart&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The existing cluster is already initialized. Re-unseal the pods (and re-join the raft cluster if necessary).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get pods -n openbao -w
oc exec -ti openbao-1 -- bao operator raft join https://openbao-0.openbao-internal:8200
oc exec -ti openbao-1 -- bao operator unseal # three times; same for openbao-2&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Repeat the same steps for openbao-2.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_step_7_verify_and_use_https_from_clients"&gt;Step 7: Verify and Use HTTPS from Clients&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Verify server health over HTTPS via the Route (the -k flag skips certificate verification, which is good enough for a quick check):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;curl -k https://openbao.apps.cluster.example.com/v1/sys/health | jq&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
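&lt;div class="paragraph"&gt;
&lt;p&gt;The HTTP status code alone already indicates the cluster state (200 active, 429 standby, 501 not initialized, 503 sealed):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;curl -k -s -o /dev/null -w &amp;#39;%{http_code}\n&amp;#39; https://openbao.apps.cluster.example.com/v1/sys/health&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;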
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Be sure to use the Route host you configured in the Helm values, not the example domain.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;From your workstation, use the Route URL with HTTPS. If the Route host is signed by your internal CA, set BAO_CACERT to the CA file:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;export BAO_ADDR=&amp;#39;https://openbao.apps.cluster.example.com&amp;#39;
export BAO_CACERT=&amp;#39;/path/to/openbao-ca.crt&amp;#39;
bao status&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Be sure to use the Route host in the Helm values.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Agent Injector: New pods that use the injector should start without webhook certificate errors. Check injector logs if you see TLS or certificate errors:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc logs -n openbao -l app.kubernetes.io/name=openbao-agent-injector -f&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_troubleshooting"&gt;Troubleshooting&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_mutatingwebhookconfiguration_conflict_vault_k8s"&gt;MutatingWebhookConfiguration conflict (vault-k8s)&lt;/h3&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;conflict occurred while applying object ... MutatingWebhookConfiguration: Apply failed with 1 conflict: conflict with &amp;#34;vault-k8s&amp;#34; using ... .webhooks[name=&amp;#34;vault.hashicorp.com&amp;#34;].clientConfig.caBundle
this is a server-side apply (SSA) field ownership conflict. The OpenBao chart uses the same webhook name (vault.hashicorp.com) as the HashiCorp Vault agent injector for annotation compatibility. The clientConfig.caBundle field is still owned by a previous manager (e.g. a prior Vault Helm release or the Vault injector), so Helm cannot update it when you change the injector certificate.&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;strong&gt;Fix option 1&lt;/strong&gt; – Delete the webhook and re-upgrade (recommended)&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Remove the MutatingWebhookConfiguration so Helm can recreate it and own all fields. There will be a short window where the injector webhook is missing (new pods requesting injection may fail until the upgrade completes).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Replace RELEASE_NAME with your Helm release name (e.g. openbao)
RELEASE_NAME=openbao
oc delete mutatingwebhookconfiguration ${RELEASE_NAME}-agent-injector-cfg
# Re-run the upgrade
helm upgrade ${RELEASE_NAME} openbao/openbao \
--namespace openbao \
--values openbao-ha-values-tls.yaml&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;strong&gt;Fix option 2&lt;/strong&gt; – Force takeover of the conflicting field&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;If you cannot delete the webhook (e.g. in production), take over the caBundle field with server-side apply, then run the Helm upgrade again:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Export the MutatingWebhookConfiguration, set webhooks[0].clientConfig.caBundle to your injector CA (base64 PEM from openbao-ca-secret), then re-apply with --server-side --force-conflicts:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Get the current object and the new CA bundle
oc get mutatingwebhookconfiguration openbao-agent-injector-cfg -o yaml &amp;gt; mwc.yaml
CA_BUNDLE=$(oc get secret openbao-ca-secret -n openbao -o jsonpath=&amp;#39;{.data.ca\.crt}&amp;#39;)
# Edit mwc.yaml: set .webhooks[0].clientConfig.caBundle to the value of CA_BUNDLE (no quotes in YAML).
# Then apply with force-conflicts so Helm can later manage it:
oc apply -f mwc.yaml --server-side --force-conflicts
# Re-run the Helm upgrade
helm upgrade openbao openbao/openbao --namespace openbao --values openbao-ha-values-tls.yaml&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
If you still have HashiCorp Vault’s agent injector installed on the same cluster, ensure only one injector is active for a given namespace (e.g. via namespace selectors), or uninstall the Vault injector, to avoid two webhook configurations registering the same webhook name.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_route_ui_tls_bad_record_mac"&gt;Route / UI – tls: bad record MAC&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Pod logs show &lt;code&gt;tls: bad record MAC&lt;/code&gt; from the router IP. The Route is likely using &lt;strong&gt;edge&lt;/strong&gt; termination (HTTP to pod). Fix: Use reencrypt and set &lt;code&gt;destinationCACertificate&lt;/code&gt; (Step 5).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get route openbao -n openbao -o jsonpath=&amp;#39;{.spec.tls.termination}&amp;#39;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;If the OpenBao UI does not load and the pod logs show:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;http: TLS handshake error from 10.x.x.x:xxxxx: local error: tls: bad record MAC&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;the traffic is coming from the OpenShift router (the IP is typically a cluster pod IP).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;strong&gt;Cause&lt;/strong&gt;: The Route is likely using edge termination: the router terminates TLS at the edge and sends plain HTTP to the pod. The pod expects HTTPS, so the TLS layer receives non-TLS data and reports &amp;#34;bad record MAC&amp;#34;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;strong&gt;Fix&lt;/strong&gt;: Use reencrypt (or passthrough) and set destinationCACertificate so the router talks HTTPS to the pod. See Step 5 above. Quick check:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get route openbao -n openbao -o jsonpath=&amp;#39;{.spec.tls.termination}&amp;#39;
# Must be &amp;#34;reencrypt&amp;#34; or &amp;#34;passthrough&amp;#34;, not &amp;#34;edge&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;If the output is edge, patch the Route to reencrypt and set spec.tls.destinationCACertificate to the CA PEM (Step 5).&lt;/p&gt;
&lt;/div&gt;
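&lt;div class="paragraph"&gt;
&lt;p&gt;One way to patch the Route in place is to build a merge patch with the CA PEM JSON-encoded (a sketch, assuming jq is available):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# JSON-encode the CA PEM and build the merge patch
oc get secret openbao-ca-secret -n openbao -o jsonpath=&amp;#39;{.data.ca\.crt}&amp;#39; | base64 -d \
  | jq -Rs &amp;#39;{spec: {tls: {termination: &amp;#34;reencrypt&amp;#34;, destinationCACertificate: .}}}&amp;#39; &amp;gt; route-patch.json
oc patch route openbao -n openbao --type=merge --patch-file route-patch.json&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Keep the Helm values in sync as well, since the next helm upgrade manages the Route again.&lt;/p&gt;
&lt;/div&gt;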
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_tls_handshake_127_0_0_1_certificate_errors"&gt;TLS handshake / 127.0.0.1 certificate errors&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;If you see in the pod logs:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;remote error: tls: bad certificate (from 127.0.0.1)&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;or when running:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc exec -ti openbao-0 — bao operator raft join https://…​;: x509: cannot validate certificate for 127.0.0.1 because it doesn’t contain any IP SANs&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The server certificate does not include 127.0.0.1 (and optionally ::1) as Subject Alternative Names. Readiness/liveness probes and in-pod bao commands connect to the listener on 127.0.0.1, so the certificate must include these IP SANs.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;strong&gt;Fix&lt;/strong&gt;: Add ipAddresses to the OpenBao server Certificate and let cert-manager re-issue:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Edit the Certificate (or re-apply the YAML from Step 3 with ipAddresses added)
oc edit certificate openbao-server-tls -n openbao
# Add under spec:
# ipAddresses:
# - 127.0.0.1
# - &amp;#34;::1&amp;#34;
# cert-manager will issue a new cert; wait until the secret is updated
oc get certificate openbao-server-tls -n openbao
# Restart OpenBao pods so they load the new cert
oc rollout restart statefulset/openbao -n openbao&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_resources"&gt;Resources&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org/docs/platform/k8s/helm/configuration/" target="_blank" rel="noopener"&gt;OpenBao Helm configuration&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org/docs/platform/k8s/helm/examples/injector-tls-cert-manager/" target="_blank" rel="noopener"&gt;OpenBao Agent Injector TLS with cert-manager&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://cert-manager.io/docs/" target="_blank" rel="noopener"&gt;cert-manager documentation&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>The Guide to OpenBao - OpenShift Deployment with Helm - Part 3</title><link>https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-13-openbao-part-3-openshift-deployment/</link><pubDate>Fri, 13 Feb 2026 00:00:00 +0000</pubDate><guid>https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-13-openbao-part-3-openshift-deployment/</guid><description>&lt;div class="paragraph"&gt;
&lt;p&gt;After understanding standalone installation in &lt;a href="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-12-openbao-part-2-standalone-installation/" target="_blank" rel="noopener"&gt;Part 2&lt;/a&gt;, it is time to deploy OpenBao on OpenShift/Kubernetes using the official Helm chart. This approach provides high availability, Kubernetes-native management, and seamless integration with the OpenShift ecosystem.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_introduction"&gt;Introduction&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Deploying OpenBao on OpenShift/Kubernetes offers several advantages:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;High Availability&lt;/strong&gt;: Multiple replicas with automatic failover&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Kubernetes-native&lt;/strong&gt;: Managed by standard Kubernetes primitives&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Persistent Storage&lt;/strong&gt;: Data survives pod restarts via PVCs&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Integration&lt;/strong&gt;: Works with Kubernetes service accounts for authentication&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Easy to scale and manage&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The official &lt;a href="https://github.com/openbao/openbao-helm" target="_blank" rel="noopener"&gt;OpenBao Helm chart&lt;/a&gt; supports multiple deployment modes:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Dev&lt;/strong&gt;: Single server, in-memory storage (testing only)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Standalone&lt;/strong&gt;: Single server, persistent storage&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;HA&lt;/strong&gt;: Multiple servers with Raft consensus (recommended)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;External&lt;/strong&gt;: Connect to external OpenBao cluster&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_integrations"&gt;Integrations&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;OpenBao currently supports the following integrations, which load secrets into applications without requiring any changes to the application code:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Agent Injector&lt;/strong&gt;: A mutating webhook that automatically injects a sidecar container that retrieves and renews secrets.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;CSI Provider&lt;/strong&gt;: A vendor-neutral CSI driver that mounts secrets as volumes.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_prerequisites"&gt;Prerequisites&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Before deploying, ensure you have:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;OpenShift 4.12+ or Kubernetes 1.30+&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Helm 3.6+&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;oc&lt;/code&gt; or &lt;code&gt;kubectl&lt;/code&gt; CLI configured&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The OpenBao CLI (&lt;code&gt;bao&lt;/code&gt;) for initialization and unsealing (see &lt;a href="https://openbao.org/docs/install/" target="_blank" rel="noopener"&gt;OpenBao installation&lt;/a&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Cluster-admin privileges (for initial setup)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A storage class that supports ReadWriteOnce PVCs&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Verify prerequisites
oc version
helm version
# Check available storage classes
oc get storageclass&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_adding_the_helm_repository"&gt;Adding the Helm Repository&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;First, add the OpenBao Helm repository:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Add the OpenBao Helm repository
helm repo add openbao https://openbao.github.io/openbao-helm
# Update repository cache
helm repo update
# Search for available charts
helm search repo openbao
# Expected output (versions will differ):
# NAME             CHART VERSION   APP VERSION   DESCRIPTION
# openbao/openbao  0.x.x           2.x.x         Official OpenBao Helm chart&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
According to the official documentation, the Helm chart is new and under significant development. Always run it with &lt;code&gt;--dry-run&lt;/code&gt; before any install or upgrade to verify the changes.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_creating_the_namespace"&gt;Creating the Namespace&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Create a dedicated namespace for OpenBao:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
While I am using the &lt;strong&gt;oc&lt;/strong&gt; CLI, you can also use the &lt;strong&gt;kubectl&lt;/strong&gt; CLI as a drop-in replacement.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc new-project openbao&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_deployment_mode_high_availability_recommended"&gt;Deployment Mode: High Availability (Recommended)&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;For production environments, deploy in HA mode with integrated Raft storage. This is the recommended deployment mode and is straightforward to set up on Kubernetes and OpenShift.
The following values file uses some OpenShift-specific settings, which are marked in the callouts.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;We will first create a values file, deploy OpenBao, and then discuss what must be done to activate all OpenBao pods.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_create_ha_values_file"&gt;Create HA Values File&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Create &lt;code&gt;openbao-ha-values.yaml&lt;/code&gt;:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The values file is based on the official &lt;a href="https://github.com/openbao/openbao-helm/blob/main/charts/openbao/values.yaml" target="_blank" rel="noopener"&gt;values file of the chart&lt;/a&gt;; only modified or otherwise important values are listed here.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;global:
# Enable OpenShift-specific settings
openshift: true &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
server:
# High Availability configuration
ha:
enabled: true &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
replicas: 3 &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
# Raft storage configuration
raft:
enabled: true &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
setNodeId: true
config: | &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
ui = true
listener &amp;#34;tcp&amp;#34; {
tls_disable = 1
address = &amp;#34;[::]:8200&amp;#34;
cluster_address = &amp;#34;[::]:8201&amp;#34;
telemetry {
unauthenticated_metrics_access = &amp;#34;true&amp;#34;
}
}
storage &amp;#34;raft&amp;#34; {
path = &amp;#34;/openbao/data&amp;#34;
retry_join {
leader_api_addr = &amp;#34;http://openbao-0.openbao-internal:8200&amp;#34;
}
retry_join {
leader_api_addr = &amp;#34;http://openbao-1.openbao-internal:8200&amp;#34;
}
retry_join {
leader_api_addr = &amp;#34;http://openbao-2.openbao-internal:8200&amp;#34;
}
}
service_registration &amp;#34;kubernetes&amp;#34; {}
telemetry {
prometheus_retention_time = &amp;#34;30s&amp;#34;
disable_hostname = true
}
route:
enabled: true &lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;(6)&lt;/b&gt;
host: openbao.apps.cluster.example.com &lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;(7)&lt;/b&gt;
tls:
termination: edge
# Resource requests and limits
resources:
requests:
memory: 256Mi
cpu: 250m
limits:
memory: 1Gi
cpu: 1000m
# Persistent volume for data
dataStorage: &lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;(8)&lt;/b&gt;
enabled: true
size: 10Gi
# storageClass: &amp;#34;gp3-csi&amp;#34;
# Injector configuration
injector:
enabled: true
replicas: 2 # HA for the injector too &lt;i class="conum" data-value="9"&gt;&lt;/i&gt;&lt;b&gt;(9)&lt;/b&gt;
# UI configuration
ui: &lt;i class="conum" data-value="10"&gt;&lt;/i&gt;&lt;b&gt;(10)&lt;/b&gt;
enabled: true&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;OpenShift specific&lt;/strong&gt;: Activate OpenShift Mode: Critical setting, if you install on OpenShift. It adjusts the Helm chart to use Routes instead of Ingress and modifies RoleBindings to work with OpenShift’s stricter authentication.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;High Availability (HA): Deploys OpenBao as a StatefulSet rather than a Deployment.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Raft Consensus Quorum: Sets the cluster size to 3. Raft requires an &lt;strong&gt;odd number of nodes&lt;/strong&gt; to handle leader elections and avoid split-brain scenarios. This cluster can survive the loss of 1 node.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Integrated Raft Storage: Enables the internal Raft storage backend, removing the need for external dependencies like Consul or Etcd.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Server Configuration (HCL): The actual OpenBao server configuration file. Note that &lt;strong&gt;tls_disable = 1&lt;/strong&gt; is used because the OpenShift Route handles TLS termination at the edge, passing unencrypted traffic to the pod.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;6&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;OpenShift specific&lt;/strong&gt;: Route: Tells Helm to create an OpenShift Route object automatically.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;7&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;OpenShift specific&lt;/strong&gt;: Host: The external DNS address where users and applications will access the OpenBao API and UI.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;8&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Persistent Storage: Allocates a 10Gi Persistent Volume Claim (PVC) for each of the 3 pods to store encrypted data and Raft logs.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="9"&gt;&lt;/i&gt;&lt;b&gt;9&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Injector Redundancy: Runs 2 replicas of the sidecar injector. If the injector service is down, new application pods attempting to start with secrets will fail.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="10"&gt;&lt;/i&gt;&lt;b&gt;10&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Web UI: Enables the graphical dashboard service.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
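The sizing in callout 3 follows directly from Raft&#8217;s majority rule: a cluster of n voting nodes needs floor(n/2)+1 nodes for quorum, and therefore tolerates n minus quorum failures. A small sketch of the arithmetic:

```shell
# Raft quorum arithmetic: a cluster of n voting nodes needs a majority
# (n/2 + 1, integer division) to elect a leader and commit writes.
for n in 1 3 5 7; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "n=$n quorum=$quorum tolerates=$tolerated failure(s)"
done
```

With `replicas: 3` the quorum is 2, so exactly one pod may be lost; note that an even replica count adds cost without adding fault tolerance (4 nodes still only tolerate 1 failure).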
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_deploy_ha_cluster"&gt;Deploy HA Cluster&lt;/h3&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Deploy with HA values
helm install openbao openbao/openbao \
  --namespace openbao \
  --values openbao-ha-values.yaml&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_verify_the_deployment"&gt;Verify the Deployment&lt;/h3&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get pods -n openbao&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will result in the following output:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;NAME READY STATUS RESTARTS AGE
openbao-0 0/1 Running 0 60s &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
openbao-agent-injector-xxx 1/1 Running 0 60s
openbao-agent-injector-yyy 1/1 Running 0 60s&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Only 1 openbao pod (instead of 3) is running, and it is not in the ready state.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;As you can see, only one OpenBao pod exists so far, and it is not in the ready state. This is because OpenBao is not yet initialized and unsealed. Once that is done, the other pods will appear and join the cluster.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The OpenBao pods show 0/1 ready because they are sealed and need initialization.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This is also indicated in the logs of the openbao pod:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;2026-02-13T14:44:52.587Z [ERROR] core: failed to get raft challenge: leader_addr=http://openbao-0.openbao-internal.openbao.svc:8200
error=
| error during raft bootstrap init call: Error making API request.
|
| URL: PUT http://openbao-0.openbao-internal.openbao.svc:8200/v1/sys/storage/raft/bootstrap/challenge
| Code: 503. Errors:
|
| * Vault is sealed &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
2026-02-13T14:44:52.588Z [ERROR] core: failed to get raft challenge: leader_addr=http://openbao-2.openbao-internal.openbao.svc:8200 error=&amp;#34;error during raft bootstrap init call: Put \&amp;#34;http://openbao-2.openbao-internal.openbao.svc:8200/v1/sys/storage/raft/bootstrap/challenge\&amp;#34;: dial tcp: lookup openbao-2.openbao-internal.openbao.svc on 172.30.0.10:53: no such host&amp;#34; &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
2026-02-13T14:44:52.589Z [ERROR] core: failed to get raft challenge: leader_addr=http://openbao-1.openbao-internal.openbao.svc:8200 error=&amp;#34;error during raft bootstrap init call: Put \&amp;#34;http://openbao-1.openbao-internal.openbao.svc:8200/v1/sys/storage/raft/bootstrap/challenge\&amp;#34;: dial tcp: lookup openbao-1.openbao-internal.openbao.svc on 172.30.0.10:53: no such host&amp;#34;
2026-02-13T14:44:52.589Z [ERROR] core: failed to retry join raft cluster: retry=2s err=&amp;#34;failed to get raft challenge&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Vault is sealed: This means that the OpenBao service is not yet initialized and unsealed.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;No such host: The DNS records for openbao-1 and openbao-2 do not exist yet, because those pods have not been started.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_initializing_and_unsealing_openbao"&gt;Initializing and Unsealing OpenBao&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;After deployment, OpenBao needs to be initialized and unsealed. This is done on the first pod; once complete, the other pods will appear and can join the cluster. We will create a local port forwarding to the first pod to initialize it.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_initialize_the_cluster"&gt;Initialize the Cluster&lt;/h3&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;Create a local port forwarding to the first pod&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc port-forward openbao-0 8200:8200 -n openbao &amp;amp;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set environment variable&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;export BAO_ADDR=&amp;#39;http://127.0.0.1:8200&amp;#39;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check status&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;bao status&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will result in the following output:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;Key Value
--- -----
Seal Type shamir
Initialized false &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
Sealed true &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
Total Shares 0
Threshold 0
Unseal Progress 0/0
Unseal Nonce n/a
Version 2.5.0
Build Date 2026-02-04T16:19:33Z
Storage Type raft
HA Enabled true &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Not yet initialized&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Vault is sealed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;High Availability is enabled&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Initialize the cluster with 5 key shares and a key threshold of 3&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;bao operator init -key-shares=5 -key-threshold=3 -format=json &amp;gt; openbao-init.json&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will create the file &lt;code&gt;openbao-init.json&lt;/code&gt; with the unseal keys and root token.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;cat openbao-init.json&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Guard the &lt;code&gt;openbao-init.json&lt;/code&gt; file carefully. It contains the unseal keys and the root token!
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
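Because `-format=json` was used, the keys and token can be pulled out of the file programmatically instead of copying them by hand. The snippet below is a sketch: it assumes the field names written by `bao operator init -format=json` (`unseal_keys_b64`, `root_token`) and uses `python3` for JSON parsing. It first writes a small sample file so the layout is visible; point the second command at your real `openbao-init.json` instead.

```shell
# Sample file with the same layout as `bao operator init -format=json`
# (values shortened here; real unseal keys are long base64 strings).
cat > openbao-init.sample.json <<'EOF'
{"unseal_keys_b64": ["keyA", "keyB", "keyC", "keyD", "keyE"],
 "root_token": "s.example"}
EOF

# List the unseal keys and the root token from the init output.
python3 - openbao-init.sample.json <<'PY'
import json, sys

init = json.load(open(sys.argv[1]))
for i, key in enumerate(init["unseal_keys_b64"], 1):
    print(f"unseal key {i}: {key}")
print("root token:", init["root_token"])
PY
```

The first three keys extracted this way can also be fed to `bao operator unseal` one by one instead of typing them interactively.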
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_unseal_pod_openbao_0"&gt;Unseal Pod openbao-0&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;After initialization, we need to unseal the first pod. This is done by providing 3 different unseal keys (the configured threshold).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Unseal openbao-0 (3 times with different keys)
bao operator unseal # Enter first key
bao operator unseal # Enter second key
bao operator unseal # Enter third key&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This unseals openbao-0, which can be verified with the command &lt;code&gt;bao status&lt;/code&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
Total Shares 5
Threshold 3
Version 2.5.0
Build Date 2026-02-04T16:19:33Z
Storage Type raft
Cluster Name vault-cluster-80c01167
Cluster ID b81ecb85-9751-655a-95b7-69463dd13241
HA Enabled true
HA Cluster https://openbao-0.openbao-internal:8201
HA Mode active
Active Since 2026-02-13T14:46:13.292643992Z
Raft Committed Index 29
Raft Applied Index 29&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Vault is unsealed&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This makes openbao-0 ready, and openbao-1 starts up:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get pods
NAME READY STATUS RESTARTS AGE
openbao-0 1/1 Running 0 2m10s &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
openbao-1 0/1 Running 0 14s &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
openbao-agent-injector-98769cf97-r4stk 1/1 Running 0 2m12s
openbao-agent-injector-98769cf97-xldgm 1/1 Running 0 2m12s&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;openbao-0 is ready&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;openbao-1 is trying to start and wants to join the Raft cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_activate_pod_openbao_1"&gt;Activate Pod openbao-1&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Since OpenBao is already initialized, we can skip the initialization step. However, openbao-1 must still join the &lt;strong&gt;Raft cluster&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc exec -ti openbao-1 -- bao operator raft join http://openbao-0.openbao-internal:8200&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Once joined, we can unseal openbao-1 by again providing 3 different unseal keys.
Execute the following command 3 times:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc exec -ti openbao-1 -- bao operator unseal
Unseal Key (will be hidden):&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Now &lt;strong&gt;openbao-1&lt;/strong&gt; is ready, and openbao-2 starts up:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get pods
NAME READY STATUS RESTARTS AGE
openbao-0 1/1 Running 0 3m48s
openbao-1 1/1 Running 0 112s
openbao-2 0/1 Running 0 18s
openbao-agent-injector-98769cf97-r4stk 1/1 Running 0 3m50s
openbao-agent-injector-98769cf97-xldgm 1/1 Running 0 3m50s&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_activate_pod_openbao_2"&gt;Activate Pod openbao-2&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Now repeat the join and unseal steps for openbao-2:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc exec -ti openbao-2 -- bao operator raft join http://openbao-0.openbao-internal:8200
# Run 3 times (enter a different unseal key each time):
oc exec -ti openbao-2 -- bao operator unseal&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_verify_raft_cluster"&gt;Verify Raft Cluster&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You can verify the Raft cluster by logging in with the root token and then checking the Raft peer list.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc exec -ti openbao-0 -- bao login&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Check the peer list:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Check Raft peer list
oc exec -ti openbao-0 -- bao operator raft list-peers
Node Address State Voter
---- ------- ----- -----
openbao-0 openbao-0.openbao-internal:8201 leader true
openbao-1 openbao-1.openbao-internal:8201 follower true
openbao-2 openbao-2.openbao-internal:8201 follower true&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_accessing_the_ui"&gt;Accessing the UI&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Once unsealed, access the OpenBao UI:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_via_route_production"&gt;Via Route (Production)&lt;/h3&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get route openbao -n openbao
# Open browser: https://openbao.apps.cluster.example.com&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/images/part3_openbao_login_form.png?width=480px" alt="OpenBao Login Form"/&gt;
&lt;/div&gt;
&lt;div class="title"&gt;Figure 1. OpenBao Login Form&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Login with the root token from initialization, or with credentials once authentication is configured.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_upgrading_openbao"&gt;Upgrading OpenBao&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Keep your secrets management up to date. To upgrade an existing deployment:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Update Helm repository
helm repo update
# Check available versions
helm search repo openbao --versions
# Upgrade with your values file
helm upgrade openbao openbao/openbao \
  --namespace openbao \
  --values openbao-ha-values.yaml
# Watch the rolling update
oc get pods -n openbao -w&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
After an upgrade, restarted pods come up sealed and must be unsealed again.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_troubleshooting"&gt;Troubleshooting&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_pods_not_starting"&gt;Pods Not Starting&lt;/h3&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Check pod status
oc describe pod openbao-0 -n openbao
# Check pod logs
oc logs openbao-0 -n openbao
# Check events
oc get events -n openbao --sort-by=&amp;#39;.lastTimestamp&amp;#39;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_pvc_issues"&gt;PVC Issues&lt;/h3&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Check PVC status
oc get pvc -n openbao
# If pending, check storage class
oc describe pvc data-openbao-0 -n openbao&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_raft_join_failures"&gt;Raft Join Failures&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;If pods cannot join the Raft cluster:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Check internal DNS resolution
oc exec -it openbao-0 -n openbao -- nslookup openbao-internal
# Check connectivity between pods
oc exec -it openbao-0 -n openbao -- wget -O- http://openbao-1.openbao-internal:8200/v1/sys/health&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
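The `/v1/sys/health` endpoint encodes node state in its HTTP status code (the convention OpenBao inherited from Vault: 200 active, 429 standby, 501 not initialized, 503 sealed), so the connectivity check above tells you more than just reachability. A small hypothetical helper to translate the codes:

```shell
# Map /v1/sys/health HTTP status codes to node state
# (default codes; query flags such as ?standbyok=true change them).
health_state() {
  case "$1" in
    200) echo "initialized, unsealed, active" ;;
    429) echo "unsealed, standby" ;;
    501) echo "not initialized" ;;
    503) echo "sealed" ;;
    *)   echo "unexpected status: $1" ;;
  esac
}

# Capture the code, e.g. with:
#   code=$(curl -s -o /dev/null -w '%{http_code}' http://openbao-1.openbao-internal:8200/v1/sys/health)
health_state 503
```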
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_what_should_be_considered_next"&gt;What Should Be Considered Next?&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Securely store the unseal keys and root token&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure pod anti-affinity for true HA&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Consider auto-unseal for operational ease (upcoming article)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Put everything into a GitOps pipeline&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
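For the anti-affinity point, the chart exposes a `server.affinity` value. The fragment below is a sketch based on the layout of the upstream chart's values file (verify the templating against the chart version you deploy); a hard rule keeps the three server pods on distinct nodes, so a single node failure cannot break quorum:

```yaml
server:
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: {{ template "openbao.name" . }}
              app.kubernetes.io/instance: "{{ .Release.Name }}"
              component: server
          topologyKey: kubernetes.io/hostname
```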
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You now have OpenBao running on OpenShift in high-availability mode. This deployment:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Survives pod failures and restarts&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Uses Raft for distributed consensus&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Integrates with the OpenShift security model&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Is ready for production use (once unsealing is automated)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Key points to remember:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Use HA mode for production&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Store unseal keys securely&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure pod anti-affinity for true HA&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Consider auto-unseal for operational ease (upcoming article)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_resources"&gt;Resources&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org/docs/platform/k8s/helm" target="_blank" rel="noopener"&gt;OpenBao Helm Chart Documentation&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/openbao/openbao-helm" target="_blank" rel="noopener"&gt;OpenBao Helm Chart GitHub&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org/docs/platform/k8s/helm/run" target="_blank" rel="noopener"&gt;Running OpenBao on Kubernetes&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>Hosted Control Planes behind a Proxy</title><link>https://blog.stderr.at/openshift-platform/other-topics/2025-12-15-hosted-control-planes-and-proxy/</link><pubDate>Mon, 15 Dec 2025 00:00:00 +0000</pubDate><guid>https://blog.stderr.at/openshift-platform/other-topics/2025-12-15-hosted-control-planes-and-proxy/</guid><description>&lt;div class="paragraph"&gt;
&lt;p&gt;Recently, I encountered a problem deploying a Hosted Control Plane (HCP) at a customer site. The installation started successfully—etcd came up fine—but then it just stopped. The virtual machines were created, but they never joined the cluster. No OVN or Multus pods ever started.
The only meaningful message in the cluster-version-operator pod logs was:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-console hljs" data-lang="console"&gt;I1204 08:22:35.473783 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&amp;amp;cvo.SyncWorkerStatus{Generation:1, Failure:(*payload.UpdateError)(0xc0006313b0), Done:575, Total:623,&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This message appeared over and over again.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_problem_summary"&gt;Problem Summary&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;etcd started successfully, but the installation stalled&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;API server was running&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;VMs started but never joined the cluster&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;No OVN or Multus pods ever started&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The installation was stuck in a loop&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_troubleshooting"&gt;Troubleshooting&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Troubleshooting was not straightforward. The cluster-version-operator logs provided the only clue.
However, the VMs were already running, so we could log in to them—and there it was, the reason for the stalled installation.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_connect_to_a_vm"&gt;Connect to a VM&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To connect to a VM, use the &lt;code&gt;virtctl&lt;/code&gt; command. This connects you to the machine as the &lt;code&gt;core&lt;/code&gt; user:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
You can download &lt;code&gt;virtctl&lt;/code&gt; from the OpenShift Downloads page in the OpenShift web console.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
You need the SSH key for the &lt;code&gt;core&lt;/code&gt; user.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;virtctl ssh -n clusters-my-hosted-cluster core@vmi/my-node-pool-dsj7z-sss8w &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Replace &lt;strong&gt;my-hosted-cluster&lt;/strong&gt; with the name of your hosted cluster and &lt;strong&gt;my-node-pool-dsj7z-sss8w&lt;/strong&gt; with the name of your VM.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_verify_the_problem"&gt;Verify the Problem&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Once logged in, check the &lt;code&gt;journalctl -xf&lt;/code&gt; output. In case of a proxy issue, you’ll see an error like this:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;&amp;gt; Dec 10 06:23:09 my-node-pool2-gwvjh-rwtx8 sh[2143]: time=&amp;#34;2025-12-10T06:23:09Z&amp;#34; level=warning msg=&amp;#34;Failed, retrying in 1s ... (3/3). Error: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:500704de3ef374e61417cc14eda99585450c317d72f454dda0dadd5dda1ba57a: pinging container registry quay.io: Get \&amp;#34;https://quay.io/v2/\&amp;#34;: dial tcp 3.209.93.201:443: i/o timeout&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;So we have a timeout when trying to pull the image from the container registry. This is a classic proxy issue. A manual &lt;code&gt;curl&lt;/code&gt; or image pull from the VM produces the same error.&lt;/p&gt;
&lt;/div&gt;
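To confirm the diagnosis from inside the VM, you can compare direct and proxied access to the registry. A minimal sketch; the proxy URL is a placeholder, and any HTTP response (even 401) counts as "reachable" since only a connect timeout indicates the proxy problem:

```shell
# Direct access: behind a mandatory proxy this should time out.
if curl -sS --connect-timeout 5 -o /dev/null https://quay.io/v2/ 2>/dev/null; then
  direct=ok
else
  direct=blocked
fi
echo "direct registry access: ${direct}"

# Via the proxy (placeholder URL): this should succeed.
if https_proxy=http://proxy.example.com:8080 \
   curl -sS --connect-timeout 5 -o /dev/null https://quay.io/v2/ 2>/dev/null; then
  via_proxy=ok
else
  via_proxy=blocked
fi
echo "registry access via proxy: ${via_proxy}"
```

If the direct request is blocked but the proxied one succeeds, the proxy configuration described below is the fix.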
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_configuring_the_proxy_for_the_hosted_control_plane"&gt;Configuring the Proxy for the Hosted Control Plane&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The big question is: How do we configure the proxy for the Hosted Control Plane?&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This is not well documented yet—in fact, it’s barely documented at all.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The Hosted Control Plane is managed through a custom resource called &lt;code&gt;HostedCluster&lt;/code&gt;, and that’s exactly where we configure the proxy.
The upstream documentation at &lt;a href="https://hypershift.pages.dev/how-to/configure-ocp-components/#overview" target="_blank" rel="noopener"&gt;HyperShift Documentation&lt;/a&gt; explains that you can add a &lt;code&gt;configuration&lt;/code&gt; section to the resource. Let’s do that:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Update the &lt;code&gt;HostedCluster&lt;/code&gt; resource with the following configuration:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;spec:
configuration:
proxy:
httpProxy: http://proxy.example.com:8080 &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
httpsProxy: https://proxy.example.com:8080 &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
noProxy: .cluster.local,.svc,10.128.0.0/14,127.0.0.1,localhost &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
trustedCA:
name: user-ca-bundle &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;HTTP proxy URL&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;HTTPS proxy URL&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Comma-separated list of domains, IPs, or CIDRs to exclude from proxy routing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;OPTIONAL: Name of a ConfigMap containing a custom CA certificate bundle&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
If you’re using a custom CA, create the ConfigMap beforehand or alongside the &lt;code&gt;HostedCluster&lt;/code&gt; resource. The ConfigMap must contain a key named &lt;code&gt;ca-bundle.crt&lt;/code&gt; with your CA certificate(s).
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
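The configuration above can be applied to an existing cluster with a merge patch. A sketch only: the cluster name, namespace, and proxy URLs are placeholders you must replace, and the actual `oc patch` call is left commented out because it needs access to the management cluster:

```shell
# Build the merge patch matching the HostedCluster configuration shown above.
PATCH='
spec:
  configuration:
    proxy:
      httpProxy: http://proxy.example.com:8080
      httpsProxy: https://proxy.example.com:8080
      noProxy: .cluster.local,.svc,10.128.0.0/14,127.0.0.1,localhost
'
# Inspect it before applying.
printf '%s\n' "Patch to apply:$PATCH"

# Apply against the management cluster (placeholders: cluster name and namespace):
# oc patch hostedcluster my-hosted-cluster -n clusters --type merge -p "$PATCH"
```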
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_but_wait_there_is_more"&gt;But wait there is more…​&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;If you are using a proxy that injects its own certificate, you have probably seen the following: the &lt;strong&gt;Release Image&lt;/strong&gt; is not available when you try to deploy a new Hosted Control Plane.
The drop-down in the UI is empty, and you can’t select a release image.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This happens because the Pod that tries to fetch the release image list is not able to connect to GitHub.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You can test this with the following commands:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc rsh cluster-image-set-controller-XXXXX -n multicluster-engine &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
curl -v https://github.com/stolostron/acm-hive-openshift-releases.git/info/refs?service=git-upload-pack &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Replace &lt;strong&gt;XXXXX&lt;/strong&gt; with the name of the Pod cluster-image-set-controller in the &lt;strong&gt;multicluster-engine&lt;/strong&gt; namespace&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Execute the curl command to see whether you can connect to GitHub&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;If this command fails with a certificate error, you need to add the certificate to the Pod/Deployment.
A ConfigMap called &lt;strong&gt;trusted-ca-bundle&lt;/strong&gt; should already exist in the &lt;strong&gt;multicluster-engine&lt;/strong&gt; namespace. If not, it must be created with the certificate chain of your proxy (key: &lt;code&gt;ca-bundle.crt&lt;/code&gt;).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The following command mounts the ConfigMap into the deployment:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc -n multicluster-engine set volume deployment/cluster-image-set-controller --add --type configmap --configmap-name trusted-ca-bundle --name trusted-ca-bundle --mount-path /etc/pki/tls/certs/ --overwrite&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Wait a couple of minutes after the Pods have restarted. The controller will then try to download the release images from GitHub again.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You can check the logs of the Pod to see if the download was successful.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc logs cluster-image-set-controller-XXXXX -n multicluster-engine &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Replace &lt;strong&gt;XXXXX&lt;/strong&gt; with the name of the Pod cluster-image-set-controller in the &lt;strong&gt;multicluster-engine&lt;/strong&gt; namespace&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;That should do it: you should now be able to select a release image and deploy a new Hosted Control Plane.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/other-topics/images/HCP-Release-Images.png" alt="Release Images"/&gt;
&lt;/div&gt;
&lt;div class="title"&gt;Figure 1. Drow Down with Release Images&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_summary"&gt;Summary&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;That’s actually everything you need for proxy configuration with Hosted Control Planes. Hopefully, the official OpenShift documentation will be updated soon to include this information.
Red Hat created two tickets to track the progress of this:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://issues.redhat.com/browse/ACM-10151" target="_blank" rel="noopener"&gt;ACM-10151&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://issues.redhat.com/browse/ACM-23664" target="_blank" rel="noopener"&gt;ACM-23664&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>Helm Charts Repository Updates</title><link>https://blog.stderr.at/whats-new/2025-12-12-helm-charts-changelog/</link><pubDate>Fri, 12 Dec 2025 00:00:00 +0000</pubDate><guid>https://blog.stderr.at/whats-new/2025-12-12-helm-charts-changelog/</guid><description>&lt;div class="paragraph"&gt;
&lt;p&gt;This page shows the &lt;strong&gt;latest updates&lt;/strong&gt; to the &lt;a href="https://blog.stderr.at/helm-charts"&gt;stderr.at Helm Charts Repository&lt;/a&gt;.
The charts are designed for OpenShift and Kubernetes deployments, with a focus on GitOps workflows using Argo CD.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The content below is dynamically loaded from the Helm repository and always shows the most recent changes.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;div id="helm-changelog-widget" class="helm-changelog"&gt;
&lt;div class="helm-changelog-loading"&gt;
&lt;i class="fa fa-spinner fa-spin"&gt;&lt;/i&gt; Loading latest Helm chart updates...
&lt;/div&gt;
&lt;/div&gt;
&lt;script&gt;
(function() {
const container = document.getElementById('helm-changelog-widget');
const maxItems = 10;
const cacheBuster = Math.floor(Date.now() / 60000);
fetch(`https://charts.stderr.at/changelog.json?v=${cacheBuster}`)
.then(response =&gt; {
if (!response.ok) throw new Error('Failed to load changelog');
return response.json();
})
.then(data =&gt; {
if (!data.charts || data.charts.length === 0) {
container.innerHTML = '&lt;p class="helm-changelog-error"&gt;No charts found&lt;/p&gt;';
return;
}
const genDate = new Date(data.generated);
const parseDate = (dateStr) =&gt; {
if (!dateStr) return new Date(0);
const d = new Date(dateStr);
return isNaN(d.getTime()) ? new Date(0) : d;
};
const charts = data.charts
.filter(c =&gt; c.lastModified)
.sort((a, b) =&gt; parseDate(b.lastModified).getTime() - parseDate(a.lastModified).getTime())
.slice(0, maxItems);
let html = `
&lt;div class="helm-changelog-header"&gt;
&lt;h3&gt;&lt;i class="fa fa-cubes"&gt;&lt;/i&gt; Latest Helm Chart Updates&lt;/h3&gt;
&lt;span class="helm-changelog-generated"&gt;Updated: ${genDate.toLocaleDateString()}&lt;/span&gt;
&lt;/div&gt;
`;
charts.forEach(chart =&gt; {
const date = new Date(chart.lastModified);
const dateStr = date.toLocaleDateString('en-US', {
month: 'short', day: 'numeric', year: 'numeric'
});
const iconHtml = chart.icon
? `&lt;img src="${chart.icon}" alt="" class="helm-chart-icon" width="60" height="60" data-webp-upgraded="true" onerror="this.outerHTML='&lt;i class=\\'fa fa-cube helm-chart-icon-fallback\\'&gt;&lt;/i&gt;'"&gt;`
: '&lt;i class="fa fa-cube helm-chart-icon-fallback"&gt;&lt;/i&gt;';
let changesHtml = '';
if (chart.changes &amp;&amp; chart.changes.length &gt; 0) {
const sortedChanges = [...chart.changes].reverse();
changesHtml = '&lt;ul class="helm-changes-list"&gt;';
sortedChanges.slice(0, 4).forEach(change =&gt; {
const kind = (change.kind || 'changed').toLowerCase();
changesHtml += `&lt;li class="${kind}"&gt;&lt;span class="helm-change-badge ${kind}"&gt;${kind}&lt;/span&gt;${change.description}&lt;/li&gt;`;
});
if (sortedChanges.length &gt; 4) {
changesHtml += `&lt;li style="color:#888;border-left-color:#888;"&gt;... and ${sortedChanges.length - 4} more changes&lt;/li&gt;`;
}
changesHtml += '&lt;/ul&gt;';
}
const chartUrl = chart.home || 'https://github.com/tjungbauer/helm-charts';
html += `
&lt;div class="helm-chart-item" data-href="${chartUrl}" onclick="window.open('${chartUrl}', '_blank')" role="link" tabindex="0"&gt;
&lt;div class="helm-chart-header"&gt;
${iconHtml}
&lt;span class="helm-chart-name"&gt;&lt;a href="${chartUrl}" target="_blank" rel="noopener noreferrer" class="highlight"&gt;${chart.name}&lt;/a&gt;&lt;/span&gt;
&lt;span class="helm-chart-version"&gt;v${chart.version}&lt;/span&gt;
&lt;span class="helm-chart-date"&gt;📅 ${dateStr}&lt;/span&gt;
&lt;/div&gt;
&lt;div class="helm-chart-description"&gt;${chart.description}&lt;/div&gt;
${changesHtml}
&lt;/div&gt;
`;
});
html += `
&lt;div class="helm-changelog-footer"&gt;
&lt;a href="https://github.com/tjungbauer/helm-charts" target="_blank" rel="noopener noreferrer"&gt;
&lt;i class="fa fa-github"&gt;&lt;/i&gt; View all ${data.charts.length} charts on GitHub
&lt;/a&gt;
&amp;nbsp;|&amp;nbsp;
&lt;a href="https://charts.stderr.at/" target="_blank" rel="noopener noreferrer"&gt;
&lt;i class="fa fa-external-link"&gt;&lt;/i&gt; Helm Repository
&lt;/a&gt;
&lt;/div&gt;
`;
container.innerHTML = html;
})
.catch(error =&gt; {
console.error('Helm changelog error:', error);
container.innerHTML = `
&lt;p class="helm-changelog-error"&gt;
&lt;i class="fa fa-exclamation-triangle"&gt;&lt;/i&gt;
Could not load Helm chart updates.
&lt;a href="https://github.com/tjungbauer/helm-charts" target="_blank" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;
&lt;/p&gt;
`;
});
})();
&lt;/script&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_quick_links"&gt;Quick Links&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;table class="tableblock frame-all grid-all stretch"&gt;
&lt;colgroup&gt;
&lt;col style="width: 33.3333%;"/&gt;
&lt;col style="width: 66.6667%;"/&gt;
&lt;/colgroup&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th class="tableblock halign-left valign-top"&gt;Resource&lt;/th&gt;
&lt;th class="tableblock halign-left valign-top"&gt;Link&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Helm Repository&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;&lt;a href="https://charts.stderr.at/" class="bare"&gt;https://charts.stderr.at/&lt;/a&gt;&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;GitHub Source&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;&lt;a href="https://github.com/tjungbauer/helm-charts" class="bare"&gt;https://github.com/tjungbauer/helm-charts&lt;/a&gt;&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;ArtifactHub&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;&lt;a href="https://artifacthub.io/packages/search?repo=tjungbauer" class="bare"&gt;https://artifacthub.io/packages/search?repo=tjungbauer&lt;/a&gt;&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Example GitOps Repo&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;&lt;a href="https://github.com/tjungbauer/openshift-clusterconfig-gitops" class="bare"&gt;https://github.com/tjungbauer/openshift-clusterconfig-gitops&lt;/a&gt;&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>The Hitchhiker's Guide to Observability - Limit Read Access to Traces - Part 8</title><link>https://blog.stderr.at/openshift-platform/observability/observability/2025-12-06-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part8/</link><pubDate>Sat, 06 Dec 2025 00:00:00 +0000</pubDate><guid>https://blog.stderr.at/openshift-platform/observability/observability/2025-12-06-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part8/</guid><description>&lt;div class="paragraph"&gt;
&lt;p&gt;In the previous articles, we deployed a distributed tracing infrastructure with TempoStack and OpenTelemetry Collector. We also deployed a Grafana instance to visualize the traces. The configuration was done in a way that allows everybody to read the traces. Every &lt;a href="https://blog.stderr.at/day-2/observability/2025-11-24-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part2/#_step_3_configure_rbac_for_tempostack_trace_access_readwrite"&gt;&lt;strong&gt;system:authenticated&lt;/strong&gt;&lt;/a&gt; user is able to read &lt;strong&gt;ALL&lt;/strong&gt; traces.
This is usually not what you want. You want to limit trace access to only the appropriate namespace.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In this article, we’ll limit the read access to traces. The users of the &lt;strong&gt;team-a&lt;/strong&gt; namespace will only be able to see their own traces.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_prerequisites"&gt;Prerequisites&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Before we begin, make sure you have:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;TempoStack deployed and configured (from &lt;a href="https://blog.stderr.at/day-2/observability/2025-11-24-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part2/"&gt;Part 2&lt;/a&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Team-a namespace with traces flowing (from &lt;a href="https://blog.stderr.at/day-2/observability/2025-11-26-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part4/"&gt;Part 4&lt;/a&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A separate user for the team-a namespace.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_verify_trace_access"&gt;Verify Trace Access&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Let’s verify what a user can see when they are authenticated.
We have &lt;strong&gt;user1&lt;/strong&gt; who is a member of the &lt;strong&gt;team-a&lt;/strong&gt; namespace. Let’s log in as this user and verify what they can see.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Navigate to &lt;strong&gt;Observability &amp;gt; Traces&lt;/strong&gt; and select the tempostack/simplest datasource:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/observability/images/observability/observability-traces-tenanta.png" alt="Observability Traces Team-a"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You should see the traces for the &lt;strong&gt;team-a&lt;/strong&gt; namespace. This is fine—that’s what we want.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;But now let’s change the tenant to &lt;strong&gt;tenantB&lt;/strong&gt; and verify what the user can see.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/observability/images/observability/observability-traces-tenantb.png" alt="Observability Traces Team-b"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;As you can see, the user can see traces from the &lt;strong&gt;tenantB&lt;/strong&gt; namespace, although they are a member of the &lt;strong&gt;team-a&lt;/strong&gt; namespace. This is not what we want. We want to limit trace access to only the appropriate namespace.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_the_original_rbac_configuration"&gt;The Original RBAC Configuration&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In &lt;a href="https://blog.stderr.at/day-2/observability/2025-11-24-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part2/"&gt;Part 2&lt;/a&gt;, we created a ClusterRoleBinding to grant read access to traces for everybody who is authenticated.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: tempostack-traces-reader
subjects:
- kind: Group
apiGroup: rbac.authorization.k8s.io
name: &amp;#39;system:authenticated&amp;#39;
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: tempostack-traces-reader&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This ClusterRoleBinding allows &lt;strong&gt;system:authenticated&lt;/strong&gt; users to read all traces from all tenants and is bound to the ClusterRole &lt;strong&gt;tempostack-traces-reader&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: tempostack-traces-reader
rules:
- verbs:
- get
apiGroups:
- tempo.grafana.com
resources:
- tenantA
- tenantB
resourceNames:
- traces&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This is the configuration we want to change. We want to limit read access to traces to only the appropriate namespace.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_change_the_rbac_configuration"&gt;Change the RBAC Configuration&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;We will change the RBAC configuration to limit read access to traces.
To do so, we will first create a new ClusterRole for the &lt;strong&gt;team-a&lt;/strong&gt; namespace and bind it to users of that namespace.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_create_the_new_clusterrole"&gt;Create the new ClusterRole:&lt;/h3&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: tempostack-traces-reader-team-a &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
rules:
- verbs:
- get
apiGroups:
- tempo.grafana.com
resources:
- tenantA &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
resourceNames:
- traces&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Name of the ClusterRole, now with the suffix &lt;strong&gt;team-a&lt;/strong&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The Tenant that is allowed to read the traces. This time it is only the &lt;strong&gt;tenantA&lt;/strong&gt; tenant.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_create_the_new_clusterrolebinding"&gt;Create the new ClusterRoleBinding:&lt;/h3&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: tempostack-traces-reader-team-a &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
subjects:
- kind: User &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
apiGroup: rbac.authorization.k8s.io
name: user1
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: tempostack-traces-reader-team-a &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Name of the ClusterRoleBinding, now with the suffix &lt;strong&gt;team-a&lt;/strong&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The User that is allowed to read the traces. In this example: &lt;strong&gt;user1&lt;/strong&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The ClusterRole that is allowed to read the traces.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
In this example, we are using a single user. In a real-world scenario, you would most likely have a group of users. In that case, you would use a Group instead of a User.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
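&lt;div class="paragraph"&gt;
&lt;p&gt;For reference, the same binding with a Group subject could look like the sketch below. The group name &lt;strong&gt;team-a&lt;/strong&gt; is a hypothetical example, not something defined earlier in this article:&lt;/p&gt;
&lt;/div&gt;

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tempostack-traces-reader-team-a
subjects:
- kind: Group                          # Group instead of User
  apiGroup: rbac.authorization.k8s.io
  name: team-a                         # hypothetical group name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tempostack-traces-reader-team-a
```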
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_modify_the_original_clusterrole"&gt;Modify the Original ClusterRole:&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;As a final step, we need to modify the original ClusterRole to remove &lt;strong&gt;tenantB&lt;/strong&gt; from the list of tenants that are allowed to read the traces.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: tempostack-traces-reader
rules:
- verbs:
- get
apiGroups:
- tempo.grafana.com
resources:
- tenantA &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
resourceNames:
- traces&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Only &lt;strong&gt;tenantA&lt;/strong&gt; remains. Remove &lt;strong&gt;tenantB&lt;/strong&gt; from the list of resources.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This removes the permission to read traces from the &lt;strong&gt;tenantB&lt;/strong&gt; namespace for the &lt;strong&gt;system:authenticated&lt;/strong&gt; group.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The original ClusterRoleBinding can eventually be deleted once every user has been assigned a separate ClusterRole.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_verify_the_changes"&gt;Verify the Changes&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Let’s see what these changes do.
As &lt;strong&gt;user1&lt;/strong&gt;, you should still be able to see the traces from the &lt;strong&gt;tenantA&lt;/strong&gt; namespace.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/observability/images/observability/observability-traces-tenanta-user1.png" alt="Observability Traces Team-a for user1"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;But if you change the tenant to &lt;strong&gt;tenantB&lt;/strong&gt;, you should see an error message like this:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/observability/images/observability/observability-traces-tenantb-user1.png" alt="Observability Traces Team-b for user1"/&gt;
&lt;/div&gt;
&lt;/div&gt;
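&lt;div class="paragraph"&gt;
&lt;p&gt;Besides checking in the UI, the RBAC change itself can be inspected from the CLI with &lt;code&gt;oc auth can-i&lt;/code&gt;. This is a sketch of the idea: since the tenant resources are virtual (not discoverable API types), the client may warn that the resource type is unknown, but the access review is still evaluated:&lt;/p&gt;
&lt;/div&gt;

```bash
# Should report yes: user1 holds the team-a ClusterRole covering tenantA
oc auth can-i get tenantA.tempo.grafana.com/traces --as=user1

# Should report no: tenantB was removed from the shared ClusterRole
oc auth can-i get tenantB.tempo.grafana.com/traces --as=user1
```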
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The user will still see the list of tenants. Hopefully, this will be fixed in a future version of TempoStack.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>The Guide to OpenBao - Standalone Installation - Part 2</title><link>https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-12-openbao-part-2-standalone-installation/</link><pubDate>Thu, 12 Feb 2026 00:00:00 +0000</pubDate><guid>https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-12-openbao-part-2-standalone-installation/</guid><description>&lt;div class="paragraph"&gt;
&lt;p&gt;In the previous article, we introduced OpenBao and its core concepts. Now it is time to get our hands dirty with a standalone installation. This approach is useful for testing, development environments, edge deployments, or scenarios where Kubernetes is not available.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_introduction"&gt;Introduction&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;While OpenBao shines in Kubernetes environments, understanding the standalone installation helps you grasp the fundamentals. This knowledge is valuable whether you are:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Learning OpenBao before deploying to production&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Running OpenBao outside of Kubernetes (edge, legacy systems)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Debugging issues in containerized deployments&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Setting up a development environment&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_installation_methods_overview"&gt;Installation Methods Overview&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;OpenBao can be installed through multiple methods:&lt;/p&gt;
&lt;/div&gt;
&lt;table class="tableblock frame-all grid-all stretch"&gt;
&lt;colgroup&gt;
&lt;col style="width: 20%;"/&gt;
&lt;col style="width: 40%;"/&gt;
&lt;col style="width: 40%;"/&gt;
&lt;/colgroup&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th class="tableblock halign-left valign-top"&gt;Method&lt;/th&gt;
&lt;th class="tableblock halign-left valign-top"&gt;Best For&lt;/th&gt;
&lt;th class="tableblock halign-left valign-top"&gt;Complexity&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Package managers (apt, dnf, brew)&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Production Linux/macOS systems&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Low&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Container images (Podman/Docker)&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Quick testing, isolated environments&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Low&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Binary download&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Air-gapped environments&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Low&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Source compilation&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Custom builds, development&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Medium&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
For this article, we will focus on macOS for local testing with binary and container image (using Podman) and Red Hat Enterprise Linux to set up an example production-ready server.
The deployment on the Kubernetes environment will be discussed in the next article.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_method_1_package_manager_installation"&gt;Method 1: Package Manager Installation&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_rhelfedoracentos"&gt;RHEL/Fedora/CentOS&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;On RHEL or CentOS, you need to install the EPEL repository before you can install OpenBao. To do so, you first need to enable the CodeReady Builder repository. The following commands will do the trick. On RHEL, make sure your system is registered first.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
If you get the error &amp;#34;Repositories disabled by configuration.&amp;#34;, you need to tell the subscription manager that you want to manage the repositories yourself. This can be done permanently or temporarily, for example with: &lt;strong&gt;sudo subscription-manager config --rhsm.manage_repos=1&lt;/strong&gt;.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_rhel"&gt;RHEL&lt;/h4&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;Enable Code Ready Repository&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;sudo subscription-manager repos --enable codeready-builder-for-rhel-9-$(arch)-rpms&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install EPEL Repository&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;sudo dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install OpenBao&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;sudo dnf install -y openbao&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_centos"&gt;CentOS&lt;/h4&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;Enable Code Ready Repository on CentOS&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;sudo dnf config-manager --set-enabled crb&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install EPEL Repository&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;sudo dnf install epel-release epel-next-release&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install OpenBao&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;sudo dnf install -y openbao&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_macos_homebrew"&gt;macOS (Homebrew)&lt;/h3&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;Install OpenBao&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;brew install openbao&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_verify_installation"&gt;Verify Installation&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;After installation, verify OpenBao is available:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;bao version
# Output:
OpenBao v2.5.0 (bcbb6036ec2b747bceb98c7706ce9b974faa1b23), built 2026-02-04T15:57:17Z (cgo)&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The OpenBao CLI command is &lt;code&gt;bao&lt;/code&gt;, not &lt;code&gt;vault&lt;/code&gt;. This distinguishes it from HashiCorp Vault.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_start_development_server"&gt;Start Development Server&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To start a development environment, you can use the following command.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
This is NOT suitable for production; it is only meant for evaluating the basic concepts of OpenBao.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;bao server -dev -dev-root-token-id=&amp;#34;dev-only-token&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will start the server. The UI is accessible at &lt;a href="http://localhost:8200" class="bare"&gt;http://localhost:8200&lt;/a&gt; where you can login using the root token &lt;code&gt;dev-only-token&lt;/code&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_method_2_container_image"&gt;Method 2: Container Image&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;For quick testing or isolated environments, container images are ideal. Luckily, OpenBao offers several types of containers suitable for any environment. We will use the image hosted on &lt;strong&gt;quay.io&lt;/strong&gt;, which is based on &lt;strong&gt;RHEL UBI&lt;/strong&gt; and can be found at: &lt;a href="https://quay.io/openbao/openbao-ubi" target="_blank" rel="noopener"&gt;quay.io/openbao/openbao-ubi&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_using_podman"&gt;Using Podman&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The following command will fetch the image and start the container.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
This is NOT suitable for production; it is only meant for evaluating the basic concepts of OpenBao.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Run in dev mode (for testing only!)
podman run --rm -d \
--name openbao-dev \
-p 8200:8200 \
-e &amp;#39;BAO_DEV_ROOT_TOKEN_ID=dev-only-token&amp;#39; \
-e &amp;#39;BAO_DEV_LISTEN_ADDRESS=0.0.0.0:8200&amp;#39; \
quay.io/openbao/openbao-ubi:latest server -dev&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Verify it is running
podman logs -f openbao-dev&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will start the server. The UI is accessible at &lt;a href="http://localhost:8200" class="bare"&gt;http://localhost:8200&lt;/a&gt; where you can login using the root token &lt;code&gt;dev-only-token&lt;/code&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
If you prefer to use Docker, simply replace &lt;code&gt;podman&lt;/code&gt; with &lt;code&gt;docker&lt;/code&gt; in the commands.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_dev_mode_quick_start_for_testing"&gt;Dev Mode: Quick Start for Testing&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Dev mode is the fastest way to start using OpenBao for learning and testing.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The characteristics in this mode are:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In-memory storage (data lost on restart)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Automatically initialized and unsealed&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Root token printed to stdout&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;TLS disabled&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Single server (no HA)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Let’s create an example secret and try to retrieve it again.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;Authenticate against OpenBao&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;export VAULT_TOKEN=&amp;#34;dev-only-token&amp;#34;
export BAO_ADDR=&amp;#39;http://127.0.0.1:8200&amp;#39;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create Secret&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;curl \
--header &amp;#34;X-Vault-Token: $VAULT_TOKEN&amp;#34; \
--header &amp;#34;Content-Type: application/json&amp;#34; \
--request POST \
--data &amp;#39;{&amp;#34;data&amp;#34;: {&amp;#34;password&amp;#34;: &amp;#34;OpenBao123&amp;#34;}}&amp;#39; \
$BAO_ADDR/v1/secret/data/my-secret-password &amp;amp;&amp;amp;
echo &amp;#34;Secret written successfully.&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Retrieve Secret&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;curl --header &amp;#34;X-Vault-Token: $VAULT_TOKEN&amp;#34; \
$BAO_ADDR/v1/secret/data/my-secret-password | jq &amp;#39;.data.data&amp;#39;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You should see the password that was created before:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-json hljs" data-lang="json"&gt;{
&amp;#34;password&amp;#34;: &amp;#34;OpenBao123&amp;#34;
}&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
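&lt;div class="paragraph"&gt;
&lt;p&gt;If you prefer the CLI over raw curl, the same round trip should look roughly like this against the running dev server. This is a sketch: &lt;code&gt;bao kv&lt;/code&gt; mirrors the familiar Vault KV subcommands, and &lt;code&gt;BAO_TOKEN&lt;/code&gt; is the CLI counterpart of the token header used above:&lt;/p&gt;
&lt;/div&gt;

```bash
# Requires the dev server from above plus BAO_ADDR in the environment
export BAO_TOKEN='dev-only-token'

# Write the secret (KV v2 is mounted at secret/ in dev mode)
bao kv put secret/my-secret-password password=OpenBao123

# Read back just the password field
bao kv get -field=password secret/my-secret-password
```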
&lt;div class="sect2"&gt;
&lt;h3 id="_check_status_of_openbao_server"&gt;Check Status of OpenBao Server&lt;/h3&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;bao status&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will give you the status of your running OpenBao instance.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 1
Threshold 1
Version 2.5.0
Build Date 2026-02-04T15:57:17Z
Storage Type inmem
Cluster Name vault-cluster-421b2431
Cluster ID 6d42dcd2-e399-211e-999a-49b1874cc8ce
HA Enabled false&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_hardening_your_system"&gt;Hardening your System&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;OpenBao does a good job of securing your secrets; however, memory paging (swap) can undermine that protection. Your OS should either have swap disabled completely or encrypt the swap space.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;As I am testing on macOS, the swap space is encrypted out of the box.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;However, OpenBao has documented what must be done for various operating systems at &lt;a href="https://openbao.org/docs/install/#post-installation-hardening" target="_blank" rel="noopener"&gt;Post-installation hardening&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
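&lt;div class="paragraph"&gt;
&lt;p&gt;On Linux, a quick way to check whether swap is active before hardening is to read &lt;code&gt;/proc/swaps&lt;/code&gt;. The disable step needs root and is left commented out:&lt;/p&gt;
&lt;/div&gt;

```shell
# List active swap devices/files; only the header line means swap is off
cat /proc/swaps

# To disable swap for the running system (root required):
# sudo swapoff -a
# To make it permanent, also remove or comment the swap entries in /etc/fstab
```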
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_production_standalone_setup"&gt;Production Standalone Setup&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;For a proper standalone installation, follow these steps:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_1_create_configuration_file"&gt;Step 1: Create Configuration File&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Create &lt;code&gt;/etc/openbao.d/openbao.hcl&lt;/code&gt;:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-ini hljs" data-lang="ini"&gt;# Full configuration for standalone OpenBao server
# Cluster name for identification
cluster_name = &amp;#34;openbao-standalone&amp;#34;
# Storage backend using integrated Raft
storage &amp;#34;raft&amp;#34; {
path = &amp;#34;/var/lib/openbao/data&amp;#34;
node_id = &amp;#34;node1&amp;#34;
}
# HTTP listener (for internal communication)
listener &amp;#34;tcp&amp;#34; {
address = &amp;#34;0.0.0.0:8200&amp;#34;
cluster_address = &amp;#34;0.0.0.0:8201&amp;#34;
tls_disable = false
tls_cert_file = &amp;#34;/etc/openbao.d/tls/tls.crt&amp;#34;
tls_key_file = &amp;#34;/etc/openbao.d/tls/tls.key&amp;#34;
}
# API address for clients
api_addr = &amp;#34;https://openbao.example.com:8200&amp;#34;
# Cluster address for raft communication
cluster_addr = &amp;#34;https://openbao.example.com:8201&amp;#34;
# UI enabled
ui = true
# Logging
log_level = &amp;#34;info&amp;#34;
log_file = &amp;#34;/var/log/openbao/openbao.log&amp;#34;
# Disable memory locking (enable in production if possible)
disable_mlock = true
# Telemetry (optional)
telemetry {
prometheus_retention_time = &amp;#34;30s&amp;#34;
disable_hostname = true
}&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_2_generate_tls_certificates"&gt;Step 2: Generate TLS Certificates&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;For production, you should use proper certificates. For testing, create self-signed ones:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
This is only for testing purposes. In production, you should use proper certificates.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;First, prepare the TLS directory if it does not exist&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;sudo mkdir -p /etc/openbao.d/tls&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Then create a configuration file for the TLS certificates:&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Create TLS config
cat &amp;lt;&amp;lt;EOF &amp;gt; openbao.cnf
[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
CN = openbao.example.com
[v3_req]
subjectAltName = @alt_names
[alt_names]
DNS.1 = openbao.example.com
IP.1 = 127.0.0.1 &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The IP address of the server. In this case we are using localhost, but you can add your IPs here.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Then generate the certificate:&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Generate private key
sudo openssl genrsa -out /etc/openbao.d/tls/tls.key 4096
# Generate certificate signing request
sudo openssl req -new -key /etc/openbao.d/tls/tls.key -out /etc/openbao.d/tls/tls.csr -subj &amp;#34;/CN=openbao.example.com&amp;#34;
# Generate self-signed certificate
sudo openssl x509 -req -days 365 -in /etc/openbao.d/tls/tls.csr -signkey /etc/openbao.d/tls/tls.key -out /etc/openbao.d/tls/tls.crt -extfile openbao.cnf -extensions v3_req
# Set permissions
sudo chown -R openbao:openbao /etc/openbao.d/tls
sudo chmod 600 /etc/openbao.d/tls/tls.key
sudo chmod 755 /etc/openbao.d/tls&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
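&lt;div class="paragraph"&gt;
&lt;p&gt;The same certificate steps can be tried without root by working in a throwaway directory first and inspecting the result before copying anything to &lt;code&gt;/etc/openbao.d&lt;/code&gt;. A minimal sketch (the &lt;code&gt;tls-demo&lt;/code&gt; directory is purely illustrative):&lt;/p&gt;
&lt;/div&gt;

```shell
# Write the same OpenSSL config into a local demo directory
mkdir -p tls-demo
printf '%s\n' \
  '[req]' \
  'distinguished_name = req_distinguished_name' \
  'x509_extensions = v3_req' \
  'prompt = no' \
  '[req_distinguished_name]' \
  'CN = openbao.example.com' \
  '[v3_req]' \
  'subjectAltName = @alt_names' \
  '[alt_names]' \
  'DNS.1 = openbao.example.com' \
  'IP.1 = 127.0.0.1' > tls-demo/openbao.cnf

# Key and self-signed certificate in one step (1 day validity, test only)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout tls-demo/tls.key -out tls-demo/tls.crt \
  -config tls-demo/openbao.cnf -extensions v3_req

# Confirm the SAN entries made it into the certificate
openssl x509 -in tls-demo/tls.crt -noout -ext subjectAltName
```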
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_3_create_systemd_service"&gt;Step 3: Create Systemd Service&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Create &lt;code&gt;/etc/systemd/system/openbao.service&lt;/code&gt;:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-ini hljs" data-lang="ini"&gt;[Unit]
Description=OpenBao Secret Management
Documentation=https://openbao.org/docs
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/openbao.d/openbao.hcl
[Service]
User=openbao
Group=openbao
ProtectSystem=full
ProtectHome=read-only
PrivateTmp=yes
PrivateDevices=yes
SecureBits=keep-caps
AmbientCapabilities=CAP_IPC_LOCK
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
NoNewPrivileges=yes
ExecStart=/usr/bin/bao server -config=/etc/openbao.d/openbao.hcl
ExecReload=/bin/kill --signal HUP $MAINPID
KillMode=process
KillSignal=SIGINT
Restart=on-failure
RestartSec=5
TimeoutStopSec=30
LimitNOFILE=65536
LimitMEMLOCK=infinity
[Install]
WantedBy=multi-user.target&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_4_start_the_service"&gt;Step 4: Start the Service&lt;/h3&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Reload systemd
sudo systemctl daemon-reload
# Enable and start OpenBao
sudo systemctl enable openbao
sudo systemctl start openbao
# Check status
sudo systemctl status openbao
# View logs
sudo journalctl -u openbao -f&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_initialize_and_unseal_openbao"&gt;Initialize and Unseal OpenBao&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;After starting OpenBao for the first time, it needs to be initialized and unsealed.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_set_environment_variables"&gt;Set Environment Variables&lt;/h3&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
We will run the following commands as the root user.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Be sure that the hostname is resolvable. In this example, we are using the test domain openbao.example.com.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Set the OpenBao address
export BAO_ADDR=&amp;#39;https://openbao.example.com:8200&amp;#39;
# If using self-signed certificates
export BAO_CACERT=&amp;#39;/etc/openbao.d/tls/tls.crt&amp;#39;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_check_status"&gt;Check Status&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You will see that OpenBao is running but not yet initialized. This will be the next step.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;bao status
Key Value
--- -----
Seal Type shamir
Initialized false &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
Sealed true
Total Shares 0
Threshold 0
Unseal Progress 0/0
Unseal Nonce n/a
Version 2.4.4-1.el9
Build Date 2025-11-24
Storage Type file
HA Enabled false&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Not yet initialized&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_initialize_openbao"&gt;Initialize OpenBao&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Before we use OpenBao, we need to initialize it. This will create the unseal keys and the root token. The unseal keys are used to … well, unseal the OpenBao service.
The root token is used as a master key to authenticate against the OpenBao service. This token has access to ALL secrets. &lt;strong&gt;Treat this root token with care.&lt;/strong&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;We will initialize OpenBao with default options of 5 key shares and a threshold of 3. This means that we need 3 different unseal keys to unseal the OpenBao service.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;bao operator init -key-shares=5 -key-threshold=3 -format=json &amp;gt; /root/openbao-init.json&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Store the unseal keys and root token securely! Anyone with these can access all secrets.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The output looks like:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-json hljs" data-lang="json"&gt;{
&amp;#34;unseal_keys_b64&amp;#34;: [
&amp;#34;key1...&amp;#34;,
&amp;#34;key2...&amp;#34;,
&amp;#34;key3...&amp;#34;,
&amp;#34;key4...&amp;#34;,
&amp;#34;key5...&amp;#34;
],
&amp;#34;unseal_keys_hex&amp;#34;: [...],
&amp;#34;unseal_shares&amp;#34;: 5,
&amp;#34;unseal_threshold&amp;#34;: 3,
&amp;#34;recovery_keys_b64&amp;#34;: [],
&amp;#34;recovery_keys_hex&amp;#34;: [],
&amp;#34;root_token&amp;#34;: &amp;#34;sbr.xxxxxxxxxxxx&amp;#34;
}&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
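Because the init output is JSON, the token and keys can be pulled out non-interactively. A minimal sketch (the helper name show_init_secrets is ours, not part of OpenBao; it assumes the JSON layout shown above):

```shell
# Define a small helper that prints the root token and the first
# "threshold" unseal keys from the init JSON written above.
show_init_secrets() {
  python3 -c 'import json, sys
d = json.load(open(sys.argv[1]))
print("root token:", d["root_token"])
for k in d["unseal_keys_b64"][:d["unseal_threshold"]]:
    print("unseal key:", k)' "$1"
}
# Usage: show_init_secrets /root/openbao-init.json
```

Remember that this prints highly sensitive material; only run it on a trusted host and clear your terminal afterwards.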
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_unseal_openbao"&gt;Unseal OpenBao&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Now everything is set up and we can use the unseal keys to unseal the OpenBao service.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You need to provide 3 (threshold) different unseal keys:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# First key
bao operator unseal
# Enter first unseal key
# Second key
bao operator unseal
# Enter second unseal key
# Third key
bao operator unseal
# Enter third unseal key&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
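If you prefer scripting over typing three keys interactively, the loop can be automated. A sketch, assuming the init JSON from above (the function name unseal_openbao is our own):

```shell
# Feed the first "threshold" unseal keys to bao operator unseal, one per call.
unseal_openbao() {
  local init_file="$1"
  python3 -c 'import json, sys
d = json.load(open(sys.argv[1]))
for k in d["unseal_keys_b64"][:d["unseal_threshold"]]:
    print(k)' "$init_file" | while read -r key; do
    bao operator unseal "$key"
  done
}
# Usage: unseal_openbao /root/openbao-init.json
```

Note that in production the whole point of 5 shares is that no single host holds three keys, so treat this as a lab convenience only.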
&lt;div class="paragraph"&gt;
&lt;p&gt;After providing enough keys:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;bao status
Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
Total Shares 5
Threshold 3
Version 2.4.4-1.el9
Build Date 2025-11-24
Storage Type file
Cluster Name openbao-standalone
Cluster ID d07874ef-df52-e45f-1723-ade333c6d609
HA Enabled false&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Now unsealed and ready&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Now we can log in to the OpenBao service with the root token.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_login_with_root_token"&gt;Login with Root Token&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Since we have not created any users yet, we will use the root token to authenticate.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Enter the root token from initialization
bao login&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_basic_verification"&gt;Basic Verification&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Let us verify that the installation works:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;List enabled secrets engines&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;bao secrets list&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This lists the enabled secrets engines.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;Path Type Accessor Description
---- ---- -------- -----------
cubbyhole/ cubbyhole cubbyhole_84c3d2a5 per-token private secret storage
identity/ identity identity_61809171 identity store
sys/ system system_034ae772 system endpoints used for control, policy and debugging&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;List enabled auth methods&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;bao auth list&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This lists the enabled authentication methods.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;Path Type Accessor Description Version
---- ---- -------- ----------- -------
token/ token auth_token_c7feca17 token based credentials n/a&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a test secret&lt;/p&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;bao secrets enable -path=secret kv-v2&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This enables the KV (key/value) secrets engine at the path &lt;code&gt;secret&lt;/code&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;bao kv put secret/test message=&amp;#34;Hello from OpenBao&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This creates a test secret at the path &lt;code&gt;secret/test&lt;/code&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;bao kv get secret/test&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This retrieves the test secret from the path &lt;code&gt;secret/test&lt;/code&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
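The three steps above can be bundled into one script. A sketch (secret/smoke-test is a made-up path for illustration; it assumes the KV engine was enabled at secret/ as shown, and that the -field flag behaves as in the upstream Vault CLI):

```shell
# Run the verification steps and fail if the round trip does not work.
verify_openbao() {
  bao secrets list
  bao auth list
  bao kv put secret/smoke-test message="Hello from OpenBao"
  # -field prints only the value, which makes the check scriptable
  test "$(bao kv get -field=message secret/smoke-test)" = "Hello from OpenBao"
}
# Usage: verify_openbao
```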
&lt;div class="paragraph"&gt;
&lt;p&gt;That’s it for the basic verification. Before using OpenBao in production, make sure to work through the security hardening checklist further down. Next, let’s see what we can do in the UI.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_visiting_the_ui"&gt;Visiting the UI&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;If you looked closely at the configuration file earlier, you will have noticed that the UI is enabled:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-ini hljs" data-lang="ini"&gt;# UI enabled
ui = true&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The UI is accessible at &lt;a href="http://openbao.example.com:8200" class="bare"&gt;http://openbao.example.com:8200&lt;/a&gt;. You can log in with the root token from the initialization.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/images/part2_openbao_login_form.png?width=480px" alt="OpenBao Login Form"/&gt;
&lt;/div&gt;
&lt;div class="title"&gt;Figure 1. OpenBao Login Form&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Here you will see several options to explore. For now we are interested in the Secrets Engine section.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/images/part2_openbao_secrets_engine.png?width=480px" alt="OpenBao Secrets Engine"/&gt;
&lt;/div&gt;
&lt;div class="title"&gt;Figure 2. OpenBao Secrets Engine&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;We have created the path &amp;#34;secret&amp;#34; and inside it a secret called &amp;#34;test&amp;#34;.
We can retrieve it now:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/images/part2_test_secret_retrieval.png?width=480px" alt="OpenBao Secret Retrieval"/&gt;
&lt;/div&gt;
&lt;div class="title"&gt;Figure 3. OpenBao Secret Retrieval&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_security_hardening_checklist"&gt;Security Hardening Checklist&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Before using OpenBao in production you should always consider the following checklist:&lt;/p&gt;
&lt;/div&gt;
&lt;table class="tableblock frame-all grid-all stretch"&gt;
&lt;colgroup&gt;
&lt;col style="width: 25%;"/&gt;
&lt;col style="width: 75%;"/&gt;
&lt;/colgroup&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th class="tableblock halign-left valign-top"&gt;Item&lt;/th&gt;
&lt;th class="tableblock halign-left valign-top"&gt;Recommendation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;TLS&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Always enable TLS with valid certificates&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Root Token&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Revoke root token after initial setup and create admin users instead&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Unseal Keys&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Distribute to different people/locations, consider auto-unseal&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Network&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Restrict access with firewall rules&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Audit&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Enable audit logging&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Backups&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Regular Raft snapshots&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Updates&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Keep OpenBao updated&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
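One item from the checklist as a command sketch: enabling file audit logging. The command mirrors the upstream Vault CLI, and the log path is our assumption; check the OpenBao documentation for your version and make sure the directory exists and is writable by the service user:

```shell
# Enable the file audit device; every request/response is then logged.
enable_audit_log() {
  bao audit enable file file_path=/var/log/openbao/audit.log
}
# Usage: enable_audit_log
```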
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_what_is_coming_next"&gt;What is Coming Next?&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In Part 3, we will deploy OpenBao on OpenShift/Kubernetes using the official Helm chart. This provides:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;High availability out of the box&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Kubernetes-native management&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Integration with OpenShift security features&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Persistent storage via PVCs&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You now have a working standalone OpenBao installation. This forms the foundation for understanding how OpenBao operates. While standalone mode is useful for testing and edge cases, most production deployments will use Kubernetes, which we will cover next.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Key takeaways:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;OpenBao can run standalone or in containers&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Dev mode is for testing only&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Production requires proper TLS, initialization, and security hardening&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Unseal keys must be stored securely&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_resources"&gt;Resources&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org/docs/install" target="_blank" rel="noopener"&gt;OpenBao Installation Documentation&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org/docs/configuration" target="_blank" rel="noopener"&gt;OpenBao Configuration Reference&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org/docs/commands" target="_blank" rel="noopener"&gt;OpenBao CLI Reference&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>[Ep.15] OpenShift GitOps - Argo CD Agent</title><link>https://blog.stderr.at/gitopscollection/2026-01-14-argocd-agent/</link><pubDate>Wed, 14 Jan 2026 00:00:00 +0000</pubDate><guid>https://blog.stderr.at/gitopscollection/2026-01-14-argocd-agent/</guid><description>&lt;div class="paragraph"&gt;
&lt;p&gt;OpenShift GitOps based on Argo CD is a powerful tool to manage the infrastructure and applications on an OpenShift cluster. Initially, there were two ways of deployment: centralized and decentralized (or distributed). Both methods had their own advantages and disadvantages. The choice was mainly between scalability and centralization.
With OpenShift GitOps v1.19, the Argo CD Agent finally became generally available. It tries to solve this dilemma by bringing the best of both worlds together. In this quite long article, I will show you how to install and configure the Argo CD Agent with OpenShift GitOps using a hub-and-spoke architecture.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_classic_deployment_models"&gt;Classic Deployment Models:&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Prior to the Argo CD Agent, there were two classic and often used deployment models: &lt;strong&gt;centralized&lt;/strong&gt; and &lt;strong&gt;decentralized&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_centralized_model"&gt;Centralized Model&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In a centralized deployment, all changes are applied to a single, central Argo CD instance, often installed on a management cluster. This is the traditional way of deploying Argo CD, at least in my opinion. With this model, you have a single pane of glass: one UI that shows all your clusters in one place, which is very convenient when you manage multiple clusters. However, the scalability of this model is limited. Organizations with a large number of clusters or Argo CD applications will eventually hit its boundaries. A sharding configuration helps, but only to a certain extent, and performance degrades significantly. In addition, this model creates a single point of failure: if the instance is down, the company loses the ability to manage its clusters through Argo CD.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
I often saw or used this model for the cluster configuration.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_decentralized_model"&gt;Decentralized Model&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In a decentralized deployment, multiple instances of Argo CD, often one per cluster, are installed. This approach solves the scalability issue. Moreover, the single point of failure is eliminated as well, since a broken instance does not affect the others. However, this model also has disadvantages: the complexity for the operations teams increases, since they now need to manage multiple instances, and the single pane of glass is lost, as there are multiple UIs to deal with.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
I often saw or used this model for the application deployment.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_the_not_so_secret_argo_cd_agent"&gt;The - not so secret - Argo CD Agent&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The &lt;strong&gt;Argo CD Agent&lt;/strong&gt;, released as generally available in OpenShift GitOps v1.19, is a new way to use Argo CD. It tries to solve the challenges of the classic deployment models by combining the best of both worlds. The Agent allows you to have a single UI in a central control plane, while the application controller is distributed across the fleet of clusters. Agents on the different clusters will communicate with the central Argo CD instance.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The Agent model introduces a hub and spoke architecture:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Control plane cluster (hub) - The control plane cluster is the central cluster that manages the configuration for multiple spokes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Workload cluster (spoke) - The workload cluster is the cluster that runs the application workloads deployed by Argo CD.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Each Argo CD Agent on a cluster manages the local Argo CD instance and ensures that applications, AppProjects, and secrets remain synchronized with their source of truth.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The official documentation describes a comparison between the classic deployment models and the Argo CD Agent: &lt;a href="https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.19/html/argo_cd_agent_architecture/argocd-agent-architecture#gitops-architecture-argocd-agent-comparison_argocd-agent-architecture-overview" target="_blank" rel="noopener"&gt;GitOps Architecture - Argo CD Agent Comparison&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_argo_cd_agent_modes"&gt;Argo CD Agent Modes&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The Argo CD Agent supports two modes of operation: Managed and Autonomous.
The mode determines where the authoritative source of truth for the Application .spec field resides.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Managed mode — the control plane/hub defines Argo CD applications and their specifications.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Autonomous mode — each workload cluster/spoke defines its own Argo CD applications and their specifications.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
A mixed mode is also possible.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_managed_mode"&gt;Managed Mode&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Using the managed mode means that the control plane is the source of truth and is responsible for the Argo CD application resources and their distribution across the different workload clusters.
Any change on the hub cluster will be propagated to the spoke clusters. Any changes made on the spoke/workload cluster will be reverted to match the control plane configuration.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_autonomous_mode"&gt;Autonomous Mode&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Using this mode, the Argo CD applications are defined on the workload clusters, which serve as their own source of truth. The applications are synchronized back to the control plane for observability.
Changes made on the workload cluster are not reverted, but will appear on the control plane. On the other hand, you cannot modify applications directly from the control plane.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_security"&gt;Security&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The Argo CD Agent uses mTLS certificates to communicate between the hub and the spoke clusters.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The certificate must be created and managed by the user.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;hr/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_argo_cd_agent_installation"&gt;Argo CD Agent Installation&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The agent consists of two components that are responsible for synchronizing the Argo CD applications between the hub and the spoke clusters.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Principal - Deployed on the control plane cluster together with Argo CD. Here the central UI (and API) can be found.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Agent - Deployed on the workload clusters to synchronize the Argo CD applications.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The two components are installed in different ways. But before we dive into the installation, let’s have a look at the terminology to understand the different components and their roles.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This is a quote from the official documentation (&lt;a href="https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.19/html/argo_cd_agent_installation/argocd-agent-installation#gitops-argocd-agent-terminologiest_argocd-agent-installation" target="_blank" rel="noopener"&gt;Argo CD Agent Terminologies&lt;/a&gt;)&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Principal namespace&lt;/strong&gt; - Specifies the namespace where you install the Principal component. This namespace is not created by default, you must create it before adding the resources in this namespace. In Argo CD Agent CLI commands, this value is provided using the --principal-namespace flag.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Agent namespace&lt;/strong&gt; - Specifies the namespace hosting the Agent component. This namespace is not created by default, you must create it before adding the resources in this namespace. In Argo CD Agent CLI commands, this value is provided using the --agent-namespace flag.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Context&lt;/strong&gt; - A context refers to a named configuration in the oc CLI that allows you to switch between different clusters. You must be logged in to all clusters and assign distinct context names for the hub and spoke clusters. Examples for cluster names include principal-cluster, hub-cluster, managed-agent-cluster, or autonomous-agent-cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Principal context&lt;/strong&gt; - The context name you provide for the hub (control plane) cluster. For example, if you log in to the hub cluster and rename its context to principal-cluster, you specify it in Argo CD Agent CLI commands as --principal-context principal-cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Agent context&lt;/strong&gt; - The context name you provide for the spoke (workload) cluster. For example, if you log in to a spoke cluster and rename its context to autonomous-agent-cluster, you specify it in Argo CD Agent CLI commands as --agent-context autonomous-agent-cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
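The context renaming described above can be done with the oc CLI. A sketch; my-hub and my-spoke are placeholder context names from this example, not from the article, so replace them with the contexts from your kubeconfig:

```shell
# Rename the kubeconfig contexts to the names the argocd-agentctl
# commands will reference later (--principal-context / --agent-context).
rename_contexts() {
  oc config rename-context my-hub principal-cluster
  oc config rename-context my-spoke autonomous-agent-cluster
}
# Usage: rename_contexts
```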
&lt;div class="sect2"&gt;
&lt;h3 id="_a_word_about_the_setup"&gt;A Word about the Setup&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To create some kind of real customer scenario, I have created two clusters:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;The cluster where we installed the Principal component. This will be the Hub/Management/Principal cluster (We have too many words for this…​)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A separate cluster that will be the Agent cluster. This will be the Spoke/Workload cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The first one is installed on AWS, and the second one is a bare-metal single-node cluster. The two clusters can reach each other via the Internet.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
We are using different contexts for the different clusters for the command line. The &lt;strong&gt;argocd-agentctl&lt;/strong&gt; tool knows the flags &lt;strong&gt;--principal-context&lt;/strong&gt; and &lt;strong&gt;--agent-context&lt;/strong&gt; to switch between the different clusters. Be sure to create the resources on the correct cluster.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_prerequisites"&gt;Prerequisites&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Before we start with the installation of the Principal or Agent component, we need to ensure that the following prerequisites are met:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;We have two OpenShift test clusters.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Both clusters can reach each other.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;OpenShift GitOps Operator is already installed (possible configuration modifications are described in the following sections)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Because of my main focus on OpenShift GitOps, we will try to deploy cluster configurations and not just workload. Therefore, the example Argo CD applications will configure cluster settings. (A banner on the top and bottom of the UI)
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_configure_openshift_gitops_subscription_on_the_hub_cluster"&gt;Configure OpenShift GitOps Subscription on the Hub Cluster&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The OpenShift GitOps Operator installs an Argo CD instance by default. In this test we will disable it, as we do not need that instance. Moreover, and even more importantly, we need to tell the Operator which namespaces it is responsible for. In this case, we will make it responsible for all namespaces on the cluster.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;We need to modify the Subscription &lt;strong&gt;openshift-gitops-operator&lt;/strong&gt; and add the following environment variables:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;spec:
config:
env:
- name: DISABLE_DEFAULT_ARGOCD_INSTANCE &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
value: &amp;#39;true&amp;#39;
- name: ARGOCD_CLUSTER_CONFIG_NAMESPACES &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
value: &amp;#39;*&amp;#39;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Optional: Disable the default Argo CD instance.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Tell the Operator which namespaces it is responsible for. In this case, all namespaces. This is important for the namespace-scoped Argo CD instance, which we will install in the next step.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
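The same change can be applied as a patch from the command line. A sketch; the subscription name and namespace in the commented-out oc call are assumptions about a default install, so adjust them to your environment:

```shell
# Merge-patch body adding the two environment variables shown above.
PATCH='{"spec":{"config":{"env":[{"name":"DISABLE_DEFAULT_ARGOCD_INSTANCE","value":"true"},{"name":"ARGOCD_CLUSTER_CONFIG_NAMESPACES","value":"*"}]}}}'
# Sanity-check that the patch is well-formed JSON before applying it:
echo "$PATCH" | python3 -m json.tool
# Then apply it, e.g.:
# oc patch subscription openshift-gitops-operator -n openshift-gitops-operator --type merge -p "$PATCH"
```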
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_activate_the_principal_component"&gt;Activate the Principal Component&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To activate the Principal component, we first need a cluster (the Hub) where OpenShift GitOps is installed. On this cluster, there might already be a running Argo CD instance.
However, at this time an existing Argo CD instance &lt;strong&gt;cannot&lt;/strong&gt; be used. Instead, a new Argo CD instance must be created, because its application controller must not be activated.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In my case, I will install the Principal component in the namespace &lt;strong&gt;argocd-principal&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To create an Argo CD instance we need to create the following configuration:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
At this very stage, the principal component must be installed in a separate Argo CD instance, since the controller must not be activated. Therefore we create a new Argo CD instance in a new namespace.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
name: hub-argocd
namespace: argocd-principal
spec:
controller:
enabled: false &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
argoCDAgent:
principal:
enabled: true &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
auth: &amp;#34;mtls:CN=([^,]+)&amp;#34; &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
logLevel: &amp;#34;info&amp;#34;
namespace:
allowedNamespaces: &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
- &amp;#34;*&amp;#34;
tls:
insecureGenerate: false &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
jwt:
insecureGenerate: false
sourceNamespaces: &lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;(6)&lt;/b&gt;
- &amp;#34;argocd-agent-bm01&amp;#34;
server:
route:
enabled: true &lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;(7)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Disable the controller component.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Enable the Principal component.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The authentication method for the Principal component. Here mTLS is used, with a regular expression that extracts the agent name from the certificate subject.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Allowed namespaces for the Principal component.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Disable insecure auto-generation of the TLS certificate; we provide our own certificates later.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;6&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Specifies the sourceNamespaces configuration (such a list might already exist).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;7&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Enable the Route for the Principal component.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will start a Pod called &lt;strong&gt;hub-gitops-agent-principal&lt;/strong&gt; in the namespace &lt;strong&gt;argocd-principal&lt;/strong&gt;. However, this Pod will &lt;strong&gt;fail&lt;/strong&gt; for now, which is expected.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;Pod hub-gitops-agent-principal is failing with:
{&amp;#34;level&amp;#34;:&amp;#34;info&amp;#34;,&amp;#34;msg&amp;#34;:&amp;#34;Setting loglevel to info&amp;#34;,&amp;#34;time&amp;#34;:&amp;#34;2026-01-19T13:45:57Z&amp;#34;}
time=&amp;#34;2026-01-19T13:45:57Z&amp;#34; level=info msg=&amp;#34;Loading gRPC TLS certificate from secret argocd-principal/argocd-agent-principal-tls&amp;#34;
time=&amp;#34;2026-01-19T13:45:57Z&amp;#34; level=info msg=&amp;#34;Loading root CA certificate from secret argocd-principal/argocd-agent-ca&amp;#34;
time=&amp;#34;2026-01-19T13:45:57Z&amp;#34; level=info msg=&amp;#34;Loading resource proxy TLS certificate from secrets argocd-principal/argocd-agent-resource-proxy-tls and argocd-principal/argocd-agent-ca&amp;#34;
[FATAL]: Error reading TLS config for resource proxy: error getting proxy certificate: could not read TLS secret argocd-principal/argocd-agent-resource-proxy-tls: secrets &amp;#34;argocd-agent-resource-proxy-tls&amp;#34; not found &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The secret is not yet available.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The Pod is failing because the secrets required for authentication are not yet available. They are created in a later step, because some settings, such as the principal hostname and the resource proxy service name, are only available after the Red Hat OpenShift GitOps Operator enables the Principal component.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;At this point, the Operator has already created the Route object:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;Route: hub-gitops-agent-principal
Hostname: https://hub-gitops-agent-principal-argocd-principal.apps.ocp.aws.ispworld.at
Service: hub-gitops-agent-principal&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_configure_the_appproject"&gt;Configure the AppProject&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;If you configured the AppProject with sourceNamespaces, you need to add the following to the AppProject (for example to the &lt;strong&gt;default&lt;/strong&gt; AppProject). This must exactly match the namespaces you created for the Agent.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;spec:
sourceNamespaces:
- &amp;#34;argocd-agent-bm01&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You can also use this patch command:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc patch appproject default -n argocd-principal --type=&amp;#39;merge&amp;#39; \
-p &amp;#39;{&amp;#34;spec&amp;#34;: {&amp;#34;sourceNamespaces&amp;#34;: [&amp;#34;argocd-agent-bm01&amp;#34;]}}&amp;#39; --context aws&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Restart the Argo CD Pods to apply the changes.&lt;/p&gt;
&lt;/div&gt;
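&lt;div class="paragraph"&gt;
&lt;p&gt;One way to restart them (assuming the namespace and context names from above) is to trigger a rollout of all workloads in the namespace:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Restart the Argo CD Deployments and the application-controller StatefulSet
oc rollout restart deployment -n argocd-principal --context aws
oc rollout restart statefulset -n argocd-principal --context aws&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;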
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_download_argocd_agentctl"&gt;Download argocd-agentctl&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To create the required secrets, we need to download the &lt;strong&gt;argocd-agentctl&lt;/strong&gt; tool.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This can be found at: &lt;a href="https://developers.redhat.com/content-gateway/rest/browse/pub/cgw/openshift-gitops/" class="bare"&gt;https://developers.redhat.com/content-gateway/rest/browse/pub/cgw/openshift-gitops/&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Download and install it for your platform.&lt;/p&gt;
&lt;/div&gt;
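&lt;div class="paragraph"&gt;
&lt;p&gt;As a rough sketch for Linux (the artifact name below is hypothetical; the exact file name and version on the download page will differ, so adapt the URL accordingly):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Hypothetical artifact name -- pick the build for your OS/architecture
curl -LO https://developers.redhat.com/content-gateway/rest/browse/pub/cgw/openshift-gitops/&amp;lt;version&amp;gt;/argocd-agentctl-linux-amd64
chmod +x argocd-agentctl-linux-amd64
sudo mv argocd-agentctl-linux-amd64 /usr/local/bin/argocd-agentctl
argocd-agentctl --help  # verify the binary works&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;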
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_create_required_secrets"&gt;Create Required Secrets&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The following steps will create the required secrets for the Principal component. In this example, we will create our own CA and certificates. This is suitable for development and testing purposes. For production environments, you should use certificates issued by your organization’s PKI or a trusted certificate authority.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Use your company’s CA and certificates for production environments.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_initialize_the_certificate_authority_ca"&gt;Initialize the Certificate Authority (CA)&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To create a certificate authority (CA) that signs other certificates, we need to run the following command:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;argocd-agentctl pki init \
--principal-namespace argocd-principal \ &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
--principal-context aws&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The namespace where the Principal component is running.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will initialize the CA and store it in the secret &lt;strong&gt;argocd-principal/argocd-agent-ca&lt;/strong&gt;. The certificate looks like:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;&amp;#34;Certificate Information:
Common Name: argocd-agent-ca
Subject Alternative Names:
Organization: DO NOT USE IN PRODUCTION
Organization Unit:
Locality:
State:
Country:
Valid From: January 15, 2026
Valid To: January 15, 2036
Issuer: argocd-agent-ca, DO NOT USE IN PRODUCTION
Serial Number: 1 (0x1)&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_generate_service_certificate_for_the_principal"&gt;Generate Service Certificate for the Principal&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To generate the server certificate for the Principal’s gRPC service, run the following command:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;argocd-agentctl pki issue principal \
--principal-namespace argocd-principal \
--principal-context aws \
--dns &amp;#34;&amp;lt;YOUR PRINCIPAL HOSTNAME&amp;gt;&amp;#34; &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The hostname of the Principal service. This must match the hostname of the Principal’s route (spec.host) or, if a LoadBalancer Service is used, .status.loadBalancer.ingress.hostname.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_generate_the_resource_proxy_certificate"&gt;Generate the Resource Proxy Certificate&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The resource proxy service also requires a certificate. Since the proxy runs on the same cluster as the Principal, we can use the service name directly. Generate the certificate with the following command:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;argocd-agentctl pki issue resource-proxy \
--principal-namespace argocd-principal \
--principal-context aws \
--dns hub-argocd-agent-principal-resource-proxy &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The service name for the resource proxy. This must match the name of the Resource Proxy Service.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_generate_the_jwt_signing_key"&gt;Generate the JWT Signing Key&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Generate the RSA private key used for JWT signing by running the following command:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;argocd-agentctl jwt create-key \
--principal-namespace argocd-principal \
--principal-context aws&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will generate the RSA private key and store it in the secret &lt;strong&gt;argocd-principal/argocd-agent-jwt&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_verify_the_principal_component"&gt;Verify the Principal Component&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Now the principal pod should be running successfully.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
If the Pod still shows an error, wait a few moments or restart the Pod.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
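&lt;div class="paragraph"&gt;
&lt;p&gt;For example, check the Pod status:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get pod -n argocd-principal --context aws
# The hub-gitops-agent-principal Pod should now be in the Running state&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;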
&lt;div class="paragraph"&gt;
&lt;p&gt;In the logs you should see the following:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;{&amp;#34;level&amp;#34;:&amp;#34;info&amp;#34;,&amp;#34;msg&amp;#34;:&amp;#34;Setting loglevel to info&amp;#34;,&amp;#34;time&amp;#34;:&amp;#34;2026-01-19T14:09:31Z&amp;#34;}
time=&amp;#34;2026-01-19T14:09:31Z&amp;#34; level=info msg=&amp;#34;Loading gRPC TLS certificate from secret argocd-principal/argocd-agent-principal-tls&amp;#34;
time=&amp;#34;2026-01-19T14:09:31Z&amp;#34; level=info msg=&amp;#34;Loading root CA certificate from secret argocd-principal/argocd-agent-ca&amp;#34;
time=&amp;#34;2026-01-19T14:09:31Z&amp;#34; level=info msg=&amp;#34;Loading resource proxy TLS certificate from secrets argocd-principal/argocd-agent-resource-proxy-tls and argocd-principal/argocd-agent-ca&amp;#34;
time=&amp;#34;2026-01-19T14:09:31Z&amp;#34; level=info msg=&amp;#34;Loading JWT signing key from secret argocd-principal/argocd-agent-jwt&amp;#34;
time=&amp;#34;2026-01-19T14:09:31Z&amp;#34; level=info msg=&amp;#34;Starting argocd-agent (server) v99.9.9-unreleased (ns=argocd-principal, allowed_namespaces=[*])&amp;#34; module=server&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This concludes the configuration of the Principal component. Creating the required secrets takes quite a few steps, but it only needs to be done once. A GitOps-friendly way to achieve this might be a Kubernetes Job (if you consider that GitOps-friendly…​ which I do).&lt;/p&gt;
&lt;/div&gt;
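&lt;div class="paragraph"&gt;
&lt;p&gt;As a very rough sketch of such a Job (this assumes a custom image that bundles &lt;strong&gt;argocd-agentctl&lt;/strong&gt;, a ServiceAccount with RBAC permissions on Secrets, and that the tool can use the in-cluster configuration; none of these are provided out of the box):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: batch/v1
kind: Job
metadata:
  name: argocd-agent-pki-init
  namespace: argocd-principal
spec:
  template:
    spec:
      serviceAccountName: argocd-agent-pki   # hypothetical; needs RBAC on Secrets
      restartPolicy: Never
      containers:
        - name: pki-init
          image: registry.example.com/tools/argocd-agentctl:latest  # hypothetical image
          command:
            - argocd-agentctl
            - pki
            - init
            - --principal-namespace
            - argocd-principal&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;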
&lt;hr/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_activate_the_agent_component"&gt;Activate the Agent Component&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;After the Principal component is configured, you can activate one or more Agents (spoke or workload clusters) and connect them with the Hub.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The prerequisites are:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The Principal component is configured and running.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You have access to both the Principal and Agent clusters.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The argocd-agentctl CLI tool is installed and accessible from your environment.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The helm CLI is installed and configured. Ensure that the helm CLI version is later than v3.8.0.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;OpenShift GitOps Operator is installed and configured on the Agent cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
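&lt;div class="paragraph"&gt;
&lt;p&gt;You can verify the installed Helm version with:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;helm version --short
# Must report a version later than v3.8.0&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;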
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Yes, a separate Helm Chart will be used to install the Agent component on the target cluster. This time it is not a Chart that I created, but one provided by Red Hat. :)
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_create_agent_secret_on_principal_cluster"&gt;Create Agent Secret on Principal Cluster&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;We first need to create an agent on the Principal cluster.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;argocd-agentctl agent create &amp;#34;argocd-agent-bm01&amp;#34; \ &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
--principal-context &amp;#34;aws&amp;#34; \ &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
--principal-namespace &amp;#34;argocd-principal&amp;#34; \ &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
--resource-proxy-server &amp;#34;hub-argocd-agent-principal-resource-proxy:9090&amp;#34; &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;A (unique) name for the Agent.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The context name for the Principal cluster. In my case it is &amp;#34;aws&amp;#34;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The namespace where the Principal component is running.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The resource proxy server URL. This is the URL of the Principal’s resource proxy service including the port (9090).&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will create the secret &lt;strong&gt;cluster-argocd-agent-bm01&lt;/strong&gt; with the label &lt;strong&gt;argocd.argoproj.io/secret-type: cluster&lt;/strong&gt; in the Argo CD namespace.&lt;/p&gt;
&lt;/div&gt;
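&lt;div class="paragraph"&gt;
&lt;p&gt;You can verify that the secret exists:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get secret -n argocd-principal --context aws \
  -l argocd.argoproj.io/secret-type=cluster
# cluster-argocd-agent-bm01 should appear in the list&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;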
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_create_the_agent_namespace_on_the_agent_cluster"&gt;Create the Agent Namespace on the Agent Cluster&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Be sure that the target namespace on the agent or workload cluster exists. If not, create it first.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc create namespace argocd-agent-bm01 --context bm &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The name of the namespace.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_propagate_the_principal_ca_to_the_agent_cluster"&gt;Propagate the Principal CA to the Agent Cluster&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To copy the CA certificate from the Principal cluster to the Agent cluster, run the following command:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;argocd-agentctl pki propagate \
--agent-context bm \ &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
--principal-context aws \ &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
--principal-namespace argocd-principal \ &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
--agent-namespace argocd-agent-bm01 &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The context name for the Agent cluster.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The context name for the Principal cluster.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The namespace where the Principal component is running.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The namespace where the Agent component is running.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will copy the CA certificate from the Principal cluster to the Agent cluster into the namespace and secret &lt;strong&gt;argocd-agent-bm01/argocd-agent-ca&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Only the certificate is copied. The private key is not copied.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_generate_a_client_certificate_for_the_agent"&gt;Generate a client certificate for the Agent&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Based on the propagated CA, we now create a client certificate for the Agent. This is done with the following command:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;argocd-agentctl pki issue agent &amp;#34;argocd-agent-bm01&amp;#34; \
--principal-context &amp;#34;aws&amp;#34; \
--agent-context &amp;#34;bm&amp;#34; \
--agent-namespace &amp;#34;argocd-agent-bm01&amp;#34; \
--principal-namespace &amp;#34;argocd-principal&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will create the secret &lt;strong&gt;argocd-agent-client-tls&lt;/strong&gt; on the workload cluster, containing a certificate and a key, signed by the CA certificate imported from the Principal cluster.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_configure_openshift_gitops_subscription_on_the_spoke_cluster"&gt;Configure OpenShift GitOps Subscription on the Spoke Cluster&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The OpenShift GitOps Operator installs an Argo CD instance by default. In this test we will disable it, as we do not need that instance. More importantly, we need to tell the Operator which namespaces it is responsible for. In this case, we make the Operator responsible for all namespaces on the cluster.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;We need to modify the Subscription &lt;strong&gt;openshift-gitops-operator&lt;/strong&gt; and add the following environment variables:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;spec:
config:
env:
- name: DISABLE_DEFAULT_ARGOCD_INSTANCE &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
value: &amp;#39;true&amp;#39;
- name: ARGOCD_CLUSTER_CONFIG_NAMESPACES &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
value: &amp;#39;*&amp;#39;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Disable the default Argo CD instance.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The namespaces the Operator is responsible for. In this case, all namespaces.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_create_argo_cd_instance_on_the_agent_cluster"&gt;Create Argo CD Instance on the Agent Cluster&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To create a minimalistic Argo CD instance on the Agent cluster, we can use the following Argo CD configuration:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
name: agent-argocd
namespace: argocd-agent-bm01 &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
spec:
server:
enabled: false&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The namespace where the Argo CD instance is running. This is also the name of the Agent we have created earlier on the principal cluster.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will create a lightweight Argo CD instance in the namespace &lt;strong&gt;argocd-agent-bm01&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get pod -n argocd-agent-bm01 --context bm
NAME READY STATUS RESTARTS AGE
agent-argocd-application-controller-0 1/1 Running 0 2m49s
agent-argocd-redis-5f6759f6fb-2fdnt 1/1 Running 0 2m49s
agent-argocd-repo-server-7949d97dfd-dsk6b 1/1 Running 0 2m49s&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;hr/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_installing_the_agent"&gt;Installing the Agent&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To install the agent we will use a Helm Chart provided by Red Hat. This will install the Agent component on the target cluster.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;As a reminder, we have two modes of operation for the Agent:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Managed mode&lt;/strong&gt; — the control plane/hub defines Argo CD applications and their specifications.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Autonomous mode&lt;/strong&gt; — each workload cluster/spoke defines its own Argo CD applications and their specifications.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_create_required_network_policy"&gt;Create Required Network Policy&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Before we start with the actual installation of the Agent, we need to ensure that the Redis instance on the spoke cluster is accessible to the Agent. We create a NetworkPolicy accordingly:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: argocd-agent-bm01-redis-network-policy
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: agent-argocd-redis &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
ingress:
- ports:
- protocol: TCP
port: 6379
from:
- podSelector:
matchLabels:
app.kubernetes.io/name: argocd-agent-agent
policyTypes:
- Ingress&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The name of the Redis instance. The label is based on &amp;lt;instance name&amp;gt;-redis.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Apply the NetworkPolicy to the spoke cluster:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc apply -f network-policy.yaml -n argocd-agent-bm01 --context bm&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_add_the_helm_chart_repository"&gt;Add the Helm Chart Repository&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Add the Helm repository:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;helm repo add openshift-helm-charts https://charts.openshift.io/
helm repo update&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_install_a_managed_agent_with_the_helm_chart"&gt;Install a managed Agent with the Helm Chart&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Install the agent in the &lt;strong&gt;managed&lt;/strong&gt; mode using the Helm Chart. The following parameters are used:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;namespaceOverride&lt;/strong&gt; - The namespace where the Agent is running.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;agentMode&lt;/strong&gt; - The mode of the Agent.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;server&lt;/strong&gt; - The server URL of the Principal component. This is the spec.host setting of the Principal’s route.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;argoCdRedisSecretName&lt;/strong&gt; - The name of the Redis secret.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;argoCdRedisPasswordKey&lt;/strong&gt; - The key of the Redis password.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;redisAddress&lt;/strong&gt; - The address of the Redis instance.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;helm install redhat-argocd-agent openshift-helm-charts/redhat-argocd-agent \
--set namespaceOverride=argocd-agent-bm01 \
--set agentMode=&amp;#34;managed&amp;#34; \
--set server=&amp;#34;serverURL of principal route&amp;#34; \
--set argoCdRedisSecretName=&amp;#34;agent-argocd-redis-initial-password&amp;#34; \
--set argoCdRedisPasswordKey=&amp;#34;admin.password&amp;#34; \
--set redisAddress=&amp;#34;agent-argocd-redis:6379&amp;#34; \
--kube-context &amp;#34;bm&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;With this chart a pod on the &lt;strong&gt;spoke&lt;/strong&gt; cluster will be created and will start to synchronize the Argo CD applications between the hub and the spoke cluster.&lt;/p&gt;
&lt;/div&gt;
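&lt;div class="paragraph"&gt;
&lt;p&gt;You can verify that the Agent Pod is running (it carries the label &lt;strong&gt;app.kubernetes.io/name: argocd-agent-agent&lt;/strong&gt;, as used in the NetworkPolicy above):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get pod -n argocd-agent-bm01 --context bm \
  -l app.kubernetes.io/name=argocd-agent-agent&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;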
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_verify_the_managed_agent"&gt;Verify the Managed Agent&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To verify the Agent in &lt;strong&gt;managed&lt;/strong&gt; mode we need to create an Argo CD Application on the &lt;strong&gt;hub&lt;/strong&gt; cluster. We can try the following Application. The Application is taken from the &lt;a href="https://github.com/tjungbauer/openshift-clusterconfig-gitops" target="_blank" rel="noopener"&gt;openshift-clusterconfig-gitops&lt;/a&gt; repository and simply adds a banner to the top of the OpenShift UI. I typically use this as a quick test to verify that GitOps is working.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: branding
namespace: argocd-agent-bm01 &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
spec:
destination:
namespace: default
server: &amp;#39;https://hub-argocd-agent-principal-resource-proxy:9090?agentName=argocd-agent-bm01&amp;#39; &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
project: default
source:
path: clusters/management-cluster/branding
repoURL: &amp;#39;https://github.com/tjungbauer/openshift-clusterconfig-gitops&amp;#39;
targetRevision: main
syncPolicy:
automated:
prune: true
selfHeal: true&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The namespace of the agent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The server URL of the Principal component: the URL plus the port and the agentName. Alternatively, you can use &lt;strong&gt;name: argocd-agent-bm01&lt;/strong&gt; (the name of the cluster), which might be easier to read.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
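&lt;div class="paragraph"&gt;
&lt;p&gt;As a sketch, the destination using the cluster name instead of the server URL would look like this (only the &lt;code&gt;destination&lt;/code&gt; block changes):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;spec:
  destination:
    namespace: default
    name: argocd-agent-bm01&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;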
&lt;div class="paragraph"&gt;
&lt;p&gt;Apply the Application to the &lt;strong&gt;hub&lt;/strong&gt; cluster:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc apply -f application.yaml -n argocd-agent-bm01 --context aws&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Since the application will try to automatically synchronize the configuration, the status will change to &lt;strong&gt;Synced&lt;/strong&gt; after a few seconds:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;status:
resources:
- group: console.openshift.io
kind: ConsoleNotification
name: topbanner
status: Synced
version: v1&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;and the (top) banner will be visible on the UI:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/gitopscollection/images/agent/banner-top.png" alt="Banner on the top of the UI"/&gt;
&lt;/div&gt;
&lt;div class="title"&gt;Figure 1. Banner on the top of the UI&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;On the &lt;strong&gt;hub&lt;/strong&gt; cluster, the Argo CD Application will be Synced:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get applications --context aws -A
NAMESPACE NAME SYNC STATUS HEALTH STATUS
argocd-agent-bm01 branding-banner Synced Healthy&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_install_an_autonomous_agent_with_the_helm_chart"&gt;Install an autonomous Agent with the Helm Chart&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Let’s clean up the first installation of the chart (the managed agent) to avoid conflicts.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;helm uninstall redhat-argocd-agent --kube-context &amp;#34;bm&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Install the agent in the &lt;strong&gt;autonomous&lt;/strong&gt; mode using the Helm Chart. The following parameters are used:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;namespaceOverride&lt;/strong&gt; - The namespace where the Agent is running.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;agentMode&lt;/strong&gt; - The mode of the Agent.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;server&lt;/strong&gt; - The server URL of the Principal component. This is the spec.host setting of the Principal’s route.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;argoCdRedisSecretName&lt;/strong&gt; - The name of the Redis secret.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;argoCdRedisPasswordKey&lt;/strong&gt; - The key of the Redis password.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;redisAddress&lt;/strong&gt; - The address of the Redis instance.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The only difference from the managed mode is the &lt;strong&gt;agentMode&lt;/strong&gt; parameter.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;helm install redhat-argocd-agent-autonomous openshift-helm-charts/redhat-argocd-agent \
--set namespaceOverride=argocd-agent-bm01 \
--set agentMode=&amp;#34;autonomous&amp;#34; \
--set server=&amp;#34;serverURL of principal route&amp;#34; \
--set argoCdRedisSecretName=&amp;#34;agent-argocd-redis-initial-password&amp;#34; \
--set argoCdRedisPasswordKey=&amp;#34;admin.password&amp;#34; \
--set redisAddress=&amp;#34;agent-argocd-redis:6379&amp;#34; \
--kube-context &amp;#34;bm&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This chart creates a pod on the &lt;strong&gt;spoke&lt;/strong&gt; cluster that starts synchronizing the Argo CD applications between the hub and the spoke cluster.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_verify_the_autonomous_agent"&gt;Verify the Autonomous Agent&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To verify the Agent in &lt;strong&gt;autonomous&lt;/strong&gt; mode we need to create an Argo CD Application on the &lt;strong&gt;spoke&lt;/strong&gt; cluster.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;We can try the following Application. It is basically the same one we used to test the managed mode, except that this time it adds a banner to the bottom of the UI.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: branding-bottom-banner
namespace: argocd-agent-bm01 &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
spec:
destination:
namespace: default
server: &amp;#39;https://kubernetes.default.svc&amp;#39; &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
project: default
source:
path: clusters/management-cluster/branding-bottom
repoURL: &amp;#39;https://github.com/tjungbauer/openshift-clusterconfig-gitops&amp;#39;
targetRevision: main
syncPolicy:
automated:
prune: true
selfHeal: true&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The namespace of the agent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The server URL of the local cluster, since in autonomous mode the application is managed locally.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Apply the Application to the &lt;strong&gt;spoke&lt;/strong&gt; cluster:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc apply -f application.yaml -n argocd-agent-bm01 --context bm&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;As in managed mode, the status will change to &lt;strong&gt;Synced&lt;/strong&gt; after a few seconds, and this time the bottom banner will be visible on the UI:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/gitopscollection/images/agent/banner-bottom.png" alt="Banner on the bottom of the UI"/&gt;
&lt;/div&gt;
&lt;div class="title"&gt;Figure 2. Banner on the bottom of the UI&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Moreover, the Argo CD Application will appear on the &lt;strong&gt;hub&lt;/strong&gt; cluster:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get applications --context aws -A
NAMESPACE NAME SYNC STATUS HEALTH STATUS
argocd-agent-bm01 branding-bottom-banner Synced Healthy
argocd-agent-bm01 branding-banner Synced Healthy&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;hr/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_troubleshooting"&gt;Troubleshooting&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;During the installation and configuration of the Argo CD Agent, you might run into problems. Here are some issues I encountered during my tests:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_principal_pod_fails_to_start"&gt;Principal Pod Fails to Start&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;If the Principal pod fails to start with errors about missing secrets, verify that all required secrets have been created:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get secrets -n argocd-principal | grep argocd-agent&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You should see the following secrets:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;argocd-agent-ca&lt;/strong&gt; - The CA certificate&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;argocd-agent-principal-tls&lt;/strong&gt; - The Principal’s TLS certificate&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;argocd-agent-resource-proxy-tls&lt;/strong&gt; - The Resource Proxy’s TLS certificate&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;argocd-agent-jwt&lt;/strong&gt; - The JWT signing key&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;If any of these are missing, re-run the corresponding &lt;code&gt;argocd-agentctl&lt;/code&gt; command to create them.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_redis_errors_in_the_principal_pod"&gt;Redis Errors in the Principal Pod&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;If you see errors like the following in the logs of the Principal pod, ensure that the Argo CD instance in that namespace does not have the application controller enabled (set &lt;code&gt;spec.controller.enabled: false&lt;/code&gt;). Hopefully, this will change in the future.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;time=&amp;#34;2026-01-20T04:57:10Z&amp;#34; level=error msg=&amp;#34;unexpected lack of &amp;#39;_&amp;#39; namespace/name separate: &amp;#39;app|managed-resources|branding|1.8.3&amp;#39;&amp;#34; connUUID=3c13b30b-9f84-4af6-93d8-e1c03c4c7898 function=redisFxn module=redisProxy&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
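&lt;div class="paragraph"&gt;
&lt;p&gt;As a sketch, the relevant fragment of the Argo CD custom resource would look like this (the instance name and namespace are placeholders for your own setup):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: argocd                # placeholder: the name of your Argo CD instance
  namespace: argocd-principal
spec:
  controller:
    enabled: false            # disable the application controller for this instance&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;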
&lt;hr/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_limitations"&gt;Limitations&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;While the Argo CD Agent brings significant improvements, there are some limitations to be aware of:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_separate_argo_cd_instance_required"&gt;Separate Argo CD Instance Required&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Currently, the Principal component &lt;strong&gt;cannot&lt;/strong&gt; be installed alongside an existing Argo CD instance where the application controller is enabled. You must create a separate Argo CD instance with the controller disabled (&lt;code&gt;spec.controller.enabled: false&lt;/code&gt;). To me, this is one of the biggest limitations. However, this will be addressed in the future and is tracked in the following issue: &lt;a href="https://github.com/argoproj-labs/argocd-agent/issues/708" class="bare"&gt;https://github.com/argoproj-labs/argocd-agent/issues/708&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_manual_certificate_management"&gt;Manual Certificate Management&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The mTLS certificates must be created and managed manually by the user. There is no automatic certificate rotation or renewal. For production environments, you should integrate with your organization’s PKI infrastructure and implement a certificate rotation strategy.&lt;/p&gt;
&lt;/div&gt;
&lt;hr/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_summary"&gt;Summary&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The Argo CD Agent provides a powerful solution for managing multiple clusters with Argo CD. By combining the benefits of centralized management with distributed application controllers, it addresses the scalability and single point of failure challenges of traditional deployment models.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;While the initial setup requires several steps, especially around certificate management, the resulting architecture offers a robust foundation for GitOps at scale.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>The Hitchhiker's Guide to Observability - Here Comes Grafana - Part 7</title><link>https://blog.stderr.at/openshift-platform/observability/observability/2025-12-04-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part7/</link><pubDate>Thu, 04 Dec 2025 00:00:00 +0000</pubDate><guid>https://blog.stderr.at/openshift-platform/observability/observability/2025-12-04-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part7/</guid><description>&lt;div class="paragraph"&gt;
&lt;p&gt;While we have been using the integrated tracing UI in OpenShift, it is time to summon &lt;strong&gt;Grafana&lt;/strong&gt;. Grafana is a visualization powerhouse that allows teams to build custom dashboards, correlate traces with logs and metrics, and gain deep insights into their applications. In this article, we’ll deploy a dedicated Grafana instance for &lt;strong&gt;team-a&lt;/strong&gt; in their namespace, configure a Tempo datasource, and create a dashboard to explore distributed traces.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_prerequisites"&gt;Prerequisites&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Before we begin, make sure you have:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The Grafana Operator installed cluster-wide (we’ll cover this first)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;TempoStack deployed and configured (from &lt;a href="https://blog.stderr.at/day-2/observability/2025-11-24-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part2/"&gt;Part 2&lt;/a&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Team-a namespace with traces flowing (from &lt;a href="https://blog.stderr.at/day-2/observability/2025-11-26-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part4/"&gt;Part 4&lt;/a&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_the_grafana_operator"&gt;The Grafana Operator&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The Grafana Operator provides Custom Resource Definitions (CRDs) for managing Grafana instances, datasources, and dashboards declaratively.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Make sure the Operator is installed. Typically, it is installed in the &lt;code&gt;openshift-operators&lt;/code&gt; namespace. If you keep that namespace, all you need to do is create a Subscription. Otherwise, you will also need to create an OperatorGroup.&lt;/p&gt;
&lt;/div&gt;
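&lt;div class="paragraph"&gt;
&lt;p&gt;For illustration, an OperatorGroup for a custom namespace could look like this (the names used here are placeholders):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: grafana-operator-group
  namespace: my-grafana-operator   # placeholder: your custom operator namespace
spec:
  targetNamespaces:
    - my-grafana-operator&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;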
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
If you select a different namespace, be sure to verify any RBAC bindings you might need to set up.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_install_subscription"&gt;Install Subscription&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The Grafana Operator is available from the OperatorHub. Either use the UI to install it, or simply create a Subscription:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: grafana-operator
namespace: openshift-operators &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
spec:
channel: v5 &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
installPlanApproval: Automatic
name: grafana-operator
source: community-operators &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
sourceNamespace: openshift-marketplace&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Installing in &lt;code&gt;openshift-operators&lt;/code&gt; makes the operator available cluster-wide&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Use the v5 channel for the operator. This is the only available channel currently.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The source is the OperatorHub catalog. Grafana is a &lt;strong&gt;community&lt;/strong&gt; operator.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Verify the operator is running:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get pods -n openshift-operators -l app.kubernetes.io/name=grafana-operator&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You should see output similar to:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;NAME READY STATUS RESTARTS AGE
grafana-operator-7d8f9c6b5-xyz12 1/1 Running 0 2m&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_create_grafana_instance_for_team_a"&gt;Create Grafana Instance for Team-A&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Now let’s deploy a Grafana instance in the &lt;strong&gt;team-a namespace&lt;/strong&gt;. This gives the team full control over their dashboards while isolating them from other teams.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
We are logging in as the &lt;strong&gt;project admin&lt;/strong&gt; user of the &lt;strong&gt;team-a namespace&lt;/strong&gt;, so everything we do in this namespace is done as this user.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_create_the_grafana_admin_secret"&gt;Create the Grafana Admin Secret&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;First, we need to create a secret containing the Grafana admin credentials, since we do not want to store passwords in plain text in our manifests!&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Never store passwords in plain text in your manifests; use a secrets management solution such as Sealed Secrets or the External Secrets Operator instead.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Create the following Secret, replacing the password with a strong one of your own.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: v1
kind: Secret
metadata:
name: grafana-admin-credentials
namespace: team-a
type: Opaque
stringData:
GF_SECURITY_ADMIN_USER: admin &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
GF_SECURITY_ADMIN_PASSWORD: &amp;lt;your-secure-password&amp;gt; &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The admin username for Grafana&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Replace with a strong password&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_deploy_the_grafana_instance"&gt;Deploy the Grafana Instance&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The Grafana Operator provides a Custom Resource Definition (CRD) for deploying Grafana instances. We can use this to deploy a Grafana instance in the &lt;strong&gt;team-a namespace&lt;/strong&gt;.
This instance is labelled with &lt;code&gt;dashboards: grafana-team-a&lt;/code&gt; to make it easier to target with GrafanaDashboard resources.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: grafana.integreatly.org/v1beta1
kind: Grafana
metadata:
name: grafana
namespace: team-a
labels:
dashboards: &amp;#34;grafana-team-a&amp;#34; &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
spec:
config:
log:
mode: &amp;#34;console&amp;#34;
auth:
disable_login_form: &amp;#34;false&amp;#34;
security:
admin_user: ${GF_SECURITY_ADMIN_USER} &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
admin_password: ${GF_SECURITY_ADMIN_PASSWORD}
deployment:
spec:
replicas: 1
template:
spec:
containers:
- name: grafana
envFrom:
- secretRef:
name: grafana-admin-credentials &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
resources:
requests:
cpu: 100m
memory: 256Mi
limits:
cpu: 500m
memory: 512Mi
route:
spec:
tls:
termination: edge &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
disableDefaultAdminSecret: true &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Label used by GrafanaDashboard resources to target this instance. Required so that the datasource can find the instance.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;References environment variables from the secret&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Mounts the secret as environment variables&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Creates an OpenShift Route with TLS edge termination&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Disables the default admin secret created by the Grafana Operator. We will use our own secret, and this setting prevents the operator from overwriting it.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;After a few moments, the Grafana pod should be ready.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get pods -n team-a -l app=grafana -w&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Get the Route URL:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get route grafana-route -n team-a -o jsonpath=&amp;#39;{.spec.host}&amp;#39;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Use this route and the credentials from the secret to log into Grafana.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/observability/images/observability/grafana-login.png" alt="Grafana Login"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_configure_tempo_datasource"&gt;Configure Tempo Datasource&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The datasource connects Grafana to TempoStack (or any other datasource like Loki, Prometheus, etc.), allowing you to query and visualize traces. We need to authenticate against TempoStack with a client certificate and be sure to send the tenant ID of tenantA in the header of the queries.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
If you read the previous part of this series, you saw that we created a ClusterRole to grant read access to the traces. The Binding for this role allows &lt;strong&gt;read&lt;/strong&gt; access to everybody who is &lt;strong&gt;authenticated against the system&lt;/strong&gt;. Therefore, we do not need to take care of permissions…​ at this point.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_tempostack_client_certificate_authentication"&gt;TempoStack Client Certificate Authentication&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To query the TempoStack instance, we need to authenticate with it. This is done by using a client certificate. Therefore, we need to create a secret with the client certificate and key. In addition, this secret will also contain the tenant ID, which must be sent to the TempoStack instance as a header.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;First, get the client certificate and key from the TempoStack instance:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get secret -n tempostack | grep gateway-mtls&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You should find something like this:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;tempo-simplest-gateway-mtls kubernetes.io/tls 2 19d&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Fetch the client certificate and key and store them in a file locally:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get secret -n tempostack tempo-simplest-gateway-mtls -o jsonpath=&amp;#39;{.data.tls\.crt}&amp;#39; | base64 -d &amp;gt; tls.crt
oc get secret -n tempostack tempo-simplest-gateway-mtls -o jsonpath=&amp;#39;{.data.tls\.key}&amp;#39; | base64 -d &amp;gt; tls.key&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Now, let’s get the tenant ID from the TempoStack instance:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get tempostack/simplest -n tempostack -o jsonpath=&amp;#39;{.spec.tenants.authentication}&amp;#39;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You should find something like this (tenantA is the one we are interested in):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-json hljs" data-lang="json"&gt;[
{
&amp;#34;tenantId&amp;#34;: &amp;#34;1610b0c3-c509-4592-a256-a1871353dbfc&amp;#34;, &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
&amp;#34;tenantName&amp;#34;: &amp;#34;tenantA&amp;#34;
},
{
&amp;#34;tenantId&amp;#34;: &amp;#34;1610b0c3-c509-4592-a256-a1871353dbfd&amp;#34;,
&amp;#34;tenantName&amp;#34;: &amp;#34;tenantB&amp;#34;
}
]&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The tenant ID of tenantA&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Now, let’s create the secret with the client certificate and key and the tenant ID:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc create secret generic tempo-auth -n team-a --from-file=tls.crt=tls.crt --from-file=tls.key=tls.key --from-literal=tenantA=&amp;#34;1610b0c3-c509-4592-a256-a1871353dbfc&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;That’s everything we need to authenticate with TempoStack for tenantA.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
If you prefer, you can create a separate secret for each tenant ID. Just be sure to update the &lt;code&gt;valuesFrom&lt;/code&gt; section of the GrafanaDatasource resource.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
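&lt;div class="paragraph"&gt;
&lt;p&gt;For illustration, a separate secret for tenantB could be created the same way, reusing the client certificate and key and the tenantB ID shown above (the secret name is an example):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc create secret generic tempo-auth-tenant-b -n team-a \
  --from-file=tls.crt=tls.crt \
  --from-file=tls.key=tls.key \
  --from-literal=tenantB=&amp;#34;1610b0c3-c509-4592-a256-a1871353dbfd&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;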
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_create_the_tempo_datasource"&gt;Create the Tempo Datasource&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Now we can create the GrafanaDatasource that connects to TempoStack:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDatasource
metadata:
name: tempo-tenanta-datasource
namespace: team-a
spec:
allowCrossNamespaceImport: false
datasource:
access: proxy
editable: true
isDefault: true
jsonData:
httpHeaderName1: X-Scope-OrgID &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
timeInterval: 5s
tlsAuth: true
tlsAuthWithCACert: false
tlsSkipVerify: true
name: tempo-tenanta &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
secureJsonData: &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
httpHeaderValue1: &amp;#39;${tenantA}&amp;#39;
tlsClientCert: &amp;#39;${tls.crt}&amp;#39;
tlsClientKey: &amp;#39;${tls.key}&amp;#39;
type: tempo &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
url: &amp;#39;https://tempo-simplest-query-frontend.tempostack.svc.cluster.local:3200&amp;#39; &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
instanceSelector:
matchLabels:
dashboards: grafana-team-a &lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;(6)&lt;/b&gt;
resyncPeriod: 10m0s
valuesFrom: &lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;(7)&lt;/b&gt;
- targetPath: secureJsonData.tlsClientCert
valueFrom:
secretKeyRef:
key: tls.crt
name: tempo-auth
- targetPath: secureJsonData.tlsClientKey
valueFrom:
secretKeyRef:
key: tls.key
name: tempo-auth
- targetPath: secureJsonData.httpHeaderValue1
valueFrom:
secretKeyRef:
key: tenantA
name: tempo-auth&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The header name for the tenant ID.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The name of the Tempo datasource.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The references to the client certificate, key, and tenant ID. These values come from the secret we created earlier.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The type of the datasource. This type must be available in Grafana. For Tempo, the type is &lt;code&gt;tempo&lt;/code&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The URL of the Tempo query frontend service in the TempoStack namespace.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;6&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The label of the Grafana instance to target. This is the label we added to the Grafana instance when we created it.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;7&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The reference to the secrets and the keys inside the secret.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
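&lt;div class="paragraph"&gt;
&lt;p&gt;The Grafana Operator reconciles the resource asynchronously. If the datasource does not show up in Grafana, check the resource status for errors (resource names as defined above):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# List the datasources the operator knows about in the namespace
oc get grafanadatasource -n team-a

# Inspect the status section for reconciliation problems
oc describe grafanadatasource tempo-tenanta-datasource -n team-a&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;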
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_verify_the_datasource"&gt;Verify the Datasource&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Log into Grafana and navigate to &lt;strong&gt;Connections &amp;gt; Data sources&lt;/strong&gt;. You should see the Tempo datasource listed.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/observability/images/observability/grafana-data-sources-list.png" alt="Grafana Datasource"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Click on the Tempo datasource to see the details:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/observability/images/observability/grafana-datasource.png" alt="Grafana Datasource Details"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;As depicted in the image above, the datasource is configured with TLS client authentication and an HTTP header.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Any changes in the Grafana UI will not be synced back to the GrafanaDatasource resource. You need to update the resource manually.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To verify the datasource, we can create a test query. Navigate to &lt;strong&gt;Explore&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Enter the query &lt;strong&gt;{}&lt;/strong&gt;, which simply fetches all traces from the TempoStack instance for (by default) the last minute.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
If you have multiple data sources configured already, be sure to select the correct one in the dropdown menu.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You should see the traces in the Grafana Explore UI.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/observability/images/observability/grafana-explore-metrics.png" alt="Grafana Explore Metrics"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You can select any trace and see the details in the Grafana Explore UI.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/observability/images/observability/grafana-explore-trace.png" alt="Grafana Explore Trace"/&gt;
&lt;/div&gt;
&lt;/div&gt;
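&lt;div class="paragraph"&gt;
&lt;p&gt;Once the empty query works, you can narrow the results down with TraceQL. A few illustrative queries to try in Explore (the service name and attribute values are placeholders, not part of this setup):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code&gt;# Traces from a specific service
{resource.service.name = &amp;#34;my-app&amp;#34;}

# Traces containing spans slower than 100ms
{duration &amp;gt; 100ms}

# Traces containing failed HTTP requests
{span.http.status_code = 500}&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;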
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_create_a_traces_dashboard"&gt;Create a Traces Dashboard&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Now let’s create a dashboard to visualize and explore traces. The Grafana Operator allows us to define dashboards as Kubernetes resources.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_create_the_dashboard_configmap"&gt;Create the Dashboard ConfigMap&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To have the first showable data, we need to create a dashboard. The Grafana Operator provides a Custom Resource Definition called GrafanaDashboard for deploying such dashboards.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Now, dashboards are a beast of their own. There is so much to configure and tune that covering it all would probably take a whole separate blog series.
I am by no means an expert in this field, and I am happy I got this one up and running, so I will just create a simple dashboard with a few panels.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Honestly, I am not sure whether creating dashboards via the CRD is a good approach. It is probably easier to build a dashboard in the Grafana UI and then export its JSON if you want to keep things declarative.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This dashboard provides:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;A trace search panel&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A span duration histogram. I have filtered this to durations higher than 0.3ms to at least see something.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Recent traces table&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
metadata:
name: grafana-team-a-traces-dashboard
namespace: team-a
spec:
instanceSelector:
matchLabels:
dashboards: &amp;#34;grafana-team-a&amp;#34; &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
json: | &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
{
&amp;#34;annotations&amp;#34;: {
&amp;#34;list&amp;#34;: [
{
&amp;#34;builtIn&amp;#34;: 1,
&amp;#34;datasource&amp;#34;: {
&amp;#34;type&amp;#34;: &amp;#34;grafana&amp;#34;,
&amp;#34;uid&amp;#34;: &amp;#34;-- Grafana --&amp;#34;
},
&amp;#34;enable&amp;#34;: true,
&amp;#34;hide&amp;#34;: true,
&amp;#34;iconColor&amp;#34;: &amp;#34;rgba(0, 211, 255, 1)&amp;#34;,
&amp;#34;name&amp;#34;: &amp;#34;Annotations &amp;amp; Alerts&amp;#34;,
&amp;#34;type&amp;#34;: &amp;#34;dashboard&amp;#34;
}
]
},
&amp;#34;editable&amp;#34;: true,
&amp;#34;fiscalYearStartMonth&amp;#34;: 0,
&amp;#34;graphTooltip&amp;#34;: 0,
&amp;#34;id&amp;#34;: 1,
&amp;#34;links&amp;#34;: [],
&amp;#34;panels&amp;#34;: [
{
&amp;#34;fieldConfig&amp;#34;: {
&amp;#34;defaults&amp;#34;: {},
&amp;#34;overrides&amp;#34;: []
},
&amp;#34;gridPos&amp;#34;: {
&amp;#34;h&amp;#34;: 4,
&amp;#34;w&amp;#34;: 24,
&amp;#34;x&amp;#34;: 0,
&amp;#34;y&amp;#34;: 0
},
&amp;#34;id&amp;#34;: 1,
&amp;#34;options&amp;#34;: {
&amp;#34;code&amp;#34;: {
&amp;#34;language&amp;#34;: &amp;#34;plaintext&amp;#34;,
&amp;#34;showLineNumbers&amp;#34;: false,
&amp;#34;showMiniMap&amp;#34;: false
},
&amp;#34;content&amp;#34;: &amp;#34;# Team-A Distributed Traces\n\nThis dashboard allows you to explore distributed traces from your applications.\n\n&amp;#34;,
&amp;#34;mode&amp;#34;: &amp;#34;markdown&amp;#34;
},
&amp;#34;pluginVersion&amp;#34;: &amp;#34;12.1.0&amp;#34;,
&amp;#34;title&amp;#34;: &amp;#34;Welcome&amp;#34;,
&amp;#34;type&amp;#34;: &amp;#34;text&amp;#34;
},
{
&amp;#34;datasource&amp;#34;: {
&amp;#34;type&amp;#34;: &amp;#34;tempo&amp;#34;,
&amp;#34;uid&amp;#34;: &amp;#34;${datasource}&amp;#34;
},
&amp;#34;fieldConfig&amp;#34;: {
&amp;#34;defaults&amp;#34;: {
&amp;#34;color&amp;#34;: {
&amp;#34;mode&amp;#34;: &amp;#34;thresholds&amp;#34;
},
&amp;#34;custom&amp;#34;: {
&amp;#34;align&amp;#34;: &amp;#34;auto&amp;#34;,
&amp;#34;cellOptions&amp;#34;: {
&amp;#34;type&amp;#34;: &amp;#34;auto&amp;#34;
},
&amp;#34;inspect&amp;#34;: false
},
&amp;#34;mappings&amp;#34;: [],
&amp;#34;thresholds&amp;#34;: {
&amp;#34;mode&amp;#34;: &amp;#34;absolute&amp;#34;,
&amp;#34;steps&amp;#34;: [
{
&amp;#34;color&amp;#34;: &amp;#34;green&amp;#34;,
&amp;#34;value&amp;#34;: 0
},
{
&amp;#34;color&amp;#34;: &amp;#34;red&amp;#34;,
&amp;#34;value&amp;#34;: 80
}
]
}
},
&amp;#34;overrides&amp;#34;: []
},
&amp;#34;gridPos&amp;#34;: {
&amp;#34;h&amp;#34;: 8,
&amp;#34;w&amp;#34;: 12,
&amp;#34;x&amp;#34;: 0,
&amp;#34;y&amp;#34;: 4
},
&amp;#34;id&amp;#34;: 2,
&amp;#34;options&amp;#34;: {
&amp;#34;cellHeight&amp;#34;: &amp;#34;sm&amp;#34;,
&amp;#34;footer&amp;#34;: {
&amp;#34;countRows&amp;#34;: false,
&amp;#34;fields&amp;#34;: &amp;#34;&amp;#34;,
&amp;#34;reducer&amp;#34;: [
&amp;#34;sum&amp;#34;
],
&amp;#34;show&amp;#34;: false
},
&amp;#34;showHeader&amp;#34;: true
},
&amp;#34;pluginVersion&amp;#34;: &amp;#34;12.1.0&amp;#34;,
&amp;#34;targets&amp;#34;: [
{
&amp;#34;datasource&amp;#34;: {
&amp;#34;type&amp;#34;: &amp;#34;tempo&amp;#34;,
&amp;#34;uid&amp;#34;: &amp;#34;${datasource}&amp;#34;
},
&amp;#34;filters&amp;#34;: [
{
&amp;#34;id&amp;#34;: &amp;#34;81ea3f00&amp;#34;,
&amp;#34;operator&amp;#34;: &amp;#34;=&amp;#34;,
&amp;#34;scope&amp;#34;: &amp;#34;span&amp;#34;
}
],
&amp;#34;limit&amp;#34;: 20,
&amp;#34;metricsQueryType&amp;#34;: &amp;#34;range&amp;#34;,
&amp;#34;queryType&amp;#34;: &amp;#34;traceqlSearch&amp;#34;,
&amp;#34;refId&amp;#34;: &amp;#34;A&amp;#34;,
&amp;#34;tableType&amp;#34;: &amp;#34;traces&amp;#34;
}
],
&amp;#34;title&amp;#34;: &amp;#34;Trace Search&amp;#34;,
&amp;#34;type&amp;#34;: &amp;#34;table&amp;#34;
},
{
&amp;#34;datasource&amp;#34;: {
&amp;#34;type&amp;#34;: &amp;#34;tempo&amp;#34;,
&amp;#34;uid&amp;#34;: &amp;#34;${datasource}&amp;#34;
},
&amp;#34;fieldConfig&amp;#34;: {
&amp;#34;defaults&amp;#34;: {
&amp;#34;color&amp;#34;: {
&amp;#34;mode&amp;#34;: &amp;#34;palette-classic&amp;#34;
},
&amp;#34;custom&amp;#34;: {
&amp;#34;axisBorderShow&amp;#34;: false,
&amp;#34;axisCenteredZero&amp;#34;: false,
&amp;#34;axisColorMode&amp;#34;: &amp;#34;text&amp;#34;,
&amp;#34;axisLabel&amp;#34;: &amp;#34;&amp;#34;,
&amp;#34;axisPlacement&amp;#34;: &amp;#34;auto&amp;#34;,
&amp;#34;barAlignment&amp;#34;: 0,
&amp;#34;barWidthFactor&amp;#34;: 0.6,
&amp;#34;drawStyle&amp;#34;: &amp;#34;bars&amp;#34;,
&amp;#34;fillOpacity&amp;#34;: 100,
&amp;#34;gradientMode&amp;#34;: &amp;#34;none&amp;#34;,
&amp;#34;hideFrom&amp;#34;: {
&amp;#34;legend&amp;#34;: false,
&amp;#34;tooltip&amp;#34;: false,
&amp;#34;viz&amp;#34;: false
},
&amp;#34;insertNulls&amp;#34;: false,
&amp;#34;lineInterpolation&amp;#34;: &amp;#34;linear&amp;#34;,
&amp;#34;lineWidth&amp;#34;: 1,
&amp;#34;pointSize&amp;#34;: 5,
&amp;#34;scaleDistribution&amp;#34;: {
&amp;#34;type&amp;#34;: &amp;#34;linear&amp;#34;
},
&amp;#34;showPoints&amp;#34;: &amp;#34;auto&amp;#34;,
&amp;#34;spanNulls&amp;#34;: false,
&amp;#34;stacking&amp;#34;: {
&amp;#34;group&amp;#34;: &amp;#34;A&amp;#34;,
&amp;#34;mode&amp;#34;: &amp;#34;none&amp;#34;
},
&amp;#34;thresholdsStyle&amp;#34;: {
&amp;#34;mode&amp;#34;: &amp;#34;off&amp;#34;
}
},
&amp;#34;mappings&amp;#34;: [],
&amp;#34;thresholds&amp;#34;: {
&amp;#34;mode&amp;#34;: &amp;#34;absolute&amp;#34;,
&amp;#34;steps&amp;#34;: [
{
&amp;#34;color&amp;#34;: &amp;#34;green&amp;#34;,
&amp;#34;value&amp;#34;: 0
},
{
&amp;#34;color&amp;#34;: &amp;#34;red&amp;#34;,
&amp;#34;value&amp;#34;: 80
}
]
},
&amp;#34;unit&amp;#34;: &amp;#34;ms&amp;#34;
},
&amp;#34;overrides&amp;#34;: []
},
&amp;#34;gridPos&amp;#34;: {
&amp;#34;h&amp;#34;: 8,
&amp;#34;w&amp;#34;: 12,
&amp;#34;x&amp;#34;: 12,
&amp;#34;y&amp;#34;: 4
},
&amp;#34;id&amp;#34;: 3,
&amp;#34;options&amp;#34;: {
&amp;#34;legend&amp;#34;: {
&amp;#34;calcs&amp;#34;: [],
&amp;#34;displayMode&amp;#34;: &amp;#34;list&amp;#34;,
&amp;#34;placement&amp;#34;: &amp;#34;bottom&amp;#34;,
&amp;#34;showLegend&amp;#34;: true
},
&amp;#34;tooltip&amp;#34;: {
&amp;#34;hideZeros&amp;#34;: false,
&amp;#34;mode&amp;#34;: &amp;#34;single&amp;#34;,
&amp;#34;sort&amp;#34;: &amp;#34;none&amp;#34;
}
},
&amp;#34;pluginVersion&amp;#34;: &amp;#34;12.1.0&amp;#34;,
&amp;#34;targets&amp;#34;: [
{
&amp;#34;datasource&amp;#34;: {
&amp;#34;type&amp;#34;: &amp;#34;tempo&amp;#34;,
&amp;#34;uid&amp;#34;: &amp;#34;${datasource}&amp;#34;
},
&amp;#34;filters&amp;#34;: [
{
&amp;#34;id&amp;#34;: &amp;#34;6ff20d0d&amp;#34;,
&amp;#34;operator&amp;#34;: &amp;#34;=&amp;#34;,
&amp;#34;scope&amp;#34;: &amp;#34;span&amp;#34;
},
{
&amp;#34;id&amp;#34;: &amp;#34;min-duration&amp;#34;,
&amp;#34;operator&amp;#34;: &amp;#34;&amp;gt;&amp;#34;,
&amp;#34;tag&amp;#34;: &amp;#34;duration&amp;#34;,
&amp;#34;value&amp;#34;: &amp;#34;0.3ms&amp;#34;,
&amp;#34;valueType&amp;#34;: &amp;#34;duration&amp;#34;
}
],
&amp;#34;limit&amp;#34;: 20,
&amp;#34;metricsQueryType&amp;#34;: &amp;#34;range&amp;#34;,
&amp;#34;queryType&amp;#34;: &amp;#34;traceqlSearch&amp;#34;,
&amp;#34;refId&amp;#34;: &amp;#34;A&amp;#34;,
&amp;#34;tableType&amp;#34;: &amp;#34;spans&amp;#34;
}
],
&amp;#34;title&amp;#34;: &amp;#34;Span Duration Distribution&amp;#34;,
&amp;#34;type&amp;#34;: &amp;#34;timeseries&amp;#34;
},
{
&amp;#34;datasource&amp;#34;: {
&amp;#34;type&amp;#34;: &amp;#34;tempo&amp;#34;,
&amp;#34;uid&amp;#34;: &amp;#34;${datasource}&amp;#34;
},
&amp;#34;fieldConfig&amp;#34;: {
&amp;#34;defaults&amp;#34;: {
&amp;#34;color&amp;#34;: {
&amp;#34;mode&amp;#34;: &amp;#34;thresholds&amp;#34;
},
&amp;#34;custom&amp;#34;: {
&amp;#34;align&amp;#34;: &amp;#34;auto&amp;#34;,
&amp;#34;cellOptions&amp;#34;: {
&amp;#34;type&amp;#34;: &amp;#34;auto&amp;#34;
},
&amp;#34;inspect&amp;#34;: false
},
&amp;#34;mappings&amp;#34;: [],
&amp;#34;thresholds&amp;#34;: {
&amp;#34;mode&amp;#34;: &amp;#34;absolute&amp;#34;,
&amp;#34;steps&amp;#34;: [
{
&amp;#34;color&amp;#34;: &amp;#34;green&amp;#34;,
&amp;#34;value&amp;#34;: 0
}
]
}
},
&amp;#34;overrides&amp;#34;: [
{
&amp;#34;matcher&amp;#34;: {
&amp;#34;id&amp;#34;: &amp;#34;byName&amp;#34;,
&amp;#34;options&amp;#34;: &amp;#34;traceID&amp;#34;
},
&amp;#34;properties&amp;#34;: [
{
&amp;#34;id&amp;#34;: &amp;#34;links&amp;#34;,
&amp;#34;value&amp;#34;: [
{
&amp;#34;title&amp;#34;: &amp;#34;View Trace&amp;#34;,
&amp;#34;url&amp;#34;: &amp;#34;/explore?orgId=1&amp;amp;left=%7B%22datasource%22:%22${datasource}%22,%22queries%22:%5B%7B%22refId%22:%22A%22,%22datasource%22:%7B%22type%22:%22tempo%22,%22uid%22:%22${datasource}%22%7D,%22queryType%22:%22traceql%22,%22limit%22:20,%22query%22:%22${__value.raw}%22%7D%5D,%22range%22:%7B%22from%22:%22now-1h%22,%22to%22:%22now%22%7D%7D&amp;#34;
}
]
}
]
},
{
&amp;#34;matcher&amp;#34;: {
&amp;#34;id&amp;#34;: &amp;#34;byName&amp;#34;,
&amp;#34;options&amp;#34;: &amp;#34;duration&amp;#34;
},
&amp;#34;properties&amp;#34;: [
{
&amp;#34;id&amp;#34;: &amp;#34;unit&amp;#34;,
&amp;#34;value&amp;#34;: &amp;#34;ns&amp;#34;
}
]
}
]
},
&amp;#34;gridPos&amp;#34;: {
&amp;#34;h&amp;#34;: 10,
&amp;#34;w&amp;#34;: 24,
&amp;#34;x&amp;#34;: 0,
&amp;#34;y&amp;#34;: 12
},
&amp;#34;id&amp;#34;: 4,
&amp;#34;options&amp;#34;: {
&amp;#34;cellHeight&amp;#34;: &amp;#34;sm&amp;#34;,
&amp;#34;footer&amp;#34;: {
&amp;#34;countRows&amp;#34;: false,
&amp;#34;fields&amp;#34;: &amp;#34;&amp;#34;,
&amp;#34;reducer&amp;#34;: [
&amp;#34;sum&amp;#34;
],
&amp;#34;show&amp;#34;: false
},
&amp;#34;showHeader&amp;#34;: true,
&amp;#34;sortBy&amp;#34;: [
{
&amp;#34;desc&amp;#34;: true,
&amp;#34;displayName&amp;#34;: &amp;#34;startTime&amp;#34;
}
]
},
&amp;#34;pluginVersion&amp;#34;: &amp;#34;12.1.0&amp;#34;,
&amp;#34;targets&amp;#34;: [
{
&amp;#34;datasource&amp;#34;: {
&amp;#34;type&amp;#34;: &amp;#34;tempo&amp;#34;,
&amp;#34;uid&amp;#34;: &amp;#34;${datasource}&amp;#34;
},
&amp;#34;limit&amp;#34;: 50,
&amp;#34;queryType&amp;#34;: &amp;#34;traceqlSearch&amp;#34;,
&amp;#34;refId&amp;#34;: &amp;#34;A&amp;#34;,
&amp;#34;tableType&amp;#34;: &amp;#34;traces&amp;#34;
}
],
&amp;#34;title&amp;#34;: &amp;#34;Recent Traces&amp;#34;,
&amp;#34;type&amp;#34;: &amp;#34;table&amp;#34;
}
],
&amp;#34;preload&amp;#34;: false,
&amp;#34;refresh&amp;#34;: &amp;#34;30s&amp;#34;,
&amp;#34;schemaVersion&amp;#34;: 41,
&amp;#34;tags&amp;#34;: [
&amp;#34;tracing&amp;#34;,
&amp;#34;tempo&amp;#34;,
&amp;#34;team-a&amp;#34;
],
&amp;#34;templating&amp;#34;: {
&amp;#34;list&amp;#34;: [
{
&amp;#34;current&amp;#34;: {
&amp;#34;text&amp;#34;: &amp;#34;tempo-tenanta&amp;#34;,
&amp;#34;value&amp;#34;: &amp;#34;90b71ee3-693a-4c41-8cdf-624a3bb78e7a&amp;#34;
},
&amp;#34;includeAll&amp;#34;: false,
&amp;#34;label&amp;#34;: &amp;#34;Datasource&amp;#34;,
&amp;#34;name&amp;#34;: &amp;#34;datasource&amp;#34;,
&amp;#34;options&amp;#34;: [],
&amp;#34;query&amp;#34;: &amp;#34;tempo&amp;#34;,
&amp;#34;refresh&amp;#34;: 1,
&amp;#34;regex&amp;#34;: &amp;#34;&amp;#34;,
&amp;#34;type&amp;#34;: &amp;#34;datasource&amp;#34;
}
]
},
&amp;#34;time&amp;#34;: {
&amp;#34;from&amp;#34;: &amp;#34;now-1h&amp;#34;,
&amp;#34;to&amp;#34;: &amp;#34;now&amp;#34;
},
&amp;#34;timepicker&amp;#34;: {},
&amp;#34;timezone&amp;#34;: &amp;#34;&amp;#34;,
&amp;#34;title&amp;#34;: &amp;#34;Team-A Distributed Traces&amp;#34;,
&amp;#34;uid&amp;#34;: &amp;#34;team-a-traces&amp;#34;,
&amp;#34;version&amp;#34;: 3
}&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The label of the Grafana instance to target. This is the label we added to the Grafana instance when we created it.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The JSON data of the dashboard (yes, it’s a monster!).&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Now you will see a new dashboard in the Grafana UI. Navigate to &lt;strong&gt;Dashboards&lt;/strong&gt; and find &lt;strong&gt;Team-A Distributed Traces&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/observability/images/observability/grafana-dashboard.webp" alt="Grafana Dashboard"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_next_steps"&gt;Next Steps&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Congratulations! &lt;strong&gt;The Cat has summoned Grafana&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/observability/images/observability/cat-wizard-done-its-job.png?width=400px" alt="Cat Wizard"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You now have a functional Grafana instance with Tempo integration for team-a. Here are some ideas for extending this setup:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Add Loki datasource&lt;/strong&gt;: Correlate traces with logs using derived fields&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Add Prometheus datasource&lt;/strong&gt;: Link metrics to traces for full observability&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create alerting rules&lt;/strong&gt;: Set up alerts for high latency or error rates&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Build more dashboards&lt;/strong&gt;: Create service-specific dashboards for different applications&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
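&lt;div class="paragraph"&gt;
&lt;p&gt;As a starting point for the Loki idea, a so-called derived field can turn a trace ID inside a log line into a link to the Tempo datasource. A rough sketch of the relevant &lt;code&gt;jsonData&lt;/code&gt; section (the datasource name, UID and regex are assumptions for illustration, not taken from this setup):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;datasource:
  name: loki-tenanta
  type: loki
  jsonData:
    derivedFields:
    - name: TraceID
      matcherRegex: &amp;#34;traceID=(\\w+)&amp;#34;  # adapt to your log format
      url: &amp;#34;${__value.raw}&amp;#34;
      datasourceUid: tempo-tenanta-uid     # must match the UID of the Tempo datasource&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;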
&lt;div class="paragraph"&gt;
&lt;p&gt;As mentioned above, Grafana and its dashboards are powerful tools for visualizing and exploring traces. There are tons of things to configure and set up.
Maybe I will cover some of these things in a future article.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Happy tracing! 🚀&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>The Guide to OpenBao - Introduction - Part 1</title><link>https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-11-openbao-part-1-introduction/</link><pubDate>Wed, 11 Feb 2026 00:00:00 +0000</pubDate><guid>https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/2026-02-11-openbao-part-1-introduction/</guid><description>&lt;div class="paragraph"&gt;
&lt;p&gt;I finally had some time to dig into Secret Management. For my demo environments, SealedSecrets is usually enough to quickly test something. But if you want to deploy a real application with Secret Management, you need to think of a more permanent solution.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This article is the first of a series of articles about &lt;strong&gt;OpenBao&lt;/strong&gt;, a HashiCorp Vault fork. Today, we will explore what OpenBao is, why it was created, and when you should consider using it for your secret management needs. If you are familiar with HashiCorp Vault, you will find many similarities, but also some important differences that we will discuss.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_introduction"&gt;Introduction&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In general, sensitive information in any system should be stored in a secure way. When it comes to OpenShift or Kubernetes, this is especially true, since Secrets are stored in the etcd database. Even if etcd is encrypted at rest, anybody with access to a Secret can simply decode its base64-encoded values.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Base64 is not an encryption format. It is an encoding format.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;For example, the string &lt;code&gt;Thomas&lt;/code&gt; (with a trailing newline) encoded as base64 is &lt;code&gt;VGhvbWFzCg==&lt;/code&gt;. This is simply masked plain text, and it is not safe to share such values, especially not in Git. To make your CI/CD pipelines or GitOps process secure, you need a proper way to manage your Secrets.&lt;/p&gt;
&lt;/div&gt;
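&lt;div class="paragraph"&gt;
&lt;p&gt;You can see this for yourself on any shell; nothing cluster-specific is needed:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Encoding is trivially reversible - this is not encryption
echo Thomas | base64
# VGhvbWFzCg==

echo VGhvbWFzCg== | base64 -d
# Thomas&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;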
&lt;div class="paragraph"&gt;
&lt;p&gt;This is where OpenBao comes in.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_what_is_openbao"&gt;What is OpenBao?&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;a href="https://openbao.org" target="_blank" rel="noopener"&gt;OpenBao&lt;/a&gt; is an identity-based secrets and encryption management system. It is an open-source, community driven fork of HashiCorp Vault and provides secure storage, fine-grained access control, and lifecycle management for secrets such as API or SSH keys, passwords, certificates, and encryption keys.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The core OpenBao workflow consists of four stages:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Authentication&lt;/strong&gt;: Verifying that clients are who they claim to be. Once authenticated, a token is created and associated with a so-called policy.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Validation&lt;/strong&gt;: Checking if the client has the required permissions.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Authorization&lt;/strong&gt;: Granting access based on policies that provide or deny access to certain paths and operations in OpenBao.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Access&lt;/strong&gt;: Limiting what the client can do with the secret.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
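&lt;div class="paragraph"&gt;
&lt;p&gt;Policies are what ties these stages together. OpenBao inherits Vault’s HCL policy syntax; a minimal read-only policy for a KV v2 path could look like this (the path and capabilities are an illustrative sketch, not a recommendation):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code&gt;# Allows reading and listing secrets below secret/data/myapp/
path &amp;#34;secret/data/myapp/*&amp;#34; {
  capabilities = [&amp;#34;read&amp;#34;, &amp;#34;list&amp;#34;]
}&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;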
&lt;div class="paragraph"&gt;
&lt;p&gt;At its core, OpenBao:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Stores and encrypts sensitive information at rest and in transit&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Provides fine-grained access controls (ACL)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Supports dynamic secret generation&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Offers a comprehensive audit trail (must be enabled)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enables encryption as a service&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
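&lt;div class="paragraph"&gt;
&lt;p&gt;To get a first feel for these capabilities, the &lt;code&gt;bao&lt;/code&gt; CLI mirrors the familiar &lt;code&gt;vault&lt;/code&gt; commands. A minimal local experiment (dev mode stores everything in memory and is never meant for production; the commands assume OpenBao is installed locally):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Start an in-memory dev server (automatically unsealed, root token printed on startup)
bao server -dev

# In a second shell: point the CLI at the dev server, then store and read a KV secret
export BAO_ADDR=http://127.0.0.1:8200
bao kv put secret/demo password=changeme
bao kv get secret/demo&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;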
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_the_history_why_openbao_exists"&gt;The History: Why OpenBao Exists&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;OpenBao is a community-driven, open-source fork of HashiCorp Vault. But why was it created?&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In August 2023, HashiCorp announced a significant change to the licensing of their products, including Vault. They moved from the Mozilla Public License 2.0 (MPL 2.0) to the Business Source License 1.1 (BSL 1.1).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The BSL 1.1 license includes restrictions on competitive use:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Competitors could no longer use Vault’s code to offer competing services&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Cloud providers and managed service providers faced restrictions&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The open-source community lost the freedom to fork and commercialize&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_the_fork"&gt;The Fork&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In response to this license change, the Linux Foundation announced the OpenBao project in December 2023. OpenBao is:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;A fork of HashiCorp Vault (from version 1.14.x)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Licensed under the OSI-approved Mozilla Public License 2.0&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Governed by a community-driven model&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Maintained independently of HashiCorp&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Continues to enable innovation without license restrictions&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_openbao_vs_hashicorp_vault_key_differences"&gt;OpenBao vs. HashiCorp Vault: Key Differences&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;While OpenBao shares its heritage with Vault, there are several important differences:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_licensing"&gt;Licensing&lt;/h3&gt;
&lt;table class="tableblock frame-all grid-all stretch"&gt;
&lt;colgroup&gt;
&lt;col style="width: 20%;"/&gt;
&lt;col style="width: 40%;"/&gt;
&lt;col style="width: 40%;"/&gt;
&lt;/colgroup&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th class="tableblock halign-left valign-top"&gt;Aspect&lt;/th&gt;
&lt;th class="tableblock halign-left valign-top"&gt;OpenBao&lt;/th&gt;
&lt;th class="tableblock halign-left valign-top"&gt;HashiCorp Vault&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;License&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;MPL 2.0 (OSI-approved open source)&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;BSL 1.1 (with staged conversion)&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Governance&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Community-driven (Linux Foundation)&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;HashiCorp/IBM controlled&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Commercial restrictions&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;None&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Restrictions on competitive use&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Enterprise Features&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Community-driven additions&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Paid Enterprise tier&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Support&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Community support&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Paid support available&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Branding&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;bao CLI, OpenBao naming&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;vault CLI, Vault naming&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_technical_differences"&gt;Technical Differences&lt;/h3&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Token Format&lt;/strong&gt; - OpenBao uses shorter tokens in the format &lt;code&gt;sbr.[random]&lt;/code&gt;, while Vault uses longer tokens (&lt;code&gt;hvs.&lt;/code&gt;, &lt;code&gt;hvb.&lt;/code&gt;, and &lt;code&gt;hvr.&lt;/code&gt; prefixes followed by long random strings). Old Vault tokens are still accepted until their TTLs expire, but newly issued tokens follow the OpenBao format.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Plugin Ecosystem&lt;/strong&gt; - OpenBao comes with fewer built-in plugins by default, focusing on OSI-licensed integrations. Proprietary cloud vendor plugins have been moved to external repositories.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Storage Backend&lt;/strong&gt; - OpenBao has simplified its storage options, primarily supporting Raft as the recommended backend.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;API Compatibility&lt;/strong&gt; - OpenBao’s API is designed to be compatible with Vault, meaning existing clients and integrations should work without modification. However, some edge cases may require updates.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
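To make the prefix difference concrete, here is a small, illustrative Python helper that guesses a token's origin from the prefixes described above. The prefix table only mirrors this article's description and is not an official API; actual token validity can of course only be decided by the server itself.

```python
# Illustrative only: map the token prefixes described above to their origin.
# This mirrors the article's description, not an official OpenBao/Vault API.
TOKEN_PREFIXES = {
    "sbr.": "OpenBao service token",
    "hvs.": "Vault service token",
    "hvb.": "Vault batch token",
    "hvr.": "Vault recovery token",
}

def guess_token_origin(token: str) -> str:
    """Return a best guess at which system issued the token, by prefix."""
    for prefix, origin in TOKEN_PREFIXES.items():
        if token.startswith(prefix):
            return origin
    return "unknown or legacy token format"

print(guess_token_origin("sbr.AbCdEfGh1234"))  # OpenBao service token
print(guess_token_origin("hvs.XyZ9876543"))    # Vault service token
```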
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_core_concepts"&gt;Core Concepts&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Before diving into installation and configuration in the next articles, it is essential to understand the fundamental concepts of OpenBao:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_secrets_engines"&gt;Secrets Engines&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Secrets engines are components that store, generate, or encrypt data. OpenBao supports multiple types:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;KV (Key-Value)&lt;/strong&gt;: Simple static secret storage with optional versioning&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;PKI&lt;/strong&gt;: Certificate Authority for generating TLS certificates&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Database&lt;/strong&gt;: Dynamic credential generation for databases&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Transit&lt;/strong&gt;: Encryption as a Service (EaaS)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;SSH&lt;/strong&gt;: SSH key signing and OTP generation&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
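As a rough sketch of how such engines are enabled with the `bao` CLI (assuming a running, unsealed OpenBao instance and an authenticated session; mount paths are the defaults):

```shell
# Enable a versioned key-value store (KV v2) at the default mount path
bao secrets enable -version=2 kv

# Enable the PKI engine for issuing TLS certificates
bao secrets enable pki

# Enable the Transit engine for encryption as a service
bao secrets enable transit

# List all enabled secrets engines
bao secrets list
```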
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_authentication_methods"&gt;Authentication Methods&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Authentication methods verify client identity before granting access:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Kubernetes&lt;/strong&gt;: Uses Kubernetes service account tokens. Essential for Kubernetes/OpenShift deployments.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;OIDC&lt;/strong&gt;: OpenID Connect for user authentication, with providers such as Keycloak, Okta, or Azure AD.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;LDAP&lt;/strong&gt;: Directory service authentication, for example against OpenLDAP or Microsoft Active Directory.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;AppRole&lt;/strong&gt;: Machine-oriented authentication for applications. Ideal for CI/CD pipelines.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Token&lt;/strong&gt;: Direct token-based authentication. The root token is created during initialization.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
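A minimal sketch of enabling two of these methods with the `bao` CLI (assuming a running instance; the role and credential values are placeholders):

```shell
# Enable Kubernetes authentication for in-cluster workloads
bao auth enable kubernetes

# Enable AppRole authentication for CI/CD pipelines
bao auth enable approle

# Log in with an AppRole (role_id/secret_id values are placeholders)
bao write auth/approle/login role_id="ROLE_ID" secret_id="SECRET_ID"
```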
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_storage_backend"&gt;Storage Backend&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;OpenBao encrypts all data before writing to storage. Supported backends include:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Integrated Raft&lt;/strong&gt; (recommended): Built-in distributed storage&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Consul&lt;/strong&gt;: HashiCorp’s service discovery and KV store (less common with OpenBao)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;File&lt;/strong&gt;: Single-node deployments only&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;PostgreSQL/MySQL&lt;/strong&gt;: Database-backed storage&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
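A minimal server configuration using the recommended integrated Raft backend might look like the following sketch (paths, addresses, and the TLS setting are illustrative; a production setup should enable TLS):

```hcl
storage "raft" {
  path    = "/opt/openbao/data"
  node_id = "node-1"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = true   # illustration only - enable TLS in production
}

api_addr     = "http://127.0.0.1:8200"
cluster_addr = "http://127.0.0.1:8201"
```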
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_policies"&gt;Policies&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Policies define what a client can do after authentication. They use path-based access control (deny-by-default mode):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-hcl hljs" data-lang="hcl"&gt;path &amp;#34;secret/data/myapp/*&amp;#34; {
capabilities = [&amp;#34;read&amp;#34;, &amp;#34;list&amp;#34;]
}
path &amp;#34;pki/issue/my-role&amp;#34; {
capabilities = [&amp;#34;create&amp;#34;, &amp;#34;update&amp;#34;]
}
path &amp;#34;database/creds/myapp-role&amp;#34; {
capabilities = [&amp;#34;read&amp;#34;]
}&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
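To load a policy like the one above and attach it to a token, the flow with the `bao` CLI looks roughly like this (the policy and file names are made up):

```shell
# Upload the policy definition from a local HCL file
bao policy write myapp-policy myapp-policy.hcl

# Issue a token that carries this policy
bao token create -policy=myapp-policy -ttl=1h
```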
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_tokens"&gt;Tokens&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Tokens are the primary authentication credential in OpenBao. After successful authentication, clients receive a token that:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Has a TTL (Time To Live)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Is associated with one or more policies&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Can be renewed (if renewable)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Can create child tokens&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
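These token properties can be inspected and managed with the `bao` CLI, roughly as follows (assuming an authenticated session; the policy name is illustrative):

```shell
# Inspect the current token: TTL, attached policies, renewability
bao token lookup

# Renew the current token before its TTL runs out (if it is renewable)
bao token renew

# Create a child token limited to a named policy
bao token create -policy=myapp-policy -ttl=1h
```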
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_sealunseal"&gt;Seal/Unseal&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;OpenBao starts in a &lt;strong&gt;sealed&lt;/strong&gt; state where it cannot access encrypted data. The unseal process requires multiple key shares (using Shamir’s Secret Sharing) or auto-unseal mechanisms to decrypt the master key.&lt;/p&gt;
&lt;/div&gt;
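In sketch form, initializing and unsealing a fresh instance with Shamir key shares looks like this (the share and threshold counts are just common example values):

```shell
# Initialize a new OpenBao instance: prints the unseal key shares
# and the initial root token - store these securely
bao operator init -key-shares=5 -key-threshold=3

# Provide key shares until the threshold is reached (run three times here)
bao operator unseal

# Check whether the instance is still sealed or ready to serve requests
bao status
```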
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_leases"&gt;Leases&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Most secrets in OpenBao have an associated lease - a duration after which the secret expires. This enables:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Automatic secret rotation&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Revocation of compromised credentials&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Audit trail of secret usage&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
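Leases can be renewed or revoked explicitly via the CLI; a hedged sketch (the lease ID and paths are placeholders):

```shell
# Renew a lease before it expires (lease ID is a placeholder)
bao lease renew database/creds/myapp-role/LEASE_ID

# Revoke a single lease, e.g. for a compromised credential
bao lease revoke database/creds/myapp-role/LEASE_ID

# Revoke every lease under a prefix at once
bao lease revoke -prefix database/creds/
```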
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_use_cases"&gt;Use Cases&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_dynamic_database_credentials"&gt;Dynamic Database Credentials&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Instead of storing static database passwords:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;Application authenticates to OpenBao&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Requests database credentials&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;OpenBao creates a temporary database user&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Returns credentials with a short TTL&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Credentials automatically expire&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Benefits: No long-lived credentials, automatic rotation, per-application isolation.&lt;/p&gt;
&lt;/div&gt;
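The steps above can be sketched with the `bao` CLI for a PostgreSQL backend. All connection details, role names, and credentials below are placeholders:

```shell
# Enable the database secrets engine
bao secrets enable database

# Configure a PostgreSQL connection (connection details are placeholders)
bao write database/config/mydb \
    plugin_name=postgresql-database-plugin \
    allowed_roles=myapp-role \
    connection_url="postgresql://{{username}}:{{password}}@db:5432/mydb" \
    username="baoadmin" password="CHANGEME"

# Define a role that creates short-lived database users
bao write database/roles/myapp-role \
    db_name=mydb \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}';" \
    default_ttl=1h max_ttl=24h

# Request temporary credentials; they expire automatically after the TTL
bao read database/creds/myapp-role
```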
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_certificate_management_with_pki"&gt;Certificate Management with PKI&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Instead of manually managing TLS certificates:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;Configure OpenBao as an intermediate CA&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Applications request certificates on demand&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Short-lived certificates (hours/days instead of years)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Automatic renewal before expiration&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Benefits: No certificate sprawl, automated rotation, reduced attack surface.&lt;/p&gt;
&lt;/div&gt;
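A condensed sketch of this flow with the `bao` CLI follows. CA setup (generating or importing the root/intermediate certificate) is omitted, and the domain, role, and TTL values are illustrative:

```shell
# Enable the PKI engine and cap the maximum certificate lifetime
bao secrets enable pki
bao secrets tune -max-lease-ttl=720h pki

# Create a role that limits which names may be issued
bao write pki/roles/my-role \
    allowed_domains=example.com allow_subdomains=true max_ttl=72h

# Issue a short-lived certificate on demand
bao write pki/issue/my-role common_name=app.example.com ttl=24h
```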
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_encryption_as_a_service"&gt;Encryption as a Service&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Instead of implementing encryption in each application:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;Applications send plaintext to OpenBao’s Transit engine&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;OpenBao encrypts with managed keys&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Applications store ciphertext&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Decryption requests go through OpenBao&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Benefits: Centralized key management, separation of duties, key rotation without re-encryption.&lt;/p&gt;
&lt;/div&gt;
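In sketch form with the `bao` CLI (key name and plaintext are made up; Transit expects base64-encoded input, and the returned ciphertext is what the application stores):

```shell
# Enable Transit and create a named encryption key
bao secrets enable transit
bao write -f transit/keys/myapp-key

# Encrypt: the application sends base64-encoded plaintext, stores the ciphertext
bao write transit/encrypt/myapp-key plaintext=$(echo -n "my secret" | base64)

# Decrypt goes back through OpenBao; the key never leaves the server
bao write transit/decrypt/myapp-key ciphertext="CIPHERTEXT_FROM_ABOVE"
```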
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_kubernetes_secret_injection"&gt;Kubernetes Secret Injection&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Instead of storing secrets in Kubernetes Secrets:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;Deploy OpenBao with Kubernetes auth&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure injector or External Secrets Operator&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Pods automatically receive secrets at startup&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Secrets never stored in etcd&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Benefits: Secrets not in cluster, dynamic injection, centralized management.&lt;/p&gt;
&lt;/div&gt;
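With the External Secrets Operator, the consuming side might look like this sketch of an `ExternalSecret` resource. The store name, target secret, and remote key path are illustrative and depend on how the `SecretStore` and KV engine are configured:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: myapp-secrets
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: openbao-backend        # SecretStore configured with Kubernetes auth
    kind: ClusterSecretStore
  target:
    name: myapp-secrets          # Kubernetes Secret created/updated by ESO
  data:
    - secretKey: db-password
      remoteRef:
        key: secret/data/myapp/config
        property: password
```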
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_when_to_use_openbao"&gt;When to Use OpenBao&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;OpenBao is an excellent choice when you need:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Centralized Secret Management&lt;/strong&gt; - If you have secrets scattered across configuration files, environment variables, and various secret stores, OpenBao provides a single source of truth.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Dynamic Secrets&lt;/strong&gt; - For use cases where you need short-lived, automatically rotated credentials (e.g., database passwords), OpenBao can generate them on-demand.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Encryption as a Service&lt;/strong&gt; - If applications need encryption capabilities without managing encryption keys, OpenBao’s Transit engine provides this functionality.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Certificate Management&lt;/strong&gt; - OpenBao’s PKI engine can act as a Certificate Authority, issuing and managing TLS certificates.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Compliance Requirements&lt;/strong&gt; - For environments with strict audit requirements, OpenBao provides comprehensive audit logging of all secret access.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_when_not_to_use_openbao"&gt;When NOT to Use OpenBao&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;OpenBao might be overkill for:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Simple, Static Secrets&lt;/strong&gt; - If you only have a few static secrets that rarely change, simpler solutions like Sealed Secrets or External Secrets Operator with a basic backend might suffice.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Small Teams with Limited Resources&lt;/strong&gt; - OpenBao requires operational expertise to maintain. If you do not have the resources to operate it properly, consider managed alternatives.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Single-Application Deployments&lt;/strong&gt; - If you have a single application with minimal secret requirements, the complexity of OpenBao may not be justified.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_architecture_overview"&gt;Architecture Overview&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;A typical OpenBao deployment consists of:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/security/secrets-management/openbao/images/part1_openbao_architecture.png" alt="Architecture Overview"/&gt;
&lt;/div&gt;
&lt;div class="title"&gt;Figure 1. Architecture Overview&lt;/div&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Multiple Nodes&lt;/strong&gt;: For high availability (HA), OpenBao runs as a cluster&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Raft Consensus&lt;/strong&gt;: Leader election and data replication&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Persistent Storage&lt;/strong&gt;: Encrypted data stored on persistent volumes&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Load Balancer&lt;/strong&gt;: Distributes client requests&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;OpenBao provides a powerful, open-source solution for secret management that addresses the challenges of modern cloud-native environments. Because it is a fork of HashiCorp Vault, it benefits from years of development while remaining truly open source.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In the next article, we will start with a standalone installation to understand the fundamentals before moving to Kubernetes deployments.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_resources"&gt;Resources&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org" target="_blank" rel="noopener"&gt;OpenBao Official Website&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://openbao.org/docs" target="_blank" rel="noopener"&gt;OpenBao Documentation&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/openbao/openbao" target="_blank" rel="noopener"&gt;OpenBao GitHub Repository&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://www.linuxfoundation.org/press/linux-foundation-launches-openbao" target="_blank" rel="noopener"&gt;Linux Foundation OpenBao Announcement&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>The Hitchhiker's Guide to Observability Introduction - Part 1</title><link>https://blog.stderr.at/openshift-platform/observability/observability/2025-11-23-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part1/</link><pubDate>Sun, 23 Nov 2025 00:00:00 +0000</pubDate><guid>https://blog.stderr.at/openshift-platform/observability/observability/2025-11-23-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part1/</guid><description>&lt;div class="paragraph"&gt;
&lt;p&gt;With this article I would like to summarize my setup and, especially, document it for future reference. This is Part 1 of a series that I split into shorter articles so each one stays easy to read and understand. Initially there will be six parts, but I will add more as needed.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_introduction"&gt;Introduction&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In modern microservices architectures, understanding how requests flow through your distributed system is crucial for debugging, performance optimization, and maintaining system health. &lt;strong&gt;Distributed tracing&lt;/strong&gt; provides visibility into these complex interactions by tracking requests as they traverse multiple services.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This whole guide demonstrates how to set up a distributed tracing infrastructure using&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;OpenShift&lt;/strong&gt; (4.16+) as base platform&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Red Hat Build of OpenTelemetry&lt;/strong&gt; - The observability framework based on &lt;a href="https://opentelemetry.io/" target="_blank" rel="noopener"&gt;OpenTelemetry&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;TempoStack&lt;/strong&gt; - &lt;a href="https://grafana.com/docs/tempo/latest/" target="_blank" rel="noopener"&gt;Grafana’s distributed&lt;/a&gt; tracing backend for Kubernetes&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Multi-tenant architecture&lt;/strong&gt; - Isolating traces by team or environment&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Cluster Observability Operator&lt;/strong&gt; - For now, this Operator is used only to extend the OpenShift console with the tracing UI.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_thanks_to"&gt;Thanks to&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This article would not have been possible without the help of &lt;a href="https://www.linkedin.com/in/michaela-lang-900603b9/" target="_blank" rel="noopener"&gt;Michaela Lang&lt;/a&gt;. Check out her articles on LinkedIn, which mainly discuss tracing and Service Mesh.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_what_is_opentelemetry"&gt;What is OpenTelemetry?&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;a href="https://opentelemetry.io/" target="_blank" rel="noopener"&gt;OpenTelemetry&lt;/a&gt; is an observability framework and toolkit which aims to provide unified, standardized, and vendor-neutral telemetry data collection for traces, metrics and logs for cloud-native software.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
In this article we will focus on traces only.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;When it comes to Red Hat and OpenShift, the supported installation is based on the Operator &lt;strong&gt;Red Hat Build of OpenTelemetry&lt;/strong&gt;, which is based on the open source OpenTelemetry project and adds supportability.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_core_features"&gt;Core Features:&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;em&gt;Source: &lt;a href="https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/red_hat_build_of_opentelemetry/index#otel-product-overview_otel-architecture" target="_blank" rel="noopener"&gt;OpenShift OTEL Product Overview&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The &lt;strong&gt;OpenTelemetry Collector&lt;/strong&gt; can receive, process, and forward telemetry data in multiple formats, making it the ideal component for telemetry processing and interoperability between telemetry systems. The Collector provides a unified solution for collecting and processing metrics, traces, and logs.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The core features of the OpenTelemetry Collector include:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Data Collection and Processing Hub&lt;/strong&gt;
It acts as a central component that gathers telemetry data like metrics and traces from various sources. This data can be created from instrumented applications and infrastructure.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Customizable telemetry data pipeline&lt;/strong&gt;
The OpenTelemetry Collector is customizable and supports various processors, exporters, and receivers.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Auto-instrumentation features&lt;/strong&gt;
Automatic instrumentation simplifies the process of adding observability to applications. If used, developers do not need to manually instrument their code for basic telemetry data. (How well this works depends on the programming language and framework in use; this may be worth a separate article.)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Here are some of the use cases for the OpenTelemetry Collector:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Centralized data collection&lt;/strong&gt;
In a microservices architecture, the Collector can be deployed to aggregate data from multiple services.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Data enrichment and processing&lt;/strong&gt;
Before forwarding data to analysis tools, the Collector can enrich, filter, and process this data.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Multi-backend receiving and exporting&lt;/strong&gt;
The Collector can receive and send data to multiple monitoring and analysis platforms simultaneously.
You can use Red Hat build of OpenTelemetry in combination with Red Hat OpenShift Distributed Tracing Platform.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_what_is_grafana_tempo"&gt;What is Grafana Tempo?&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;a href="https://grafana.com/oss/tempo/" target="_blank" rel="noopener"&gt;Grafana Tempo&lt;/a&gt; is an open-source, easy-to-use, and high-scale distributed tracing backend. Tempo lets you search for traces, generate metrics from spans, and link your tracing data with logs and metrics.
It is deeply integrated with Grafana, Prometheus and Loki and can ingest traces from various sources, such as OpenTelemetry, Jaeger, Zipkin and more.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_core_features_2"&gt;Core Features:&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;em&gt;Source: &lt;a href="https://grafana.com/oss/tempo/" target="_blank" rel="noopener"&gt;Grafana Tempo&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Built for massive scale&lt;/strong&gt;
The only dependency is object storage which provides affordable long-term storage of traces.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Cost-effective&lt;/strong&gt;
Not indexing the traces makes it possible to store orders of magnitude more trace data for the same cost.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Strong integration with open source tools&lt;/strong&gt;
Compatible with open source tracing protocols.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In addition, it is deeply integrated with &lt;strong&gt;Grafana&lt;/strong&gt;, allowing you to visualize the traces in a Grafana dashboard and link logs, metrics and traces together.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_use_case_for_this_article"&gt;Use Case for this Article&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The use case that was tested in this article was the following:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Several applications (team-a, team-b, …​) are hosted on separate OpenShift namespaces.
On each application namespace, a &lt;strong&gt;local&lt;/strong&gt; OpenTelemetry Collector (OTC) is configured to collect the traces from the application. These local OpenTelemetry Collectors will export the traces to a &lt;strong&gt;central&lt;/strong&gt; OpenTelemetry Collector (hosted in the namespace &lt;strong&gt;tempostack&lt;/strong&gt;).
The central Collector will then export the data to a TempoStack instance (also hosted in the namespace &lt;strong&gt;tempostack&lt;/strong&gt;), which will store the traces in object storage. The storage itself is provided by S3-compatible storage, in this example OpenShift Data Foundation.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;For a more detailed view see the next section.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_architecture_overview"&gt;Architecture Overview&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;As described the implementation follows a &lt;strong&gt;two-tier collector architecture&lt;/strong&gt; with multi-tenancy support:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-mermaid {align=" center"="" zoom="true" }="" hljs"="" data-lang="mermaid {align=" center"="" zoom="true" }"=""&gt;---
title: &amp;#34;Architecture Overview&amp;#34;
config:
theme: &amp;#39;dark&amp;#39;
---
graph TB
subgraph app[&amp;#34;Application&amp;#34;]
mockbin1[&amp;#34;Mockbin #1&amp;lt;br/&amp;gt;(team-a namespace)&amp;lt;br/&amp;gt;(tenantA)&amp;#34;]
mockbin2[&amp;#34;Mockbin #2&amp;lt;br/&amp;gt;(team-b namespace)&amp;lt;br/&amp;gt;(tenantB)&amp;#34;]
end
subgraph local[&amp;#34;Local OTC&amp;#34;]
otc_a[&amp;#34;OTC-team-a&amp;lt;br/&amp;gt;• Add namespace&amp;lt;br/&amp;gt;• Batch processing&amp;lt;br/&amp;gt;• Forward to central&amp;#34;]
otc_b[&amp;#34;OTC-team-b&amp;lt;br/&amp;gt;• Add namespace&amp;lt;br/&amp;gt;• Batch processing&amp;lt;br/&amp;gt;• Forward to central&amp;#34;]
end
subgraph central[&amp;#34;Central OTC (tempostack namespace)&amp;#34;]
otc_central[&amp;#34;OTC-central&amp;lt;br/&amp;gt;• Receive from local collectors&amp;lt;br/&amp;gt;• Add K8s metadata (k8sattributes)&amp;lt;br/&amp;gt;• Route by namespace (routing connector)&amp;lt;br/&amp;gt;• Authenticate with bearer token&amp;lt;br/&amp;gt;• Forward to TempoStack with tenant ID&amp;#34;]
end
subgraph tempo[&amp;#34;TempoStack (tempostack namespace)&amp;#34;]
tempostack[&amp;#34;Multi-tenant Trace Storage&amp;lt;br/&amp;gt;• tenantA, tenantB, ...&amp;lt;br/&amp;gt;• S3 backend storage&amp;lt;br/&amp;gt;• 48-hour retention&amp;#34;]
end
mockbin1 --&amp;gt;|&amp;#34;OTLP&amp;#34;| otc_a
mockbin2 --&amp;gt;|&amp;#34;OTLP&amp;#34;| otc_b
otc_a --&amp;gt;|&amp;#34;OTLP(with namespace)&amp;#34;| otc_central
otc_b --&amp;gt;|&amp;#34;OTLP(with namespace)&amp;#34;| otc_central
otc_central --&amp;gt;|&amp;#34;OTLP&amp;lt;br/&amp;gt;(X-Scope-OrgID header)&amp;#34;| tempostack
classDef appStyle fill:#2f652a,stroke:#2f652a,stroke-width:2px
classDef localStyle fill:#425cc6,stroke:#425cc6,stroke-width:2px;
classDef centralStyle fill:#425cc6,stroke:#425cc6,stroke-width:2px;
classDef tempoStyle fill:#906403,stroke:#906403,stroke-width:2px
class mockbin1,mockbin2 appStyle
class otc_a,otc_b localStyle
class otc_central centralStyle
class tempostack tempoStyle&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;strong&gt;As a quick summary:&lt;/strong&gt; traces from the application &amp;#34;Mockbin #1&amp;#34; are collected by &amp;#34;OTC-team-a&amp;#34; and forwarded to the &amp;#34;Central OTC&amp;#34;, which forwards them on to Tempo.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_why_2_tier_architecture"&gt;Why 2-Tier Architecture?&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You may ask yourself why there are two OpenTelemetry Collectors: could the application not send directly to the Central OTC, or the Local OTC not write directly into the Tempo storage? Both options would work; however, I wanted to make the setup more secure. Only one OTC is allowed to perform write actions, and applications can only send to the Local OTC, which forwards to the Central OTC, where traces are routed based on the source namespace. This way, nobody can interfere with other namespaces.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Therefore:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Separation of Concerns&lt;/strong&gt;: Application namespaces handle local processing; central namespace handles routing and storage. The Central decides where and how to store. Application owners cannot overwrite this.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Resource Efficiency&lt;/strong&gt;: Lightweight collectors in app namespaces, heavy processing centralized&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;: Applications don’t need direct access to TempoStack&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Each tier can scale independently&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Multi-tenancy&lt;/strong&gt;: Central collector routes traces to appropriate tenants&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
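As a preview of what a Local OTC from the diagram might look like, here is a hedged sketch of an `OpenTelemetryCollector` resource. The names, namespace, endpoint, and processor set are illustrative only; the actual configuration follows in the next parts:

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otc-team-a
  namespace: team-a
spec:
  mode: deployment
  config:
    receivers:
      otlp:                      # applications send OTLP to this collector
        protocols:
          grpc: {}
          http: {}
    processors:
      batch: {}                  # batch spans before forwarding
    exporters:
      otlp:
        endpoint: otc-central-collector.tempostack.svc:4317
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlp]
```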
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_what_now"&gt;What now?&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The next articles will cover the actual implementation. We will first deploy Tempo and the Central Collector. Then we will deploy example applications and the Local Collector.
If everything works as planned, we will be able to see traces on the OpenShift UI.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>The Hitchhiker's Guide to Observability - Grafana Tempo - Part 2</title><link>https://blog.stderr.at/openshift-platform/observability/observability/2025-11-24-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part2/</link><pubDate>Mon, 24 Nov 2025 00:00:00 +0000</pubDate><guid>https://blog.stderr.at/openshift-platform/observability/observability/2025-11-24-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part2/</guid><description>&lt;div class="paragraph"&gt;
&lt;p&gt;After covering the fundamentals and architecture in Part 1, it’s time to get our hands dirty! This article walks through the complete implementation of a distributed tracing infrastructure on OpenShift.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;We’ll deploy and configure the &lt;strong&gt;Tempo Operator&lt;/strong&gt; and a multi-tenant &lt;strong&gt;TempoStack&lt;/strong&gt; instance. For S3 storage we will use the integrated OpenShift Data Foundation. However, you can use whatever S3-compatible storage you have available.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_grafana_tempo_step_by_step_implementation"&gt;Grafana Tempo - Step-by-Step Implementation&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_prerequisites"&gt;Prerequisites&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Before starting, ensure the following systems and Operators are installed (the versions listed are the ones used in this article):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;OpenShift or Kubernetes cluster (OpenShift v4.20)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Red Hat build of OpenTelemetry installed (v0.135.0-1)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Tempo Operator installed (v0.18.0-2)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;S3-compatible storage (for TempoStack, based on OpenShift Data Foundation)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Cluster Observability Operator (v1.3.0) - for now, this Operator is used only to extend the OpenShift console with the tracing UI.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
For all configurations I also created a proper GitOps implementation (of course :)). However, I would first like to show the actual configuration. The GitOps implementation can be found in the section &lt;a href="https://blog.stderr.at/day-2/observability/2025-11-24-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part2/#_gitops_deployment"&gt;GitOps Deployment&lt;/a&gt;.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_1_verifydeploy_tempo_operator"&gt;Step 1: Verify/Deploy Tempo Operator&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Let’s first verify whether the Tempo Operator is installed and ready to use. If everything is fine, the Operator is deployed in the namespace &lt;strong&gt;openshift-tempo-operator&lt;/strong&gt;:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/observability/images/observability/tempo-operator-installation.png?width=640px" alt="Tempo Operator"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_2_deploy_tempostack_resource"&gt;Step 2: Deploy TempoStack Resource&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;strong&gt;TempoStack&lt;/strong&gt; is the central trace storage backend. We are using the Tempo Operator here, which provides the TempoStack resource and multi-tenancy capability. During my tests I deployed the TempoStack resource and also created a &lt;a href="https://github.com/tjungbauer/helm-charts/tree/main/charts/tempo-tracing" target="_blank" rel="noopener"&gt;Helm Chart&lt;/a&gt; that is able to render this resource.
The Operator also provides a TempoMonolithic resource, which puts everything into a single Pod, while TempoStack rolls out the stack as separate containers (ingester, gateway, etc.).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
My Helm Chart does not support the TempoMonolithic resource yet. If you require it, please ping me and I will try to add it.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Be sure that the S3 bucket is available and a Secret (here called tempo-s3) with the following keys exists (valid for OpenShift Data Foundation): &lt;strong&gt;access_key_id, access_key_secret, bucket, endpoint&lt;/strong&gt;. The layout of the Secret will look slightly different depending on the S3 storage backend you are using.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
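&lt;div class="paragraph"&gt;
&lt;p&gt;For illustration, a Secret with the expected keys for OpenShift Data Foundation might look like the following sketch. All values are placeholders and will differ in your environment:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: v1
kind: Secret
metadata:
  name: tempo-s3
  namespace: tempostack
type: Opaque
stringData:
  # Placeholder credentials - replace with the values of your storage backend
  access_key_id: REPLACE_WITH_ACCESS_KEY_ID
  access_key_secret: REPLACE_WITH_ACCESS_KEY_SECRET
  bucket: tempo-bucket
  endpoint: https://s3.openshift-storage.svc:443&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;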
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;strong&gt;Create the TempoStack instance:&lt;/strong&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The following TempoStack resource has been used:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
name: simplest &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
namespace: tempostack &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
spec:
managementState: Managed
replicationFactor: 1 &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
# Resource limits
resources: &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
total:
limits:
cpu: &amp;#34;2&amp;#34;
memory: 2Gi
# Trace retention
retention: &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
global:
traces: 48h0m0s
# S3 storage configuration
storage:
secret:
credentialMode: static &lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;(6)&lt;/b&gt;
name: tempo-s3 &lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;(7)&lt;/b&gt;
type: s3
tls:
enabled: false
storageSize: 500Gi
# Multi-tenancy configuration
tenants:
mode: openshift
authentication: &lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;(8)&lt;/b&gt;
- tenantId: 1610b0c3-c509-4592-a256-a1871353dbfa
tenantName: tenantA
- tenantId: 1610b0c3-c509-4592-a256-a1871353dbfb
tenantName: tenantB
- tenantId: 1610b0c3-c509-4592-a256-a1871353dbfc
tenantName: tenantC
# Gateway and UI
template: &lt;i class="conum" data-value="9"&gt;&lt;/i&gt;&lt;b&gt;(9)&lt;/b&gt;
gateway:
enabled: true
component:
replicas: 1
ingress:
type: route
route:
termination: reencrypt
queryFrontend:
component:
replicas: 1
jaegerQuery:
enabled: true
servicesQueryDuration: 72h0m0s&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Name of the TempoStack instance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Namespace of the TempoStack instance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Integer value for the number of ingesters that must acknowledge the data from the distributors before accepting a span.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Defines resources for the TempoStack instance. Default is (limit only) 2 CPU and 2Gi memory.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Configuration options for retention of traces. The default value is 48h.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;6&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Credential mode for the S3 storage. This depends on how the storage is integrated. Default is static.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;7&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Secret name for the S3 storage.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;8&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Configuration options for the tenants. In this example: tenantA, tenantB, tenantC. Each entry consists of a tenantName and a tenantId, both of which can be defined by the user.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="9"&gt;&lt;/i&gt;&lt;b&gt;9&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Configuration options for the different Tempo components. In this example: gateway, query-frontend. Other components could be: distributor, ingester, compactor or querier.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;strong&gt;Key Configuration Points:&lt;/strong&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Multi-tenancy&lt;/strong&gt;: Supports 3 tenants currently (tenantA, tenantB, tenantC). This part must be modified when a new tenant enters the realm.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Retention&lt;/strong&gt;: Traces stored for 48 hours.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Storage&lt;/strong&gt;: Uses S3-compatible backend (requires separate secret).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Gateway&lt;/strong&gt;: Exposes OTLP endpoint with TLS.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
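&lt;div class="paragraph"&gt;
&lt;p&gt;As a sketch of such a modification: onboarding a hypothetical fourth tenant only requires appending another entry to the &lt;strong&gt;authentication&lt;/strong&gt; list. The name &lt;strong&gt;tenantD&lt;/strong&gt; is made up and the tenantId is an arbitrary, user-chosen UUID:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;  tenants:
    mode: openshift
    authentication:
      # ...existing tenants tenantA, tenantB, tenantC...
      - tenantId: 00000000-0000-0000-0000-000000000000 # placeholder UUID
        tenantName: tenantD&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Remember that the reader and writer ClusterRoles in Step 3 must then list the new tenant as well.&lt;/p&gt;
&lt;/div&gt;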
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_3_configure_rbac_for_tempostack_trace_access_readwrite"&gt;Step 3: Configure RBAC for TempoStack Trace Access read/write&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Set up ClusterRoles to control who can read and write traces.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The ClusterRoles must be updated whenever a new tenant is configured in TempoStack. The name of the tenant must be added in the &lt;strong&gt;resources&lt;/strong&gt; array.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;strong&gt;Traces Reader Role:&lt;/strong&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: tempostack-traces-reader &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
rules:
- verbs: &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
- get
apiGroups:
- tempo.grafana.com
resources:
- tenantA &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
- tenantB
- tenantC
resourceNames:
- traces&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Name of the ClusterRole&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The rule definition: the verb &lt;strong&gt;get&lt;/strong&gt; on the resource name &lt;strong&gt;traces&lt;/strong&gt; in the API group &lt;strong&gt;tempo.grafana.com&lt;/strong&gt;, applied to the resources (tenants) listed under &lt;strong&gt;resources&lt;/strong&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;List of tenants that are allowed to read the traces.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;strong&gt;Bind Reader Role to Authenticated Users:&lt;/strong&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: tempostack-traces-reader
subjects:
- kind: Group
apiGroup: rbac.authorization.k8s.io
name: &amp;#39;system:authenticated&amp;#39; &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: tempostack-traces-reader&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The Group that is allowed to read the traces. In this example: &lt;strong&gt;system:authenticated&lt;/strong&gt;. This means ALL authenticated users will be able to read all the traces (see warning below).&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
With this ClusterRoleBinding, anybody who is authenticated (system:authenticated) will be able to see the traces for the defined tenants (A, B, C). This is for an easy showcase in this article. For production environments, you should implement more granular RBAC controls per tenant.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
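&lt;div class="paragraph"&gt;
&lt;p&gt;As a rough sketch of such a more granular setup, a dedicated ClusterRole could expose only a single tenant and be bound to a specific group. The group name &lt;strong&gt;tenant-a-viewers&lt;/strong&gt; is a hypothetical example and would have to exist in your identity provider:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tempostack-traces-reader-tenant-a
rules:
  - apiGroups:
      - tempo.grafana.com
    resources:
      - tenantA # only this single tenant
    resourceNames:
      - traces
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tempostack-traces-reader-tenant-a
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: tenant-a-viewers # hypothetical group instead of system:authenticated
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tempostack-traces-reader-tenant-a&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;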
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;strong&gt;Traces Writer Role:&lt;/strong&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This ClusterRole is used to write traces into TempoStack. Typically you will use this ClusterRole for the OpenTelemetry Collector, so it can write into TempoStack.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: tempostack-traces-write &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
rules:
- verbs:
- create &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
apiGroups:
- tempo.grafana.com
resources: &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
- tenantB
- tenantA
- tenantC
resourceNames:
- traces&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Name of the ClusterRole&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;This time the verb is &lt;strong&gt;create&lt;/strong&gt;. This means the user will be able to write new traces into TempoStack.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;List of tenants.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;strong&gt;Bind Writer Role to Central Collector:&lt;/strong&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: tempostack-traces
subjects:
- kind: ServiceAccount
name: otel-collector &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
namespace: tempostack &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: tempostack-traces-write&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The ServiceAccount that is allowed to write the traces. In this example: &lt;strong&gt;otel-collector&lt;/strong&gt;. We will create this Service Account in the next article.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The namespace of the ServiceAccount. In this example: &lt;strong&gt;tempostack&lt;/strong&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;hr/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_gitops_deployment"&gt;GitOps Deployment&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;While the above is good for quick tests, it always makes sense to have a proper GitOps deployment. I have created a Chart and GitOps configuration that will:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Deploy the Tempo Operator&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure the TempoStack instance&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure the RBAC configurations for the TempoStack instance&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The following sources will be used:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://charts.stderr.at/" target="_blank" rel="noopener"&gt;Helm Repository&lt;/a&gt; - to fetch the Helm Chart for the Tempo Operator including required Sub-Charts.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/tjungbauer/openshift-clusterconfig-gitops/tree/main/clusters/management-cluster/setup-tempo-operator" target="_blank" rel="noopener"&gt;Setup Tempo Operator&lt;/a&gt; - To deploy and configure the Tempo Operator, configure object storage, magically create a Secret with the required keys, etc.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Feel free to clone or use whatever you need.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The following Sub-Charts are used:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/tjungbauer/helm-charts/tree/main/charts/helper-objectstore" target="_blank" rel="noopener"&gt;helper-objectstore&lt;/a&gt; (version ~1.0.0) - Creates S3 Bucket&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/tjungbauer/helm-charts/tree/main/charts/helper-odf-bucket-secret" target="_blank" rel="noopener"&gt;helper-odf-bucket-secret&lt;/a&gt; (version ~1.0.0) - Creates the Secret usable by the TempoStack instance&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/tjungbauer/helm-charts/tree/main/charts/helper-operator" target="_blank" rel="noopener"&gt;helper-operator&lt;/a&gt; (version ~1.0.18) - Installs the Tempo Operator&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/tjungbauer/helm-charts/tree/main/charts/helper-status-checker" target="_blank" rel="noopener"&gt;helper-status-checker&lt;/a&gt; (version ~4.0.0) - Verifies the status of the Tempo Operator&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/tjungbauer/helm-charts/tree/main/charts/tempo-tracing" target="_blank" rel="noopener"&gt;tempo-tracing&lt;/a&gt; (version ~1.0.0) - Installs the TempoStack instance&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/tjungbauer/helm-charts/tree/main/charts/tpl" target="_blank" rel="noopener"&gt;tpl&lt;/a&gt; (version ~1.0.0) - Template Library&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;With this Argo CD Application we can deploy the Tempo Operator:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: setup-tempo-operator
namespace: openshift-gitops
spec:
destination:
name: in-cluster &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
namespace: openshift-tempo-operator &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
info:
- name: Description
value: ApplicationSet that Deploys on Management Cluster Configuration (using Git Generator)
project: in-cluster &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
source:
path: clusters/management-cluster/setup-tempo-operator &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
repoURL: &amp;#39;https://github.com/tjungbauer/openshift-clusterconfig-gitops&amp;#39; &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
targetRevision: main
syncPolicy:
retry:
backoff:
duration: 5s
factor: 2
maxDuration: 3m
limit: 5&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Target cluster, here the local cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Namespace of the target cluster, here the Operator will be installed.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Project of the target cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Path to the Git repository&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;URL to the Git repository&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Since many things are happening in the background, it will take a while until the Argo CD Application is synced (e.g. the creation of the S3 bucket).
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will create the Argo CD Application that can be synchronized with the cluster:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/observability/images/observability/tempo-argocd.png?width=640px" alt="Tempo Deployment via Argo CD"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;As seen above, many resources are created using a single Helm Chart (ok, with some Sub-Charts). The actual configuration is done in the values.yaml file. The full values file is quite long, so I will break it down into the different Sub-Charts and add the complete file &lt;a href="https://blog.stderr.at/day-2/observability/2025-11-24-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part2/#_complete_values_file"&gt;at the end&lt;/a&gt;. The latest file I am using for testing can be found at &lt;a href="https://github.com/tjungbauer/openshift-clusterconfig-gitops/blob/main/clusters/management-cluster/setup-tempo-operator/values.yaml" target="_blank" rel="noopener"&gt;values.yaml&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_values_file_snippets_for_gitops"&gt;Values File Snippets for GitOps&lt;/h3&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_tempstack_settings"&gt;TempStack Settings&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The following will configure the TempoStack resource. Verify the upstream Helm Chart &lt;a href="https://github.com/tjungbauer/helm-charts/tree/main/charts/tempo-tracing" target="_blank" rel="noopener"&gt;tempo-tracing&lt;/a&gt; for detailed and additional information about the settings.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;tempo-tracing:
tempostack: &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
enabled: true
name: simplest
managementState: Managed
namespace: &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
create: true
name: tempostack
descr: &amp;#34;Namespace for the TempoStack&amp;#34;
display: &amp;#34;TempoStack&amp;#34;
additionalAnnotations: {}
additionalLabels: {}
storage: &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
secret:
name: tempo-s3
type: s3
credentialMode: static
storageSize: 500Gi
replicationFactor: 1
serviceAccount: tempo-simplest &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
tenants: &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
mode: openshift
enabled: true
authentication: &lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;(6)&lt;/b&gt;
- tenantName: &amp;#39;tenantA&amp;#39;
tenantId: &amp;#39;1610b0c3-c509-4592-a256-a1871353dbfc&amp;#39;
permissions: &lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;(7)&lt;/b&gt;
- write
- read
- tenantName: &amp;#39;tenantB&amp;#39;
tenantId: &amp;#39;1610b0c3-c509-4592-a256-a1871353dbfd&amp;#39;
permissions:
- write
- read
observability:
enabled: true
tracing:
jaeger_agent_endpoint: &amp;#39;localhost:6831&amp;#39;
otlp_http_endpoint: &amp;#39;http://localhost:4320&amp;#39;
template:
gateway:
enabled: true
rbac: false
ingress:
type: &amp;#39;route&amp;#39;
termination: &amp;#39;reencrypt&amp;#39;
component:
replicas: 1
queryFrontend:
jaegerQuery:
enabled: true
component:
replicas: 1&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Basic settings, like name of the instance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Namespace for the TempoStack instance.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Storage configuration for the TempoStack instance. Here we use type s3 and the Secret called &amp;#34;tempo-s3&amp;#34; which will be generated.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;ServiceAccount for the TempoStack instance.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Tenant configuration for the TempoStack instance.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;6&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;List of tenants. This list must be extended whenever a new tenant is configured.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;7&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;RBAC permissions for this tenant. This will add the tenant as &lt;strong&gt;resource&lt;/strong&gt; to the READ and/or WRITE ClusterRole&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_tempo_operator_deployment"&gt;Tempo Operator Deployment&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The following settings will deploy the Operator and verify the status of the Operator installation. Only when the Operator has been installed successfully will Argo CD continue with the synchronization.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;helper-operator:
operators:
tempo-operator: &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
enabled: true
namespace: &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
name: openshift-tempo-operator
create: true
subscription: &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
channel: stable
approval: Automatic
operatorName: tempo-product
source: redhat-operators
sourceNamespace: openshift-marketplace
operatorgroup:
create: true
notownnamespace: true
########################################
# SUBCHART: helper-status-checker
# Verify the status of a given operator.
########################################
helper-status-checker:
enabled: true
approver: false &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
checks:
- operatorName: tempo-operator &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
namespace:
name: openshift-tempo-operator
syncwave: 1
serviceAccount:
name: &amp;#34;status-checker-tempo&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Install the Operator&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Namespace settings of the Operator&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Settings of the Operator itself, like name, channel and approval strategy.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Approver settings for the status checker. Here disabled, because we are using automatic approval.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Operator name to be verified.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_s3_bucket_deployment"&gt;S3 Bucket Deployment&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The following Sub-Chart can be used to automatically create a Bucket in OpenShift Data Foundation.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;########################################
# SUBCHART: helper-objectstore
# A helper chart that simply creates another backingstore for logging.
# This is a chart in a very early state, and not everything can be customized for now.
# It will create the objects:
# - BackingStore
# - BucketClass
# - StorageClass
# NOTE: Currently only PV type is supported
########################################
helper-objectstore:
# -- Enable objectstore configuration
# @default -- false
enabled: true
# -- Syncwave for Argo CD
# @default - 1
syncwave: 1
# -- Name of the BackingStore
backingstore_name: tempo-backingstore
# -- Size of the BackingStore that each volume shall have.
backingstore_size: 400Gi &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
# -- CPU Limit for the Noobaa Pod
# @default -- 500m
limits_cpu: 500m
# -- Memory Limit for the Noobaa Pod.
# @default -- 2Gi
limits_memory: 2Gi
pvPool:
# -- Number of volumes that shall be used
# @default -- 1
numOfVolumes: 1
# Type of BackingStore. Currently pv-pool is the only one supported by this Helm Chart.
# @default -- pv-pool
type: pv-pool
# -- The StorageClass the BackingStore is based on
baseStorageClass: gp3-csi &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
# -- Name of the StorageClass that shall be created for the bucket.
storageclass_name: tempo-bucket-storage-class &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
# Bucket that shall be created
bucket:
# -- Shall a new bucket be enabled?
# @default -- false
enabled: true
# -- Name of the bucket that shall be created
name: tempo-bucket &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
# -- Target Namespace for that bucket.
namespace: tempostack &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
# -- Syncwave for bucketclaim creation. This should be done very early, but it depends on ODF.
# @default -- 2
syncwave: 2
# -- Name of the storageclass for our bucket
# @default -- openshift-storage.noobaa.io
storageclass: tempo-bucket-storage-class &lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;(6)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Size of the bucket.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;StorageClass the bucket is based on.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Name of the StorageClass that shall be created for the bucket.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Name of the bucket that shall be created.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Namespace for the bucket.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;6&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;StorageClass for the bucket.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
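&lt;div class="paragraph"&gt;
&lt;p&gt;Once Argo CD has synced the chart, a quick sanity check can verify that the objects exist. The commands below are a sketch and assume the default ODF namespace &lt;strong&gt;openshift-storage&lt;/strong&gt; plus the names from the values above:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# BackingStore and BucketClass live in the ODF namespace
oc get backingstore,bucketclass -n openshift-storage

# The StorageClass is cluster-scoped
oc get storageclass tempo-bucket-storage-class

# The ObjectBucketClaim is created in the target namespace
oc get objectbucketclaim -n tempostack&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;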
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_automatic_secret_creation_for_tempostack"&gt;Automatic Secret Creation for TempoStack&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;TempoStack expects a Secret with the following keys:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;access_key_id&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;access_key_secret&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;bucket&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;endpoint&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;region (only for specific settings)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;OpenShift Data Foundation creates a secret with access_key_id and access_key_secret and a ConfigMap with endpoint, region and the name of the bucket.
Unfortunately, this does not work for TempoStack out of the box. Therefore, we are using a &lt;strong&gt;helper-odf-bucket-secret&lt;/strong&gt; chart, which creates a Job that reads the required information from both resources and assembles a new Secret with the keys TempoStack expects.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;##############################################
# SUBCHART: helper-odf-bucket-secret
# Creates a Secret that Tempo requires
#
# A Kubernetes Job is created, that reads the
# data from the Secret and ConfigMap and
# creates a new secret for Tempo.
##############################################
helper-odf-bucket-secret:
# -- Enable Job to create a Secret for TempoStack.
# @default -- false
enabled: true
# -- Syncwave for Argo CD.
# @default -- 3
syncwave: 3
# -- Namespace where TempoStack is deployed and where the Secret shall be created.
namespace: tempostack
# -- Name of Secret that shall be created.
secretname: tempo-s3 &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
# Bucket Configuration
bucket:
# -- Name of the Bucket shall has been created.
name: tempo-bucket &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
# -- Keys that shall be used to create the Secret.
keys: &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
# -- Overwrite access_key_id key.
# @default -- access_key_id
access_key_id: access_key_id
# -- Overwrite access_key_secret key.
# @default -- access_key_secret
access_key_secret: access_key_secret
# -- Overwrite bucket key.
# @default -- bucket
bucket: bucket
# -- Overwrite endpoint key.
# @default -- endpoint
endpoint: endpoint
# -- Overwrite region key. Region is only set if set_region is true.
# @default -- region
region: region
# -- Set region key.
# @default -- false
set_region: false &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Name of the Secret that shall be created.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Name of the bucket that shall be used.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Keys that shall be used to create the Secret.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Set region key. Here disabled, because we are not using a specific region.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock caution"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-caution" title="Caution"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Using OpenShift Data Foundation with TempoStack requires you to NOT set a region. Therefore, it is disabled above.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
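&lt;div class="paragraph"&gt;
&lt;p&gt;For illustration, the Secret assembled by the Job looks roughly like the sketch below. All values are placeholders; the real ones are read from the Secret and ConfigMap that ODF generated for the bucket, and the internal endpoint is an assumption based on a default ODF installation:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: v1
kind: Secret
metadata:
  name: tempo-s3
  namespace: tempostack
type: Opaque
stringData:
  access_key_id: PLACEHOLDER-KEY-ID     # copied from the ODF-generated Secret
  access_key_secret: PLACEHOLDER-SECRET # copied from the ODF-generated Secret
  bucket: tempo-bucket                  # taken from the ODF-generated ConfigMap
  endpoint: https://s3.openshift-storage.svc # internal S3 endpoint of ODF
  # no region key, as explained in the caution above&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;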
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_complete_values_file"&gt;Complete values file&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To see the whole file expand the code:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;div class="expand"&gt;
&lt;div class="expand-label" style="cursor: pointer;" onclick="$h = $(this);$h.next('div').slideToggle(100,function () {$h.children('i').attr('class',function () {return $h.next('div').is(':visible') ? 'fas fa-chevron-down' : 'fas fa-chevron-right';});});"&gt;
&lt;i style="font-size:x-small;" class="fas fa-chevron-right"&gt;&lt;/i&gt;
&lt;span&gt;
&lt;a&gt;Expand me...&lt;/a&gt;
&lt;/span&gt;
&lt;/div&gt;
&lt;div class="expand-content" style="display: none;"&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;tempo: &amp;amp;channel-tempo stable
tempo-namespace: &amp;amp;tempo-namespace openshift-tempo-operator
bucketname: &amp;amp;bucketname tempo-bucket
tempo-secret: &amp;amp;tempo-secret-name tempo-s3
storageclassname: &amp;amp;storageclassname tempo-bucket-storage-class
tempostack-namespace: &amp;amp;tempostack-namespace tempostack
tempo-tracing:
tempostack:
enabled: true
name: simplest
managementState: Managed
namespace:
create: true
name: *tempostack-namespace
descr: &amp;#34;Namespace for the TempoStack&amp;#34;
display: &amp;#34;TempoStack&amp;#34;
additionalAnnotations: {}
additionalLabels: {}
storage:
secret:
name: tempo-s3
type: s3
credentialMode: static
storageSize: 500Gi
replicationFactor: 1
serviceAccount: tempo-simplest
tenants:
mode: openshift
enabled: true
authentication:
- tenantName: &amp;#39;tenantA&amp;#39;
tenantId: &amp;#39;1610b0c3-c509-4592-a256-a1871353dbfc&amp;#39;
permissions:
- write
- read
- tenantName: &amp;#39;tenantB&amp;#39;
tenantId: &amp;#39;1610b0c3-c509-4592-a256-a1871353dbfd&amp;#39;
permissions:
- write
- read
observability:
enabled: true
tracing:
jaeger_agent_endpoint: &amp;#39;localhost:6831&amp;#39;
otlp_http_endpoint: &amp;#39;http://localhost:4320&amp;#39;
template:
gateway:
enabled: true
rbac: false
ingress:
type: &amp;#39;route&amp;#39;
termination: &amp;#39;reencrypt&amp;#39;
component:
replicas: 1
queryFrontend:
jaegerQuery:
enabled: true
component:
replicas: 1
######################################
# SUBCHART: helper-operator
# Operators that shall be installed.
######################################
helper-operator:
operators:
tempo-operator:
enabled: true
namespace:
name: *tempo-namespace
create: true
subscription:
channel: *channel-tempo
approval: Automatic
operatorName: tempo-product
source: redhat-operators
sourceNamespace: openshift-marketplace
operatorgroup:
create: true
notownnamespace: true
########################################
# SUBCHART: helper-status-checker
# Verify the status of a given operator.
########################################
helper-status-checker:
enabled: true
approver: false
checks:
- operatorName: tempo-operator
namespace:
name: *tempo-namespace
syncwave: 1
serviceAccount:
name: &amp;#34;status-checker-tempo&amp;#34;
########################################
# SUBCHART: helper-objectstore
# A helper chart that simply creates another backingstore for logging.
# This is a chart in a very early state, and not everything can be customized for now.
# It will create the objects:
# - BackingStore
# - BackingClass
# - StorageClass
# NOTE: Currently only PV type is supported
########################################
helper-objectstore:
# -- Enable objectstore configuration
# @default -- false
enabled: true
# -- Syncwave for Argo CD
# @default - 1
syncwave: 1
# -- Name of the BackingStore
backingstore_name: tempo-backingstore
# -- Size of the BackingStore that each volume shall have.
backingstore_size: 400Gi
# -- CPU Limit for the Noobaa Pod
# @default -- 500m
limits_cpu: 500m
# -- Memory Limit for the Noobaa Pod.
# @default -- 2Gi
limits_memory: 2Gi
pvPool:
# -- Number of volumes that shall be used
# @default -- 1
numOfVolumes: 1
# Type of BackingStore. Currently pv-pool is the only one supported by this Helm Chart.
# @default -- pv-pool
type: pv-pool
# -- The StorageClass the BackingStore is based on
baseStorageClass: gp3-csi
# -- Name of the StorageClass that shall be created for the bucket.
storageclass_name: *storageclassname
# Bucket that shall be created
bucket:
# -- Shall a new bucket be enabled?
# @default -- false
enabled: true
# -- Name of the bucket that shall be created
name: *bucketname
# -- Target Namespace for that bucket.
namespace: *tempo-namespace
# -- Syncwave for bucketclaim creation. This should be done very early, but it depends on ODF.
# @default -- 2
syncwave: 2
# -- Name of the storageclass for our bucket
# @default -- openshift-storage.noobaa.io
storageclass: *storageclassname
##############################################
# SUBCHART: helper-odf-bucket-secret
# Creates a Secret that Tempo requires
#
# A Kubernetes Job is created, that reads the
# data from the Secret and ConfigMap and
# creates a new secret for Tempo.
##############################################
helper-odf-bucket-secret:
# -- Enable Job to create a Secret for TempoStack.
# @default -- false
enabled: true
# -- Syncwave for Argo CD.
# @default -- 3
syncwave: 3
# -- Namespace where TempoStack is deployed and where the Secret shall be created.
namespace: *tempostack-namespace
# -- Name of Secret that shall be created.
secretname: *tempo-secret-name
# Bucket Configuration
bucket:
# -- Name of the Bucket shall has been created.
name: *bucketname
# -- Keys that shall be used to create the Secret.
keys:
# -- Overwrite access_key_id key.
# @default -- access_key_id
access_key_id: access_key_id
# -- Overwrite access_key_secret key.
# @default -- access_key_secret
access_key_secret: access_key_secret
# -- Overwrite bucket key.
# @default -- bucket
bucket: bucket
# -- Overwrite endpoint key.
# @default -- endpoint
endpoint: endpoint
# -- Overwrite region key. Region is only set if set_region is true.
# @default -- region
region: region
# -- Set region key.
# @default -- false
set_region: false&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_the_tempostack"&gt;The TempoStack&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Let’s imagine all of the above works and we have a TempoStack instance running in the namespace &lt;strong&gt;tempostack&lt;/strong&gt; with the name &lt;strong&gt;simplest&lt;/strong&gt;. Several Pods are running, like the distributor, ingester, gateway, query-frontend, compactor and querier.
I admit it is not highly available, but this can be easily changed in the values file above (replica count).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get pods -n tempostack | grep simplest
tempo-simplest-compactor-584689c78f-t7pxb 1/1 Running 0 3d2h
tempo-simplest-distributor-6fb5d7dc9d-wrzt4 1/1 Running 0 3d2h
tempo-simplest-gateway-bbcb774b9-p44lq 2/2 Running 0 11h
tempo-simplest-ingester-0 1/1 Running 0 3d2h
tempo-simplest-querier-6cf9d7b6d8-mvc9d 1/1 Running 0 3d2h
tempo-simplest-query-frontend-7d859f9f9f-xzj97 3/3 Running 0 3d2h&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
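&lt;div class="paragraph"&gt;
&lt;p&gt;Besides looking at the Pods, the TempoStack resource itself reports whether all components are ready:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get tempostack simplest -n tempostack&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;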
&lt;div class="paragraph"&gt;
&lt;p&gt;Now we can continue with the OpenTelemetry Collector deployment.
But before we do this, let’s first discuss how to:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Add Tracing UI to OpenShift&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;hr/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_extend_the_openshift_ui"&gt;Extend the OpenShift UI&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Although Tempo is installed, it is not visible in the OpenShift UI by default.
Red Hat provides a separate Operator that takes care of this extension: the &lt;strong&gt;Cluster Observability Operator&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This Operator must be installed:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/observability/images/observability/coo.png?width=640px" alt="Cluster Observability Operator"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The Operator can be deployed with the Chart &lt;strong&gt;helper-operator&lt;/strong&gt; as well. However, I did not merge it into the TempoStack deployment, because this Operator also serves other purposes.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;We only require a small part of this Operator: amongst other resources, it provides a resource called &lt;strong&gt;UIPlugin&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The resource must be configured as follows:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugincd
metadata:
name: distributed-tracing &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
spec:
type: DistributedTracing &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The name MUST be &lt;strong&gt;distributed-tracing&lt;/strong&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The type MUST be &lt;strong&gt;DistributedTracing&lt;/strong&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
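&lt;div class="paragraph"&gt;
&lt;p&gt;The resource can be applied and verified like any other manifest; the file name below is just an example:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc apply -f uiplugin.yaml
oc get uiplugin distributed-tracing&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;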
&lt;div class="paragraph"&gt;
&lt;p&gt;This will extend the OpenShift UI with a new navigation link &lt;strong&gt;&amp;#34;Observe&amp;#34; &amp;gt; &amp;#34;Traces&amp;#34;&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/observability/images/observability/tempo-UI.png" alt="Tempo UI"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_on_the_next_episode"&gt;On the next Episode&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The next article will cover the deployment of the Central OpenTelemetry Collector. We will configure the RBAC permissions required for the Central Collector to enrich traces with Kubernetes metadata and deploy the Central OpenTelemetry Collector with its complete configuration.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>The Hitchhiker's Guide to Observability - Central Collector - Part 3</title><link>https://blog.stderr.at/openshift-platform/observability/observability/2025-11-25-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part3/</link><pubDate>Tue, 25 Nov 2025 00:00:00 +0000</pubDate><guid>https://blog.stderr.at/openshift-platform/observability/observability/2025-11-25-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part3/</guid><description>&lt;div class="paragraph"&gt;
&lt;p&gt;With the architecture defined in &lt;a href="https://blog.stderr.at/day-2/observability/2025-11-23-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part1/"&gt;Part 1&lt;/a&gt; and TempoStack deployed in &lt;a href="https://blog.stderr.at/day-2/observability/2025-11-24-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part2/"&gt;Part 2&lt;/a&gt;, it’s time to tackle the heart of our distributed tracing system: the &lt;strong&gt;Central OpenTelemetry Collector&lt;/strong&gt;. This is the critical component that sits between your application namespaces and TempoStack, orchestrating trace flow, metadata enrichment, and tenant routing.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In this article, we’ll configure the RBAC permissions required for the Central Collector to enrich traces with Kubernetes metadata and deploy the Central OpenTelemetry Collector with its complete configuration. You’ll learn how to set up receivers for accepting traces from local collectors, configure processors to enrich traces with Kubernetes and OpenShift metadata, and implement routing connectors to direct traces to the appropriate TempoStack tenants based on namespace.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To be honest, this was the most challenging part of the entire setup to get right, as it is easy to miss or misconfigure a single setting. But once you understand the configuration, it becomes straightforward to extend and modify.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_central_opentelemetry_collector_step_by_step_implementation"&gt;Central OpenTelemetry Collector - Step-by-Step Implementation&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_prerequisites"&gt;Prerequisites&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Before starting, ensure you have completed the previous parts and have:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Part 1 completed&lt;/strong&gt;: &lt;a href="https://blog.stderr.at/day-2/observability/2025-11-23-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part1/"&gt;Hitchhiker’s Guide to Observability with OpenTelemetry and Tempo&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Part 2 completed&lt;/strong&gt;: &lt;a href="https://blog.stderr.at/day-2/observability/2025-11-24-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part2/"&gt;Deploy Grafana Tempo and TempoStack&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;OpenShift cluster&lt;/strong&gt; (v4.20) with Red Hat build of OpenTelemetry Operator installed (v0.135.0-1)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Access&lt;/strong&gt;: cluster-admin privileges&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
For all configurations I also created a proper GitOps implementation (of course :)). However, first I would like to show the actual configuration. The GitOps implementation can be found at the end of this article.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_1_verifydeploy_red_hat_build_of_opentelemetry_operator"&gt;Step 1: Verify/Deploy Red Hat Build of OpenTelemetry Operator&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Verify that the Operator &lt;strong&gt;Red Hat Build of OpenTelemetry&lt;/strong&gt; is installed and ready. The Operator itself is deployed in the namespace &lt;strong&gt;openshift-opentelemetry-operator&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/observability/images/observability/otel-operator.png" alt="OpenTelemetry Operator"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_2_configure_rbac_for_central_collector"&gt;Step 2: Configure RBAC for Central Collector&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;For the OpenTelemetry Collector to function properly with processors like k8sattributes and resourcedetection, it requires cluster-wide read access to Kubernetes resources.
Depending on the configuration of the central collector, you might need to configure different RBAC settings to allow the collector to perform specific tasks. The central collector in our example uses the &lt;strong&gt;k8sattributes processor&lt;/strong&gt; and &lt;strong&gt;resourcedetection processor&lt;/strong&gt; to enrich traces with Kubernetes and OpenShift metadata. These processors require read access to cluster resources.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_why_these_permissions_are_needed"&gt;Why These Permissions Are Needed&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The k8sattributes processor enriches telemetry data by querying the Kubernetes API to add metadata such as:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Pod information&lt;/strong&gt;: Pod name, UID, start time&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Namespace details&lt;/strong&gt;: Namespace name and labels&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Deployment context&lt;/strong&gt;: ReplicaSet and Deployment names&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Node information&lt;/strong&gt;: Node name where the pod is running&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The resourcedetection processor detects the OpenShift environment by querying:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Infrastructure resources&lt;/strong&gt;: Cluster name, platform type, region (from &lt;code&gt;config.openshift.io/infrastructures&lt;/code&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Without these permissions, traces would lack critical context needed for debugging and analysis.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Always check the latest &lt;a href="https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/red_hat_build_of_opentelemetry/index#otel-collector-receivers_otel-configuration-of-otel-collector" target="_blank" rel="noopener"&gt;documentation&lt;/a&gt; of the appropriate component to get the most up-to-date information. I prefer to use separate ClusterRoles for each processor to keep the permissions as granular as possible. However, that causes some overhead, so you might want to combine the permissions into a single ClusterRole.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_create_clusterrole_for_kubernetes_attributes"&gt;Create ClusterRole for Kubernetes Attributes&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The following ClusterRole has been created:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: k8sattributes-otel-collector
rules:
# Permissions for k8sattributes processor
# Allows the collector to read pod, namespace, and replicaset information
# to enrich traces with Kubernetes metadata
- verbs:
- get
- watch
- list
apiGroups:
- &amp;#39;*&amp;#39;
resources:
- pods
- namespaces
- replicasets
# Permissions for resourcedetection processor (OpenShift)
# Allows detection of OpenShift cluster information
# such as cluster name, platform type, and region
- verbs:
- get
- watch
- list
apiGroups:
- config.openshift.io
resources:
- infrastructures
- infrastructures/status&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;strong&gt;Key Points:&lt;/strong&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Read-Only Access&lt;/strong&gt;: The collector only needs &lt;code&gt;get&lt;/code&gt;, &lt;code&gt;watch&lt;/code&gt;, and &lt;code&gt;list&lt;/code&gt; verbs (no write permissions)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Cluster-Wide Scope&lt;/strong&gt;: ClusterRole grants permissions across all namespaces, necessary for monitoring multi-tenant environments&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Essential Resources&lt;/strong&gt;:&lt;/p&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;pods&lt;/code&gt;: Source of trace context (which pod generated the span)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;namespaces&lt;/code&gt;: Namespace metadata and labels for routing&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;replicasets&lt;/code&gt;: Determine the owning Deployment for better trace attribution&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;OpenShift Infrastructure&lt;/strong&gt;: Access to &lt;code&gt;config.openshift.io/infrastructures&lt;/code&gt; allows detection of cluster-level properties&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_create_clusterrolebinding"&gt;Create ClusterRoleBinding&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Our OpenTelemetry Collector will use the ServiceAccount &lt;strong&gt;otel-collector&lt;/strong&gt; (in the namespace tempostack) to read the Kubernetes resources. Thus, we need to create a ClusterRoleBinding to grant the necessary permissions to the ServiceAccount.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: k8sattributes-collector-tempo
subjects:
- kind: ServiceAccount
name: otel-collector &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
namespace: tempostack &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: k8sattributes-otel-collector &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The ServiceAccount the Central Collector runs as, which reads the Kubernetes resources. In this example: &lt;strong&gt;otel-collector&lt;/strong&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The namespace of the ServiceAccount. In this example: &lt;strong&gt;tempostack&lt;/strong&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The reference to the ClusterRole&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
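&lt;div class="paragraph"&gt;
&lt;p&gt;Whether the binding works as intended can be checked by impersonating the ServiceAccount. Both commands should answer &lt;strong&gt;yes&lt;/strong&gt; once the ClusterRoleBinding is in place:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc auth can-i list pods --all-namespaces --as=system:serviceaccount:tempostack:otel-collector
oc auth can-i get infrastructures.config.openshift.io --as=system:serviceaccount:tempostack:otel-collector&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;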
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_security_note"&gt;Security Note&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The above permissions in the ClusterRole follow the &lt;strong&gt;principle of least privilege&lt;/strong&gt;:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Only read operations are granted (no create, update, or delete)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Access is limited to specific resource types needed for metadata enrichment&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The ServiceAccount &lt;strong&gt;otel-collector&lt;/strong&gt; is dedicated to the collector and not shared with other applications&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_3_create_serviceaccount_for_central_opentelemetry_collector"&gt;Step 3: Create ServiceAccount for Central OpenTelemetry Collector&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The ServiceAccount used by the OpenTelemetry Collector. This is the service account that will be used to authenticate to the TempoStack instance. Thus, the Bindings created earlier to write into TempoStack and to read from Kubernetes will be required.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
In this article we are installing the central Collector into the same namespace as the TempoStack instance. However, you might want to install it in a different namespace to keep the namespaces separated. Keep an eye on possible Network Policies that might be required to allow the communication between the namespaces.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: v1
kind: ServiceAccount
metadata:
name: otel-collector
namespace: tempostack&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_4_deploy_central_collector"&gt;Step 4: Deploy Central Collector&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The central collector receives traces from local collectors, enriches them with Kubernetes metadata, and routes them to appropriate TempoStack tenants.
For the sake of simplicity, I have taken snippets from the whole Configuration manifest. At the end of this section you will find the &lt;a href="#_complete_opentelemetry_collector_manifest"&gt;whole manifest&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_basic_configuration"&gt;Basic Configuration&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The most basic settings of the OpenTelemetry Collector are the number of replicas, the ServiceAccount to use, and the deployment mode.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The Collector can be deployed in one of the following modes:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Deployment&lt;/strong&gt; (default) - Creates a Deployment with the given number of replicas.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;StatefulSet&lt;/strong&gt; - Creates a StatefulSet with the given number of replicas. Useful for stateful workloads, for example when using the Collector’s File Storage Extension or Tail Sampling Processor.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;DaemonSet&lt;/strong&gt; - Creates a DaemonSet that runs one Collector pod per node. Useful for scraping telemetry data from every node, for example by using the Collector’s Filelog Receiver to read container logs.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Sidecar&lt;/strong&gt; - Injects the Collector as a sidecar into the pod. Useful for accessing log files inside a container, for example by using the Collector’s Filelog Receiver and a shared volume such as emptyDir.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In the examples in this series of articles we will use the &lt;strong&gt;deployment&lt;/strong&gt; and &lt;strong&gt;sidecar&lt;/strong&gt; modes.&lt;/p&gt;
&lt;/div&gt;
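&lt;div class="paragraph"&gt;
&lt;p&gt;As a quick sketch of the sidecar mode (the resource names here are hypothetical): the Operator injects a sidecar-mode Collector into pods that reference it via the &lt;code&gt;sidecar.opentelemetry.io/inject&lt;/code&gt; annotation.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel-sidecar
  namespace: team-a
spec:
  mode: sidecar
  config:
    [...]
---
# Pods opt in to the injection via an annotation on the pod template:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: team-a
spec:
  template:
    metadata:
      annotations:
        sidecar.opentelemetry.io/inject: &amp;#34;otel-sidecar&amp;#34;
    [...]&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;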
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: tempostack
spec:
  mode: deployment &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
  replicas: 1 &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
  serviceAccount: otel-collector &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
  [...]&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The deployment mode to use.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The number of replicas to use.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The ServiceAccount to use, created in the previous step.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_receivers"&gt;Receivers&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The Receivers are the components that &lt;strong&gt;receive&lt;/strong&gt; traces from the local collectors. Receivers accept data in a specified format and translate it into the Collector’s internal format. In our example we want to receive traces from the local collectors, so we are using the &lt;strong&gt;otlp&lt;/strong&gt; Receiver, which collects traces, metrics, and logs using the OpenTelemetry Protocol (OTLP).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The easiest configuration is:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;  receivers:
    otlp: &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317 &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
        http:
          endpoint: 0.0.0.0:4318 &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The name of the receiver.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The gRPC endpoint to listen on.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The HTTP endpoint to listen on.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Besides the otlp receiver, there are other receivers available. For example:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Jaeger&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Kubernetes Object Receiver&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Kubelet Stats Receiver&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Prometheus Receiver&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Filelog Receiver&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Journald Receiver&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Kubernetes Events Receiver&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Kubernetes Cluster Receiver&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;OpenCensus Receiver&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Zipkin Receiver&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Kafka Receiver&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Please check the &lt;a href="https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/red_hat_build_of_opentelemetry/index#otel-collector-receivers_otel-configuration-of-otel-collector" target="_blank" rel="noopener"&gt;OpenShift OTEL Receivers&lt;/a&gt; documentation for more details.&lt;/p&gt;
&lt;/div&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
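&lt;div class="paragraph"&gt;
&lt;p&gt;For example, a Collector that additionally accepts Jaeger and Zipkin traffic could declare the following receivers. This is only a sketch using the receivers’ default ports; remember that a receiver is only active once it is referenced in a pipeline.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
    # Accept spans from Jaeger clients over HTTP (Thrift)
    jaeger:
      protocols:
        thrift_http:
          endpoint: 0.0.0.0:14268
    # Accept spans in Zipkin format
    zipkin:
      endpoint: 0.0.0.0:9411&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;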
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_processors"&gt;Processors&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The Processors are the components that &lt;strong&gt;process&lt;/strong&gt; the data after it is received and before it is exported. Processors are completely optional, but they are useful to transform, enrich, or filter traces.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The order of processors matters.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The example configuration uses the following processors:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;k8sattributes&lt;/strong&gt;: Adds Kubernetes metadata (namespace, pod, labels) to the traces.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;resourcedetection&lt;/strong&gt;: Detects OpenShift/Kubernetes environment information.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;memory_limiter&lt;/strong&gt;: Periodically checks the Collector’s memory usage and pauses data processing when the soft memory limit is reached.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;batch&lt;/strong&gt;: Batches the traces for efficiency. This is a very important processor to improve the performance of the Collector.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Additional processors are available. Please check the &lt;a href="https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/red_hat_build_of_opentelemetry/index#otel-collector-processors" target="_blank" rel="noopener"&gt;OpenShift OTEL Processors&lt;/a&gt; documentation for more details.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Some processors require additional ClusterRole configuration.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
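&lt;div class="paragraph"&gt;
&lt;p&gt;For example, the &lt;strong&gt;k8sattributes&lt;/strong&gt; processor needs read access to pods, namespaces, and replicasets. The following is a sketch of the permissions it typically requires, bound to the &lt;strong&gt;otel-collector&lt;/strong&gt; ServiceAccount; the ClusterRole name is an assumption for this illustration.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector-k8sattributes
rules:
  # k8sattributes looks up pod and namespace metadata
  - apiGroups: [&amp;#34;&amp;#34;]
    resources: [pods, namespaces]
    verbs: [get, watch, list]
  # and resolves deployment names via replicasets
  - apiGroups: [apps]
    resources: [replicasets]
    verbs: [get, watch, list]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector-k8sattributes
subjects:
  - kind: ServiceAccount
    name: otel-collector
    namespace: tempostack
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otel-collector-k8sattributes&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;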
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;  # Processors - enrich and batch traces
  processors:
    # Add Kubernetes metadata
    k8sattributes: {} &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
    # Detect OpenShift/K8s environment info
    resourcedetection: &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
      detectors:
        - openshift
      timeout: 2s
    # Memory protection
    memory_limiter: &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
      check_interval: 1s
      limit_percentage: 75
      spike_limit_percentage: 15
    # Batch for efficiency
    batch: &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
      send_batch_size: 10000
      timeout: 10s&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The k8sattributes processor to add Kubernetes metadata to the traces.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The resourcedetection processor to detect OpenShift/K8s environment info.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The memory_limiter processor to protect the Collector’s memory usage.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The batch processor to batch the traces for efficiency.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_connectors"&gt;Connectors&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;A Connector joins two pipelines together. It consumes data as an exporter at the end of one pipeline and emits data as a receiver at the start of another pipeline. It can consume and emit data of the same or different data type. It can generate and emit data to summarize the consumed data, or it can merely replicate or route data.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Several Connectors are available, for example:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Count Connector&lt;/strong&gt;: Counts trace spans, trace span events, metrics, metric data points, and log records.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Routing Connector&lt;/strong&gt;: Routes the traces to different pipelines.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Forward Connector&lt;/strong&gt;: Merges two pipelines of the same type.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Spanmetrics Connector&lt;/strong&gt;: Aggregates Request, Error, and Duration (R.E.D) OpenTelemetry metrics from span data.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The &lt;a href="https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/red_hat_build_of_opentelemetry/index#otel-collector-connectors" target="_blank" rel="noopener"&gt;OpenShift OTEL Connectors&lt;/a&gt; documentation lists all available Connectors and their configuration options.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In our example we are using the &lt;strong&gt;routing&lt;/strong&gt; Connector, which routes traces to different pipelines based on the namespace.
This ensures traces reach the correct tenant without any configuration in the local OpenTelemetry Collector. (In other words, the project cannot change this setting, because it is configured in the central OpenTelemetry Collector.)&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In this example traces from the namespace &amp;#34;team-a&amp;#34; will be routed to the pipeline &amp;#34;tenantA&amp;#34;, traces from the namespace &amp;#34;team-b&amp;#34; will be routed to the pipeline &amp;#34;tenantB&amp;#34;, and so on.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;  # Connectors - route traces to different pipelines
  connectors:
    routing/traces:
      default_pipelines: &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
        - traces/Default
      error_mode: ignore &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
      table:
        # Route team-a namespace to tenantA
        - statement: route() where attributes[&amp;#34;k8s.namespace.name&amp;#34;] == &amp;#34;team-a&amp;#34; &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
          pipelines:
            - traces/tenantA
        # Route team-b namespace to tenantB
        - statement: route() where attributes[&amp;#34;k8s.namespace.name&amp;#34;] == &amp;#34;team-b&amp;#34;
          pipelines:
            - traces/tenantB
        # Route team-c namespace to tenantC
        - statement: route() where attributes[&amp;#34;k8s.namespace.name&amp;#34;] == &amp;#34;team-c&amp;#34;
          pipelines:
            - traces/tenantC&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Destination pipelines for routing the telemetry data for which no routing condition is satisfied.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Error-handling mode&lt;/strong&gt;: Defines how the connector handles routing errors:
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;propagate&lt;/code&gt;: Logs an error and drops the payload (stops processing)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;ignore&lt;/code&gt;: Logs the error but continues attempting to match subsequent routing rules&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;silent&lt;/code&gt;: Same as &lt;code&gt;ignore&lt;/code&gt; but without logging&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Default: &lt;code&gt;propagate&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Route traces from namespace &lt;strong&gt;team-a&lt;/strong&gt; to the pipeline &lt;strong&gt;tenantA&lt;/strong&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_exporters"&gt;Exporters&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The Exporters are the components that &lt;strong&gt;export&lt;/strong&gt; the traces to a destination, translating them from the Collector’s internal format into the destination format. In our example we want to export the traces to the TempoStack instance. The X-Scope-OrgID header identifies the tenant and is sent to the TempoStack instance.
Authentication is done using the ServiceAccount token.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Many different Exporters are available. The &lt;a href="https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/red_hat_build_of_opentelemetry/index#otel-collector-exporters" target="_blank" rel="noopener"&gt;OpenShift OTEL Exporters&lt;/a&gt; documentation lists all available Exporters and their configuration options.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;For our tests we are using the &lt;strong&gt;otlp&lt;/strong&gt; Exporter, which will export using the OpenTelemetry Protocol (OTLP):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;  exporters:
    # Tenant A exporter
    otlp/tenantA: &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
      endpoint: tempo-simplest-gateway:8090 &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
      auth:
        authenticator: bearertokenauth
      headers:
        X-Scope-OrgID: tenantA &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
      tls:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
        insecure_skip_verify: true &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
        server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Protocol/name of the exporter.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The endpoint to export to. Here, the endpoint is the address of the TempoStack instance.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The tenant ID sent as the X-Scope-OrgID header.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Whether to skip certificate verification. I did not bother with certificates in this example.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The server name override to use. This was just for testing purposes and can be omitted.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
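&lt;div class="paragraph"&gt;
&lt;p&gt;The otlp exporter also supports optional retry and queueing settings, which can make the export to TempoStack more resilient against short outages. The values below are illustrative, not tuned recommendations:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;    otlp/tenantA:
      endpoint: tempo-simplest-gateway:8090
      [...]
      # Retry failed exports with exponential backoff
      retry_on_failure:
        enabled: true
        initial_interval: 5s
        max_elapsed_time: 300s
      # Buffer batches in memory while the backend is unavailable
      sending_queue:
        enabled: true
        queue_size: 1000&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;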
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_extensions"&gt;Extensions&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The Extensions extend the Collector capabilities. In our example we are using the &lt;strong&gt;bearertokenauth&lt;/strong&gt; Extension, which is used to authenticate to the TempoStack instance.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;  # Extensions - authentication
  extensions:
    bearertokenauth:
      filename: /var/run/secrets/kubernetes.io/serviceaccount/token&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_service"&gt;Service:&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Components are enabled by adding them into a &lt;strong&gt;Pipeline&lt;/strong&gt;. If a component is not configured in a pipeline, it is not enabled.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In this example we are using the following pipelines (snippets):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;traces/in&lt;/strong&gt;: The incoming traces pipeline.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;traces/tenantA&lt;/strong&gt;: The tenant A pipeline.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The &lt;strong&gt;traces/in&lt;/strong&gt; pipeline receives the incoming traces from the local collectors through the &lt;strong&gt;otlp&lt;/strong&gt; Receiver, processes them with the &lt;strong&gt;resourcedetection&lt;/strong&gt;, &lt;strong&gt;k8sattributes&lt;/strong&gt;, &lt;strong&gt;memory_limiter&lt;/strong&gt;, and &lt;strong&gt;batch&lt;/strong&gt; processors, and finally hands them to the &lt;strong&gt;routing/traces&lt;/strong&gt; Connector.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The &lt;strong&gt;routing/traces&lt;/strong&gt; Connector is used to route the traces to the correct tenant based on the namespace.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The &lt;strong&gt;traces/tenantA&lt;/strong&gt; pipeline receives the traces from &lt;strong&gt;routing/traces&lt;/strong&gt; and exports them via &lt;strong&gt;otlp/tenantA&lt;/strong&gt;, which sends everything to TempoStack with the header &lt;strong&gt;X-Scope-OrgID: tenantA&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;  # Service pipelines
  service:
    extensions:
      - bearertokenauth
    pipelines:
      # Incoming traces pipeline
      traces/in: &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
        receivers: &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
          - otlp
        processors: &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
          - resourcedetection
          - k8sattributes
          - memory_limiter
          - batch
        exporters: &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
          - routing/traces
      # Tenant A pipeline
      traces/tenantA: &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
        receivers: &lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;(6)&lt;/b&gt;
          - routing/traces
        exporters: &lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;(7)&lt;/b&gt;
          - otlp/tenantA&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The incoming traces pipeline. It takes the traces from the otlp receiver.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The receivers to use.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The processors to use.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The exporters to use. It is exporting the data to the routing connector.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The tenant A pipeline.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;6&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The receivers to use.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;7&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The exporters to use. Here, the exporter that sends the data to TempoStack.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The service automatically creates Kubernetes Services for the receivers. The Central Collector will be accessible at &lt;code&gt;otel-collector.tempostack.svc.cluster.local:4317&lt;/code&gt; (gRPC) and &lt;code&gt;:4318&lt;/code&gt; (HTTP) for local collectors to send traces.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
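&lt;div class="paragraph"&gt;
&lt;p&gt;A local collector in a team namespace would therefore forward its traces with an otlp exporter pointing at the central Collector’s Service. The following is only a sketch; TLS is simplified here with plain-text transport:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;    exporters:
      otlp:
        # Service created for the central Collector named &amp;#34;otel&amp;#34;
        endpoint: otel-collector.tempostack.svc.cluster.local:4317
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers:
            - otlp
          exporters:
            - otlp&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;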
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_data_flow_visualization"&gt;Data Flow Visualization&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To better understand how traces flow through the Central Collector, the following Mermaid diagram visualizes the complete journey from application to storage using &lt;strong&gt;team-a&lt;/strong&gt; as an example:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-mermaid hljs" data-lang="mermaid"&gt;---
title: &amp;#34;Central Collector Data Flow (Wrapped Layout)&amp;#34;
config:
  theme: &amp;#39;dark&amp;#39;
---
flowchart LR
%% --------------------------
%% Column 1: Local Collector
%% --------------------------
subgraph local[&amp;#34;(team-a namespace)&amp;#34;]
app[&amp;#34;Application&amp;lt;br/&amp;gt;(team-a)&amp;#34;]
end
%% --------------------------
%% Column 2: Central Collector
%% --------------------------
subgraph central[&amp;#34;Central Collector (tempostack namespace)&amp;#34;]
%% We force this specific column to stack vertically (Top-Bottom)
direction TB
%% --- ROW 1: Receiver + Processors ---
subgraph row1[&amp;#34; &amp;#34;]
direction LR
receiver[&amp;#34;OTLP Receiver&amp;lt;br/&amp;gt;Port: 4317/4318&amp;#34;]
subgraph pipeline_in[&amp;#34;traces/in Processors&amp;#34;]
proc1[&amp;#34;resourcedetection&amp;lt;br/&amp;gt;(Detect OpenShift)&amp;#34;]
proc2[&amp;#34;k8sattributes&amp;lt;br/&amp;gt;(Add K8s metadata)&amp;#34;]
proc3[&amp;#34;memory_limiter&amp;lt;br/&amp;gt;(Protect memory)&amp;#34;]
proc4[&amp;#34;batch&amp;lt;br/&amp;gt;(Batch traces)&amp;#34;]
end
end
%% --- ROW 2: Connector + Tenant Pipeline ---
subgraph row2[&amp;#34; &amp;#34;]
direction LR
exporter_routing[&amp;#34;Exporter: to connector&amp;#34;]
connector[&amp;#34;Export to routing/traces Connector&amp;lt;br/&amp;gt;(Route by namespace)&amp;#34;]
subgraph pipeline_tenant[&amp;#34;Pipeline: traces/tenantA&amp;#34;]
receiver_tenant[&amp;#34;Receiver:&amp;lt;br/&amp;gt;routing/traces&amp;#34;]
exporter_tenant[&amp;#34;Exporter:&amp;lt;br/&amp;gt;otlp/tenantA&amp;#34;]
end
end
end
%% --------------------------
%% Column 3: TempoStack
%% --------------------------
subgraph tempo[&amp;#34;TempoStack&amp;#34;]
gateway[&amp;#34;Tempo Gateway&amp;lt;br/&amp;gt;(X-Scope-OrgID: tenantA)&amp;#34;]
storage[&amp;#34;S3 Storage&amp;lt;br/&amp;gt;(tenantA traces)&amp;#34;]
end
%% --------------------------
%% Connections
%% --------------------------
app --&amp;gt;|&amp;#34;OTLP traces&amp;#34;| receiver
receiver --&amp;gt; proc1 --&amp;gt; proc2 --&amp;gt; proc3 --&amp;gt; proc4
%% The Wrap: Connect end of Row 1 to start of Row 2
proc4 --&amp;gt; exporter_routing
exporter_routing --&amp;gt; connector
connector --&amp;gt;|&amp;#34;Route by namespace=team-a&amp;lt;br/&amp;gt;→ tenantA&amp;#34;| receiver_tenant
receiver_tenant --&amp;gt; exporter_tenant
exporter_tenant --&amp;gt;|&amp;#34;OTLP + bearer token&amp;lt;br/&amp;gt;Header: X-Scope-OrgID&amp;#34;| gateway
gateway --&amp;gt; storage
%% --------------------------
%% Styles
%% --------------------------
classDef appStyle fill:#2f652a,stroke:#2f652a,stroke-width:2px
classDef receiverStyle fill:#425cc6,stroke:#425cc6,stroke-width:2px
classDef processorStyle fill:#4a90e2,stroke:#4a90e2,stroke-width:2px
classDef connectorStyle fill:#906403,stroke:#906403,stroke-width:2px
classDef exporterStyle fill:#7b4397,stroke:#7b4397,stroke-width:2px
classDef tempoStyle fill:#d35400,stroke:#d35400,stroke-width:2px
class app appStyle
class receiver,receiver_tenant receiverStyle
class proc1,proc2,proc3,proc4 processorStyle
class connector connectorStyle
class exporter_tenant exporterStyle
class exporter_routing exporterStyle
class gateway,storage tempoStyle
%% Hide the structural boxes for the rows so they look seamless
style row1 fill:none,stroke:none
style row2 fill:none,stroke:none&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;&lt;strong&gt;Key Points in the Flow:&lt;/strong&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Application&lt;/strong&gt; in team-a namespace sends traces to local collector&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Local Collector&lt;/strong&gt; forwards to &lt;strong&gt;Central Collector’s OTLP receiver&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Pipeline traces/in&lt;/strong&gt; processes the traces sequentially:&lt;/p&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Detects OpenShift environment info&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Adds Kubernetes metadata (namespace, pod, labels)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Applies memory limits&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Batches traces for efficiency&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Routing Connector&lt;/strong&gt; examines the namespace attribute and routes to the correct tenant pipeline&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Pipeline traces/tenantA&lt;/strong&gt; receives from connector and exports to TempoStack&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Exporter&lt;/strong&gt; adds authentication (bearer token) and tenant ID header (X-Scope-OrgID: tenantA)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;TempoStack&lt;/strong&gt; receives and stores traces in the appropriate tenant storage&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This architecture ensures complete isolation between tenants while maintaining a single, centralized collection point.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_complete_opentelemetry_collector_manifest"&gt;Complete OpenTelemetry Collector Manifest&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Let’s put everything together in the complete OpenTelemetry Collector Manifest. The following defines the Central OpenTelemetry Collector:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;mode&lt;/strong&gt;: The deployment mode to use.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;replicas&lt;/strong&gt;: The number of replicas to use.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;serviceAccount&lt;/strong&gt;: The ServiceAccount to use, created in the previous step.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;config&lt;/strong&gt;: The configuration of the OpenTelemetry Collector.&lt;/p&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;receivers&lt;/strong&gt;: The receivers to use.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;processors&lt;/strong&gt;: The processors to use.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;connectors&lt;/strong&gt;: The connectors to use.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;exporters&lt;/strong&gt;: The exporters to use.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;extensions&lt;/strong&gt;: The extensions to use.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;service&lt;/strong&gt;: The service section that wires receivers, processors, connectors, and exporters into pipelines.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
name: otel
namespace: tempostack
spec:
mode: deployment
replicas: 1
serviceAccount: otel-collector
config:
# Receivers - accept traces from local collectors
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
# Processors - enrich and batch traces
processors:
# Add Kubernetes metadata
k8sattributes: {}
# Detect OpenShift/K8s environment info
resourcedetection:
detectors:
- openshift
timeout: 2s
# Memory protection
memory_limiter:
check_interval: 1s
limit_percentage: 75
spike_limit_percentage: 15
# Batch for efficiency
batch:
send_batch_size: 10000
timeout: 10s
# Connectors - route traces to different pipelines
connectors:
routing/traces:
default_pipelines:
- traces/Default
error_mode: ignore
table:
# Route team-a namespace to tenantA
- statement: route() where attributes[&amp;#34;k8s.namespace.name&amp;#34;] == &amp;#34;team-a&amp;#34;
pipelines:
- traces/tenantA
# Route team-b namespace to tenantB
- statement: route() where attributes[&amp;#34;k8s.namespace.name&amp;#34;] == &amp;#34;team-b&amp;#34;
pipelines:
- traces/tenantB
# Exporters - send to TempoStack
exporters:
# Default tenant exporter
otlp/Default:
endpoint: tempo-simplest-gateway:8090
auth:
authenticator: bearertokenauth
headers:
X-Scope-OrgID: dev
tls:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
insecure_skip_verify: true
server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local
# Tenant A exporter
otlp/tenantA:
endpoint: tempo-simplest-gateway:8090
auth:
authenticator: bearertokenauth
headers:
X-Scope-OrgID: tenantA
tls:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
insecure_skip_verify: true
server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local
# Tenant B exporter
otlp/tenantB:
endpoint: tempo-simplest-gateway:8090
auth:
authenticator: bearertokenauth
headers:
X-Scope-OrgID: tenantB
tls:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
insecure_skip_verify: true
server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local
# Extensions - authentication
extensions:
bearertokenauth:
filename: /var/run/secrets/kubernetes.io/serviceaccount/token
# Service pipelines
service:
extensions:
- bearertokenauth
pipelines:
# Incoming traces pipeline
traces/in:
receivers:
- otlp
processors:
- resourcedetection
- k8sattributes
- memory_limiter
- batch
exporters:
- routing/traces
# Default tenant pipeline
traces/Default:
receivers:
- routing/traces
exporters:
- otlp/Default
# Tenant A pipeline
traces/tenantA:
receivers:
- routing/traces
exporters:
- otlp/tenantA
# Tenant B pipeline
traces/tenantB:
receivers:
- routing/traces
exporters:
- otlp/tenantB&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
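&lt;div class="paragraph"&gt;
&lt;p&gt;To try the manifest quickly outside of GitOps, it can be applied and verified with standard &lt;code&gt;oc&lt;/code&gt; commands (a sketch; the file name &lt;code&gt;otel-collector.yaml&lt;/code&gt; is an assumption):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Apply the Collector manifest (file name is an example)
oc apply -f otel-collector.yaml

# Verify that the Operator reconciled the resource
oc get opentelemetrycollector otel -n tempostack&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;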
&lt;hr/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_gitops_deployment"&gt;GitOps Deployment&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;While the manifest above is fine for quick tests, a proper GitOps deployment is preferable. I have created a Helm Chart and GitOps configuration that will:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Deploy the OTEL Operator&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure the Central Collector instance&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The following sources will be used:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://charts.stderr.at/" target="_blank" rel="noopener"&gt;Helm Repository&lt;/a&gt; - to fetch the Helm Chart for the OTEL Operator including required Sub-Charts.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/tjungbauer/openshift-clusterconfig-gitops/tree/main/clusters/management-cluster/setup-otel-operator" target="_blank" rel="noopener"&gt;Setup OTEL Operator&lt;/a&gt; - To deploy and configure the OpenTelemetry Operator.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Feel free to clone or use whatever you need.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The following Sub-Charts are used:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/tjungbauer/helm-charts/tree/main/charts/helper-operator" target="_blank" rel="noopener"&gt;helper-operator&lt;/a&gt; (version ~1.0.18) - Installs the OpenTelemetry Operator&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/tjungbauer/helm-charts/tree/main/charts/helper-status-checker" target="_blank" rel="noopener"&gt;helper-status-checker&lt;/a&gt; (version ~4.0.0) - Verifies the status of the OpenTelemetry Operator&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/tjungbauer/helm-charts/tree/main/charts/rh-build-of-opentelemetry" target="_blank" rel="noopener"&gt;opentelemetry-collector&lt;/a&gt; (version ~1.0.0) - Creates the Central OpenTelemetry Collector instance&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/tjungbauer/helm-charts/tree/main/charts/tpl" target="_blank" rel="noopener"&gt;tpl&lt;/a&gt; (version ~1.0.0) - Template Library&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The following Argo CD Application will deploy the OTEL Operator:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: setup-otel-operator
namespace: openshift-gitops
spec:
destination:
name: in-cluster &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
namespace: openshift-opentelemetry-operator &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
info:
- name: Description
value: ApplicationSet that Deploys on Management Cluster Configuration (using Git Generator)
project: in-cluster &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
source:
path: clusters/management-cluster/setup-otel-operator &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
repoURL: &amp;#39;https://github.com/tjungbauer/openshift-clusterconfig-gitops&amp;#39; &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
targetRevision: main
syncPolicy:
retry:
backoff:
duration: 5s
factor: 2
maxDuration: 3m
limit: 5&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Target cluster, here the local cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Namespace on the target cluster where the Operator will be installed.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Project of the target cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Path inside the Git repository&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;URL to the Git repository&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will create the Argo CD Application that can be synchronized with the cluster:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/observability/images/observability/otel-argocd.png?width=640px" alt="OTEL Deployment via Argo CD"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_complete_values_file"&gt;Complete values file&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To see the whole file expand the code:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;---
otel: &amp;amp;channel-otel stable
otel-namespace: &amp;amp;otel-namespace openshift-opentelemetry-operator
######################################
# SUBCHART: helper-operator
# Operators that shall be installed.
######################################
helper-operator:
operators:
opentelemetry-product:
enabled: true
namespace:
name: *otel-namespace
create: true
subscription:
channel: *channel-otel
approval: Automatic
operatorName: opentelemetry-product
source: redhat-operators
sourceNamespace: openshift-marketplace
operatorgroup:
create: true
notownnamespace: true
########################################
# SUBCHART: helper-status-checker
# Verify the status of a given operator.
########################################
helper-status-checker:
enabled: true
approver: false
checks:
- operatorName: opentelemetry-product
namespace:
name: *otel-namespace
syncwave: 1
serviceAccount:
name: &amp;#34;status-checker-otel&amp;#34;
rh-build-of-opentelemetry:
#########################################################################################
# namespace ... disabled here, since we deployed it via Tempo already
#########################################################################################
namespace:
name: tempostack
create: false
#########################################################################################
# OPENTELEMETRY COLLECTOR - Production Configuration
#########################################################################################
collector:
enabled: true
name: otel
mode: deployment
replicas: 1
serviceAccount: otel-collector
managementState: managed
resources: {}
tolerations: []
config:
connectors:
routing/traces:
default_pipelines:
- traces/Default
error_mode: ignore
table:
- pipelines:
- traces/tenantA
statement: &amp;#39;route() where attributes[&amp;#34;k8s.namespace.name&amp;#34;] == &amp;#34;team-a&amp;#34;&amp;#39;
- pipelines:
- traces/tenantB
statement: &amp;#39;route() where attributes[&amp;#34;k8s.namespace.name&amp;#34;] == &amp;#34;team-b&amp;#34;&amp;#39;
- pipelines:
- traces/tenantC
statement: &amp;#39;route() where attributes[&amp;#34;k8s.namespace.name&amp;#34;] == &amp;#34;team-c&amp;#34;&amp;#39;
- pipelines:
- traces/tenantD
statement: &amp;#39;route() where attributes[&amp;#34;k8s.namespace.name&amp;#34;] == &amp;#34;team-d&amp;#34;&amp;#39;
- pipelines:
- traces/tenantX
statement: &amp;#39;route() where attributes[&amp;#34;k8s.namespace.name&amp;#34;] == &amp;#34;mockbin-1&amp;#34;&amp;#39;
receivers:
# OTLP receivers for traces, metrics, and logs
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
# Jaeger receiver (if migrating from Jaeger)
jaeger:
protocols:
grpc:
endpoint: 0.0.0.0:14250
thrift_http:
endpoint: 0.0.0.0:14268
processors:
batch:
send_batch_size: 10000
timeout: 10s
k8sattributes: {}
memory_limiter:
check_interval: 1s
limit_percentage: 75
spike_limit_percentage: 15
resourcedetection:
detectors:
- openshift
timeout: 2s
exporters:
otlp/tenantX:
auth:
authenticator: bearertokenauth
endpoint: &amp;#39;tempo-simplest-gateway:8090&amp;#39;
headers:
X-Scope-OrgID: tenantX
tls:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
insecure: true
insecure_skip_verify: true
server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local
otlp:
auth:
authenticator: bearertokenauth
endpoint: &amp;#39;tempo-simplest-gateway:8090&amp;#39;
headers:
X-Scope-OrgID: prod
tls:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
insecure: true
insecure_skip_verify: true
server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local
otlp/tenantA:
auth:
authenticator: bearertokenauth
endpoint: &amp;#39;tempo-simplest-gateway:8090&amp;#39;
headers:
X-Scope-OrgID: tenantA
tls:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
insecure: true
insecure_skip_verify: true
server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local
otlp/tenantB:
auth:
authenticator: bearertokenauth
endpoint: &amp;#39;tempo-simplest-gateway:8090&amp;#39;
headers:
X-Scope-OrgID: tenantB
tls:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
insecure: true
insecure_skip_verify: true
server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local
otlp/Default:
auth:
authenticator: bearertokenauth
endpoint: &amp;#39;tempo-simplest-gateway:8090&amp;#39;
headers:
X-Scope-OrgID: dev
tls:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
insecure: true
insecure_skip_verify: true
server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local
extensions:
bearertokenauth:
filename: /var/run/secrets/kubernetes.io/serviceaccount/token
service:
extensions:
- bearertokenauth
pipelines:
traces/Default:
exporters:
- otlp/Default
receivers:
- routing/traces
traces/in:
exporters:
- routing/traces
processors:
- resourcedetection
- k8sattributes
- memory_limiter
- batch
receivers:
- otlp
traces/tenantA:
exporters:
- otlp/tenantA
receivers:
- routing/traces
traces/tenantB:
exporters:
- otlp/tenantB
receivers:
- routing/traces&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_the_central_opentelemetry_collector"&gt;The Central OpenTelemetry Collector&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;With this deployment a single Pod (because we have set the replica count to 1) is running in the namespace &lt;strong&gt;tempostack&lt;/strong&gt; with the name &lt;strong&gt;otel-collector&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get pods -n tempostack | grep otel-col
otel-collector-75f8794dc6-rbq26 1/1 Running 0 52m&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
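&lt;div class="paragraph"&gt;
&lt;p&gt;If traces do not show up as expected, the Collector logs are the first place to look, for example:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc logs deployment/otel-collector -n tempostack&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;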
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_stay_tuned"&gt;Stay Tuned&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The next article will cover deploying an example application (Mockbin) and configuring &lt;strong&gt;Local OpenTelemetry Collectors&lt;/strong&gt; in application namespaces to send traces to this Central Collector. We’ll also demonstrate how the namespace-based routing automatically directs traces to the correct TempoStack tenants.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>The Hitchhiker's Guide to Observability - Example Applications - Part 4</title><link>https://blog.stderr.at/openshift-platform/observability/observability/2025-11-26-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part4/</link><pubDate>Wed, 26 Nov 2025 00:00:00 +0000</pubDate><guid>https://blog.stderr.at/openshift-platform/observability/observability/2025-11-26-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part4/</guid><description>&lt;div class="paragraph"&gt;
&lt;p&gt;With the &lt;a href="https://blog.stderr.at/day-2/observability/2025-11-23-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part1/"&gt;architecture defined&lt;/a&gt;, &lt;a href="https://blog.stderr.at/day-2/observability/2025-11-24-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part2/"&gt;TempoStack deployed&lt;/a&gt;, and the &lt;a href="https://blog.stderr.at/day-2/observability/2025-11-25-hitchhikers-guide-to-distributed-tracing-with-opentelemetry-and-tempostack-part3/"&gt;Central Collector configured&lt;/a&gt;, we’re now ready to complete the distributed tracing pipeline. It’s time to deploy real applications and see traces flowing through the entire system!&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In this fourth installment, we’ll focus on the &lt;strong&gt;application layer&lt;/strong&gt; - deploying &lt;strong&gt;Local OpenTelemetry Collectors&lt;/strong&gt; in team namespaces and configuring example applications to generate traces. You’ll see how applications automatically get enriched with Kubernetes metadata, how namespace-based routing directs traces to the correct TempoStack tenants, and how the entire two-tier architecture comes together.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;We’ll deploy an example application (&lt;strong&gt;Mockbin&lt;/strong&gt;) into two different namespaces. Together with the application, we will deploy a local OpenTelemetry Collector. One with sidecar mode, one with deployment mode.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You’ll learn how to configure local collectors to add namespace attributes, forward traces to the central collector, and verify that traces appear in the UI with full Kubernetes context.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;By the end of this article, you’ll have a fully functional distributed tracing system with applications generating traces, local collectors forwarding them, and the central collector routing everything to the appropriate TempoStack tenants. Let’s bring this architecture to life!&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_application_mockbin_step_by_step_implementation"&gt;Application Mockbin - Step-by-Step Implementation&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_prerequisites"&gt;Prerequisites&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Before starting, ensure you have the following systems or Operators installed (the Operator versions used in this article are listed in parentheses):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;OpenShift or Kubernetes cluster (OpenShift v4.20)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Red Hat build of OpenTelemetry installed (v0.135.0-1)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Tempo Operator installed (v0.18.0-2)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;S3-compatible storage (for TempoStack, based on OpenShift Data Foundation)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Cluster Observability Operator (v1.3.0) (for now, this Operator is only used to extend the OpenShift UI with the tracing UI)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_mockbin_application_introduction"&gt;Mockbin Application Introduction&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The Mockbin application was created and provided by &lt;a href="https://www.linkedin.com/in/michaela-lang-900603b9/" target="_blank" rel="noopener"&gt;Michaela Lang&lt;/a&gt;, who does a lot of testing with Service Mesh and Observability. I forked the source code to &lt;a href="https://github.com/tjungbauer/mockbin" target="_blank" rel="noopener"&gt;Mockbin&lt;/a&gt; and built an image for testing: &lt;a href="https://quay.io/repository/tjungbau/mockbin?tab=tags" target="_blank" rel="noopener"&gt;Mockbin Image&lt;/a&gt;. Mockbin is a simple, OpenTelemetry-instrumented HTTP testing service built with Python’s &lt;code&gt;aiohttp&lt;/code&gt; async framework. It serves as both a demonstration tool and a practical testing service for distributed tracing infrastructure.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_why_mockbin"&gt;Why Mockbin?&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Unlike simple &amp;#34;hello world&amp;#34; examples, Mockbin demonstrates &lt;strong&gt;real-world OpenTelemetry implementation patterns&lt;/strong&gt; that you can apply to your own Python applications. It shows how to properly instrument an async Python web service with full observability capabilities.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_key_features"&gt;Key Features&lt;/h3&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Full OpenTelemetry Integration&lt;/strong&gt;: Implements the complete observability stack - traces, metrics, and logs&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Automatic Instrumentation&lt;/strong&gt;: Uses OpenTelemetry’s auto-instrumentation for &lt;code&gt;aiohttp&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Manual Span Creation&lt;/strong&gt;: Demonstrates how to create custom spans with attributes and events&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Trace Context Propagation&lt;/strong&gt;: Supports both W3C Trace Context (&lt;code&gt;traceparent&lt;/code&gt;) and Zipkin B3 (&lt;code&gt;x-b3-*&lt;/code&gt;) formats&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;OTLP Export&lt;/strong&gt;: Sends all telemetry data via OpenTelemetry Protocol (OTLP) to collectors&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Prometheus Metrics with Exemplars&lt;/strong&gt;: Links high-cardinality traces to aggregated metrics&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Structured Logging via OTLP&lt;/strong&gt;: Logs are automatically correlated with traces&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Nested Spans&lt;/strong&gt;: Creates parent-child span relationships for complex operations&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Exception Recording&lt;/strong&gt;: Captures and records exceptions with full stack traces&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_technical_architecture"&gt;Technical Architecture&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The application consists of several Python modules:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;proxy.py&lt;/code&gt;&lt;/strong&gt;: Main application with HTTP endpoint handlers&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;tracing.py&lt;/code&gt;&lt;/strong&gt;: OpenTelemetry configuration and trace context management&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;promstats.py&lt;/code&gt;&lt;/strong&gt;: Prometheus metrics collection with trace exemplars&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;logfilter.py&lt;/code&gt;&lt;/strong&gt;: OTLP logging with automatic trace correlation&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_http_endpoints_for_testing"&gt;HTTP Endpoints for Testing&lt;/h3&gt;
&lt;table class="tableblock frame-all grid-all fit-content"&gt;
&lt;colgroup&gt;
&lt;col/&gt;
&lt;col/&gt;
&lt;col/&gt;
&lt;/colgroup&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th class="tableblock halign-left valign-top"&gt;Endpoint&lt;/th&gt;
&lt;th class="tableblock halign-left valign-top"&gt;Purpose&lt;/th&gt;
&lt;th class="tableblock halign-left valign-top"&gt;Use Case&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;/&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Echo service - returns headers and environment variables&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Basic trace generation&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;/logging/*&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Generates multiple log entries linked to trace&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Demonstrate trace-to-log correlation&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;/proxy/*&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Chains multiple HTTP requests with nested spans&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Test distributed trace propagation&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;/exception/{status}&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Intentionally raises exception and returns custom status code&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Test error trace capture&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;/webhook/alert-receiver&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Converts AlertManager webhooks to traces&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Alert-to-trace correlation&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;/outlier&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Enables/disables failure mode (PUT/DELETE)&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Test Service Mesh outlier detection&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;/metrics&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Exposes Prometheus metrics&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Metrics scraping&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;/health&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Health check endpoint&lt;/p&gt;&lt;/td&gt;
&lt;td class="tableblock halign-left valign-top"&gt;&lt;p class="tableblock"&gt;Kubernetes liveness/readiness probes&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
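&lt;div class="paragraph"&gt;
&lt;p&gt;Once the application is reachable, these endpoints can be exercised with &lt;code&gt;curl&lt;/code&gt; to generate traces. The following sketch assumes a Route named &lt;code&gt;mockbin&lt;/code&gt; in the &lt;code&gt;team-a&lt;/code&gt; namespace; adapt the names to your deployment:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# Route name and namespace are assumptions
MOCKBIN_URL=https://$(oc get route mockbin -n team-a -o jsonpath={.spec.host})

# Basic trace generation
curl -k $MOCKBIN_URL/

# Error trace with a custom status code
curl -k $MOCKBIN_URL/exception/503

# Multiple log entries correlated with one trace
curl -k $MOCKBIN_URL/logging/test&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;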
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_opentelemetry_implementation_highlights"&gt;OpenTelemetry Implementation Highlights&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The application demonstrates several critical OpenTelemetry patterns:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="olist arabic"&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Middleware-Based Instrumentation:&lt;/strong&gt;
Every HTTP request is automatically wrapped in a trace span via &lt;code&gt;aiohttp&lt;/code&gt; middleware, capturing request metadata (method, URL, headers, etc.) as span attributes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Context Propagation:&lt;/strong&gt;
Incoming requests have their trace context extracted from HTTP headers, and outgoing requests inject the context to maintain trace continuity across service boundaries.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Resource Attributes:&lt;/strong&gt;
Service metadata (name, namespace, version) is attached to all telemetry data for proper identification in the backend.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Manual Span Creation:&lt;/strong&gt;
Shows how to create nested spans for complex operations, add custom attributes, record events, and set span status (OK/ERROR).&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_configuration_via_environment_variables"&gt;Configuration via Environment Variables&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The application is configured entirely through environment variables, making it easy to adapt to different environments, for example:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;# OpenTelemetry Configuration
OTEL_EXPORTER_OTLP_ENDPOINT=http://otelcol-collector:4317
OTEL_EXPORTER_OTLP_LOGS_ENDPOINT=http://otelcol-collector:4317
OTEL_SPAN_SERVICE=mockbin&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
We will test the OpenTelemetry Collector in deployment mode and in sidecar mode. When using sidecar mode, the Operator automatically injects the OpenTelemetry Collector as a sidecar container into the pod.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
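&lt;div class="paragraph"&gt;
&lt;p&gt;For sidecar mode, the injection is requested via an annotation on the pod template; the Operator then adds the Collector container automatically. The following Deployment fragment is a sketch; the collector name &lt;code&gt;otel-sidecar&lt;/code&gt; is an assumption:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;# Deployment fragment - pod template annotation only
spec:
  template:
    metadata:
      annotations:
        # Name of a sidecar-mode OpenTelemetryCollector in the same namespace
        # (&amp;#34;true&amp;#34; works as well if there is exactly one)
        sidecar.opentelemetry.io/inject: &amp;#34;otel-sidecar&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;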
&lt;hr/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_deploy_mockbin_application_opentelemetry_collector_team_a"&gt;Deploy Mockbin Application &amp;amp; OpenTelemetry Collector - team-a&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Let’s start with the deployment of the Mockbin Application and the OpenTelemetry Collector in the team-a namespace. Here we will use the OpenTelemetry Collector in deployment mode.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
The deployment in sidecar mode will be done in the team-b namespace and is almost identical; the only differences are that we use a different mode in the OpenTelemetryCollector resource and that we do not need to take care of the environment variables for the collector ourselves.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_1a_use_kustomize_manually"&gt;Step 1a: Use Kustomize Manually&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You can deploy the Mockbin Application and the OpenTelemetry Collector manually using Kustomize or with a GitOps tool like Argo CD (preferred).
In the repository &lt;a href="https://github.com/tjungbauer/mockbin" target="_blank" rel="noopener"&gt;Mockbin&lt;/a&gt; you will find the sub-folder &lt;code&gt;k8s&lt;/code&gt; containing the Kustomize files for the Mockbin Application and the OpenTelemetry Collector.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;You can manually deploy the application:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_step_1a_1_create_namespace_team_a"&gt;Step 1a.1: Create Namespace team-a&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;First create the namespace team-a.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: v1
kind: Namespace
metadata:
name: team-a &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Namespace name&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_step_1a_2_deploy_the_mockbin_application"&gt;Step 1a.2: Deploy the Mockbin Application&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Use the following command to deploy the application (assuming you cloned the repository to your local machine and changed into the &lt;code&gt;k8s&lt;/code&gt; folder):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;kustomize build . | kubectl apply -f -&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will deploy the required resources:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;ServiceAccount&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Service&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Route&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Deployment using the image: quay.io/tjungbau/mockbin:1.8.1&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
It is possible to customize the deployment by modifying the &lt;code&gt;kustomization.yaml&lt;/code&gt; file. For example, you can change the image or switch from a Route to an Ingress object.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
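&lt;div class="paragraph"&gt;
&lt;p&gt;As a sketch, an &lt;code&gt;images&lt;/code&gt; override in &lt;code&gt;kustomization.yaml&lt;/code&gt; could pin a specific image tag (shown here with the tag the base already uses; replace it with the tag you want):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;images:
  - name: quay.io/tjungbau/mockbin
    newTag: &amp;#34;1.8.1&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;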
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_1b_use_argocd"&gt;Step 1b: Use ArgoCD&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The above can also be achieved using Argo CD, which allows a more declarative approach. Create the following Argo CD Application:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: mockbin-team-a &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
  namespace: openshift-gitops &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
spec:
  destination:
    namespace: team-a &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
    server: &amp;#39;https://kubernetes.default.svc&amp;#39; &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
  info:
    - name: Description
      value: Deploy Mockbin in team-a namespace
  project: in-cluster
  source:
    path: k8s/ &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
    repoURL: &amp;#39;https://github.com/tjungbauer/mockbin&amp;#39; &lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;(6)&lt;/b&gt;
    targetRevision: main &lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;(7)&lt;/b&gt;
  syncPolicy:
    syncOptions:
      - CreateNamespace=true &lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;(8)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Name of the Argo CD Application resource.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Namespace of the ArgoCD Application resource, here it is &lt;strong&gt;openshift-gitops&lt;/strong&gt;. In your environment it might be different.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Namespace of the target application&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Kubernetes API server URL. Here it is the local cluster where Argo CD is hosted.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Path to the Kustomize files&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;6&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;URL to the Git repository&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;7&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Target revision&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;8&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Create the namespace if it does not exist&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
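&lt;div class="paragraph"&gt;
&lt;p&gt;Save the Application manifest to a file (for example &lt;code&gt;mockbin-team-a-app.yaml&lt;/code&gt;; the filename is arbitrary) and apply it:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc apply -f mockbin-team-a-app.yaml&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;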
&lt;div class="paragraph"&gt;
&lt;p&gt;Argo CD will leverage Kustomize to render the resources. The result is the same as the manual approach, except that the Namespace is created automatically. This will deploy the required resources:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Namespace&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;ServiceAccount&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Service&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Route&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Deployment&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In the Argo CD UI you can see the application and the resources that have been deployed.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/observability/images/observability/mockbin-argocd.png?width=640px" alt="Argo CD Application for Mockbin"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_2_verify_the_deployment"&gt;Step 2: Verify the Deployment&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Verify the Pods, Service and Route of the Mockbin Application:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;kubectl get pods -n team-a
NAME                          READY   STATUS    RESTARTS   AGE
pod/mockbin-c4587558b-ftvsd   2/2     Running   0          3d15h
pod/mockbin-c4587558b-zrxgc   2/2     Running   0          3d15h

kubectl get services -n team-a
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/mockbin   ClusterIP   172.30.202.223   &amp;lt;none&amp;gt;        8080/TCP   3d16h

kubectl get routes -n team-a
NAME                               HOST/PORT                                 PATH   SERVICES   PORT   TERMINATION     WILDCARD
route.route.openshift.io/mockbin   mockbin-team-a.apps.ocp.aws.ispworld.at          mockbin    http   edge/Redirect   None&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Access the route URL to see the Mockbin Application:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;curl -k https://mockbin-team-a.apps.ocp.aws.ispworld.at&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will return a response looking like this:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-json hljs" data-lang="json"&gt;{&amp;#34;headers&amp;#34;: {&amp;#34;User-Agent&amp;#34;: &amp;#34;curl/8.7.1&amp;#34;, &amp;#34;Accept&amp;#34;: &amp;#34;*/*&amp;#34;, &amp;#34;Host&amp;#34;: &amp;#34;mockbin-team-a.apps.ocp.aws.ispworld.at&amp;#34;, .... lot&amp;#39;s of header stuff&amp;#34;}&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_3_deploy_the_local_opentelemetry_collector_mode_deployment"&gt;Step 3: Deploy the Local OpenTelemetry Collector - Mode Deployment&lt;/h3&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
All steps below can be applied using GitOps again, using the Chart &lt;a href="https://github.com/tjungbauer/helm-charts/tree/main/charts/rh-build-of-opentelemetry" target="_blank" rel="noopener"&gt;rh-build-of-opentelemetry&lt;/a&gt;. An example of a GitOps configuration (for the Central Collector) can be found at &lt;a href="https://github.com/tjungbauer/openshift-clusterconfig-gitops/tree/main/clusters/management-cluster/setup-otel-operator" target="_blank" rel="noopener"&gt;Setup OTEL Operator&lt;/a&gt;.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_create_a_serviceaccount_for_the_opentelemetry_collector"&gt;Create a ServiceAccount for the OpenTelemetry Collector&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Let us first create a ServiceAccount for the OpenTelemetry Collector.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: v1
kind: ServiceAccount
metadata:
  name: otelcol-agent
  namespace: team-a&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_create_the_opentelemetry_collector_resource"&gt;Create the OpenTelemetry Collector Resource&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Create the following OpenTelemetry Collector Resource. The mode shall be &amp;#34;deployment&amp;#34;. The serviceAccount shall be the one we created in the previous step, and the namespace is &amp;#34;team-a&amp;#34;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otelcol-agent &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
  namespace: team-a &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
spec:
  mode: deployment &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
  replicas: 1 &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
  serviceAccount: otelcol-agent &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
  config:
    # Receivers - accept traces from applications
    receivers: &lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;(6)&lt;/b&gt;
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
    # Processors
    processors: &lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;(7)&lt;/b&gt;
      # Add namespace identifier
      attributes:
        actions:
          - action: insert
            key: k8s.namespace.name
            value: team-a
      # Batch for efficiency
      batch: {}
    # Exporters - forward to central collector
    exporters:
      otlp/central: &lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;(8)&lt;/b&gt;
        endpoint: otel-collector.tempostack.svc.cluster.local:4317
        tls:
          insecure: true
    # Service pipeline
    service:
      pipelines: &lt;i class="conum" data-value="9"&gt;&lt;/i&gt;&lt;b&gt;(9)&lt;/b&gt;
        traces:
          receivers:
            - otlp
          processors:
            - batch
            - attributes
          exporters:
            - otlp/central&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Name of the OpenTelemetry Collector Resource.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Namespace &lt;strong&gt;team-a&lt;/strong&gt; of the OpenTelemetry Collector Resource.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Mode &lt;strong&gt;deployment&lt;/strong&gt; of the OpenTelemetry Collector Resource.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Number of replicas of the OpenTelemetry Collector Resource. For HA setup increase the number of replicas.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;ServiceAccount of the OpenTelemetry Collector Resource, that was created in the previous step.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;6&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Receivers of the OpenTelemetry Collector Resource. We will receive traces using the OTLP protocol on gRPC and HTTP.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;7&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Processors of the OpenTelemetry Collector Resource. We will add the namespace identifier as an attribute and batch the traces for efficiency.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;8&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Exporters of the OpenTelemetry Collector Resource. We will forward the traces to the Central Collector.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="9"&gt;&lt;/i&gt;&lt;b&gt;9&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Service pipeline of the OpenTelemetry Collector Resource.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Since we chose the &lt;strong&gt;deployment&lt;/strong&gt; mode, the OpenTelemetry Collector will be deployed as a Deployment resource.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get deployment -n team-a
NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/otelcol-agent-collector   1/1     1            1           3d2h&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
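&lt;div class="paragraph"&gt;
&lt;p&gt;To verify that the collector started without errors, you can check its logs (assuming the Deployment name generated by the operator, as shown above):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc logs deployment/otelcol-agent-collector -n team-a&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;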
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_4_cool_and_now_what_environment_variables"&gt;Step 4: Cool, and now what? - Environment Variables&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;If you have come this far, you have deployed TempoStack, the Central Collector, the Mockbin Application and the Local Collector.
However, you will not receive any traces yet. We need to tell the Mockbin Application WHERE to send the traces.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To do this, we need to set the environment variables for the Mockbin Deployment resource. Edit the Deployment resource and add the following environment variables:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: &amp;#39;http://otelcol-agent-collector:4317&amp;#39; &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
  - name: OTEL_EXPORTER_OTLP_PROTOCOL
    value: grpc &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
  - name: OTEL_SERVICE_NAME
    value: mockbin &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The endpoint (service) of the Local Collector.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The protocol we would like to use.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;The service name of the Mockbin Application.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
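&lt;div class="paragraph"&gt;
&lt;p&gt;Instead of editing the Deployment manually, the same variables can be set with &lt;code&gt;oc set env&lt;/code&gt; (a sketch, assuming the Deployment is named &lt;code&gt;mockbin&lt;/code&gt;):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc set env deployment/mockbin -n team-a \
  OTEL_EXPORTER_OTLP_ENDPOINT=http://otelcol-agent-collector:4317 \
  OTEL_EXPORTER_OTLP_PROTOCOL=grpc \
  OTEL_SERVICE_NAME=mockbin&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;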
&lt;div class="paragraph"&gt;
&lt;p&gt;This will trigger a restart of the Deployment. Once done, the environment variables should show up:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;curl -k https://mockbin-team-a.apps.ocp.aws.ispworld.at
{&amp;#34;headers&amp;#34;: {&amp;#34;...., &amp;#34;OTEL_EXPORTER_OTLP_ENDPOINT&amp;#34;: &amp;#34;http://otelcol-agent-collector:4317&amp;#34;, &amp;#34;OTEL_EXPORTER_OTLP_PROTOCOL&amp;#34;: &amp;#34;grpc&amp;#34;, ....&amp;#34;}}&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will now start sending traces to the Local Collector and from there to the Central Collector.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_traces_traces_traces"&gt;Traces, Traces, Traces&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Now we have a working setup. We have a Central Collector, a Local Collector and a Mockbin Application.
Our application is permanently sending requests to the Local Collector.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;We can review the traces in the OpenShift UI: &lt;strong&gt;Observe &amp;gt; Traces&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Select the tempostack instance and select the tenant &lt;strong&gt;tenantA&lt;/strong&gt;. If everything is working, you should see traces from the Mockbin Application.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock warning"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-warning" title="Warning"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
If no traces show up, then most probably either the RBAC rules or the environment variables are not configured correctly. You can check the logs of the OpenTelemetry Collector pods for errors.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/observability/images/observability/mockbin-1-traces.png?width=1200px" alt="Mock Team A - Traces"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;hr/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect1"&gt;
&lt;h2 id="_deploy_mockbin_application_opentelemetry_collector_team_b"&gt;Deploy Mockbin Application &amp;amp; OpenTelemetry Collector - team-b&lt;/h2&gt;
&lt;div class="sectionbody"&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Let us now deploy the Mockbin Application and the OpenTelemetry Collector in the team-b namespace. Here we will use the OpenTelemetry Collector in sidecar mode.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_1_deploy_the_application_using_argo_cd"&gt;Step 1: Deploy the Application using Argo CD&lt;/h3&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_1b_use_argocd_2"&gt;Step 1b: Use ArgoCD&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The above can also be achieved using Argo CD, which allows a more declarative approach. Create the following Argo CD Application (this is the same approach as for team-a):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: mockbin-team-b &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
  namespace: openshift-gitops &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
spec:
  destination:
    namespace: team-b &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
    server: &amp;#39;https://kubernetes.default.svc&amp;#39; &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
  info:
    - name: Description
      value: Deploy Mockbin in team-b namespace
  project: in-cluster
  source:
    path: k8s/ &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
    repoURL: &amp;#39;https://github.com/tjungbauer/mockbin&amp;#39; &lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;(6)&lt;/b&gt;
    targetRevision: main &lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;(7)&lt;/b&gt;
  syncPolicy:
    syncOptions:
      - CreateNamespace=true &lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;(8)&lt;/b&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Name of the Argo CD Application resource.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Namespace of the ArgoCD Application resource, here it is &lt;strong&gt;openshift-gitops&lt;/strong&gt;. In your environment it might be different.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Namespace of the target application&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Kubernetes API server URL. Here it is the local cluster where Argo CD is hosted.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Path to the Kustomize files&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;6&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;URL to the Git repository&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;7&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Target revision&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;8&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Create the namespace if it does not exist&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_2_deploy_the_local_opentelemetry_collector_sidecar_deployment"&gt;Step 2: Deploy the Local OpenTelemetry Collector - Sidecar Deployment&lt;/h3&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
All steps below can be applied using GitOps again, using the Chart &lt;a href="https://github.com/tjungbauer/helm-charts/tree/main/charts/rh-build-of-opentelemetry" target="_blank" rel="noopener"&gt;rh-build-of-opentelemetry&lt;/a&gt;. An example of a GitOps configuration (for the Central Collector) can be found at &lt;a href="https://github.com/tjungbauer/openshift-clusterconfig-gitops/tree/main/clusters/management-cluster/setup-otel-operator" target="_blank" rel="noopener"&gt;Setup OTEL Operator&lt;/a&gt;.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_create_a_serviceaccount_for_the_opentelemetry_collector_2"&gt;Create a ServiceAccount for the OpenTelemetry Collector&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Let us first create a ServiceAccount for the OpenTelemetry Collector.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: v1
kind: ServiceAccount
metadata:
  name: otelcol-agent
  namespace: team-b&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect3"&gt;
&lt;h4 id="_create_the_opentelemetry_collector_resource_2"&gt;Create the OpenTelemetry Collector Resource&lt;/h4&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Create the following OpenTelemetry Collector Resource. The mode shall be &amp;#34;sidecar&amp;#34;. The serviceAccount shall be the one we created in the previous step, and the namespace is &amp;#34;team-b&amp;#34;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otelcol-agent &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
  namespace: team-b &lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;(2)&lt;/b&gt;
spec:
  mode: sidecar &lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;(3)&lt;/b&gt;
  replicas: 1 &lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;(4)&lt;/b&gt;
  serviceAccount: otelcol-agent &lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;(5)&lt;/b&gt;
  config:
    # Receivers - accept traces from applications
    receivers: &lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;(6)&lt;/b&gt;
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
    # Processors
    processors: &lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;(7)&lt;/b&gt;
      # Add namespace identifier
      attributes:
        actions:
          - action: insert
            key: k8s.namespace.name
            value: team-b
      # Batch for efficiency
      batch: {}
    # Exporters - forward to central collector
    exporters:
      otlp/central: &lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;(8)&lt;/b&gt;
        endpoint: otel-collector.tempostack.svc.cluster.local:4317
        tls:
          insecure: true
    # Service pipeline
    service:
      pipelines: &lt;i class="conum" data-value="9"&gt;&lt;/i&gt;&lt;b&gt;(9)&lt;/b&gt;
        traces:
          receivers:
            - otlp
          processors:
            - batch
            - attributes
          exporters:
            - otlp/central&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Name of the OpenTelemetry Collector Resource.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="2"&gt;&lt;/i&gt;&lt;b&gt;2&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Namespace &lt;strong&gt;team-b&lt;/strong&gt; of the OpenTelemetry Collector Resource.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="3"&gt;&lt;/i&gt;&lt;b&gt;3&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Mode &lt;strong&gt;sidecar&lt;/strong&gt; of the OpenTelemetry Collector Resource.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="4"&gt;&lt;/i&gt;&lt;b&gt;4&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Number of replicas of the OpenTelemetry Collector Resource. Note that in &lt;strong&gt;sidecar&lt;/strong&gt; mode this setting has no effect, since one collector container runs inside each application Pod.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="5"&gt;&lt;/i&gt;&lt;b&gt;5&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;ServiceAccount of the OpenTelemetry Collector Resource, that was created in the previous step.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="6"&gt;&lt;/i&gt;&lt;b&gt;6&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Receivers of the OpenTelemetry Collector Resource. We will receive traces using the OTLP protocol on gRPC and HTTP.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="7"&gt;&lt;/i&gt;&lt;b&gt;7&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Processors of the OpenTelemetry Collector Resource. We will add the namespace identifier as an attribute and batch the traces for efficiency.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="8"&gt;&lt;/i&gt;&lt;b&gt;8&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Exporters of the OpenTelemetry Collector Resource. We will forward the traces to the Central Collector.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="9"&gt;&lt;/i&gt;&lt;b&gt;9&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Service pipeline of the OpenTelemetry Collector Resource.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_step_3_enable_sidecar_injection_for_team_b"&gt;Step 3: Enable Sidecar Injection for Team-B&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The big advantage of the &lt;strong&gt;sidecar&lt;/strong&gt; mode is that the OpenTelemetry Collector is deployed as a sidecar container in the same pod as the application. This happens fully automatically, which means you do not need to set any extra environment variables.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;To enable the sidecar injection, we need to add the following annotation to the namespace. This will automatically inject the OpenTelemetry Collector as a sidecar container into all pods of the namespace:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Do not forget to add this to your GitOps process :)
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;First verify the current state of the deployment:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get deployment -n team-b
NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/otelcol-agent-collector   1/1     1            1           3d2h&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;As you can see, one out of one container is running (READY 1/1).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Now let&amp;#39;s modify the namespace to enable the sidecar injection:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: v1
kind: Namespace
metadata:
  name: team-b
  annotations:
    sidecar.opentelemetry.io/inject: &amp;#34;true&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
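&lt;div class="paragraph"&gt;
&lt;p&gt;If you prefer the CLI over editing the manifest, the same annotation can be set with:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc annotate namespace team-b sidecar.opentelemetry.io/inject=true&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;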
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
This can also be done by annotating the Deployment resource. In that case, the OpenTelemetry Collector is injected only into the pods of that specific Deployment.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Restart the deployment to apply the changes:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc rollout restart deployment/mockbin -n team-b
deployment.apps/mockbin restarted&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Now, when you check again, the sidecar container should be injected into the pods and the Deployment reports 2/2 ready:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mockbin   2/2     1            2           3m53s&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
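&lt;div class="paragraph"&gt;
&lt;p&gt;To confirm which containers ended up in the pods, you can list the container names with a JSONPath query (a sketch; it simply prints the container names of all pods in the namespace):&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-bash hljs" data-lang="bash"&gt;oc get pods -n team-b -o jsonpath=&amp;#39;{.items[*].spec.containers[*].name}&amp;#39;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;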
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_traces_traces_traces_again"&gt;Traces, Traces, Traces - Again&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Again, we can review the traces in the OpenShift UI: &lt;strong&gt;Observe &amp;gt; Traces&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Select the tempostack instance and select the tenant &lt;strong&gt;tenantB&lt;/strong&gt; this time.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="imageblock"&gt;
&lt;div class="content"&gt;
&lt;img src="https://blog.stderr.at/openshift-platform/observability/images/observability/mockbin-2-traces.png" alt="Mock Team B - Traces"/&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sect2"&gt;
&lt;h3 id="_argo_cd_stays_in_progressing_state"&gt;Argo CD Stays in Progressing State&lt;/h3&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;If you have deployed the Mockbin Application and the OpenTelemetry Collector &lt;strong&gt;as a sidecar&lt;/strong&gt; deployment in the team-b namespace, you might notice that the Argo CD Application stays in the &lt;strong&gt;Progressing&lt;/strong&gt; state.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This is because the OpenTelemetry Collector resource does not report a real health state. The status is simply:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;status:
  version: 0.140.0&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;Nothing else …​ that is too little for Argo CD to work with.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;In order to fix this, we need to add a custom health check to the Argo CD instance.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;The Argo CD instance knows the following part:&lt;/p&gt;
&lt;/div&gt;
&lt;div class="listingblock"&gt;
&lt;div class="content"&gt;
&lt;pre class="highlightjs highlight"&gt;&lt;code class="language-yaml hljs" data-lang="yaml"&gt;apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  [...]
  resourceHealthChecks: &lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;(1)&lt;/b&gt;
    - check: |
        hs = {}
        if obj.status ~= nil then
          if obj.status.version ~= nil and obj.status.version ~= &amp;#34;&amp;#34; then
            hs.status = &amp;#34;Healthy&amp;#34;
            hs.message = &amp;#34;Running version &amp;#34; .. obj.status.version
          else
            hs.status = &amp;#34;Progressing&amp;#34;
            hs.message = &amp;#34;Waiting for version report&amp;#34;
          end
        else
          hs.status = &amp;#34;Progressing&amp;#34;
          hs.message = &amp;#34;Status not available&amp;#34;
        end
        return hs
      group: opentelemetry.io
      kind: OpenTelemetryCollector
  [...]&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="colist arabic"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;i class="conum" data-value="1"&gt;&lt;/i&gt;&lt;b&gt;1&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Custom health check for the OpenTelemetry Collector.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="admonitionblock note"&gt;
&lt;table&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td class="icon"&gt;
&lt;i class="fa icon-note" title="Note"&gt;&lt;/i&gt;
&lt;/td&gt;
&lt;td class="content"&gt;
Above will end up in a ConfigMap that is managed by the OpenShift GitOps operator.
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;div class="paragraph"&gt;
&lt;p&gt;This will make Argo CD happy and the Application will reach the &lt;strong&gt;Healthy&lt;/strong&gt; state.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item></channel></rss>