We will now create the Pipeline and trigger it for the first time to verify that our Webhook works as intended.
Goals
The goals of this step are:
Create the Pipeline with a first Task
Update the GitHub repository to verify that the Webhook works
Verify that the PipelineRun is successful
Create the Pipeline
The Pipeline object is responsible for defining the Tasks (steps) that should be executed. Whenever a Pipeline is started, a PipelineRun is created that executes each defined Task in the defined order and logs the output. Tasks can run sequentially or in parallel.
Currently, the Pipeline has a single Task, pull-source-code, which references the ClusterTask "git-clone". Its purpose is simply to pull the source code into the workspace "shared-data".
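Since the manifest itself is not shown here, the following is a minimal sketch of how such a Pipeline could look. The parameter names (git-repo-url, git-revision) and their values are assumptions for illustration; the numbered comments correspond to the callout list below.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: secure-supply-chain            # (1) Name referenced in the TriggerTemplate
spec:
  params:                              # (2) Parameters injected by the EventListener
    - name: git-repo-url               # assumed parameter name
      type: string
    - name: git-revision               # assumed parameter name
      type: string
      default: main
  tasks:                               # (3) List of Tasks that will be executed
    - name: pull-source-code           # (4) Name of the Task
      params:                          # (5) Parameters used in this Task
        - name: url
          value: $(params.git-repo-url)
        - name: revision
          value: $(params.git-revision)
      taskRef:                         # (6) Reference to the ClusterTask "git-clone"
        kind: ClusterTask
        name: git-clone
      workspaces:                      # (7) Workspace used in this Task
        - name: output
          workspace: shared-data
  workspaces:                          # (8) Workspaces available in this Pipeline
    - name: shared-data
```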
1. Name of the Pipeline as referenced in the TriggerTemplate.
2. List of Parameters that are injected by the EventListener.
3. List of Tasks that will be executed.
4. Name of the Task.
5. Parameters used in this Task.
6. The reference to the Task. Here a ClusterTask named "git-clone" is used.
7. Workspace that is used in this Task.
8. Workspaces available in this Pipeline.
The initial Pipeline now looks like the following (go to: Pipelines > Pipelines > secure-supply-chain):
Figure 1. Initial Pipeline
Our first Run
Now it is time to update something in our Git repository and verify that everything executes successfully.
For this update, it is enough to simply add a space to the README.md file and push the change to Git.
If the Webhook works as expected, Git will notify our EventListener, which will then trigger the Pipeline.
A PipelineRun is created that executes all Tasks defined in the Pipeline (currently just one); a sketch of such a PipelineRun is shown below.
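As a rough idea of what the generated object could look like, here is a hedged sketch of a PipelineRun. The parameter names and the volumeClaimTemplate are assumptions that depend on how the TriggerTemplate is defined.

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: secure-supply-chain-   # each run gets a unique, generated name
spec:
  pipelineRef:
    name: secure-supply-chain          # the Pipeline created above
  params:
    - name: git-repo-url               # assumed parameter name
      value: https://github.com/<your-org>/<your-repo>.git
    - name: git-revision               # assumed parameter name
      value: main
  workspaces:
    - name: shared-data                # backing storage for the "shared-data" workspace
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
```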
You can monitor the progress of the PipelineRun:
Figure 2. PipelineRun Overview
On the Details page, you can see which step is currently being executed:
Figure 3. PipelineRun Details
Eventually, the PipelineRun finishes successfully.
Figure 4. PipelineRun Finished
You can analyze the logs in case of an error or to get more details about a certain Task:
Figure 5. Task Logs
Summary
We have now created our first Pipeline and tested the GitHub Webhook. Whenever we push changes to the code, GitHub notifies the EventListener, which triggers the Pipeline with all required Parameters.
A PipelineRun is generated and executes the defined Tasks. Currently, not much is done except cloning the Git repository.
In the next steps, we will evolve our Pipeline to perform security checks and sign our image.