Everything has a start, this blog as well as the following tutorials. This series of tutorials provides a brief, working overview of OpenShift Service Mesh. It starts with the installation and the first steps, and will continue with advanced settings and configuration options.
OpenShift 4.x and Service Mesh
UPDATE: On April 10th, 2020, Red Hat released Service Mesh version 1.1, which supports:
Istio - 1.4.6
Kiali - 1.12.7
Jaeger - 1.17.1
The following tutorials for OpenShift Service Mesh are based on the official documentation: OpenShift 4.3 Service Mesh and on the Interactive Learning Portal. All operations have been successfully tested on OpenShift 4.3.
Currently, OpenShift Service Mesh ships Istio 1.4.6; this will be updated in one of the future releases.
To learn the basics of Service Mesh, please consult the documentation, as the basics are not repeated here.
It is assumed that OpenShift has access to external registries, like quay.io.
Other resources I can recommend are:
Istio By Example: A very good and brief overview of different topics by Megan O’Keefe.
At the very beginning, OpenShift must be installed. This tutorial is based on OpenShift 4.3, and a lab installation on Hetzner was used.
Moreover, it is assumed that the OpenShift Client and Git are installed on the local system.
During the tutorials, an example application will be deployed into the namespace tutorial. This application consists of three microservices:
customer (the entry point)
preference
recommendation
This application is also used at the Interactive Learning Portal, where it can be tested interactively. However, that training is still based on OpenShift version 3.
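To prepare the environment, a dedicated project is created and the application sources are cloned. A minimal sketch, assuming the demo sources live in the redhat-developer-demos/istio-tutorial repository (adapt the URL if your setup differs):

# create the tutorial project used throughout the series
oc new-project tutorial
# clone the demo application sources (assumed repository)
git clone https://github.com/redhat-developer-demos/istio-tutorial.git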
Elasticsearch is a very memory-intensive application. By default it requests 16 GB of memory, which can be reduced in lab environments.
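The reduction happens in the ServiceMeshControlPlane. The following is a minimal sketch, assuming the maistra.io/v1 schema used by Service Mesh 1.x; verify the exact field paths against your installed version:

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install        # assumed name of the control plane
  namespace: istio-system
spec:
  istio:
    tracing:
      jaeger:
        elasticsearch:
          nodeCount: 1       # a single node is enough for a lab
          resources:
            requests:
              cpu: "1"
              memory: 1Gi    # instead of the default 16Gi request
            limits:
              memory: 1Gi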
Link Jaeger and Grafana to Kiali
In the lab environment it happened that Kiali was not able to auto-detect Grafana or Jaeger.
This is visible when the link Distributed Tracing is missing from the left menu.
To fix this, the ServiceMeshControlPlane object in the istio-system namespace must be updated with three lines:
oc edit ServiceMeshControlPlane -n istio-system
kiali: # ADD THE FOLLOWING LINES
  dashboard:
    grafanaURL: https://grafana-istio-system.apps.<your clustername>
    jaegerURL: https://jaeger-istio-system.apps.<your clustername>
This change will take a few minutes to become effective.
Classic Kubernetes/OpenShift offers a feature called NetworkPolicy that allows users to control the traffic to and from their assigned namespace. NetworkPolicies are designed to give project owners or tenants the ability to protect their own namespace. Sometimes, however, I have worked with customers where the cluster administrators or a dedicated (network) team needed to enforce these policies.
Since the NetworkPolicy API is namespace-scoped, it is not possible to enforce policies across namespaces. The only solution was to create custom (project) admin and edit roles and remove the ability to create, modify, or delete NetworkPolicy objects. Technically, this is possible and easily done, but it shifts the whole responsibility for network security to the cluster administrators.
Luckily, this is where AdminNetworkPolicy (ANP) and BaselineAdminNetworkPolicy (BANP) come into play.
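To give an impression of how such a cluster-wide rule looks, here is a minimal sketch of an AdminNetworkPolicy. The API group policy.networking.k8s.io/v1alpha1 is still alpha, and all names and labels below are made up for illustration:

apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: deny-from-sandbox          # hypothetical name
spec:
  priority: 10                     # lower value = higher precedence
  subject:
    namespaces:
      matchLabels:
        tenant: production         # hypothetical label
  ingress:
    - name: deny-ingress-from-sandbox
      action: Deny                 # tenants cannot override this rule
      from:
        - namespaces:
            matchLabels:
              env: sandbox         # hypothetical label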
Lately I came across several issues where a given Helm Chart had to be modified after it was rendered by Argo CD. Argo CD runs helm template to render a Chart. Sometimes, especially when you work with subcharts or when a specific setting is not yet supported by the Chart, you need to modify it afterwards … you need to post-render the Chart.
In this very short article, I would like to demonstrate this with a real-life example I had to implement: injecting annotations into a Route object so that the certificate can be injected. This is done by the cert-utils operator. For the post-rendering, the Argo CD repo server pod is extended with a sidecar container that post-processes the rendered manifests and patches them if required.
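The goal of the post-renderer is an annotation like the one below on the rendered Route. The annotation key is the certs-from-secret annotation documented by the cert-utils operator, while the Route and Secret names are placeholders:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app                     # hypothetical Route name
  annotations:
    # tells the cert-utils operator to inject the certificate from this Secret
    cert-utils-operator.redhat-cop.io/certs-from-secret: my-app-tls
spec:
  tls:
    termination: edge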
The article SSL Certificate Management for OpenShift on AWS explains how to use the Cert-Manager Operator to request and install a new SSL Certificate. This time, I would like to leverage the GitOps approach using the Helm Chart cert-manager I have prepared to deploy the Operator and order new Certificates.
I will use an ACME Let's Encrypt issuer with a DNS challenge. My domain is hosted at AWS Route 53.
However, any other integration can be easily used.
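For reference, a minimal sketch of such a ClusterIssuer; the name, e-mail address, Secret names, and region are placeholders to adapt:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod           # hypothetical name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com       # placeholder
    privateKeySecretRef:
      name: letsencrypt-account-key   # Secret storing the ACME account key
    solvers:
      - dns01:
          route53:
            region: us-east-1      # region of the hosted zone
            accessKeyID: AKIAEXAMPLEKEY      # placeholder
            secretAccessKeySecretRef:
              name: route53-credentials      # hypothetical Secret
              key: secret-access-key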
At some point during a GitOps journey, the question arises: how do you update a cluster? Nowadays it is very easy to update a cluster using the CLI or the web console, so why bother with GitOps in that case? The reason is simple: using GitOps you can be sure that all clusters are updated to the correct, required version, and the version of each cluster is also managed in Git.
All you need is the channel you want to use and the desired cluster version. Optionally, you can define the exact image SHA. This might be required when you are operating in a restricted environment.
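Put into a manifest, this is simply the ClusterVersion object. A minimal sketch; the channel, version, and the commented-out digest are placeholders:

apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version                    # the cluster-wide singleton object
spec:
  channel: stable-4.12             # placeholder channel
  desiredUpdate:
    version: 4.12.10               # placeholder version
    # image: quay.io/openshift-release-dev/ocp-release@sha256:<digest>   # optional: pin the exact image SHA (restricted environments)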
Argo CD or OpenShift GitOps uses Applications or ApplicationSets to define the relationship between a source (Git) and a cluster. Typically, this is a 1:1 link: one Application uses one source to compare against the cluster state. This can be a limitation. For example, if you are working with Helm Charts and a Helm repository, you do not want to re-build (or re-release) the whole chart just because you made a small change in the values file that is packaged into the repository. You want to separate the configuration of the chart from the Helm package.
The most common scenarios for multiple sources are (see: Argo CD documentation; a working sketch follows the list):
Your organization wants to use an external/public Helm chart
You want to override the Helm values with your own local values
You don’t want to clone the Helm chart locally either, because that would lead to duplication and you would need to monitor it manually for upstream changes.
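A minimal sketch of such a multi-source Application (supported since Argo CD 2.6); the chart name, repository URLs, and paths are placeholders:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                              # hypothetical name
  namespace: openshift-gitops
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  sources:
    - repoURL: https://charts.example.com   # external/public Helm repository
      chart: my-chart
      targetRevision: 1.2.3
      helm:
        valueFiles:
          - $values/my-app/values.yaml      # values file from the second source
    - repoURL: https://github.com/example/cluster-config.git
      targetRevision: main
      ref: values                           # referenced above as $values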
This small article describes three different ways, each with a working example, and tries to cover the advantages and disadvantages of each. The assessments might be opinionated, but some of the approaches proved to be easier to use and manage.