As mentioned in our previous post about Falco, Falco is a security
tool that monitors kernel events like system calls or Kubernetes audit
logs and provides real-time alerts.
In this post I'll show how to customize Falco for a specific use case.
We would like to monitor the following events:

- An interactive shell is opened in a container
- Log all commands executed in an interactive shell in a container
- Log reads and writes to files within an interactive shell inside a container
- Log commands executed via `kubectl/oc exec`, which leverage the
  pod/exec K8s endpoint
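The first event, for example, can be expressed as a custom Falco rule. Here is a minimal sketch of what such a rule could look like; the rule name, condition details, and output format are our own, not rules shipped with Falco:

```yaml
# Sketch of a custom rule -- not part of the stock Falco ruleset.
- rule: Interactive Shell in Container
  desc: Detect an interactive shell spawned inside a container
  condition: >
    spawned_process and container
    and proc.name in (bash, sh, zsh)
    and proc.tty != 0
  output: >
    Interactive shell opened in container
    (user=%user.name container=%container.name shell=%proc.name parent=%proc.pname)
  priority: WARNING
  tags: [container, shell]
```

The `proc.tty != 0` check is what distinguishes an interactive shell from the many non-interactive shells that containers spawn for entrypoints and health checks.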
Falco is a security tool that monitors kernel events like system calls and
provides real-time alerts. In this post I'll document the steps taken
to get Open Source Falco running on an OpenShift 4.12 cluster.
UPDATE: Use the falco-driver-loader-legacy image for OpenShift 4.12 deployments.
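For reference, the relevant change boils down to which init container loads the kernel driver. Below is a sketch of the DaemonSet fragment, assuming a plain manifest-based deployment; adapt the field path if you deploy via Helm or an operator, and pin an image tag matching your Falco version:

```yaml
# Sketch: use the legacy driver loader as the init container.
# ":latest" is a placeholder -- pin a tag matching your Falco version.
initContainers:
  - name: falco-driver-loader
    image: falcosecurity/falco-driver-loader-legacy:latest
    securityContext:
      privileged: true
```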
While playing around with Falco (worth another post) I had to force a
MachineConfig update even though the actual configuration of the machine
did not change.
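One way to force a rollout without a meaningful change is to add a throwaway MachineConfig to the pool; the operator then renders a new config and reboots the nodes. A sketch with an arbitrary marker file (the name and path are our own):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-force-rollout      # arbitrary name
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        # Writing any new file changes the rendered config,
        # which triggers a rolling reboot of the pool.
        - path: /etc/force-rollout-marker
          mode: 0644
          contents:
            source: data:,force-rollout-1
```

Bumping the data URL content (e.g. `force-rollout-2`) forces another rollout later; deleting the MachineConfig rolls the nodes once more.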
This is my personal summary of experimenting with Red Hat's Multi Cloud Gateway (MCG), which is based on the upstream Noobaa project. MCG is part of Red Hat's OpenShift Data Foundation (ODF), which bundles the upstream projects Ceph and Noobaa.
Overview: Noobaa, or the Multicloud Gateway (MCG), is an S3-based data federation tool. It allows you to use S3 backends from various sources and
sync, replicate, or simply use existing S3 buckets.
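As an example of what "using an existing S3 bucket" looks like in practice, a BackingStore resource can point MCG at a bucket in AWS. A sketch with placeholder names; verify the CRD fields against your ODF release:

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  name: aws-backingstore             # placeholder name
  namespace: openshift-storage
spec:
  type: aws-s3
  awsS3:
    targetBucket: my-existing-bucket # placeholder bucket
    region: eu-central-1
    secret:
      # Secret holding AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
      name: aws-s3-credentials
      namespace: openshift-storage
```

BucketClasses can then combine one or more backing stores, which is where the sync and replication features come in.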
An old man tries to refresh his Java skills and does DO378. He fails spectacularly at the first real example but learns a lot on the way.
The exception: There is this basic example where you build a minimal REST API for storing speaker data in a database. Quarkus makes this quite easy. You just have to define your database connection properties in resources/application.properties and off you go developing your Java Quarkus REST service.
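A minimal sketch of those properties, assuming a local PostgreSQL instance; the connection values are placeholders:

```properties
# Datasource configuration -- placeholder values for a local PostgreSQL instance
quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=speaker
quarkus.datasource.password=speaker
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/speakerdb

# Let Hibernate (re)create the schema during development
quarkus.hibernate-orm.database.generation=drop-and-create
```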
In Part I of our blog post we covered setting up the required resources in Azure. Now we are finally going to set up a private cluster.
As a review from Part I, here is our planned setup, this time including the ARO cluster.
Azure Setup: The diagram below depicts our planned setup.
On the right-hand side we can see the resources required for our lab:
a virtual network (vnet 192.
So we want to play with ARO (Azure Red Hat OpenShift) private clusters. A private cluster is not reachable from the internet (surprise) and is only reachable via a VPN tunnel from other networks.
This blog post describes how we created a site-to-site VPN between a Hetzner dedicated server running multiple VMs via libvirt and Azure.
An upcoming blog post is going to cover the setup of the private ARO cluster.
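For reference, a cluster's visibility is decided at creation time. A sketch of the relevant `az aro create` flags, with placeholder resource names:

```sh
# Placeholder names -- adjust resource group, vnet and subnets to your environment
az aro create \
  --resource-group aro-rg \
  --name aro-private \
  --vnet aro-vnet \
  --master-subnet master-subnet \
  --worker-subnet worker-subnet \
  --apiserver-visibility Private \
  --ingress-visibility Private
```

With both the API server and the default ingress set to `Private`, the cluster is only reachable from networks routed into the vnet, hence the VPN tunnel.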
We had the task of answering various questions related to upgrading Red Hat Quay from 3.3 to 3.4 and on to 3.5 with the help of the quay-operator.
Thankfully (sic!) everything changed with regard to the Quay operator between Quay 3.3 and Quay 3.4.
So this is a brain dump of the things to consider.
Operator changes: With Quay 3.4 the operator was completely reworked; it basically changed from opinionated to very opinionated.
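To illustrate how opinionated it became: instead of many individual tunables, the 3.4 operator mainly asks which components it should manage via a QuayRegistry resource. A sketch with placeholder names (component list abridged; check the CRD for the full set):

```yaml
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry                    # placeholder name
spec:
  # Secret holding your config.yaml overrides
  configBundleSecret: example-config-bundle # placeholder secret name
  components:
    - kind: clair
      managed: true
    - kind: postgres
      managed: true
    - kind: objectstorage
      managed: false                        # e.g. bring your own S3/ODF bucket
```

Everything the operator manages is reconciled back to its opinion of a correct deployment, so manual tweaks to managed components do not survive.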