The Hitchhiker's Guide to Observability - Central Collector - Part 3

- Thomas Jungbauer ( Lastmod: 2025-11-28 ) - 10 min read


With the architecture defined in Part 1 and TempoStack deployed in Part 2, it’s time to tackle the heart of our distributed tracing system: the Central OpenTelemetry Collector. This is the critical component that sits between your application namespaces and TempoStack, orchestrating trace flow, metadata enrichment, and tenant routing.

In this article, we’ll configure the RBAC permissions required for the Central Collector to enrich traces with Kubernetes metadata and deploy the Central OpenTelemetry Collector with its complete configuration. You’ll learn how to set up receivers for accepting traces from local collectors, configure processors to enrich traces with Kubernetes and OpenShift metadata, and implement routing connectors to direct traces to the appropriate TempoStack tenants based on namespace.

To be honest, this was the most challenging part of the entire setup to get right: it is easy to miss a setting or misconfigure a single section. But once you understand the configuration, it becomes straightforward to extend and modify.

Central OpenTelemetry Collector - Step-by-Step Implementation

Prerequisites

Before starting, ensure you have completed the previous parts of this series: the architecture defined in Part 1 and the TempoStack deployment from Part 2.

For all configurations I also created a proper GitOps implementation (of course :)). However, I would first like to show the actual configuration; the GitOps implementation can be found at the end of this article.

Step 1: Verify/Deploy Red Hat Build of OpenTelemetry Operator

Verify that the Operator Red Hat Build of OpenTelemetry is installed and ready. The Operator itself is deployed in the namespace openshift-opentelemetry-operator.

OpenTelemetry Operator

Step 2: Configure RBAC for Central Collector

For the OpenTelemetry Collector to function properly with processors like k8sattributes and resourcedetection, it requires cluster-wide read access to Kubernetes resources. Depending on the configuration of the central collector, you might need different RBAC settings. The central collector in our example uses the k8sattributes and resourcedetection processors to enrich traces with Kubernetes and OpenShift metadata; these processors require read access to cluster resources.

Why These Permissions Are Needed

The k8sattributes processor enriches telemetry data by querying the Kubernetes API to add metadata such as:

  • Pod information: Pod name, UID, start time

  • Namespace details: Namespace name and labels

  • Deployment context: ReplicaSet and Deployment names

  • Node information: Node name where the pod is running

The resourcedetection processor detects the OpenShift environment by querying:

  • Infrastructure resources: Cluster name, platform type, region (from config.openshift.io/infrastructures)

Without these permissions, traces would lack critical context needed for debugging and analysis.

Always check the latest documentation of the appropriate component to get the most up-to-date information. I prefer to use separate ClusterRoles for each processor to keep the permissions as granular as possible. However, that causes some overhead, so you might want to combine the permissions into a single ClusterRole, as done in the example below.

Create ClusterRole for Kubernetes Attributes

The following ClusterRole contains the rules for both processors:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8sattributes-otel-collector
rules:
  # Permissions for k8sattributes processor
  # Allows the collector to read pod, namespace, and replicaset information
  # to enrich traces with Kubernetes metadata
  - verbs:
      - get
      - watch
      - list
    apiGroups:
      - ''
      - apps
    resources:
      - pods
      - namespaces
      - replicasets

  # Permissions for resourcedetection processor (OpenShift)
  # Allows detection of OpenShift cluster information
  # such as cluster name, platform type, and region
  - verbs:
      - get
      - watch
      - list
    apiGroups:
      - config.openshift.io
    resources:
      - infrastructures
      - infrastructures/status

Key Points:

  1. Read-Only Access: The collector only needs get, watch, and list verbs (no write permissions)

  2. Cluster-Wide Scope: ClusterRole grants permissions across all namespaces, necessary for monitoring multi-tenant environments

  3. Essential Resources:

    • pods: Source of trace context (which pod generated the span)

    • namespaces: Namespace metadata and labels for routing

    • replicasets: Determine the owning Deployment for better trace attribution

  4. OpenShift Infrastructure: Access to config.openshift.io/infrastructures allows detection of cluster-level properties

Create ClusterRoleBinding

Our OpenTelemetry Collector will use the ServiceAccount otel-collector (in the namespace tempostack) to read the Kubernetes resources. Thus, we need to create a ClusterRoleBinding to grant the necessary permissions to the ServiceAccount.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8sattributes-collector-tempo
subjects:
  - kind: ServiceAccount
    name: otel-collector (1)
    namespace: tempostack (2)
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: k8sattributes-otel-collector (3)
1The ServiceAccount that is granted the read permissions. In this example: otel-collector.
2The namespace of the ServiceAccount. In this example: tempostack.
3The reference to the ClusterRole created above.

Security Note

The above permissions in the ClusterRole follow the principle of least privilege:

  • Only read operations are granted (no create, update, or delete)

  • Access is limited to specific resource types needed for metadata enrichment

  • The ServiceAccount otel-collector is dedicated to the collector and not shared with other applications

Step 3: Create ServiceAccount for Central OpenTelemetry Collector

This is the ServiceAccount used by the OpenTelemetry Collector, both to authenticate to the TempoStack instance and to query the Kubernetes API. Therefore, the bindings created earlier, which allow writing traces into TempoStack and reading Kubernetes resources, are required.

In this article we are installing the central Collector into the same namespace as the TempoStack instance. However, you might want to install it in a different namespace to keep the namespaces separated. Keep an eye on possible Network Policies that might be required to allow the communication between the namespaces.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector
  namespace: tempostack
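
If the Collector lives in a different namespace than TempoStack, a NetworkPolicy along the lines of the following sketch could allow the incoming OTLP traffic. Both the policy name and the pod label selector are assumptions here; verify the labels the Operator sets on your Collector Pod before applying it:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-otlp-ingress   # assumed name
  namespace: tempostack
spec:
  podSelector:
    matchLabels:
      # assumed label; check the actual labels on the Collector Pod
      app.kubernetes.io/component: opentelemetry-collector
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 4317   # OTLP gRPC
        - protocol: TCP
          port: 4318   # OTLP HTTP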

Step 4: Deploy Central Collector

The central collector receives traces from local collectors, enriches them with Kubernetes metadata, and routes them to appropriate TempoStack tenants. For the sake of simplicity, I have taken snippets from the whole Configuration manifest. At the end of this section you will find the whole manifest.

Basic Configuration

The most basic settings of the OpenTelemetry Collector are the number of replicas, the ServiceAccount to use, and the deployment mode.

The Collector can be deployed in one of the following modes:

  • Deployment (default) - Creates a Deployment with the given number of replicas.

  • StatefulSet - Creates a StatefulSet with the given number of replicas. Useful for stateful workloads, for example when using the Collector’s File Storage Extension or Tail Sampling Processor.

  • DaemonSet - Creates a DaemonSet that runs one Collector Pod per node. Useful for scraping telemetry data from every node, for example by using the Collector’s Filelog Receiver to read container logs.

  • Sidecar - Injects the Collector as a sidecar into the pod. Useful for accessing log files inside a container, for example by using the Collector’s Filelog Receiver and a shared volume such as emptyDir.

In the examples in this series of articles we will use the modes: deployment and sidecar.

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: tempostack
spec:
  mode: deployment (1)
  replicas: 1 (2)
  serviceAccount: otel-collector (3)
[...]
1The deployment mode to use.
2The number of replicas to use.
3The ServiceAccount to use, created in the previous step.

Receivers

The Receivers are the components that receive the traces from the local collectors. Receivers accept data in a specified format and translate it into the internal format. In our example we want to receive the traces from the local collectors, so we are using the otlp Receiver, which collects traces, metrics, and logs using the OpenTelemetry Protocol (OTLP).

The easiest configuration is:

    receivers:
      otlp: (1)
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317 (2)
          http:
            endpoint: 0.0.0.0:4318 (3)
1The name of the receiver.
2The gRPC endpoint to listen on.
3The HTTP endpoint to listen on.

Besides the otlp receiver, there are other receivers available. For example:

  • Jaeger

  • Kubernetes Object Receiver

  • Kubelet Stats Receiver

  • Prometheus Receiver

  • Filelog Receiver

  • Journald Receiver

  • Kubernetes Events Receiver

  • Kubernetes Cluster Receiver

  • OpenCensus Receiver

  • Zipkin Receiver

  • Kafka Receiver

Please check the OpenShift OTEL Receivers documentation for more details.
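
For example, enabling the Jaeger receiver alongside OTLP (as done later in the GitOps values file) looks like this:

    receivers:
      jaeger:
        protocols:
          grpc:
            endpoint: 0.0.0.0:14250
          thrift_http:
            endpoint: 0.0.0.0:14268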

Processors

The Processors are the components that process the data after it is received and before it is exported. Processors are completely optional, but they are useful to transform, enrich, or filter traces.

The order of processors matters.

The example configuration is using the following processors:

  • k8sattributes: Can add Kubernetes metadata to the traces.

  • resourcedetection: Can detect OpenShift/K8s environment info.

  • memory_limiter: Periodically checks the Collector’s memory usage and pauses data processing when the soft memory limit is reached.

  • batch: Batches the traces for efficiency. This is a very important processor to improve the performance of the Collector.

Additional processors are available. Please check the OpenShift OTEL Processors documentation for more details. Some processors require additional ClusterRole configuration (see Step 2).

    # Processors - enrich and batch traces
    processors:
      # Add Kubernetes metadata
      k8sattributes: {} (1)

      # Detect OpenShift/K8s environment info
      resourcedetection: (2)
        detectors:
          - openshift
        timeout: 2s

      # Memory protection
      memory_limiter: (3)
        check_interval: 1s
        limit_percentage: 75
        spike_limit_percentage: 15

      # Batch for efficiency
      batch: (4)
        send_batch_size: 10000
        timeout: 10s
1The k8sattributes processor to add Kubernetes metadata to the traces.
2The resourcedetection processor to detect OpenShift/K8s environment info.
3The memory_limiter processor to protect the Collector’s memory usage.
4The batch processor to batch the traces for efficiency.
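
The example uses k8sattributes with its defaults ({}). If you need control over which metadata is attached, an explicit configuration could look like the following sketch; the listed keys are standard k8sattributes metadata options, but treat the exact selection as an assumption for your setup:

    processors:
      k8sattributes:
        extract:
          metadata:
            # metadata keys to attach to each span
            - k8s.namespace.name
            - k8s.pod.name
            - k8s.pod.uid
            - k8s.deployment.name
            - k8s.node.name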

Connectors

A Connector joins two pipelines together. It consumes data as an exporter at the end of one pipeline and emits data as a receiver at the start of another pipeline. It can consume and emit data of the same or different data type. It can generate and emit data to summarize the consumed data, or it can merely replicate or route data.

Several Connectors are available, for example:

  • Count Connector: Counts trace spans, span events, metrics, metric data points, and log records.

  • Routing Connector: Routes the traces to different pipelines.

  • Forward Connector: Merges two pipelines of the same type.

  • Spanmetrics Connector: Aggregates Request, Error, and Duration (R.E.D) OpenTelemetry metrics from span data.

The OpenShift OTEL Connectors documentation lists all available Connectors and their configuration options.

In our example we are using the routing Connector, which is able to route the traces to different pipelines based on the namespace. This helps to route the traces to the correct tenant, without the need to configure this in the local OpenTelemetry Collector. (In other words, the project cannot change this setting, because it is configured in the central OpenTelemetry Collector.)

In this example traces from the namespace "team-a" will be routed to the pipeline "tenantA", traces from the namespace "team-b" will be routed to the pipeline "tenantB", and so on.

    # Connectors - route traces to different pipelines
    connectors:
      routing/traces:
        default_pipelines: (1)
          - traces/Default
        error_mode: ignore (2)
        table:
          # Route team-a namespace to tenantA
          - statement: route() where attributes["k8s.namespace.name"] == "team-a" (3)
            pipelines:
              - traces/tenantA
          # Route team-b namespace to tenantB
          - statement: route() where attributes["k8s.namespace.name"] == "team-b"
            pipelines:
              - traces/tenantB
          # Route team-c namespace to tenantC
          - statement: route() where attributes["k8s.namespace.name"] == "team-c"
            pipelines:
              - traces/tenantC
1Destination pipelines for routing the telemetry data for which no routing condition is satisfied.
2Error-handling mode (default: propagate). Defines how the connector handles routing errors:
  • propagate: Logs an error and drops the payload (stops processing)

  • ignore: Logs the error but continues attempting to match subsequent routing rules

  • silent: Same as ignore but without logging

3Route traces from namespace team-a to the pipeline tenantA.

Exporters

The Exporters are the components that export the traces to a destination. Exporters accept data in a specified format and translate it into the destination format. In our example we want to export the traces to the TempoStack instance. The X-Scope-OrgID header is used to identify the tenant and is sent to the TempoStack instance. The authentication is done by using the ServiceAccount token.

Many different Exporters are available. The OpenShift OTEL Exporters documentation lists all available Exporters and their configuration options.

For our tests we are using the otlp Exporter, which will export using the OpenTelemetry Protocol (OTLP):

    exporters:
      # Tenant A exporter
      otlp/tenantA: (1)
        endpoint: tempo-simplest-gateway:8090 (2)
        auth:
          authenticator: bearertokenauth
        headers:
          X-Scope-OrgID: tenantA (3)
        tls:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          insecure_skip_verify: true (4)
          server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local (5)
1Protocol/name of the exporter.
2The endpoint to export to. Here, the endpoint is the address of the TempoStack instance.
3The scope org ID to use.
4Whether to skip certificate verification. I did not bother with certificates in this example.
5The server name override to use. This was just for testing purposes and can be omitted.
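
Callouts 4 and 5 admit that certificate verification is skipped here. Since the service CA bundle is already mounted (service-ca.crt), a verified variant could look like the following sketch; the assumption is that the gateway certificate contains the service DNS name, so using the FQDN as endpoint makes the override and the skip-verify flag unnecessary:

    exporters:
      otlp/tenantA:
        # FQDN so the hostname matches the certificate
        endpoint: tempo-simplest-gateway.tempostack.svc.cluster.local:8090
        auth:
          authenticator: bearertokenauth
        headers:
          X-Scope-OrgID: tenantA
        tls:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt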

Extensions

The Extensions extend the Collector capabilities. In our example we are using the bearertokenauth Extension, which is used to authenticate to the TempoStack instance.

    # Extensions - authentication
    extensions:
      bearertokenauth:
        filename: /var/run/secrets/kubernetes.io/serviceaccount/token

Service

Components are enabled by adding them into a Pipeline. If a component is not configured in a pipeline, it is not enabled.

In this example we are using the following pipelines (snippets):

  • traces/in: The incoming traces pipeline.

  • traces/tenantA: The tenant A pipeline.

The traces/in pipeline is the incoming traces pipeline. It is used to receive the traces from the local collectors. It leverages the otlp Receiver to receive the traces. It uses the resourcedetection, k8sattributes, memory_limiter, and batch processors to process the traces. And finally, it uses the routing/traces Connector to route the traces.

The routing/traces Connector is used to route the traces to the correct tenant based on the namespace.

The traces/tenantA Pipeline will receive the traces from routing/traces and export the traces to otlp/tenantA which then sends everything to TempoStack to store the traces using the header X-Scope-OrgID: tenantA.

    # Service pipelines
    service:
      extensions:
        - bearertokenauth

      pipelines:
        # Incoming traces pipeline
        traces/in: (1)
          receivers: (2)
            - otlp
          processors: (3)
            - resourcedetection
            - k8sattributes
            - memory_limiter
            - batch
          exporters: (4)
            - routing/traces

        # Tenant A pipeline
        traces/tenantA: (5)
          receivers: (6)
            - routing/traces
          exporters: (7)
            - otlp/tenantA
1The incoming traces pipeline. It takes the traces from the otlp receiver.
2The receivers to use.
3The processors to use.
4The exporters to use. It is exporting the data to the routing connector.
5The tenant A pipeline.
6The receivers to use.
7The exporters to use. It is exporting to the exporter, that will send the data to TempoStack.

The Operator automatically creates Kubernetes Services for the configured receivers. The Central Collector will be accessible at otel-collector.tempostack.svc.cluster.local:4317 (gRPC) and :4318 (HTTP) for local collectors to send traces.
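
A local collector in an application namespace would therefore point its OTLP exporter at this Service. A minimal sketch (the exporter name otlp/central is a placeholder; the local collectors themselves are covered in the next part):

    exporters:
      otlp/central:   # assumed name
        endpoint: otel-collector.tempostack.svc.cluster.local:4317
        tls:
          insecure: true   # plain gRPC inside the cluster; adjust if TLS is enabled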

Data Flow Visualization

To better understand how traces flow through the Central Collector, the following Mermaid diagram visualizes the complete journey from application to storage using team-a as an example:

---
title: "Central Collector Data Flow (Wrapped Layout)"
config:
  theme: 'dark'
---
flowchart LR

%% --------------------------
%% Column 1: Local Collector
%% --------------------------
subgraph local["(team-a namespace)"]
    app["Application<br/>(team-a)"]
end

%% --------------------------
%% Column 2: Central Collector
%% --------------------------
subgraph central["Central Collector (tempostack namespace)"]
    %% We force this specific column to stack vertically (Top-Bottom)
    direction TB

    %% --- ROW 1: Receiver + Processors ---
    subgraph row1[" "]
        direction LR
        receiver["OTLP Receiver<br/>Port: 4317/4318"]

        subgraph pipeline_in["traces/in Processors"]
            proc1["resourcedetection<br/>(Detect OpenShift)"]
            proc2["k8sattributes<br/>(Add K8s metadata)"]
            proc3["memory_limiter<br/>(Protect memory)"]
            proc4["batch<br/>(Batch traces)"]
        end
    end

    %% --- ROW 2: Connector + Tenant Pipeline ---
    subgraph row2[" "]
        direction LR
        exporter_routing["Exporter: to connector"]

        connector["Export to routing/traces Connector<br/>(Route by namespace)"]

        subgraph pipeline_tenant["Pipeline: traces/tenantA"]
            receiver_tenant["Receiver:<br/>routing/traces"]
            exporter_tenant["Exporter:<br/>otlp/tenantA"]
        end
    end
end

%% --------------------------
%% Column 3: TempoStack
%% --------------------------
subgraph tempo["TempoStack"]
    gateway["Tempo Gateway<br/>(X-Scope-OrgID: tenantA)"]
    storage["S3 Storage<br/>(tenantA traces)"]
end

%% --------------------------
%% Connections
%% --------------------------
app -->|"OTLP traces"| receiver
receiver --> proc1 --> proc2 --> proc3 --> proc4

%% The Wrap: Connect end of Row 1 to start of Row 2
proc4 --> exporter_routing

exporter_routing --> connector
connector -->|"Route by namespace=team-a<br/>→ tenantA"| receiver_tenant
receiver_tenant --> exporter_tenant

exporter_tenant -->|"OTLP + bearer token<br/>Header: X-Scope-OrgID"| gateway
gateway --> storage

%% --------------------------
%% Styles
%% --------------------------
classDef appStyle fill:#2f652a,stroke:#2f652a,stroke-width:2px
classDef receiverStyle fill:#425cc6,stroke:#425cc6,stroke-width:2px
classDef processorStyle fill:#4a90e2,stroke:#4a90e2,stroke-width:2px
classDef connectorStyle fill:#906403,stroke:#906403,stroke-width:2px
classDef exporterStyle fill:#7b4397,stroke:#7b4397,stroke-width:2px
classDef tempoStyle fill:#d35400,stroke:#d35400,stroke-width:2px

class app appStyle
class receiver,receiver_tenant receiverStyle
class proc1,proc2,proc3,proc4 processorStyle
class connector connectorStyle
class exporter_tenant exporterStyle
class exporter_routing exporterStyle
class gateway,storage tempoStyle

%% Hide the structural boxes for the rows so they look seamless
style row1 fill:none,stroke:none
style row2 fill:none,stroke:none

Key Points in the Flow:

  1. Application in team-a namespace sends traces to the local collector

  2. Local Collector forwards to Central Collector’s OTLP receiver

  3. Pipeline traces/in processes the traces sequentially:

    • Detects OpenShift environment info

    • Adds Kubernetes metadata (namespace, pod, labels)

    • Applies memory limits

    • Batches traces for efficiency

  4. Routing Connector examines the namespace attribute and routes to the correct tenant pipeline

  5. Pipeline traces/tenantA receives from connector and exports to TempoStack

  6. Exporter adds authentication (bearer token) and tenant ID header (X-Scope-OrgID: tenantA)

  7. TempoStack receives and stores traces in the appropriate tenant storage

This architecture ensures complete isolation between tenants while maintaining a single, centralized collection point.

Complete OpenTelemetry Collector Manifest

Let’s put everything together in the complete OpenTelemetry Collector Manifest. The following defines the Central OpenTelemetry Collector:

  • mode: The deployment mode to use.

  • replicas: The number of replicas to use.

  • serviceAccount: The ServiceAccount to use, created in the previous step.

  • config: The configuration of the OpenTelemetry Collector.

    • receivers: The receivers to use.

    • processors: The processors to use.

    • connectors: The connectors to use.

    • exporters: The exporters to use.

    • extensions: The extensions to use.

    • service: The pipelines that wire receivers, processors, connectors, and exporters together.

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: tempostack
spec:
  mode: deployment
  replicas: 1
  serviceAccount: otel-collector

  config:
    # Receivers - accept traces from local collectors
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318

    # Processors - enrich and batch traces
    processors:
      # Add Kubernetes metadata
      k8sattributes: {}

      # Detect OpenShift/K8s environment info
      resourcedetection:
        detectors:
          - openshift
        timeout: 2s

      # Memory protection
      memory_limiter:
        check_interval: 1s
        limit_percentage: 75
        spike_limit_percentage: 15

      # Batch for efficiency
      batch:
        send_batch_size: 10000
        timeout: 10s

    # Connectors - route traces to different pipelines
    connectors:
      routing/traces:
        default_pipelines:
          - traces/Default
        error_mode: ignore
        table:
          # Route team-a namespace to tenantA
          - statement: route() where attributes["k8s.namespace.name"] == "team-a"
            pipelines:
              - traces/tenantA
          # Route team-b namespace to tenantB
          - statement: route() where attributes["k8s.namespace.name"] == "team-b"
            pipelines:
              - traces/tenantB

    # Exporters - send to TempoStack
    exporters:
      # Default tenant exporter
      otlp/Default:
        endpoint: tempo-simplest-gateway:8090
        auth:
          authenticator: bearertokenauth
        headers:
          X-Scope-OrgID: dev
        tls:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          insecure_skip_verify: true
          server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local

      # Tenant A exporter
      otlp/tenantA:
        endpoint: tempo-simplest-gateway:8090
        auth:
          authenticator: bearertokenauth
        headers:
          X-Scope-OrgID: tenantA
        tls:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          insecure_skip_verify: true
          server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local

      # Tenant B exporter
      otlp/tenantB:
        endpoint: tempo-simplest-gateway:8090
        auth:
          authenticator: bearertokenauth
        headers:
          X-Scope-OrgID: tenantB
        tls:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          insecure_skip_verify: true
          server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local

    # Extensions - authentication
    extensions:
      bearertokenauth:
        filename: /var/run/secrets/kubernetes.io/serviceaccount/token

    # Service pipelines
    service:
      extensions:
        - bearertokenauth

      pipelines:
        # Incoming traces pipeline
        traces/in:
          receivers:
            - otlp
          processors:
            - resourcedetection
            - k8sattributes
            - memory_limiter
            - batch
          exporters:
            - routing/traces

        # Default tenant pipeline
        traces/Default:
          receivers:
            - routing/traces
          exporters:
            - otlp/Default

        # Tenant A pipeline
        traces/tenantA:
          receivers:
            - routing/traces
          exporters:
            - otlp/tenantA

        # Tenant B pipeline
        traces/tenantB:
          receivers:
            - routing/traces
          exporters:
            - otlp/tenantB
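
Extending the manifest with an additional tenant always requires three coordinated additions. For tenantC (already shown in the routing snippet above) these would be:

    connectors:
      routing/traces:
        table:
          # 1. A routing rule for the new namespace
          - statement: route() where attributes["k8s.namespace.name"] == "team-c"
            pipelines:
              - traces/tenantC

    exporters:
      # 2. An exporter carrying the tenant header
      otlp/tenantC:
        endpoint: tempo-simplest-gateway:8090
        auth:
          authenticator: bearertokenauth
        headers:
          X-Scope-OrgID: tenantC
        tls:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          insecure_skip_verify: true

    service:
      pipelines:
        # 3. A pipeline connecting the routing connector to the exporter
        traces/tenantC:
          receivers:
            - routing/traces
          exporters:
            - otlp/tenantC

If any of the three pieces is missing, traces for that namespace fall back to the default_pipelines entry (traces/Default).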

GitOps Deployment

While the above is good for quick tests, it always makes sense to have a proper GitOps deployment. I have created a Chart and GitOps configuration that will:

  • Deploy the OTEL Operator

  • Configure the Central Collector instance

The required sources and Sub-Charts (for example helper-operator and helper-status-checker) can be found in the repository referenced below. Feel free to clone or use whatever you need.

The following Argo CD Application will deploy the OTEL Operator:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: setup-otel-operator
  namespace: openshift-gitops
spec:
  destination:
    name: in-cluster (1)
    namespace: openshift-opentelemetry-operator (2)
  info:
    - name: Description
      value: ApplicationSet that Deploys on Management Cluster Configuration (using Git Generator)
  project: in-cluster (3)
  source:
    path: clusters/management-cluster/setup-otel-operator (4)
    repoURL: 'https://github.com/tjungbauer/openshift-clusterconfig-gitops' (5)
    targetRevision: main
  syncPolicy:
    retry:
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
      limit: 5
1Target cluster, here the local cluster
2Namespace of the target cluster, here the Operator will be installed.
3Project of the target cluster
4Path inside the Git repository
5URL to the Git repository

This will create the Argo CD Application that can be synchronized with the cluster:

OTEL Deployment via Argo CD

Complete values file

The complete values file:

---
otel: &channel-otel stable
otel-namespace: &otel-namespace openshift-opentelemetry-operator

######################################
# SUBCHART: helper-operator
# Operators that shall be installed.
######################################
helper-operator:
  operators:
    opentelemetry-product:
      enabled: true
      namespace:
        name: *otel-namespace
        create: true
      subscription:
        channel: *channel-otel
        approval: Automatic
        operatorName: opentelemetry-product
        source: redhat-operators
        sourceNamespace: openshift-marketplace
      operatorgroup:
        create: true
        notownnamespace: true

########################################
# SUBCHART: helper-status-checker
# Verify the status of a given operator.
########################################
helper-status-checker:
  enabled: true

  approver: false

  checks:
    - operatorName: opentelemetry-product
      namespace:
        name: *otel-namespace
      syncwave: 1

      serviceAccount:
        name: "status-checker-otel"

rh-build-of-opentelemetry:
  #########################################################################################
  # namespace ... disabled here, since we deployed it via Tempo already
  #########################################################################################
  namespace:
    name: tempostack
    create: false

  #########################################################################################
  # OPENTELEMETRY COLLECTOR - Production Configuration
  #########################################################################################
  collector:
    enabled: true
    name: otel
    mode: deployment
    replicas: 1

    serviceAccount: otel-collector
    managementState: managed

    resources: {}

    tolerations: []

    config:
      connectors:
        routing/traces:
          default_pipelines:
            - traces/Default
          error_mode: ignore
          table:
            - pipelines:
                - traces/tenantA
              statement: 'route() where attributes["k8s.namespace.name"] == "team-a"'
            - pipelines:
                - traces/tenantB
              statement: 'route() where attributes["k8s.namespace.name"] == "team-b"'
            - pipelines:
                - traces/tenantC
              statement: 'route() where attributes["k8s.namespace.name"] == "team-c"'
            - pipelines:
                - traces/tenantD
              statement: 'route() where attributes["k8s.namespace.name"] == "team-d"'
            - pipelines:
                - traces/tenantX
              statement: 'route() where attributes["k8s.namespace.name"] == "mockbin-1"'

      receivers:
        # OTLP receivers for traces, metrics, and logs
        otlp:
          protocols:
            grpc:
              endpoint: 0.0.0.0:4317
            http:
              endpoint: 0.0.0.0:4318

        # Jaeger receiver (if migrating from Jaeger)
        jaeger:
          protocols:
            grpc:
              endpoint: 0.0.0.0:14250
            thrift_http:
              endpoint: 0.0.0.0:14268

      processors:
        batch:
          send_batch_size: 10000
          timeout: 10s
        k8sattributes: {}
        memory_limiter:
          check_interval: 1s
          limit_percentage: 75
          spike_limit_percentage: 15
        resourcedetection:
          detectors:
            - openshift
          timeout: 2s

      exporters:
        otlp/tenantX:
          auth:
            authenticator: bearertokenauth
          endpoint: 'tempo-simplest-gateway:8090'
          headers:
            X-Scope-OrgID: tenantX
          tls:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
            insecure: true
            insecure_skip_verify: true
            server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local
        otlp:
          auth:
            authenticator: bearertokenauth
          endpoint: 'tempo-simplest-gateway:8090'
          headers:
            X-Scope-OrgID: prod
          tls:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
            insecure: true
            insecure_skip_verify: true
            server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local
        otlp/tenantA:
          auth:
            authenticator: bearertokenauth
          endpoint: 'tempo-simplest-gateway:8090'
          headers:
            X-Scope-OrgID: tenantA
          tls:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
            insecure: true
            insecure_skip_verify: true
            server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local
        otlp/tenantB:
          auth:
            authenticator: bearertokenauth
          endpoint: 'tempo-simplest-gateway:8090'
          headers:
            X-Scope-OrgID: tenantB
          tls:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
            insecure: true
            insecure_skip_verify: true
            server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local
        otlp/Default:
          auth:
            authenticator: bearertokenauth
          endpoint: 'tempo-simplest-gateway:8090'
          headers:
            X-Scope-OrgID: dev
          tls:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
            insecure: true
            insecure_skip_verify: true
            server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local

      extensions:
        bearertokenauth:
          filename: /var/run/secrets/kubernetes.io/serviceaccount/token

      service:
        extensions:
          - bearertokenauth
        pipelines:
          traces/Default:
            exporters:
              - otlp/Default
            receivers:
              - routing/traces
          traces/in:
            exporters:
              - routing/traces
            processors:
              - resourcedetection
              - k8sattributes
              - memory_limiter
              - batch
            receivers:
              - otlp
          traces/tenantA:
            exporters:
              - otlp/tenantA
            receivers:
              - routing/traces
          traces/tenantB:
            exporters:
              - otlp/tenantB
            receivers:
              - routing/traces

The Central OpenTelemetry Collector

With this deployment a single Pod (because we have set the replica count to 1) is running in the namespace tempostack, created from the Deployment otel-collector.

oc get pods -n tempostack | grep otel-col
otel-collector-75f8794dc6-rbq26                     1/1     Running     0               52m

Stay Tuned

The next article will cover deploying an example application (Mockbin) and configuring Local OpenTelemetry Collectors in application namespaces to send traces to this Central Collector. We’ll also demonstrate how the namespace-based routing automatically directs traces to the correct TempoStack tenants.