The Hitchhiker's Guide to Observability - Central Collector - Part 3
With the architecture defined in Part 1 and TempoStack deployed in Part 2, it’s time to tackle the heart of our distributed tracing system: the Central OpenTelemetry Collector. This is the critical component that sits between your application namespaces and TempoStack, orchestrating trace flow, metadata enrichment, and tenant routing.
In this article, we’ll configure the RBAC permissions required for the Central Collector to enrich traces with Kubernetes metadata and deploy the Central OpenTelemetry Collector with its complete configuration. You’ll learn how to set up receivers for accepting traces from local collectors, configure processors to enrich traces with Kubernetes and OpenShift metadata, and implement routing connectors to direct traces to the appropriate TempoStack tenants based on namespace.
To be honest, this was the most challenging part of the entire setup to get right, because it is easy to miss or misconfigure a single setting. But once you understand the configuration, it becomes straightforward to extend and modify.
Central OpenTelemetry Collector - Step-by-Step Implementation
Prerequisites
Before starting, ensure you have completed the previous parts and have:
Part 1 completed: Hitchhiker’s Guide to Observability with OpenTelemetry and Tempo
Part 2 completed: Deploy Grafana Tempo and TempoStack
OpenShift cluster (v4.20) with Red Hat build of OpenTelemetry Operator installed (v0.135.0-1)
Cluster-admin access
| For all configurations I also created a proper GitOps implementation (of course :)). However, first I would like to show the actual configuration. The GitOps implementation can be found at the end of this article. |
Step 1: Verify/Deploy Red Hat Build of OpenTelemetry Operator
Verify that the Red Hat Build of OpenTelemetry Operator is installed and ready. The Operator itself is deployed in the namespace openshift-opentelemetry-operator.

Step 2: Configure RBAC for Central Collector
For the OpenTelemetry Collector to function properly with processors like k8sattributes and resourcedetection, it requires cluster-wide read access to Kubernetes resources. Depending on the configuration of the central collector, different RBAC settings may be needed. The central collector in our example uses the k8sattributes and resourcedetection processors to enrich traces with Kubernetes and OpenShift metadata, and these processors require read access to cluster resources.
Why These Permissions Are Needed
The k8sattributes processor enriches telemetry data by querying the Kubernetes API to add metadata such as:
Pod information: Pod name, UID, start time
Namespace details: Namespace name and labels
Deployment context: ReplicaSet and Deployment names
Node information: Node name where the pod is running
The resourcedetection processor detects the OpenShift environment by querying:
Infrastructure resources: Cluster name, platform type, region (from config.openshift.io/infrastructures)
Without these permissions, traces would lack critical context needed for debugging and analysis.
| Always check the latest documentation of the appropriate component to get the most up to date information. I prefer to use separate ClusterRoles for each processor to keep the permissions as granular as possible. However, that causes some overhead, so you might want to combine the permissions into a single ClusterRole. |
Create ClusterRole for Kubernetes Attributes
The following ClusterRole has been created:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8sattributes-otel-collector
rules:
  # Permissions for k8sattributes processor
  # Allows the collector to read pod, namespace, and replicaset information
  # to enrich traces with Kubernetes metadata
  - verbs:
      - get
      - watch
      - list
    apiGroups:
      - '*'
    resources:
      - pods
      - namespaces
      - replicasets
  # Permissions for resourcedetection processor (OpenShift)
  # Allows detection of OpenShift cluster information
  # such as cluster name, platform type, and region
  - verbs:
      - get
      - watch
      - list
    apiGroups:
      - config.openshift.io
    resources:
      - infrastructures
      - infrastructures/status
Key Points:
Read-Only Access: The collector only needs the get, watch, and list verbs (no write permissions)
Cluster-Wide Scope: A ClusterRole grants permissions across all namespaces, necessary for monitoring multi-tenant environments
Essential Resources:
pods: Source of trace context (which pod generated the span)
namespaces: Namespace metadata and labels for routing
replicasets: Determine the owning Deployment for better trace attribution
OpenShift Infrastructure: Access to config.openshift.io/infrastructures allows detection of cluster-level properties
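The granted permissions can be verified with oc auth can-i by impersonating the collector's ServiceAccount. All three checks should print "yes" once the ClusterRoleBinding from the next step is in place:

```shell
# Impersonate the collector's ServiceAccount and verify read access
oc auth can-i list pods --as=system:serviceaccount:tempostack:otel-collector
oc auth can-i watch namespaces --as=system:serviceaccount:tempostack:otel-collector
oc auth can-i get infrastructures.config.openshift.io --as=system:serviceaccount:tempostack:otel-collector
```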
Create ClusterRoleBinding
Our OpenTelemetry Collector will use the ServiceAccount otel-collector (in the namespace tempostack) to read the Kubernetes resources. Thus, we need to create a ClusterRoleBinding to grant the necessary permissions to the ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8sattributes-collector-tempo
subjects:
  - kind: ServiceAccount
    name: otel-collector (1)
    namespace: tempostack (2)
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: k8sattributes-otel-collector (3)
| 1 | The ServiceAccount that is granted the read permissions. In this example: otel-collector. |
| 2 | The namespace of the ServiceAccount. In this example: tempostack. |
| 3 | The reference to the ClusterRole. |
Security Note
The above permissions in the ClusterRole follow the principle of least privilege:
Only read operations are granted (no create, update, or delete)
Access is limited to specific resource types needed for metadata enrichment
The ServiceAccount otel-collector is dedicated to the collector and not shared with other applications
Step 3: Create ServiceAccount for Central OpenTelemetry Collector
The ServiceAccount used by the OpenTelemetry Collector. It is also the identity used to authenticate to the TempoStack instance. Thus, both bindings created earlier are required: the one that allows writing into TempoStack and the one that allows reading from Kubernetes.
| In this article we are installing the central Collector into the same namespace as the TempoStack instance. However, you might want to install it in a different namespace to keep the namespaces separated. Keep an eye on possible Network Policies that might be required to allow the communication between the namespaces. |
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector
  namespace: tempostack
Step 4: Deploy Central Collector
The central collector receives traces from local collectors, enriches them with Kubernetes metadata, and routes them to appropriate TempoStack tenants. For the sake of simplicity, I have taken snippets from the whole Configuration manifest. At the end of this section you will find the whole manifest.
Basic Configuration
The most basic settings of the OpenTelemetry Collector are the number of replicas, the ServiceAccount to use, and the deployment mode.
The Collector can be deployed in one of the following modes:
Deployment (default) - Creates a Deployment with the given number of replicas.
StatefulSet - Creates a StatefulSet with the given number of replicas. Useful for stateful workloads, for example when using the Collector’s File Storage Extension or Tail Sampling Processor.
DaemonSet - Creates a DaemonSet, which runs one Collector Pod per node. Useful for scraping telemetry data from every node, for example by using the Collector’s Filelog Receiver to read container logs.
Sidecar - Injects the Collector as a sidecar container into the pod. Useful for accessing log files inside a container, for example by using the Collector’s Filelog Receiver and a shared volume such as emptyDir.
In the examples in this series of articles we will use the modes: deployment and sidecar.
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: tempostack
spec:
  mode: deployment (1)
  replicas: 1 (2)
  serviceAccount: otel-collector (3)
  [...]
| 1 | The deployment mode to use. |
| 2 | The number of replicas to use. |
| 3 | The ServiceAccount to use, created in the previous step. |
Receivers
The Receivers are the components that receive the traces from the local collectors. Receivers accept data in a specified format and translate it into the Collector’s internal format. In our example we want to receive the traces from the local collectors. For our tests we are using the otlp Receiver, which collects traces, metrics, and logs using the OpenTelemetry Protocol (OTLP).
The easiest configuration is:
receivers:
  otlp: (1)
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317 (2)
      http:
        endpoint: 0.0.0.0:4318 (3)
| 1 | The name of the receiver. |
| 2 | The gRPC endpoint to receive traces on. |
| 3 | The HTTP endpoint to receive traces on. |
| Besides the otlp receiver, other receivers are available. Please check the OpenShift OTEL Receivers documentation for more details. |
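Once the collector is deployed (Step 4), the HTTP receiver can be smoke-tested from any pod in the cluster, for example with curl. An empty resourceSpans payload is enough to confirm that the endpoint accepts OTLP/HTTP requests; this is only a connectivity check, not a full trace test:

```shell
# Send an empty (but valid) OTLP/HTTP trace payload to the collector.
# A JSON response confirms the receiver is reachable and parsing requests.
curl -s -X POST \
  -H 'Content-Type: application/json' \
  -d '{"resourceSpans":[]}' \
  http://otel-collector.tempostack.svc.cluster.local:4318/v1/traces
```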
Processors
The Processors are the components that process the data after it is received and before it is exported. Processors are completely optional, but they are useful to transform, enrich, or filter traces.
| The order of processors matters. |
The example configuration is using the following processors:
k8sattributes: Can add Kubernetes metadata to the traces.
resourcedetection: Can detect OpenShift/K8s environment info.
memory_limiter: Periodically checks the Collector’s memory usage and pauses data processing when the soft memory limit is reached.
batch: Batches the traces for efficiency. This is a very important processor to improve the performance of the Collector.
| Additional processors are available. Please check the OpenShift OTEL Processors documentation for more details. |
| Some processors require additional ClusterRole configuration. |
# Processors - enrich and batch traces
processors:
  # Add Kubernetes metadata
  k8sattributes: {} (1)
  # Detect OpenShift/K8s environment info
  resourcedetection: (2)
    detectors:
      - openshift
    timeout: 2s
  # Memory protection
  memory_limiter: (3)
    check_interval: 1s
    limit_percentage: 75
    spike_limit_percentage: 15
  # Batch for efficiency
  batch: (4)
    send_batch_size: 10000
    timeout: 10s
| 1 | The k8sattributes processor to add Kubernetes metadata to the traces. |
| 2 | The resourcedetection processor to detect OpenShift/K8s environment info. |
| 3 | The memory_limiter processor to protect the Collector’s memory usage. |
| 4 | The batch processor to batch the traces for efficiency. |
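The empty k8sattributes: {} configuration relies on the processor's defaults. If you want to control exactly which attributes are extracted, the processor can be configured explicitly. The following is a sketch based on the upstream k8sattributes options; adjust the metadata list to your needs:

```yaml
k8sattributes:
  auth_type: serviceAccount
  extract:
    metadata:
      # Attributes added to each span
      - k8s.namespace.name
      - k8s.pod.name
      - k8s.pod.uid
      - k8s.pod.start_time
      - k8s.deployment.name
      - k8s.node.name
  pod_association:
    # How incoming telemetry is matched to a pod
    - sources:
        - from: resource_attribute
          name: k8s.pod.ip
    - sources:
        - from: connection
```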
Connectors
A Connector joins two pipelines together. It consumes data as an exporter at the end of one pipeline and emits data as a receiver at the start of another pipeline. It can consume and emit data of the same or different data type. It can generate and emit data to summarize the consumed data, or it can merely replicate or route data.
Several Connectors are available, for example:
Count Connector: Counts traces spans, trace span events, metrics, metric data points, and log records.
Routing Connector: Routes the traces to different pipelines.
Forward Connector: Merges two pipelines of the same type.
Spanmetrics Connector: Aggregates Request, Error, and Duration (R.E.D) OpenTelemetry metrics from span data.
The OpenShift OTEL Connectors documentation lists all available Connectors and their configuration options.
In our example we are using the routing Connector, which is able to route the traces to different pipelines based on the namespace. This helps to route the traces to the correct tenant, without the need to configure this in the local OpenTelemetry Collector. (In other words, the project cannot change this setting, because it is configured in the central OpenTelemetry Collector.)
In this example traces from the namespace "team-a" will be routed to the pipeline "tenantA", traces from the namespace "team-b" will be routed to the pipeline "tenantB", and so on.
# Connectors - route traces to different pipelines
connectors:
  routing/traces:
    default_pipelines: (1)
      - traces/Default
    error_mode: ignore (2)
    table:
      # Route team-a namespace to tenantA
      - statement: route() where attributes["k8s.namespace.name"] == "team-a" (3)
        pipelines:
          - traces/tenantA
      # Route team-b namespace to tenantB
      - statement: route() where attributes["k8s.namespace.name"] == "team-b"
        pipelines:
          - traces/tenantB
      # Route team-c namespace to tenantC
      - statement: route() where attributes["k8s.namespace.name"] == "team-c"
        pipelines:
          - traces/tenantC
| 1 | Destination pipelines for telemetry data for which no routing condition is satisfied. |
| 2 | Error-handling mode: defines how the connector handles errors returned by routing statements. With ignore, errors are ignored and evaluation continues. |
| 3 | Route traces from the namespace team-a to the pipeline traces/tenantA. |
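Adding a new tenant later is just a matter of extending the table. For example, to route a team-d namespace to a tenantD pipeline, append one more entry (the matching traces/tenantD pipeline and otlp/tenantD exporter must also be defined in the service and exporters sections):

```yaml
# Route team-d namespace to tenantD
- statement: route() where attributes["k8s.namespace.name"] == "team-d"
  pipelines:
    - traces/tenantD
```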
Exporters
The Exporters are the components that send the traces to a destination. Exporters translate the Collector’s internal data format into the format the destination expects. In our example we want to export the traces to the TempoStack instance. The X-Scope-OrgID header identifies the tenant and is sent to the TempoStack instance. Authentication is done using the ServiceAccount token.
Many different Exporters are available. The OpenShift OTEL Exporters documentation lists all available Exporters and their configuration options.
For our tests we are using the otlp Exporter, which will export using the OpenTelemetry Protocol (OTLP):
exporters:
  # Tenant A exporter
  otlp/tenantA: (1)
    endpoint: tempo-simplest-gateway:8090 (2)
    auth:
      authenticator: bearertokenauth
    headers:
      X-Scope-OrgID: tenantA (3)
    tls:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
      insecure_skip_verify: true (4)
      server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local (5)
| 1 | Protocol/name of the exporter. |
| 2 | The endpoint to export to. Here, the endpoint is the address of the TempoStack gateway. |
| 3 | The tenant ID, sent via the X-Scope-OrgID header. |
| 4 | Whether to skip certificate verification. I did not bother with certificates in this example. |
| 5 | The server name override to use. This was just for testing purposes and can be omitted. |
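For production you will usually want to drop insecure_skip_verify and validate the gateway certificate instead. Assuming the gateway serves a certificate signed by the OpenShift service CA (as in this setup), a stricter variant of the exporter could look like this:

```yaml
otlp/tenantA:
  # Use the fully qualified service name so it matches the certificate
  endpoint: tempo-simplest-gateway.tempostack.svc.cluster.local:8090
  auth:
    authenticator: bearertokenauth
  headers:
    X-Scope-OrgID: tenantA
  tls:
    # Verify the gateway certificate against the service CA
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
```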
Extensions
The Extensions extend the Collector capabilities. In our example we are using the bearertokenauth Extension, which is used to authenticate to the TempoStack instance.
# Extensions - authentication
extensions:
  bearertokenauth:
    filename: /var/run/secrets/kubernetes.io/serviceaccount/token
Service:
Components are enabled by adding them into a Pipeline. If a component is not configured in a pipeline, it is not enabled.
In this example we are using the following pipelines (snippets):
traces/in: The incoming traces pipeline.
traces/tenantA: The tenant A pipeline.
The traces/in pipeline is the incoming traces pipeline. It is used to receive the traces from the local collectors. It leverages the otlp Receiver to receive the traces. It uses the resourcedetection, k8sattributes, memory_limiter, and batch processors to process the traces. And finally, it uses the routing/traces Connector to route the traces.
The routing/traces Connector is used to route the traces to the correct tenant based on the namespace.
The traces/tenantA Pipeline will receive the traces from routing/traces and export the traces to otlp/tenantA which then sends everything to TempoStack to store the traces using the header X-Scope-OrgID: tenantA.
# Service pipelines
service:
  extensions:
    - bearertokenauth
  pipelines:
    # Incoming traces pipeline
    traces/in: (1)
      receivers: (2)
        - otlp
      processors: (3)
        - resourcedetection
        - k8sattributes
        - memory_limiter
        - batch
      exporters: (4)
        - routing/traces
    # Tenant A pipeline
    traces/tenantA: (5)
      receivers: (6)
        - routing/traces
      exporters: (7)
        - otlp/tenantA
| 1 | The incoming traces pipeline. It takes the traces from the otlp receiver. |
| 2 | The receivers to use. |
| 3 | The processors to use. |
| 4 | The exporters to use. Here the data is exported to the routing connector. |
| 5 | The tenant A pipeline. |
| 6 | The receivers to use. |
| 7 | The exporters to use. Here the exporter sends the data to TempoStack. |
| The Operator automatically creates Kubernetes Services for the configured receivers. The Central Collector will be accessible at otel-collector.tempostack.svc.cluster.local:4317 (gRPC) and :4318 (HTTP) for local collectors to send traces. |
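After deployment, you can confirm the generated Services and their ports:

```shell
# List the Services created for the collector (names may vary slightly)
oc get svc -n tempostack | grep otel

# Check that the OTLP ports (4317/4318) are exposed
oc get svc otel-collector -n tempostack -o jsonpath='{.spec.ports[*].port}'
```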
Data Flow Visualization
To better understand how traces flow through the Central Collector, the following Mermaid diagram visualizes the complete journey from application to storage using team-a as an example:
---
title: "Central Collector Data Flow (Wrapped Layout)"
config:
theme: 'dark'
---
flowchart LR
%% --------------------------
%% Column 1: Local Collector
%% --------------------------
subgraph local["(team-a namespace)"]
app["Application<br/>(team-a)"]
end
%% --------------------------
%% Column 2: Central Collector
%% --------------------------
subgraph central["Central Collector (tempostack namespace)"]
%% We force this specific column to stack vertically (Top-Bottom)
direction TB
%% --- ROW 1: Receiver + Processors ---
subgraph row1[" "]
direction LR
receiver["OTLP Receiver<br/>Port: 4317/4318"]
subgraph pipeline_in["traces/in Processors"]
proc1["resourcedetection<br/>(Detect OpenShift)"]
proc2["k8sattributes<br/>(Add K8s metadata)"]
proc3["memory_limiter<br/>(Protect memory)"]
proc4["batch<br/>(Batch traces)"]
end
end
%% --- ROW 2: Connector + Tenant Pipeline ---
subgraph row2[" "]
direction LR
exporter_routing["Exporter: to connector"]
connector["Export to routing/traces Connector<br/>(Route by namespace)"]
subgraph pipeline_tenant["Pipeline: traces/tenantA"]
receiver_tenant["Receiver:<br/>routing/traces"]
exporter_tenant["Exporter:<br/>otlp/tenantA"]
end
end
end
%% --------------------------
%% Column 3: TempoStack
%% --------------------------
subgraph tempo["TempoStack"]
gateway["Tempo Gateway<br/>(X-Scope-OrgID: tenantA)"]
storage["S3 Storage<br/>(tenantA traces)"]
end
%% --------------------------
%% Connections
%% --------------------------
app -->|"OTLP traces"| receiver
receiver --> proc1 --> proc2 --> proc3 --> proc4
%% The Wrap: Connect end of Row 1 to start of Row 2
proc4 --> exporter_routing
exporter_routing --> connector
connector -->|"Route by namespace=team-a<br/>→ tenantA"| receiver_tenant
receiver_tenant --> exporter_tenant
exporter_tenant -->|"OTLP + bearer token<br/>Header: X-Scope-OrgID"| gateway
gateway --> storage
%% --------------------------
%% Styles
%% --------------------------
classDef appStyle fill:#2f652a,stroke:#2f652a,stroke-width:2px
classDef receiverStyle fill:#425cc6,stroke:#425cc6,stroke-width:2px
classDef processorStyle fill:#4a90e2,stroke:#4a90e2,stroke-width:2px
classDef connectorStyle fill:#906403,stroke:#906403,stroke-width:2px
classDef exporterStyle fill:#7b4397,stroke:#7b4397,stroke-width:2px
classDef tempoStyle fill:#d35400,stroke:#d35400,stroke-width:2px
class app appStyle
class receiver,receiver_tenant receiverStyle
class proc1,proc2,proc3,proc4 processorStyle
class connector connectorStyle
class exporter_tenant exporterStyle
class exporter_routing exporterStyle
class gateway,storage tempoStyle
%% Hide the structural boxes for the rows so they look seamless
style row1 fill:none,stroke:none
style row2 fill:none,stroke:none
Key Points in the Flow:
Application in team-a namespace sends traces to local collector
Local Collector forwards to Central Collector’s OTLP receiver
Pipeline traces/in processes the traces sequentially:
Detects OpenShift environment info
Adds Kubernetes metadata (namespace, pod, labels)
Applies memory limits
Batches traces for efficiency
Routing Connector examines the namespace attribute and routes to the correct tenant pipeline
Pipeline traces/tenantA receives from connector and exports to TempoStack
Exporter adds authentication (bearer token) and tenant ID header (X-Scope-OrgID: tenantA)
TempoStack receives and stores traces in the appropriate tenant storage
This architecture ensures complete isolation between tenants while maintaining a single, centralized collection point.
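The routing decision itself is simple enough to simulate in a few lines of shell. This is an illustration only (the real connector evaluates OTTL statements against span attributes), but it captures the table-plus-default semantics:

```shell
# Simulate the routing/traces connector: map a k8s.namespace.name
# value to a tenant pipeline, falling back to the default pipeline.
route() {
  case "$1" in
    team-a) echo "traces/tenantA" ;;
    team-b) echo "traces/tenantB" ;;
    team-c) echo "traces/tenantC" ;;
    *)      echo "traces/Default" ;;
  esac
}

route team-a    # traces/tenantA
route unknown   # traces/Default
```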
Complete OpenTelemetry Collector Manifest
Let’s put everything together in the complete OpenTelemetry Collector Manifest. The following defines the Central OpenTelemetry Collector:
mode: The deployment mode to use.
replicas: The number of replicas to use.
serviceAccount: The ServiceAccount to use, created in the previous step.
config: The configuration of the OpenTelemetry Collector.
receivers: The receivers to use.
processors: The processors to use.
connectors: The connectors to use.
exporters: The exporters to use.
extensions: The extensions to use.
service: The service to use.
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: tempostack
spec:
  mode: deployment
  replicas: 1
  serviceAccount: otel-collector
  config:
    # Receivers - accept traces from local collectors
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
    # Processors - enrich and batch traces
    processors:
      # Add Kubernetes metadata
      k8sattributes: {}
      # Detect OpenShift/K8s environment info
      resourcedetection:
        detectors:
          - openshift
        timeout: 2s
      # Memory protection
      memory_limiter:
        check_interval: 1s
        limit_percentage: 75
        spike_limit_percentage: 15
      # Batch for efficiency
      batch:
        send_batch_size: 10000
        timeout: 10s
    # Connectors - route traces to different pipelines
    connectors:
      routing/traces:
        default_pipelines:
          - traces/Default
        error_mode: ignore
        table:
          # Route team-a namespace to tenantA
          - statement: route() where attributes["k8s.namespace.name"] == "team-a"
            pipelines:
              - traces/tenantA
          # Route team-b namespace to tenantB
          - statement: route() where attributes["k8s.namespace.name"] == "team-b"
            pipelines:
              - traces/tenantB
    # Exporters - send to TempoStack
    exporters:
      # Default tenant exporter
      otlp/Default:
        endpoint: tempo-simplest-gateway:8090
        auth:
          authenticator: bearertokenauth
        headers:
          X-Scope-OrgID: dev
        tls:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          insecure_skip_verify: true
          server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local
      # Tenant A exporter
      otlp/tenantA:
        endpoint: tempo-simplest-gateway:8090
        auth:
          authenticator: bearertokenauth
        headers:
          X-Scope-OrgID: tenantA
        tls:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          insecure_skip_verify: true
          server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local
      # Tenant B exporter
      otlp/tenantB:
        endpoint: tempo-simplest-gateway:8090
        auth:
          authenticator: bearertokenauth
        headers:
          X-Scope-OrgID: tenantB
        tls:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          insecure_skip_verify: true
          server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local
    # Extensions - authentication
    extensions:
      bearertokenauth:
        filename: /var/run/secrets/kubernetes.io/serviceaccount/token
    # Service pipelines
    service:
      extensions:
        - bearertokenauth
      pipelines:
        # Incoming traces pipeline
        traces/in:
          receivers:
            - otlp
          processors:
            - resourcedetection
            - k8sattributes
            - memory_limiter
            - batch
          exporters:
            - routing/traces
        # Default tenant pipeline
        traces/Default:
          receivers:
            - routing/traces
          exporters:
            - otlp/Default
        # Tenant A pipeline
        traces/tenantA:
          receivers:
            - routing/traces
          exporters:
            - otlp/tenantA
        # Tenant B pipeline
        traces/tenantB:
          receivers:
            - routing/traces
          exporters:
            - otlp/tenantB
GitOps Deployment
While the above is good for quick tests, it always makes sense to have a proper GitOps deployment. I have created a Chart and GitOps configuration that will:
Deploy the OTEL Operator
Configure the Central Collector instance
The following sources will be used:
Helm Repository - to fetch the Helm Chart for the OTEL Operator including required Sub-Charts.
Setup OTEL Operator - To deploy and configure the OpenTelemetry Operator.
Feel free to clone or use whatever you need.
The following Sub-Charts are used:
helper-operator (version ~1.0.18) - Installs the OpenTelemetry Operator
helper-status-checker (version ~4.0.0) - Verifies the status of the OpenTelemetry Operator
opentelemetry-collector (version ~1.0.0) - Creates the Central OpenTelemetry Collector instance
tpl (version ~1.0.0) - Template Library
The following Argo CD Application will deploy the OTEL Operator:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: setup-otel-operator
  namespace: openshift-gitops
spec:
  destination:
    name: in-cluster (1)
    namespace: openshift-opentelemetry-operator (2)
  info:
    - name: Description
      value: ApplicationSet that Deploys on Management Cluster Configuration (using Git Generator)
  project: in-cluster (3)
  source:
    path: clusters/management-cluster/setup-otel-operator (4)
    repoURL: 'https://github.com/tjungbauer/openshift-clusterconfig-gitops' (5)
    targetRevision: main
  syncPolicy:
    retry:
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
      limit: 5
| 1 | Target cluster; here the local cluster. |
| 2 | Namespace on the target cluster where the Operator will be installed. |
| 3 | Project of the target cluster |
| 4 | Path to the Git repository |
| 5 | URL to the Git repository |
This will create the Argo CD Application that can be synchronized with the cluster:

Complete values file
To see the whole file expand the code:
---
otel: &channel-otel stable
otel-namespace: &otel-namespace openshift-opentelemetry-operator

######################################
# SUBCHART: helper-operator
# Operators that shall be installed.
######################################
helper-operator:
  operators:
    opentelemetry-product:
      enabled: true
      namespace:
        name: *otel-namespace
        create: true
      subscription:
        channel: *channel-otel
        approval: Automatic
        operatorName: opentelemetry-product
        source: redhat-operators
        sourceNamespace: openshift-marketplace
      operatorgroup:
        create: true
        notownnamespace: true

########################################
# SUBCHART: helper-status-checker
# Verify the status of a given operator.
########################################
helper-status-checker:
  enabled: true
  approver: false
  checks:
    - operatorName: opentelemetry-product
      namespace:
        name: *otel-namespace
      syncwave: 1
      serviceAccount:
        name: "status-checker-otel"

rh-build-of-opentelemetry:
  #########################################################################################
  # namespace ... disabled here, since we deployed it via Tempo already
  #########################################################################################
  namespace:
    name: tempostack
    create: false

  #########################################################################################
  # OPENTELEMETRY COLLECTOR - Production Configuration
  #########################################################################################
  collector:
    enabled: true
    name: otel
    mode: deployment
    replicas: 1
    serviceAccount: otel-collector
    managementState: managed
    resources: {}
    tolerations: []
    config:
      connectors:
        routing/traces:
          default_pipelines:
            - traces/Default
          error_mode: ignore
          table:
            - pipelines:
                - traces/tenantA
              statement: 'route() where attributes["k8s.namespace.name"] == "team-a"'
            - pipelines:
                - traces/tenantB
              statement: 'route() where attributes["k8s.namespace.name"] == "team-b"'
            - pipelines:
                - traces/tenantC
              statement: 'route() where attributes["k8s.namespace.name"] == "team-c"'
            - pipelines:
                - traces/tenantD
              statement: 'route() where attributes["k8s.namespace.name"] == "team-d"'
            - pipelines:
                - traces/tenantX
              statement: 'route() where attributes["k8s.namespace.name"] == "mockbin-1"'
      receivers:
        # OTLP receivers for traces, metrics, and logs
        otlp:
          protocols:
            grpc:
              endpoint: 0.0.0.0:4317
            http:
              endpoint: 0.0.0.0:4318
        # Jaeger receiver (if migrating from Jaeger)
        jaeger:
          protocols:
            grpc:
              endpoint: 0.0.0.0:14250
            thrift_http:
              endpoint: 0.0.0.0:14268
      processors:
        batch:
          send_batch_size: 10000
          timeout: 10s
        k8sattributes: {}
        memory_limiter:
          check_interval: 1s
          limit_percentage: 75
          spike_limit_percentage: 15
        resourcedetection:
          detectors:
            - openshift
          timeout: 2s
      exporters:
        otlp/tenantX:
          auth:
            authenticator: bearertokenauth
          endpoint: 'tempo-simplest-gateway:8090'
          headers:
            X-Scope-OrgID: tenantX
          tls:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
            insecure: true
            insecure_skip_verify: true
            server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local
        otlp:
          auth:
            authenticator: bearertokenauth
          endpoint: 'tempo-simplest-gateway:8090'
          headers:
            X-Scope-OrgID: prod
          tls:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
            insecure: true
            insecure_skip_verify: true
            server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local
        otlp/tenantA:
          auth:
            authenticator: bearertokenauth
          endpoint: 'tempo-simplest-gateway:8090'
          headers:
            X-Scope-OrgID: tenantA
          tls:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
            insecure: true
            insecure_skip_verify: true
            server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local
        otlp/tenantB:
          auth:
            authenticator: bearertokenauth
          endpoint: 'tempo-simplest-gateway:8090'
          headers:
            X-Scope-OrgID: tenantB
          tls:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
            insecure: true
            insecure_skip_verify: true
            server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local
        otlp/Default:
          auth:
            authenticator: bearertokenauth
          endpoint: 'tempo-simplest-gateway:8090'
          headers:
            X-Scope-OrgID: dev
          tls:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
            insecure: true
            insecure_skip_verify: true
            server_name_override: tempo-simplest-gateway.tempostack.svc.cluster.local
      extensions:
        bearertokenauth:
          filename: /var/run/secrets/kubernetes.io/serviceaccount/token
      service:
        extensions:
          - bearertokenauth
        pipelines:
          traces/Default:
            exporters:
              - otlp/Default
            receivers:
              - routing/traces
          traces/in:
            exporters:
              - routing/traces
            processors:
              - resourcedetection
              - k8sattributes
              - memory_limiter
              - batch
            receivers:
              - otlp
          traces/tenantA:
            exporters:
              - otlp/tenantA
            receivers:
              - routing/traces
          traces/tenantB:
            exporters:
              - otlp/tenantB
            receivers:
              - routing/traces
The Central OpenTelemetry Collector
With this deployment, a single Pod (because we have set the replica count to 1) is running in the namespace tempostack, with a name starting with otel-collector.
oc get pods -n tempostack | grep otel-col
otel-collector-75f8794dc6-rbq26   1/1     Running   0          52m
Stay Tuned
The next article will cover deploying an example application (Mockbin) and configuring Local OpenTelemetry Collectors in application namespaces to send traces to this Central Collector. We’ll also demonstrate how the namespace-based routing automatically directs traces to the correct TempoStack tenants.
Copyright © 2020 - 2025 Toni Schmidbauer & Thomas Jungbauer
Thomas Jungbauer
