Customize the OpenTelemetry Collector Configuration
20 minutes
We deployed the Splunk Distribution of the OpenTelemetry Collector in our K8s cluster
using the default configuration. In this section, we’ll walk through several examples
showing how to customize the collector config.
Get the Collector Configuration
Before we customize the collector config, how do we determine what the current configuration
looks like?
In a Kubernetes environment, the collector configuration is stored in ConfigMaps.
We can see which config maps exist in our cluster with the following command:
kubectl get cm -l app=splunk-otel-collector
NAME                                              DATA   AGE
splunk-otel-collector-otel-k8s-cluster-receiver   1      3h37m
splunk-otel-collector-otel-agent                  1      3h37m
Why are there two config maps? The Helm chart deploys the collector in two modes: an agent that runs on each node, and a cluster receiver that gathers cluster-level metrics and events. Each mode has its own configuration, and therefore its own config map.
We can then view the config map of the collector agent as follows:
kubectl describe cm splunk-otel-collector-otel-agent
Name: splunk-otel-collector-otel-agent
Namespace: default
Labels: app=splunk-otel-collector
app.kubernetes.io/instance=splunk-otel-collector
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=splunk-otel-collector
app.kubernetes.io/version=0.136.1
chart=splunk-otel-collector-0.136.0
helm.sh/chart=splunk-otel-collector-0.136.0
release=splunk-otel-collector
Annotations: meta.helm.sh/release-name: splunk-otel-collector
meta.helm.sh/release-namespace: default
Data
====
relay:
----
exporters:
  otlphttp:
    auth:
      authenticator: headers_setter
    metrics_endpoint: https://ingest.us1.signalfx.com/v2/datapoint/otlp
    traces_endpoint: https://ingest.us1.signalfx.com/v2/trace/otlp
(followed by the rest of the collector config in yaml format)
How to Update the Collector Configuration in K8s
In our earlier example running the collector on a Linux instance,
the collector configuration was available in the /etc/otel/collector/agent_config.yaml file. If we
needed to make changes to the collector config in that case, we’d simply edit this file,
save the changes, and then restart the collector.
In K8s, things work a bit differently. Instead of modifying agent_config.yaml directly, we customize
the collector configuration by making changes to the values.yaml file used to deploy
the Helm chart.
The values.yaml file in GitHub
describes the customization options that are available to us.
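To get oriented, here is a trimmed sketch of what a customized values.yaml might contain. The specific values shown (cluster name, realm, token) are placeholders, not values from this walkthrough; check the chart's values.yaml for the authoritative key names.

```yaml
# Hypothetical values.yaml excerpt for the splunk-otel-collector Helm chart.
# Replace the placeholder cluster name, realm, and token with your own.
clusterName: my-cluster
splunkObservability:
  realm: us1
  accessToken: "<your-access-token>"
```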
Let’s look at an example.
Add Infrastructure Events Monitoring
For our first example, let’s enable infrastructure events monitoring for our K8s cluster.
This will allow us to see Kubernetes events as part of the Events Feed section in charts.
The cluster receiver will be configured with a Smart Agent receiver using the kubernetes-events
monitor to send custom events. See Collect Kubernetes events
for further details.
This is done by adding the following line to the values.yaml file:
Hint: steps to open and save in vi are in previous steps.
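As a sketch of what this change looks like (verify the key name against the chart version you have deployed), the setting that enables infrastructure events monitoring is:

```yaml
# Enables Kubernetes events collection via the cluster receiver.
# Key name is per the splunk-otel-collector Helm chart; confirm it
# against the chart's values.yaml for your deployed version.
splunkObservability:
  infrastructureMonitoringEventsEnabled: true
```

After saving values.yaml, apply the change with `helm upgrade splunk-otel-collector -f values.yaml splunk-otel-collector-chart/splunk-otel-collector`, which produces output like the following: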
Release "splunk-otel-collector" has been upgraded. Happy Helming!
NAME: splunk-otel-collector
LAST DEPLOYED: Fri Dec 20 01:17:03 2024
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
Splunk OpenTelemetry Collector is installed and configured to send data to Splunk Observability realm us1.
We can then view the config map and ensure the changes were applied:
kubectl describe cm splunk-otel-collector-otel-k8s-cluster-receiver
Ensure smartagent/kubernetes-events is now included in the cluster receiver config:
smartagent/kubernetes-events:
  alwaysClusterReporter: true
  type: kubernetes-events
  whitelistedEvents:
  - involvedObjectKind: Pod
    reason: Created
  - involvedObjectKind: Pod
    reason: Unhealthy
  - involvedObjectKind: Pod
    reason: Failed
  - involvedObjectKind: Job
    reason: FailedCreate
Note that we specified the cluster receiver config map since that’s
where these particular changes get applied.
Add the Debug Exporter
Suppose we want to see the traces and logs that are sent to the collector, so we can
inspect them before sending them to Splunk. We can use the debug exporter for this purpose, which
can be helpful for troubleshooting OpenTelemetry-related issues.
Let’s add the debug exporter to the bottom of the values.yaml file as follows:
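The addition might look like the following sketch. The exact pipeline layout and default exporter names depend on the chart version, so treat this fragment as an assumption to verify rather than a definitive configuration:

```yaml
# Hypothetical values.yaml fragment: defines the debug exporter and adds
# it to the agent's traces pipeline alongside the existing exporters.
# Verify the default exporter list against your deployed chart version.
agent:
  config:
    exporters:
      debug:
        verbosity: detailed
    service:
      pipelines:
        traces:
          exporters:
            - debug
            - otlphttp
            - signalfx
```

Run the same `helm upgrade` command as before to apply the change: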
Release "splunk-otel-collector" has been upgraded. Happy Helming!
NAME: splunk-otel-collector
LAST DEPLOYED: Fri Dec 20 01:32:03 2024
NAMESPACE: default
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
Splunk OpenTelemetry Collector is installed and configured to send data to Splunk Observability realm us1.
Exercise the application a few times using curl, then tail the agent collector logs with the
following command:
kubectl logs -l component=otel-collector-agent -f
You should see traces written to the agent collector logs such as the following: