Practice setting up the OpenTelemetry Collector configuration from scratch and go through several advanced configuration scenarios.
Subsections of OpenTelemetry Collector Workshops
Making Your Observability Cloud Native With OpenTelemetry
1 hour
Author
Robert Castley
Abstract
Organizations getting started with OpenTelemetry may begin by sending data directly to an observability backend. While this works well for initial testing, using the OpenTelemetry collector as part of your observability architecture provides numerous benefits and is recommended for any production deployment.
In this workshop, we will be focusing on using the OpenTelemetry collector and starting with the fundamentals of configuring the receivers, processors, and exporters ready to use with Splunk Observability Cloud. The journey will take attendees from novices to being able to start adding custom components to help solve for their business observability needs for their distributed platform.
Ninja Sections
Throughout the workshop there will be expandable Ninja Sections. These are more hands-on and go into further technical detail that you can explore during the workshop or in your own time.
Please note that the content in these sections may go out of date due to the frequent development being made to the OpenTelemetry project. Links will be provided in the event that details are out of sync; please let us know if you spot something that needs updating.
Ninja: Test Me!
By completing this workshop you will officially be an OpenTelemetry Collector Ninja!
Target Audience
This interactive workshop is for developers and system administrators who are interested in learning more about architecture and deployment of the OpenTelemetry Collector.
Prerequisites
Attendees should have a basic understanding of data collection
Command line and vim/vi experience.
An instance/host/VM running Ubuntu 20.04 LTS or 22.04 LTS.
Minimum requirements are an AWS/EC2 t2.micro (1 CPU, 1GB RAM, 8GB Storage)
Learning Objectives
By the end of this talk, attendees will be able to:
Understand the components of OpenTelemetry
Use receivers, processors, and exporters to collect and analyze data
Identify the benefits of using OpenTelemetry
Build a custom component to solve their business needs
Download the OpenTelemetry Collector Contrib distribution
The first step in installing the OpenTelemetry Collector is downloading it. For our lab, we will use the wget command to download the .deb package from the OpenTelemetry GitHub repository.
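A sketch of the download and install commands, assuming the v0.111.0 release asset shown in the output below; the URL here is an assumption based on the standard naming convention of the opentelemetry-collector-releases repository:

wget https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.111.0/otelcol-contrib_0.111.0_linux_amd64.deb
sudo dpkg -i otelcol-contrib_0.111.0_linux_amd64.deb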
Selecting previously unselected package otelcol-contrib.
(Reading database ... 89232 files and directories currently installed.)
Preparing to unpack otelcol-contrib_0.111.0_linux_amd64.deb ...
Unpacking otelcol-contrib (0.111.0) ...
Setting up otelcol-contrib (0.111.0) ...
Created symlink /etc/systemd/system/multi-user.target.wants/otelcol-contrib.service → /lib/systemd/system/otelcol-contrib.service.
Subsections of 1. Installation
Installing OpenTelemetry Collector Contrib
Confirm the Collector is running
The collector should now be running. We will verify this as root using the systemctl command. To exit the status output, just press q.
sudo systemctl status otelcol-contrib
● otelcol-contrib.service - OpenTelemetry Collector Contrib
Loaded: loaded (/lib/systemd/system/otelcol-contrib.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2024-10-07 10:27:49 BST; 52s ago
Main PID: 17113 (otelcol-contrib)
Tasks: 13 (limit: 19238)
Memory: 34.8M
CPU: 155ms
CGroup: /system.slice/otelcol-contrib.service
└─17113 /usr/bin/otelcol-contrib --config=/etc/otelcol-contrib/config.yaml
Oct 07 10:28:36 petclinic-rum-testing otelcol-contrib[17113]: Descriptor:
Oct 07 10:28:36 petclinic-rum-testing otelcol-contrib[17113]: -> Name: up
Oct 07 10:28:36 petclinic-rum-testing otelcol-contrib[17113]: -> Description: The scraping was successful
Oct 07 10:28:36 petclinic-rum-testing otelcol-contrib[17113]: -> Unit:
Oct 07 10:28:36 petclinic-rum-testing otelcol-contrib[17113]: -> DataType: Gauge
Oct 07 10:28:36 petclinic-rum-testing otelcol-contrib[17113]: NumberDataPoints #0
Oct 07 10:28:36 petclinic-rum-testing otelcol-contrib[17113]: StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Oct 07 10:28:36 petclinic-rum-testing otelcol-contrib[17113]: Timestamp: 2024-10-07 09:28:36.942 +0000 UTC
Oct 07 10:28:36 petclinic-rum-testing otelcol-contrib[17113]: Value: 1.000000
Oct 07 10:28:36 petclinic-rum-testing otelcol-contrib[17113]: {"kind": "exporter", "data_type": "metrics", "name": "debug"}
Because we will be making multiple configuration file changes, setting environment variables and restarting the collector, we need to stop the collector service and disable it from starting on boot.
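The following systemd commands stop the service and prevent it from starting at boot:

sudo systemctl stop otelcol-contrib
sudo systemctl disable otelcol-contrib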
An alternative approach would be to use the golang tool chain to build the binary locally by doing:
go install go.opentelemetry.io/collector/cmd/builder@v0.80.0
mv $(go env GOPATH)/bin/builder /usr/bin/ocb
(Optional) Docker
Why build your own collector?
The default distributions of the collector (core and contrib) either contain too much or too little for a given deployment.
It is also not advised to run the contrib collector in your production environments due to the number of components installed, most of which are more than likely not needed by your deployment.
Benefits of building your own collector?
Creating your own collector binary (commonly referred to as a distribution) means you build only what you need.
The benefits of this are:
Smaller sized binaries
Can use existing go scanners for vulnerabilities
Include internal components that can tie in with your organization
Considerations for building your collector?
Now, this would not be a 🥷 Ninja zone if it didn’t come with some drawbacks:
Go experience is recommended, if not required
No Splunk support
Responsibility for distribution and lifecycle management
It is important to note that while the project is working towards stability, this does not mean changes will not break your workflow. The team at Splunk provides increased support and a higher level of stability, offering a curated experience that helps you with your deployment needs.
The Ninja Zone
Once you have all the required tools installed to get started, you will need to create a new file named otelcol-builder.yaml and we will follow this directory structure:
.
└── otelcol-builder.yaml
Once we have the file created, we need to add a list of components for it to install with some additional metadata.
For this example, we are going to create a builder manifest that will install only the components we need for the introduction config:
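A sketch of what such a builder manifest might look like, covering the extensions, receivers, processors, and exporter used in the introduction config. The module paths and the v0.80.0 versions below are assumptions that match the builder version installed above; adjust them to your environment:

dist:
  name: otelcol-custom
  description: "Custom collector for the introduction config"
  output_path: ./dist
extensions:
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/extension/healthcheckextension v0.80.0
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/extension/pprofextension v0.80.0
  - gomod: go.opentelemetry.io/collector/extension/zpagesextension v0.80.0
receivers:
  - gomod: go.opentelemetry.io/collector/receiver/otlpreceiver v0.80.0
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/receiver/opencensusreceiver v0.80.0
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/receiver/prometheusreceiver v0.80.0
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/receiver/jaegerreceiver v0.80.0
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/receiver/zipkinreceiver v0.80.0
processors:
  - gomod: go.opentelemetry.io/collector/processor/batchprocessor v0.80.0
exporters:
  - gomod: go.opentelemetry.io/collector/exporter/loggingexporter v0.80.0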
OpenTelemetry is configured through YAML files. These files have default configurations that we can modify to meet our needs. Let’s look at the default configuration that is supplied:
# To limit exposure to denial of service attacks, change the host in endpoints below from 0.0.0.0 to a specific network interface.
# See https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks
extensions:
  health_check:
  pprof:
    endpoint: 0.0.0.0:1777
  zpages:
    endpoint: 0.0.0.0:55679

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  opencensus:
    endpoint: 0.0.0.0:55678
  # Collect own metrics
  prometheus:
    config:
      scrape_configs:
        - job_name: 'otel-collector'
          scrape_interval: 10s
          static_configs:
            - targets: ['0.0.0.0:8888']
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
      thrift_binary:
        endpoint: 0.0.0.0:6832
      thrift_compact:
        endpoint: 0.0.0.0:6831
      thrift_http:
        endpoint: 0.0.0.0:14268
  zipkin:
    endpoint: 0.0.0.0:9411

processors:
  batch:

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp, opencensus, jaeger, zipkin]
      processors: [batch]
      exporters: [debug]
    metrics:
      receivers: [otlp, opencensus, prometheus]
      processors: [batch]
      exporters: [debug]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
  extensions: [health_check, pprof, zpages]
Congratulations! You have successfully downloaded and installed the OpenTelemetry Collector. You are well on your way to becoming an OTel Ninja. But first let’s walk through configuration files and different distributions of the OpenTelemetry Collector.
Note
Splunk does provide its own, fully supported, distribution of the OpenTelemetry Collector. This distribution is available to install from the Splunk GitHub Repository or via a wizard in Splunk Observability Cloud that will build out a simple installation script to copy and paste. This distribution includes many additional features and enhancements that are not available in the OpenTelemetry Collector Contrib distribution.
The Splunk Distribution of the OpenTelemetry Collector is production-tested; it is in use by the majority of customers in their production environments.
Customers that use our distribution can receive direct help from official Splunk support within SLAs.
Customers can use or migrate to the Splunk Distribution of the OpenTelemetry Collector without worrying about future breaking changes to its core configuration experience for metrics and traces collection (OpenTelemetry logs collection configuration is in beta). There may be breaking changes to the Collector’s metrics.
We will now walk through each section of the configuration file and modify it to send host metrics to Splunk Observability Cloud.
OpenTelemetry Collector Extensions
Now that we have the OpenTelemetry Collector installed, let’s take a look at extensions for the OpenTelemetry Collector. Extensions are optional and available primarily for tasks that do not involve processing telemetry data. Examples of extensions include health monitoring, service discovery, and data forwarding.
Extensions are configured in the same config.yaml file that we referenced in the installation step. Let’s edit the config.yaml file and configure the extensions. Note that the pprof and zpages extensions are already configured in the default config.yaml file. For the purpose of this workshop, we will only be updating the health_check extension to expose its port on all network interfaces, which allows us to check the health of the collector.
This extension enables an HTTP URL that can be probed to check the status of the OpenTelemetry Collector. This extension can be used as a liveness and/or readiness probe on Kubernetes. To learn more about the curl command, check out the curl man page.
Open a new terminal session and SSH into your instance to run the following command:
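For example, assuming the health_check extension is listening on its default port of 13133 (as configured later in this module), the probe is a simple HTTP GET:

curl http://localhost:13133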
The Performance Profiler extension enables the Go net/http/pprof endpoint. This is typically used by developers to collect performance profiles and investigate issues with the service. We will not be covering this in this workshop.
OpenTelemetry Collector Extensions
zPages
zPages are an in-process alternative to external exporters. When included, they collect and aggregate tracing and metrics information in the background; this data is served on web pages when requested. zPages are an extremely useful diagnostic feature to ensure the collector is running as expected.
ServiceZ gives an overview of the collector services and quick access to the pipelinez, extensionz, and featurez zPages. The page also provides build and runtime information.
PipelineZ provides insights into the pipelines running in the collector. You can find information on the pipeline type, whether data is mutated, and the receivers, processors and exporters that are used for each pipeline.
Ninja: Improve data durability with storage extension
For this, we will need to validate that our distribution has the file_storage extension installed. This can be done by running the command otelcol-contrib components which should show results like:
# ... truncated for clarity
extensions:
  - file_storage
This extension provides exporters the ability to queue data to disk in the event that the exporter is unable to send data to the configured endpoint.
In order to configure the extension, you will need to update your config to include the information below. First, be sure to create a /tmp/otel-data directory and give it read/write permissions:
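One way to create the directory with the required permissions (the wide-open mode here is just for the workshop environment):

sudo mkdir -p /tmp/otel-data
sudo chmod 777 /tmp/otel-data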
extensions:
  ...
  file_storage:
    directory: /tmp/otel-data
    timeout: 10s
    compaction:
      directory: /tmp/otel-data
      on_start: true
      on_rebound: true
      rebound_needed_threshold_mib: 5
      rebound_trigger_threshold_mib: 3

# ... truncated for clarity

service:
  extensions: [health_check, pprof, zpages, file_storage]
Why queue data to disk?
This allows the collector to weather network interruptions (and even collector restarts) to ensure data is sent to the upstream provider.
Considerations for queuing data to disk?
There is a potential that this could impact data throughput performance due to disk performance.
# See https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacksextensions:health_check:endpoint:0.0.0.0:13133pprof:endpoint:0.0.0.0:1777zpages:endpoint:0.0.0.0:55679receivers:otlp:protocols:grpc:endpoint:0.0.0.0:4317http:endpoint:0.0.0.0:4318opencensus:endpoint:0.0.0.0:55678# Collect own metricsprometheus:config:scrape_configs:- job_name:'otel-collector'scrape_interval:10sstatic_configs:- targets:['0.0.0.0:8888']jaeger:protocols:grpc:endpoint:0.0.0.0:14250thrift_binary:endpoint:0.0.0.0:6832thrift_compact:endpoint:0.0.0.0:6831thrift_http:endpoint:0.0.0.0:14268zipkin:endpoint:0.0.0.0:9411processors:batch:exporters:debug:verbosity:detailedservice:pipelines:traces:receivers:[otlp, opencensus, jaeger, zipkin]processors:[batch]exporters:[debug]metrics:receivers:[otlp, opencensus, prometheus]processors:[batch]exporters:[debug]logs:receivers:[otlp]processors:[batch]exporters:[debug]extensions:[health_check, pprof, zpages]
Now that we have reviewed extensions, let’s dive into the data pipeline portion of the workshop. A pipeline defines a path the data follows in the Collector starting from reception, moving to further processing or modification, and finally exiting the Collector via exporters.
The data pipeline in the OpenTelemetry Collector is made up of receivers, processors, and exporters. We will first start with receivers.
OpenTelemetry Collector Receivers
Welcome to the receiver portion of the workshop! This is the starting point of the data pipeline of the OpenTelemetry Collector. Let’s dive in.
A receiver, which can be push or pull based, is how data gets into the Collector. Receivers may support one or more data sources. Generally, a receiver accepts data in a specified format, translates it into the internal format and passes it to processors and exporters defined in the applicable pipelines.
The Host Metrics Receiver generates metrics about the host system scraped from various sources. This is intended to be used when the collector is deployed as an agent which is what we will be doing in this workshop.
Let’s update the /etc/otelcol-contrib/config.yaml file and configure the hostmetrics receiver. Insert the following YAML under the receivers section, taking care to indent by two spaces.
sudo vi /etc/otelcol-contrib/config.yaml
receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      # CPU utilization metrics
      cpu:
      # Disk I/O metrics
      disk:
      # File System utilization metrics
      filesystem:
      # Memory utilization metrics
      memory:
      # Network interface I/O metrics & TCP connection metrics
      network:
      # CPU load metrics
      load:
      # Paging/Swap space utilization and I/O metrics
      paging:
      # Process count metrics
      processes:
      # Per process CPU, Memory and Disk I/O metrics. Disabled by default.
      # process:
OpenTelemetry Collector Receivers
Prometheus Receiver
You will also notice another receiver called prometheus. Prometheus is an open-source monitoring toolkit, and the OpenTelemetry Collector includes a receiver that can scrape metrics in the Prometheus format. Here, the receiver is used to scrape metrics from the OpenTelemetry Collector itself. These metrics can then be used to monitor the health of the collector.
Let’s modify the prometheus receiver to clearly show that it is for collecting metrics from the collector itself. By changing the name of the receiver from prometheus to prometheus/internal, it is now much clearer as to what that receiver is doing. Update the configuration file to look like this:
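The renamed receiver keeps the same scrape configuration as before:

  # Collect own metrics
  prometheus/internal:
    config:
      scrape_configs:
        - job_name: 'otel-collector'
          scrape_interval: 10s
          static_configs:
            - targets: ['0.0.0.0:8888']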
The following screenshot shows an example dashboard of some of the metrics the Prometheus internal receiver collects from the OpenTelemetry Collector. Here, we can see accepted and sent spans, metrics and log records.
Note
The following screenshot is an out-of-the-box (OOTB) dashboard from Splunk Observability Cloud that allows you to easily monitor your Splunk OpenTelemetry Collector install base.
OpenTelemetry Collector Receivers
Other Receivers
You will notice in the default configuration there are other receivers: otlp, opencensus, jaeger and zipkin. These are used to receive telemetry data from other sources. We will not be covering these receivers in this workshop and they can be left as they are.
Ninja: Create receivers dynamically
To help observe short lived tasks like docker containers, kubernetes pods, or ssh sessions, we can use the receiver creator with observer extensions to create a new receiver as these services start up.
What do we need?
In order to start using the receiver creator and its associated observer extensions, they will need to be part of your collector build manifest.
Some short-lived tasks may require additional configuration such as a username and password.
These values can be referenced via environment variables,
or via a scheme-expansion syntax such as ${file:./path/to/database/password}.
Please adhere to your organisation’s secrets-management practices when taking this route.
The Ninja Zone
There are only two things needed for this ninja zone:
Make sure you have added the receiver creator and observer extensions to the builder manifest.
Create the config that can be used to match against discovered endpoints.
To create the templated configurations, you can do the following:
receiver_creator:
  watch_observers: [host_observer]
  receivers:
    redis:
      rule: type == "port" && port == 6379
      config:
        password: ${env:HOST_REDIS_PASSWORD}
# To limit exposure to denial of service attacks, change the host in endpoints below from 0.0.0.0 to a specific network interface.# See https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacksextensions:health_check:endpoint:0.0.0.0:13133pprof:endpoint:0.0.0.0:1777zpages:endpoint:0.0.0.0:55679receivers:hostmetrics:collection_interval:10sscrapers:# CPU utilization metricscpu:# Disk I/O metricsdisk:# File System utilization metricsfilesystem:# Memory utilization metricsmemory:# Network interface I/O metrics & TCP connection metricsnetwork:# CPU load metricsload:# Paging/Swap space utilization and I/O metricspaging:# Process count metricsprocesses:# Per process CPU, Memory and Disk I/O metrics. Disabled by default.# process:otlp:protocols:grpc:endpoint:0.0.0.0:4317http:endpoint:0.0.0.0:4318opencensus:endpoint:0.0.0.0:55678# Collect own metricsprometheus/internal:config:scrape_configs:- job_name:'otel-collector'scrape_interval:10sstatic_configs:- targets:['0.0.0.0:8888']jaeger:protocols:grpc:endpoint:0.0.0.0:14250thrift_binary:endpoint:0.0.0.0:6832thrift_compact:endpoint:0.0.0.0:6831thrift_http:endpoint:0.0.0.0:14268zipkin:endpoint:0.0.0.0:9411processors:batch:exporters:debug:verbosity:detailedservice:pipelines:traces:receivers:[otlp, opencensus, jaeger, zipkin]processors:[batch]exporters:[debug]metrics:receivers:[otlp, opencensus, prometheus]processors:[batch]exporters:[debug]logs:receivers:[otlp]processors:[batch]exporters:[debug]extensions:[health_check, pprof, zpages]
Now that we have reviewed how data gets into the OpenTelemetry Collector through receivers, let’s now take a look at how the Collector processes the received data.
Warning
As the /etc/otelcol-contrib/config.yaml is not complete, please do not attempt to restart the collector at this point.
OpenTelemetry Collector Processors
Processors are run on data between being received and being exported. Processors are optional though some are recommended. There are a large number of processors included in the OpenTelemetry contrib Collector.
By default, only the batch processor is enabled. This processor is used to batch up data before it is exported. This is useful for reducing the number of network calls made to exporters. For this workshop, we will inherit the following defaults which are hard-coded into the Collector:
send_batch_size (default = 8192): Number of spans, metric data points, or log records after which a batch will be sent regardless of the timeout. send_batch_size acts as a trigger and does not affect the size of the batch. If you need to enforce batch size limits sent to the next component in the pipeline see send_batch_max_size.
timeout (default = 200ms): Time duration after which a batch will be sent regardless of size. If set to zero, send_batch_size is ignored as data will be sent immediately, subject to only send_batch_max_size.
send_batch_max_size (default = 0): The upper limit of the batch size. 0 means no upper limit on the batch size. This property ensures that larger batches are split into smaller units. It must be greater than or equal to send_batch_size.
The resourcedetection processor can be used to detect resource information from the host and append or override the resource value in telemetry data with this information.
By default, the hostname is set to the FQDN if possible; otherwise, the hostname provided by the OS is used as a fallback. This logic can be changed using the hostname_sources configuration option. To avoid getting the FQDN and use the hostname provided by the OS, we will set hostname_sources to os.
If the workshop instance is running on an AWS/EC2 instance we can gather the following tags from the EC2 metadata API (this is not available on other platforms).
cloud.provider ("aws")
cloud.platform ("aws_ec2")
cloud.account.id
cloud.region
cloud.availability_zone
host.id
host.image.id
host.name
host.type
We will create another processor to append these tags to our metrics.
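In the processors section of the config, the two resource detection processors look like this:

processors:
  batch:
  resourcedetection/system:
    detectors: [system]
    system:
      hostname_sources: [os]
  resourcedetection/ec2:
    detectors: [ec2]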
The attributes processor modifies attributes of a span, log, or metric. This processor also supports the ability to filter and match input data to determine if they should be included or excluded for specified actions.
It takes a list of actions that are performed in the order specified in the config. The supported actions are:
insert: Inserts a new attribute in input data where the key does not already exist.
update: Updates an attribute in input data where the key does exist.
upsert: Performs insert or update. Inserts a new attribute in input data where the key does not already exist and updates an attribute in input data where the key does exist.
delete: Deletes an attribute from the input data.
hash: Hashes (SHA1) an existing attribute value.
extract: Extracts values using a regular expression rule from the input key to target keys specified in the rule. If a target key already exists, it will be overridden.
We are going to create an attributes processor to insert a new attribute to all our host metrics called participant.name with a value of your name e.g. marge_simpson.
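The processor definition to add under the processors section looks like this:

  attributes/conf:
    actions:
      - key: participant.name
        action: insert
        value: "INSERT_YOUR_NAME_HERE"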
Warning
Ensure you replace INSERT_YOUR_NAME_HERE with your name and also ensure you do not use spaces in your name.
Later on in the workshop, we will use this attribute to filter our metrics in Splunk Observability Cloud.
One of the most recent additions to the collector was the notion of a connector, which allows you to join the output of one pipeline to the input of another pipeline.
An example of how this is beneficial is that some services emit metrics based on the number of data points being exported, the number of logs containing an error status,
or the amount of data being sent from one deployment environment. The count connector helps address this for you out of the box.
Why a connector instead of a processor?
A processor is limited in what additional data it can produce, since it has to pass on the data it has processed, which makes it hard to expose additional information. Connectors do not have to emit the data they receive, which means they provide an opportunity to create the insights we are after.
For example, a connector could be made to count the number of logs, metrics, and traces that do not have the deployment environment attribute.
A very simple example would be producing output that lets you break down data usage by deployment environment.
Considerations with connectors
A connector only accepts data exported from one pipeline and received by another pipeline; this means you may have to consider how you construct your collector config to take advantage of it.
# To limit exposure to denial of service attacks, change the host in endpoints below from 0.0.0.0 to a specific network interface.# See https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacksextensions:health_check:endpoint:0.0.0.0:13133pprof:endpoint:0.0.0.0:1777zpages:endpoint:0.0.0.0:55679receivers:hostmetrics:collection_interval:10sscrapers:# CPU utilization metricscpu:# Disk I/O metricsdisk:# File System utilization metricsfilesystem:# Memory utilization metricsmemory:# Network interface I/O metrics & TCP connection metricsnetwork:# CPU load metricsload:# Paging/Swap space utilization and I/O metricspaging:# Process count metricsprocesses:# Per process CPU, Memory and Disk I/O metrics. Disabled by default.# process:otlp:protocols:grpc:endpoint:0.0.0.0:4317http:endpoint:0.0.0.0:4318opencensus:endpoint:0.0.0.0:55678# Collect own metricsprometheus/internal:config:scrape_configs:- job_name:'otel-collector'scrape_interval:10sstatic_configs:- targets:['0.0.0.0:8888']jaeger:protocols:grpc:endpoint:0.0.0.0:14250thrift_binary:endpoint:0.0.0.0:6832thrift_compact:endpoint:0.0.0.0:6831thrift_http:endpoint:0.0.0.0:14268zipkin:endpoint:0.0.0.0:9411processors:batch:resourcedetection/system:detectors:[system]system:hostname_sources:[os]resourcedetection/ec2:detectors:[ec2]attributes/conf:actions:- key:participant.nameaction:insertvalue:"INSERT_YOUR_NAME_HERE"exporters:debug:verbosity:detailedservice:pipelines:traces:receivers:[otlp, opencensus, jaeger, zipkin]processors:[batch]exporters:[debug]metrics:receivers:[otlp, opencensus, prometheus]processors:[batch]exporters:[debug]logs:receivers:[otlp]processors:[batch]exporters:[debug]extensions:[health_check, pprof, zpages]
OpenTelemetry Collector Exporters
An exporter, which can be push or pull-based, is how you send data to one or more backends/destinations. Exporters may support one or more data sources.
For this workshop, we will be using the otlphttp exporter. The OpenTelemetry Protocol (OTLP) is a vendor-neutral, standardised protocol for transmitting telemetry data. The OTLP exporter sends data to a server that implements the OTLP protocol. The OTLP exporter supports both gRPC and HTTP/JSON protocols.
To send metrics over HTTP to Splunk Observability Cloud, we will need to configure the otlphttp exporter.
Let’s edit our /etc/otelcol-contrib/config.yaml file and configure the otlphttp exporter. Insert the following YAML under the exporters section, taking care to indent by two spaces e.g.
We will also change the verbosity of the debug exporter to prevent the disk from filling up. The default of detailed is very noisy.
Next, we need to define the metrics_endpoint and configure the target URL.
Note
If you are an attendee at a Splunk-hosted workshop, the instance you are using has already been configured with a Realm environment variable. We will reference that environment variable in our configuration file. Otherwise, you will need to create a new environment variable and set the Realm e.g.
export REALM="us1"
The URL to use is https://ingest.${env:REALM}.signalfx.com/v2/datapoint/otlp. (Splunk has Realms in key geographical locations around the world for data residency).
The otlphttp exporter can also be configured to send traces and logs by defining a target URL for traces_endpoint and logs_endpoint respectively. Configuring these is outside the scope of this workshop.
By default, gzip compression is enabled for all endpoints. This can be disabled by setting compression: none in the exporter configuration. We will leave compression enabled for this workshop and accept the default as this is the most efficient way to send data.
To send metrics to Splunk Observability Cloud, we need to use an Access Token. This can be done by creating a new token in the Splunk Observability Cloud UI. For more information on how to create a token, see Create a token. The token needs to be of type INGEST.
Note
If you are an attendee at a Splunk-hosted workshop, the instance you are using has already been configured with an Access Token (which has been set as an environment variable). We will reference that environment variable in our configuration file. Otherwise, you will need to create a new token and set it as an environment variable e.g.
export ACCESS_TOKEN=<replace-with-your-token>
The token is defined in the configuration file by inserting X-SF-TOKEN: ${env:ACCESS_TOKEN} under a headers: section:
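Putting the exporter changes together, the exporters section of the configuration becomes:

exporters:
  debug:
    verbosity: normal
  otlphttp/splunk:
    metrics_endpoint: https://ingest.${env:REALM}.signalfx.com/v2/datapoint/otlp
    headers:
      X-SF-Token: ${env:ACCESS_TOKEN}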
# To limit exposure to denial of service attacks, change the host in endpoints below from 0.0.0.0 to a specific network interface.# See https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacksextensions:health_check:endpoint:0.0.0.0:13133pprof:endpoint:0.0.0.0:1777zpages:endpoint:0.0.0.0:55679receivers:hostmetrics:collection_interval:10sscrapers:# CPU utilization metricscpu:# Disk I/O metricsdisk:# File System utilization metricsfilesystem:# Memory utilization metricsmemory:# Network interface I/O metrics & TCP connection metricsnetwork:# CPU load metricsload:# Paging/Swap space utilization and I/O metricspaging:# Process count metricsprocesses:# Per process CPU, Memory and Disk I/O metrics. Disabled by default.# process:otlp:protocols:grpc:endpoint:0.0.0.0:4317http:endpoint:0.0.0.0:4318opencensus:endpoint:0.0.0.0:55678# Collect own metricsprometheus/internal:config:scrape_configs:- job_name:'otel-collector'scrape_interval:10sstatic_configs:- targets:['0.0.0.0:8888']jaeger:protocols:grpc:endpoint:0.0.0.0:14250thrift_binary:endpoint:0.0.0.0:6832thrift_compact:endpoint:0.0.0.0:6831thrift_http:endpoint:0.0.0.0:14268zipkin:endpoint:0.0.0.0:9411processors:batch:resourcedetection/system:detectors:[system]system:hostname_sources:[os]resourcedetection/ec2:detectors:[ec2]attributes/conf:actions:- key:participant.nameaction:insertvalue:"INSERT_YOUR_NAME_HERE"exporters:debug:verbosity:normalotlphttp/splunk:metrics_endpoint:https://ingest.${env:REALM}.signalfx.com/v2/datapoint/otlpheaders:X-SF-Token:${env:ACCESS_TOKEN}service:pipelines:traces:receivers:[otlp, opencensus, jaeger, zipkin]processors:[batch]exporters:[debug]metrics:receivers:[otlp, opencensus, prometheus]processors:[batch]exporters:[debug]logs:receivers:[otlp]processors:[batch]exporters:[debug]extensions:[health_check, pprof, zpages]
Of course, you can easily configure the metrics_endpoint to point to any other solution that supports the OTLP protocol.
Next, we need to enable the receivers, processors and exporters we have just configured in the service section of the config.yaml.
OpenTelemetry Collector Service
The Service section is used to configure what components are enabled in the Collector based on the configuration found in the receivers, processors, exporters, and extensions sections.
Info
If a component is configured, but not defined within the Service section then it is not enabled.
The service section consists of three sub-sections:
extensions
pipelines
telemetry
In the default configuration, the extensions section has been configured to enable health_check, pprof and zpages, which we configured in the Extensions module earlier.
service:
  extensions: [health_check, pprof, zpages]
So let’s configure our Metric Pipeline!
Subsections of 6. Service
OpenTelemetry Collector Service
Hostmetrics Receiver
If you recall from the Receivers portion of the workshop, we defined the Host Metrics Receiver to generate metrics about the host system, which are scraped from various sources. To enable the receiver, we must include the hostmetrics receiver in the metrics pipeline.
In the metrics pipeline, add hostmetrics to the metrics receivers section.
Earlier in the workshop, we also renamed the prometheus receiver to reflect that it was collecting metrics internal to the collector, renaming it to prometheus/internal.
We now need to enable the prometheus/internal receiver under the metrics pipeline. Update the receivers section to include prometheus/internal under the metrics pipeline:
We also added resourcedetection/system and resourcedetection/ec2 processors so that the collector can capture the instance hostname and AWS/EC2 metadata. We now need to enable these two processors under the metrics pipeline.
Update the processors section to include resourcedetection/system and resourcedetection/ec2 under the metrics pipeline:
Also in the Processors section of this workshop, we added the attributes/conf processor so that the collector will insert a new attribute called participant.name to all the metrics. We now need to enable this under the metrics pipeline.
Update the processors section to include attributes/conf under the metrics pipeline:
In the Exporters section of the workshop, we configured the otlphttp exporter to send metrics to Splunk Observability Cloud. We now need to enable this under the metrics pipeline.
Update the exporters section to include otlphttp/splunk under the metrics pipeline:
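After these four updates, the metrics pipeline in the service section should look like this:

    metrics:
      receivers: [hostmetrics, otlp, opencensus, prometheus/internal]
      processors: [batch, resourcedetection/system, resourcedetection/ec2, attributes/conf]
      exporters: [debug, otlphttp/splunk]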
The collector captures internal signals about its behavior; this also includes additional signals from running components.
The reason for this is that components that make decisions about the flow of data need a way to surface that information
as metrics or traces.
Why monitor the collector?
This is somewhat of a chicken-and-egg problem of “Who is watching the watcher?”, but it is important that we can surface this information. Another interesting part of the collector’s history is that it existed before the Go metrics SDK was considered stable, so the collector exposes a Prometheus endpoint to provide this functionality for the time being.
Considerations
Monitoring the internal usage of each running collector in your organization can contribute a significant amount of new Metric Time Series (MTS). The Splunk distribution has curated these metrics for you and can help forecast the expected increase.
The Ninja Zone
To expose the internal observability of the collector, some additional settings can be adjusted:
service:
  telemetry:
    logs:
      level: <info|warn|error>
      development: <true|false>
      encoding: <console|json>
      disable_caller: <true|false>
      disable_stacktrace: <true|false>
      output_paths: [<stdout|stderr>, paths...]
      error_output_paths: [<stdout|stderr>, paths...]
      initial_fields:
        key: value
    metrics:
      level: <none|basic|normal|detailed>
      # Address binds the Prometheus endpoint to scrape
      address: <hostname:port>
# To limit exposure to denial of service attacks, change the host in endpoints below from 0.0.0.0 to a specific network interface.
# See https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks
extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  pprof:
    endpoint: 0.0.0.0:1777
  zpages:
    endpoint: 0.0.0.0:55679

receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      # CPU utilization metrics
      cpu:
      # Disk I/O metrics
      disk:
      # File System utilization metrics
      filesystem:
      # Memory utilization metrics
      memory:
      # Network interface I/O metrics & TCP connection metrics
      network:
      # CPU load metrics
      load:
      # Paging/Swap space utilization and I/O metrics
      paging:
      # Process count metrics
      processes:
      # Per process CPU, Memory and Disk I/O metrics. Disabled by default.
      # process:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  opencensus:
    endpoint: 0.0.0.0:55678
  # Collect own metrics
  prometheus/internal:
    config:
      scrape_configs:
        - job_name: 'otel-collector'
          scrape_interval: 10s
          static_configs:
            - targets: ['0.0.0.0:8888']
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
      thrift_binary:
        endpoint: 0.0.0.0:6832
      thrift_compact:
        endpoint: 0.0.0.0:6831
      thrift_http:
        endpoint: 0.0.0.0:14268
  zipkin:
    endpoint: 0.0.0.0:9411

processors:
  batch:
  resourcedetection/system:
    detectors: [system]
    system:
      hostname_sources: [os]
  resourcedetection/ec2:
    detectors: [ec2]
  attributes/conf:
    actions:
      - key: participant.name
        action: insert
        value: "INSERT_YOUR_NAME_HERE"

exporters:
  debug:
    verbosity: normal
  otlphttp/splunk:
    metrics_endpoint: https://ingest.${env:REALM}.signalfx.com/v2/datapoint/otlp
    headers:
      X-SF-Token: ${env:ACCESS_TOKEN}

service:
  pipelines:
    traces:
      receivers: [otlp, opencensus, jaeger, zipkin]
      processors: [batch]
      exporters: [debug]
    metrics:
      receivers: [hostmetrics, otlp, opencensus, prometheus/internal]
      processors: [batch, resourcedetection/system, resourcedetection/ec2, attributes/conf]
      exporters: [debug, otlphttp/splunk]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
  extensions: [health_check, pprof, zpages]
Tip
It is recommended that you validate your configuration file before restarting the collector. You can do this by pasting the contents of your config.yaml file into otelbin.io.
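If you prefer to validate locally, recent versions of the collector binary also include a validate subcommand you can run against the same file:

otelcol-contrib validate --config=/etc/otelcol-contrib/config.yaml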
Screenshot: OTelBin
Now that we have a working configuration, let’s start the collector and then check to see what zPages is reporting.
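Since we stopped and disabled the systemd service earlier, one way to start the collector is in the foreground, pointing it at the configuration file we have been editing. Make sure the REALM and ACCESS_TOKEN environment variables are set in the same shell:

otelcol-contrib --config=/etc/otelcol-contrib/config.yaml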
Now that we have configured the OpenTelemetry Collector to send metrics to Splunk Observability Cloud, let’s take a look at the data in Splunk Observability Cloud. If you have not received an invite to Splunk Observability Cloud, your instructor will provide you with login credentials.
Before that, let’s make things a little more interesting and run a stress test on the instance. This in turn will light up the dashboards.
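One way to generate load, assuming the stress-ng utility is available on the instance (the hosted workshop may use a different tool or parameters):

sudo apt-get install -y stress-ng
stress-ng --cpu 2 --io 2 --vm 1 --vm-bytes 128M --timeout 300s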
Once you are logged into Splunk Observability Cloud, using the left-hand navigation, navigate to Dashboards from the main menu. This will take you to the Teams view. At the top of this view click on All Dashboards :
In the search box, search for OTel Contrib:
Info
If the dashboard does not exist, then your instructor will be able to quickly add it. If you are not attending a Splunk hosted version of this workshop then the Dashboard Group to import can be found at the bottom of this page.
Click on the OTel Contrib Dashboard to open it. Next, click in the Participant Name box at the top of the dashboard and select the name you configured for participant.name in the config.yaml from the drop-down list, or start typing the name to search for it:
You can now see the host metrics for the host upon which you configured the OpenTelemetry Collector.
Building a component for the OpenTelemetry Collector requires three key parts:
The Configuration - What values are exposed to the user to configure
The Factory - Make the component using the provided values
The Business Logic - What the component needs to do
For this, we will use the example of building a component that works with Jenkins so that we can track important DevOps metrics of our project(s).
The metrics we are looking to measure are:
Lead time for changes - “How long it takes for a commit to get into production”
Change failure rate - “The percentage of deployments causing a failure in production”
Deployment frequency - “How often a [team] successfully releases to production”
Mean time to recover - “How long does it take for a [team] to recover from a failure in production”
These indicators were identified by Google’s DevOps Research and Assessment (DORA)[^1] team to help
show the performance of a software development team. The reason for choosing Jenkins CI is that it keeps us in the same open source software ecosystem, which can serve as an example for vendor-managed CI tools to adopt in the future.
Instrument Vs Component
There are some trade-offs to consider when improving the level of observability within your organisation.
(Auto) Instrumented
Pros:
- Does not require an external API to be monitored in order to observe the system.
- Gives system owners/developers the ability to make changes to their observability.
- Understands system context and can correlate captured data with Exemplars.
Cons:
- Changing instrumentation requires changes to the project.
- Requires additional runtime dependencies.
- Can impact performance of the system.
Component
Pros:
- Changes to data names or semantics can be rolled out independently of the system’s release cycle.
- Updating/extending the data collected is a seamless user-facing change.
- Does not require the supporting teams to have a deep understanding of observability practice.
Cons:
- Breaking API changes require a coordinated release between system and collector.
- Captured data semantics can unexpectedly break in ways that do not align with a new system release.
- Only strictly external / exposed information can be surfaced from the system.
Subsections of 8. Develop
OpenTelemetry Collector Development
Project Setup Ninja
Note
The time to finish this section of the workshop can vary depending on experience.
A complete solution can be found here in case you’re stuck or want to follow
along with the instructor.
To get started developing the new Jenkins CI receiver, we first need to set up a Golang project.
The steps to create your new Golang project are:
Create a new directory named ${HOME}/go/src/jenkinscireceiver and change into it
The actual directory name or location is not strict, you can choose your own development directory to make it in.
Initialize the Go module by running go mod init splunk.conf/workshop/example/jenkinscireceiver (the combined commands are shown below)
This will create a file named go.mod which is used to track our direct and indirect dependencies
Eventually, there will be a go.sum which is the checksum value of the dependencies being imported.
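Putting those steps together in the shell:

mkdir -p ${HOME}/go/src/jenkinscireceiver
cd ${HOME}/go/src/jenkinscireceiver
go mod init splunk.conf/workshop/example/jenkinscireceiver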
Check-in: Review your go.mod
module splunk.conf/workshop/example/jenkinscireceiver
go 1.20
OpenTelemetry Collector Development
Building The Configuration
The configuration portion of the component is how the user provides their inputs to the component,
so the values used for the configuration need to be:
Intuitive for users to understand what each field controls
Explicit in what is required and what is optional
The bad configuration highlights how doing the opposite of the recommendations of configuration practices impacts the usability
of the component. It doesn’t make it clear what field values should be, it includes features that can be pushed to existing processors,
and the field naming is not consistent with other components that exist in the collector.
The good configuration keeps the required values simple, reuses field names from other components, and ensures the component focuses on
just the interaction between Jenkins and the collector.
The code tab shows how much is required to be added by us and what is already provided for us by shared libraries within the collector.
These will be explained in more detail once we get to the business logic. The configuration should start off small and will change
once the business logic has started to include additional features that are needed.
Write the code
In order to implement the code needed for the configuration, we are going to create a new file named config.go with the following content:
package jenkinscireceiver

import (
    "go.opentelemetry.io/collector/config/confighttp"
    "go.opentelemetry.io/collector/receiver/scraperhelper"

    "splunk.conf/workshop/example/jenkinscireceiver/internal/metadata"
)

type Config struct {
    // HTTPClientSettings contains all the values
    // that are commonly shared across all HTTP interactions
    // performed by the collector.
    confighttp.HTTPClientSettings `mapstructure:",squash"`
    // ScraperControllerSettings will allow us to schedule
    // how often to check for updates to builds.
    scraperhelper.ScraperControllerSettings `mapstructure:",squash"`
    // MetricsBuilderConfig contains all the metrics
    // that can be configured.
    metadata.MetricsBuilderConfig `mapstructure:",squash"`
}
OpenTelemetry Collector Development
Component Review
To recap the type of component we will need to capture metrics from Jenkins:
The business use cases an extension helps solve for are:
Having shared functionality that requires runtime configuration
Indirectly helps with observing the runtime of the collector
This is commonly referred to as pull vs push based data collection, and you can read more about the details in the Receiver Overview.
The business use case a processor solves for is:
Adding or removing data, fields, or values
Observing and making decisions on the data
Buffering, queueing, and reordering
The thing to keep in mind is that a processor needs to forward
the same data type to its downstream components as the data type flowing into it. Read through Processor Overview for the details.
The business use case an exporter solves for:
Send the data to a tool, service, or storage
The OpenTelemetry Collector does not aim to be a “backend” or an all-in-one observability suite, but rather
to keep to the principles that founded OpenTelemetry to begin with: vendor-agnostic observability for all.
To help revisit the details, please read through Exporter Overview.
This is a component type that was missed in the workshop since it is a relatively new addition to the collector, but the best way to think about a connector is that it is like a processor that can be used across different telemetry types and pipelines. This means that a connector can accept data as logs and output metrics, or accept metrics from one pipeline and provide metrics on the data it has observed.
The business case that a connector solves for:
Converting from different telemetry types
logs to metrics
traces to metrics
metrics to logs
Observing incoming data and producing its own data
Accepting metrics and generating analytical metrics of the data.
There was a brief overview within the Ninja section as part of the Processor Overview;
be sure to watch the project for updates about new connector components.
From the component overviews, it is clear that we need to develop a pull-based receiver for Jenkins.
OpenTelemetry Collector Development
Designing The Metrics
To help define and export the metrics captured by our receiver, we will be using mdatagen, a tool developed for the collector that turns YAML-defined metrics into code.
---
# Type defines the name to reference the component
# in the configuration file
type: jenkins

# Status defines the component type and the stability level
status:
  class: receiver
  stability:
    development: [metrics]

# Attributes are the expected fields reported
# with the exported values.
attributes:
  job.name:
    description: The name of the associated Jenkins job
    type: string
  job.status:
    description: Shows if the job had passed, or failed
    type: string
    enum:
      - failed
      - success
      - unknown

# Metrics defines all the potentially exported values from this receiver.
metrics:
  jenkins.jobs.count:
    enabled: true
    description: Provides a count of the total number of configured jobs
    unit: "{Count}"
    gauge:
      value_type: int
  jenkins.job.duration:
    enabled: true
    description: Show the duration of the job
    unit: "s"
    gauge:
      value_type: int
    attributes:
      - job.name
      - job.status
  jenkins.job.commit_delta:
    enabled: true
    description: The calculation difference of the time job was finished minus commit timestamp
    unit: "s"
    gauge:
      value_type: int
    attributes:
      - job.name
      - job.status
// To generate the additional code needed to capture metrics,
// the following command to be run from the shell:
// go generate -x ./...
//go:generate go run github.com/open-telemetry/opentelemetry-collector-contrib/cmd/mdatagen@v0.80.0 metadata.yaml
package jenkinscireceiver
// There is no code defined within this file.
Create these files within the project folder before continuing onto the next section.
Building The Factory
The Factory is a software design pattern that effectively allows for an object, in this case a jenkinscireceiver, to be created dynamically with the provided configuration. To use a more real-world example, it would be like going to a phone store, asking for a phone
that matches your exact description, and having it provided to you.
Run the command go generate -x ./...; it will create a new folder, jenkinscireceiver/internal/metadata, that contains all the code required to export the defined metrics. The required code is:
packagejenkinscireceiverimport("errors""go.opentelemetry.io/collector/component""go.opentelemetry.io/collector/config/confighttp""go.opentelemetry.io/collector/receiver""go.opentelemetry.io/collector/receiver/scraperhelper""splunk.conf/workshop/example/jenkinscireceiver/internal/metadata")funcNewFactory()receiver.Factory{returnreceiver.NewFactory(metadata.Type,newDefaultConfig,receiver.WithMetrics(newMetricsReceiver,metadata.MetricsStability),)}funcnewMetricsReceiver(_context.Context,setreceiver.CreateSettings,cfgcomponent.Config,consumerconsumer.Metrics)(receiver.Metrics,error){// Convert the configuration into the expected typeconf,ok:=cfg.(*Config)if!ok{returnnil,errors.New("can not convert config")}sc,err:=newScraper(conf,set)iferr!=nil{returnnil,err}returnscraperhelper.NewScraperControllerReceiver(&conf.ScraperControllerSettings,set,consumer,scraperhelper.AddScraper(sc),)}
packagejenkinscireceiverimport("go.opentelemetry.io/collector/config/confighttp""go.opentelemetry.io/collector/receiver/scraperhelper""splunk.conf/workshop/example/jenkinscireceiver/internal/metadata")typeConfigstruct{// HTTPClientSettings contains all the values// that are commonly shared across all HTTP interactions// performed by the collector.confighttp.HTTPClientSettings`mapstructure:",squash"`// ScraperControllerSettings will allow us to schedule // how often to check for updates to builds.scraperhelper.ScraperControllerSettings`mapstructure:",squash"`// MetricsBuilderConfig contains all the metrics// that can be configured.metadata.MetricsBuilderConfig`mapstructure:",squash"`}funcnewDefaultConfig()component.Config{return&Config{ScraperControllerSettings:scraperhelper.NewDefaultScraperControllerSettings(metadata.Type),HTTPClientSettings:confighttp.NewDefaultHTTPClientSettings(),MetricsBuilderConfig:metadata.DefaultMetricsBuilderConfig(),}}
package jenkinscireceiver

import (
    "context"

    "go.opentelemetry.io/collector/pdata/pmetric"
    "go.opentelemetry.io/collector/receiver"
    "go.opentelemetry.io/collector/receiver/scraperhelper"

    "splunk.conf/workshop/example/jenkinscireceiver/internal/metadata"
)

type scraper struct{}

func newScraper(cfg *Config, set receiver.CreateSettings) (scraperhelper.Scraper, error) {
    // Create our scraper with our values
    s := scraper{
        // To be filled in later
    }
    return scraperhelper.NewScraper(metadata.Type, s.scrape)
}

func (scraper) scrape(ctx context.Context) (pmetric.Metrics, error) {
    // To be filled in
    return pmetric.NewMetrics(), nil
}
---dist:name:otelcoldescription:"Conf workshop collector"output_path:./distversion:v0.0.0-experimentalextensions:- gomod:github.com/open-telemetry/opentelemetry-collector-contrib/extension/basicauthextension v0.80.0- gomod:github.com/open-telemetry/opentelemetry-collector-contrib/extension/healthcheckextension v0.80.0receivers:- gomod:go.opentelemetry.io/collector/receiver/otlpreceiver v0.80.0- gomod:github.com/open-telemetry/opentelemetry-collector-contrib/receiver/jaegerreceiver v0.80.0- gomod:github.com/open-telemetry/opentelemetry-collector-contrib/receiver/prometheusreceiver v0.80.0- gomod:splunk.conf/workshop/example/jenkinscireceiver v0.0.0path:./jenkinscireceiverprocessors:- gomod:go.opentelemetry.io/collector/processor/batchprocessor v0.80.0exporters:- gomod:go.opentelemetry.io/collector/exporter/loggingexporter v0.80.0- gomod:go.opentelemetry.io/collector/exporter/otlpexporter v0.80.0- gomod:go.opentelemetry.io/collector/exporter/otlphttpexporter v0.80.0# This replace is a go directive that allows for redefine# where to fetch the code to use since the default would be from a remote project.replaces:- splunk.conf/workshop/example/jenkinscireceiver => ./jenkinscireceiver
Once you have written these files into the project with the expected contents, run go mod tidy, which will fetch all the remote dependencies, update go.mod, and generate the go.sum file.
OpenTelemetry Collector Development
Building The Business Logic
At this point, we have a custom component that currently does nothing so we need to add in the required
logic to capture this data from Jenkins.
From this point, the steps that we need to take are:
Create a client that connects to Jenkins
Capture all the configured jobs
Report the status of the last build for the configured job
Calculate the time difference between commit timestamp and job completion.
The changes will be made to scraper.go.
To be able to connect to the Jenkins server, we will be using the package,
“github.com/yosida95/golang-jenkins”,
which provides the functionality required to read data from the jenkins server.
Then we are going to utilise some of the helper functions from the,
“go.opentelemetry.io/collector/receiver/scraperhelper” ,
library to create a start function so that we can connect to the Jenkins server once the component has finished starting.
packagejenkinscireceiverimport("context"jenkins"github.com/yosida95/golang-jenkins""go.opentelemetry.io/collector/component""go.opentelemetry.io/collector/pdata/pmetric""go.opentelemetry.io/collector/receiver""go.opentelemetry.io/collector/receiver/scraperhelper""splunk.conf/workshop/example/jenkinscireceiver/internal/metadata")typescraperstruct{mb*metadata.MetricsBuilderclient*jenkins.Jenkins}funcnewScraper(cfg*Config,setreceiver.CreateSettings)(scraperhelper.Scraper,error){s:=&scraper{mb:metadata.NewMetricsBuilder(cfg.MetricsBuilderConfig,set),}returnscraperhelper.NewScraper(metadata.Type,s.scrape,scraperhelper.WithStart(func(ctxcontext.Context,hcomponent.Host)error{client,err:=cfg.ToClient(h,set.TelemetrySettings)iferr!=nil{returnerr}// The collector provides a means of injecting authentication// on our behalf, so this will ignore the libraries approach// and use the configured http client with authentication.s.client=jenkins.NewJenkins(nil,cfg.Endpoint)s.client.SetHTTPClient(client)returnnil}),)}func(sscraper)scrape(ctxcontext.Context)(pmetric.Metrics,error){// To be filled inreturnpmetric.NewMetrics(),nil}
This finishes all the setup code that is required in order to initialise a Jenkins receiver.
From this point on, we will focus on the scrape method that has been waiting to be filled in.
This method will be run on each interval that is configured within the configuration (by default, every minute).
We want to capture the number of jobs configured so we can see the growth of our Jenkins server,
and measure how many projects have onboarded. To do this we will call the Jenkins client to list all jobs;
if it reports an error, we return that with no metrics, otherwise we emit the data from the metric builder.
func (s scraper) scrape(ctx context.Context) (pmetric.Metrics, error) {
    jobs, err := s.client.GetJobs()
    if err != nil {
        return pmetric.Metrics{}, err
    }

    // Recording the timestamp to ensure
    // all captured data points within this scrape have the same value.
    now := pcommon.NewTimestampFromTime(time.Now())

    // Casting to an int64 to match the expected type
    s.mb.RecordJenkinsJobsCountDataPoint(now, int64(len(jobs)))

    // To be filled in

    return s.mb.Emit(), nil
}
In the last step, we were able to capture all jobs and report the number of jobs
there were. In this step, we are going to examine each job and use the reported values
to capture metrics.
func(sscraper)scrape(ctxcontext.Context)(pmetric.Metrics,error){jobs,err:=s.client.GetJobs()iferr!=nil{returnpmetric.Metrics{},err}// Recording the timestamp to ensure// all captured data points within this scrape have the same value. now:=pcommon.NewTimestampFromTime(time.Now())// Casting to an int64 to match the expected types.mb.RecordJenkinsJobsCountDataPoint(now,int64(len(jobs)))for_,job:=rangejobs{// Ensure we have valid results to start off withvar(build=job.LastCompletedBuildstatus=metadata.AttributeJobStatusUnknown)// This will check the result of the job, however,// since the only defined attributes are // `success`, `failure`, and `unknown`. // it is assume that anything did not finish // with a success or failure to be an unknown status.switchbuild.Result{case"aborted","not_built","unstable":status=metadata.AttributeJobStatusUnknowncase"success":status=metadata.AttributeJobStatusSuccesscase"failure":status=metadata.AttributeJobStatusFailed}s.mb.RecordJenkinsJobDurationDataPoint(now,int64(job.LastCompletedBuild.Duration),job.Name,status,)}returns.mb.Emit(),nil}
The final step is to calculate how long it took from
commit to job completion to help infer our DORA metrics.
func(sscraper)scrape(ctxcontext.Context)(pmetric.Metrics,error){jobs,err:=s.client.GetJobs()iferr!=nil{returnpmetric.Metrics{},err}// Recording the timestamp to ensure// all captured data points within this scrape have the same value. now:=pcommon.NewTimestampFromTime(time.Now())// Casting to an int64 to match the expected types.mb.RecordJenkinsJobsCountDataPoint(now,int64(len(jobs)))for_,job:=rangejobs{// Ensure we have valid results to start off withvar(build=job.LastCompletedBuildstatus=metadata.AttributeJobStatusUnknown)// Previous step here// Ensure that the `ChangeSet` has values// set so there is a valid value for us to referenceiflen(build.ChangeSet.Items)==0{continue}// Making the assumption that the first changeset// item is the most recent change.change:=build.ChangeSet.Items[0]// Record the difference from the build time// compared against the change timestamp.s.mb.RecordJenkinsJobCommitDeltaDataPoint(now,int64(build.Timestamp-change.Timestamp),job.Name,status,)}returns.mb.Emit(),nil}
Once all of these steps have been completed, you now have built a custom Jenkins CI receiver!
What's next?
There are more than likely additional features you can think of that would be desired from this component, like:
Can I include the branch name that the job used?
Can I include the project name for the job?
How do I calculate the collective job duration for a project?
How do I validate the changes work?
Please take this time to play around, break it, change things around, or even try to capture logs from the builds.
The goal of this workshop is to help you gain confidence in creating and modifying OpenTelemetry Collector configuration files. You’ll start with a minimal agent.yaml file and progressively build it out to handle several advanced, real-world scenarios.
A key focus of this workshop is learning how to configure the OpenTelemetry Collector to store telemetry data locally, rather than sending it to a third-party vendor backend. This approach not only simplifies debugging and troubleshooting but is also ideal for testing and development environments where you want to avoid sending data to production systems.
To make the most of this workshop, you should have:
A basic understanding of the OpenTelemetry Collector and its configuration file structure.
Proficiency in editing YAML files.
Everything in this workshop is designed to run locally, ensuring a hands-on and accessible learning experience. Let’s dive in and start building!
Workshop Overview
During this workshop, we will cover the following topics:
Setting up the agent locally: Add metadata, and introduce the debug and file exporters.
Configuring a gateway: Route traffic from the agent to the gateway.
Configuring the FileLog receiver: Collect log data from various log files.
Enhancing agent resilience: Basic configurations for fault tolerance.
Configuring processors:
Filter out noise by dropping specific spans (e.g., health checks).
Remove unnecessary tags, and handle sensitive data.
Transform data using OTTL (OpenTelemetry Transformation Language) in the pipeline before exporting.
Configuring Connectors:
Route data to different endpoints based on the values received.
Convert log and span data to metrics.
By the end of this workshop, you’ll be familiar with configuring the OpenTelemetry Collector for a variety of real-world use cases.
Subsections of Advanced Collector Configuration
Pre-requisites
90 minutes
Prerequisites
Proficiency in editing YAML files using vi, vim, nano, or your preferred text editor.
Create a workshop directory: In your environment create a new directory (e.g., advanced-otel-workshop). We will refer to this directory as [WORKSHOP] for the remainder of the workshop.
Download workshop binaries: Change into your [WORKSHOP] directory and download the OpenTelemetry Collector and Load Generator binaries:
Before running the binaries on macOS, you need to remove the quarantine attribute that macOS applies to downloaded files. This step ensures they can execute without restrictions.
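On macOS this is typically done with the xattr command; for example, assuming the downloaded binaries are named otelcol and loadgen in the current directory:

xattr -d com.apple.quarantine ./otelcol ./loadgen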
During this workshop, you will be using up to four terminal windows simultaneously. To stay organized, consider customizing each terminal or shell with unique names and colors. This will help you quickly identify and switch between them as needed.
We will refer to these terminals as: Agent, Gateway, Spans and Logs.
Exercise
In the Agent terminal window, change into the [WORKSHOP] directory and create a new subdirectory named 1-agent.
mkdir 1-agent &&\
cd 1-agent
Create a file named agent.yaml. This file will define the basic structure of an OpenTelemetry Collector configuration.
Copy and paste the following initial configuration into agent.yaml:
# Extensions
extensions:
  health_check:                      # Health Check Extension
    endpoint: 0.0.0.0:13133          # Health Check Endpoint

# Receivers
receivers:
  hostmetrics:                       # Host Metrics Receiver
    collection_interval: 3600s       # Collection Interval (1hr)
    scrapers:
      cpu:                           # CPU Scraper
  otlp:                              # OTLP Receiver
    protocols:
      http:                          # Configure HTTP protocol
        endpoint: "0.0.0.0:4318"     # Endpoint to bind to

# Exporters
exporters:
  debug:                             # Debug Exporter
    verbosity: detailed              # Detailed verbosity level

# Processors
processors:
  memory_limiter:                    # Limits memory usage
    check_interval: 2s               # Check interval
    limit_mib: 512                   # Memory limit in MiB
  resourcedetection:                 # Resource Detection Processor
    detectors: [system]              # Detect system resources
    override: true                   # Overwrites existing attributes
  resource/add_mode:                 # Resource Processor
    attributes:
      - action: insert               # Action to perform
        key: otelcol.service.mode    # Key name
        value: "agent"               # Key value

# Connectors
connectors:

# Service Section - Enabled Pipelines
service:
  extensions:
    - health_check                   # Health Check Extension
  pipelines:
    traces:
      receivers:
        - otlp                       # OTLP Receiver
      processors:
        - memory_limiter             # Memory Limiter processor
        - resourcedetection          # Add system attributes to the data
        - resource/add_mode          # Add collector mode metadata
      exporters:
        - debug                      # Debug Exporter
    metrics:
      receivers:
        - otlp
      processors:
        - memory_limiter
        - resourcedetection
        - resource/add_mode
      exporters:
        - debug
    logs:
      receivers:
        - otlp
      processors:
        - memory_limiter
        - resourcedetection
        - resource/add_mode
      exporters:
        - debug
Your directory structure should now look like this:
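.
└── agent.yaml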
In this workshop, we’ll use https://otelbin.io to quickly validate YAML syntax and ensure your OpenTelemetry configurations are accurate. This step helps avoid errors before running tests during the session.
Exercise
Here’s how to validate your configuration:
Open https://otelbin.io and replace the existing configuration by pasting your YAML into the left pane.
Info
If you are on a Mac and not using a Splunk Workshop instance, you can quickly copy the contents of the agent.yaml file to your clipboard by running the following command:
cat agent.yaml | pbcopy
At the top of the page, make sure Splunk OpenTelemetry Collector is selected as the validation target. If you don’t select this option, then you will see warnings in the UI stating Receiver "hostmetrics" is unused. (Line 8).
Once validated, refer to the image representation below to confirm your pipelines are set up correctly.
In most cases, we’ll display only the key pipeline. However, if all three pipelines (Traces, Metrics, and Logs) share the same structure, we’ll note this instead of showing each one individually.
%%{init:{"fontFamily":"monospace"}}%%
graph LR
%% Nodes
REC1( otlp <br>fa:fa-download):::receiver
PRO1(memory_limiter<br>fa:fa-microchip):::processor
PRO2(resourcedetection<br>fa:fa-microchip):::processor
PRO3(resource<br>fa:fa-microchip<br>add_mode):::processor
EXP1( debug <br>fa:fa-upload):::exporter
%% Links
subID1:::sub-traces
subgraph " "
subgraph subID1[**Traces/Metrics/Logs**]
direction LR
REC1 --> PRO1
PRO1 --> PRO2
PRO2 --> PRO3
PRO3 --> EXP1
end
end
classDef receiver,exporter fill:#8b5cf6,stroke:#333,stroke-width:1px,color:#fff;
classDef processor fill:#6366f1,stroke:#333,stroke-width:1px,color:#fff;
classDef con-receive,con-export fill:#45c175,stroke:#333,stroke-width:1px,color:#fff;
classDef sub-traces stroke:#fff,stroke-width:1px, color:#fff,stroke-dasharray: 3 3;
Load Generation Tool
For this workshop, we have specifically developed a loadgen tool. loadgen is a flexible load generator for simulating traces and logging activities. It supports base, health, and security traces by default, along with optional logging of random quotes to a file, either in plain text or JSON format.
The output generated by loadgen mimics that produced by an OpenTelemetry instrumentation library, allowing us to test the Collector's processing logic and offering a simple yet powerful way to simulate real-world scenarios.
1.2 Test Agent Configuration
You’re ready to start the OpenTelemetry Collector with the newly created agent.yaml. This exercise sets the foundation for understanding how data flows through the OpenTelemetry Collector.
Exercise
Start the Agent: In the Agent terminal window run the following command:
../otelcol --config=agent.yaml
Verify debug output: If everything is configured correctly, the first and last lines of the output will look like:
2025/01/13T12:43:51 settings.go:478: Set config to [agent.yaml]
<snip to the end>
2025-01-13T12:43:51.747+0100 info service@v0.120.0/service.go:261 Everything is ready. Begin running and processing data.
Send Test Span: Instead of instrumenting an application, we’ll simulate sending trace data to the OpenTelemetry Collector using the loadgen tool.
In the Spans terminal window, change into the 1-agent directory and run the following command to send a single span:
../loadgen -count 1
Sending traces. Use Ctrl-C to stop.
Response: {"partialSuccess":{}}
Base trace sent with traceId: 1aacb1db8a6d510f10e52f154a7fdb90 and spanId: 7837a3a2d3635d9f
{"partialSuccess":{}}: Indicates 100% success, as the partialSuccess field is empty. In case of a partial failure, this field will include details about any failed parts.
Verify Debug Output:
In the Agent terminal window check the collector’s debug output:
2025-03-06T10:11:35.174Z info Traces {"otelcol.component.id": "debug", "otelcol.component.kind": "Exporter", "otelcol.signal": "traces", "resource spans": 1, "spans": 1}
2025-03-06T10:11:35.174Z info ResourceSpans #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.6.1
Resource attributes:
-> service.name: Str(cinema-service)
-> deployment.environment: Str(production)
-> host.name: Str(workshop-instance)
-> os.type: Str(linux)
-> otelcol.service.mode: Str(agent)
ScopeSpans #0
ScopeSpans SchemaURL:
InstrumentationScope cinema.library 1.0.0
InstrumentationScope attributes:
-> fintest.scope.attribute: Str(Starwars, LOTR)
Span #0
Trace ID : 0ef4daa44a259a7199a948231bc383c0
Parent ID :
ID : e8fdd442c36cbfb1
Name : /movie-validator
Kind : Server
Start time : 2025-03-06 10:11:35.163557 +0000 UTC
End time : 2025-03-06 10:11:36.163557 +0000 UTC
Status code : Ok
Status message : Success
Attributes:
-> user.name: Str(George Lucas)
-> user.phone_number: Str(+1555-867-5309)
-> user.email: Str(george@deathstar.email)
-> user.password: Str(LOTR>StarWars1-2-3)
-> user.visa: Str(4111 1111 1111 1111)
-> user.amex: Str(3782 822463 10005)
-> user.mastercard: Str(5555 5555 5555 4444)
-> payment.amount: Double(86.48)
{"otelcol.component.id": "debug", "otelcol.component.kind": "Exporter", "otelcol.signal": "traces"}
Important
Stop the agent in the Agent terminal window using Ctrl-C.
1.3 File Exporter
To capture more than just debug output on the screen, we also want to generate output during the export phase of the pipeline. For this, we’ll add a File Exporter to write OTLP data to files for comparison.
The difference between the OpenTelemetry debug exporter and the file exporter lies in their purpose and output destination:
Feature         | Debug Exporter                   | File Exporter
----------------|----------------------------------|-------------------------------
Output Location | Console/Log                      | File on disk
Purpose         | Real-time debugging              | Persistent offline analysis
Best for        | Quick inspection during testing  | Temporary storage and sharing
Production Use  | No                               | Rare, but possible
Persistence     | No                               | Yes
In summary, the Debug Exporter is great for real-time, in-development troubleshooting, while the File Exporter is better suited for storing telemetry data locally for later use.
Exercise
In the Agent terminal window ensure the collector is not running then edit the agent.yaml and configure the File Exporter:
Configuring a file exporter: The File Exporter writes telemetry data to files on disk.
file:                                  # File Exporter
  path: "./agent.out"                  # Save path (OTLP/JSON)
  append: false                        # Overwrite the file each time
Update the Pipelines Section: Add the file exporter to the traces pipeline only:
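A minimal sketch of the updated traces pipeline, assuming the new exporter is named file as configured above:

traces:
  receivers:
    - otlp
  processors:
    - memory_limiter
    - resourcedetection
    - resource/add_mode
  exporters:
    - debug
    - file                             # Writes OTLP/JSON spans to ./agent.out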
Restart your agent: Find your Agent terminal window, and (re)start the agent using the modified configuration:
../otelcol --config=agent.yaml
2025-01-13T12:43:51.747+0100 info service@v0.120.0/service.go:261 Everything is ready. Begin running and processing data.
Send a Trace: From the Spans terminal window send another span and verify you get the same output on the console as we saw previously:
../loadgen -count 1
Verify that the agent.out file is written: Check that a file named agent.out is written in the current directory.
.
├── agent.out # OTLP/Json output created by the File Exporter
└── agent.yaml
Verify the span format:
Verify the format used by the File Exporter to write the span to agent.out.
The output will be a single line in OTLP/JSON format.
To view the contents of agent.out, you can use the cat ./agent.out command. For a more readable formatted view, pipe the output to jq like this: cat ./agent.out | jq:
Stop the agent in the Agent terminal window using Ctrl-C.
2. Gateway Configuration
10 minutes
The OpenTelemetry Gateway is designed to receive, process, and export telemetry data. It acts as an intermediary between telemetry sources (e.g. applications, services) and backends (e.g., observability platforms like Prometheus, Jaeger, or Splunk Observability Cloud).
The gateway is useful because it centralizes telemetry data collection, enabling features like data filtering, transformation, and routing to multiple destinations. It also reduces the load on individual services by offloading telemetry processing and ensures consistent data formats across distributed systems. This makes it easier to manage, scale, and analyze telemetry data in complex environments.
Exercise
In the Gateway terminal window, change into the [WORKSHOP] directory and create a new subdirectory named 2-gateway.
Important
Change ALL terminal windows to the [WORKSHOP]/2-gateway directory.
Back in the Gateway terminal window, copy agent.yaml from the 1-agent directory into 2-gateway.
Create a file called gateway.yaml and add the following initial configuration:
###########################      # This section holds all the
## Configuration section ##      # configurations that can be
###########################      # used in this OpenTelemetry Collector

extensions:                            # List of extensions
  health_check:                        # Health check extension
    endpoint: 0.0.0.0:14133            # Custom port to avoid conflicts

receivers:
  otlp:                                # OTLP receiver
    protocols:
      http:                            # HTTP protocol
        endpoint: "0.0.0.0:5318"       # Custom port to avoid conflicts
        include_metadata: true         # Required for token pass-through

exporters:                             # List of exporters
  debug:                               # Debug exporter
    verbosity: detailed                # Enable detailed debug output
  file/traces:                         # Exporter Type/Name
    path: "./gateway-traces.out"       # Path for OTLP JSON output
    append: false                      # Overwrite the file each time
  file/metrics:                        # Exporter Type/Name
    path: "./gateway-metrics.out"      # Path for OTLP JSON output
    append: false                      # Overwrite the file each time
  file/logs:                           # Exporter Type/Name
    path: "./gateway-logs.out"         # Path for OTLP JSON output
    append: false                      # Overwrite the file each time

connectors:

processors:                            # List of processors
  memory_limiter:                      # Limits memory usage
    check_interval: 2s                 # Memory check interval
    limit_mib: 512                     # Memory limit in MiB
  batch:                               # Batches data before exporting
    metadata_keys:                     # Groups data by token
      - X-SF-Token
  resource/add_mode:                   # Adds metadata
    attributes:
      - action: upsert                 # Inserts or updates a key
        key: otelcol.service.mode      # Key name
        value: "gateway"               # Key value

#############################
##    Activation Section   ##
#############################
service:                               # Service configuration
  telemetry:
    metrics:
      level: none                      # Disable metrics
  extensions: [health_check]           # Enabled extensions
  pipelines:                           # Configured pipelines
    traces:                            # Traces pipeline
      receivers:
        - otlp                         # OTLP receiver
      processors:                      # Processors for traces
        - memory_limiter
        - resource/add_mode
        - batch
      exporters:
        - debug                        # Debug exporter
        - file/traces
    metrics:                           # Metrics pipeline
      receivers:
        - otlp                         # OTLP receiver
      processors:                      # Processors for metrics
        - memory_limiter
        - resource/add_mode
        - batch
      exporters:
        - debug                        # Debug exporter
        - file/metrics
    logs:                              # Logs pipeline
      receivers:
        - otlp                         # OTLP receiver
      processors:                      # Processors for logs
        - memory_limiter
        - resource/add_mode
        - batch
      exporters:
        - debug                        # Debug exporter
        - file/logs
Note
When the gateway is started it will generate three files: gateway-traces.out, gateway-metrics.out, and gateway-logs.out. These files will eventually contain the telemetry data received by the gateway.
.
├── agent.yaml
└── gateway.yaml
Subsections of 2. Gateway Setup
2.1 Start Gateway
The gateway does not need any additional configuration changes to function. This has been done to save time and focus on the core concepts of the Gateway.
Validate the gateway configuration using otelbin.io. For reference, the logs: section of your pipelines will look similar to this:
Start the Gateway: In the Gateway terminal window, run the following command to start the gateway:
../otelcol --config=gateway.yaml
If everything is configured correctly, the first and last lines of the output should look like:
2025/01/15 15:33:53 settings.go:478: Set config to [gateway.yaml]
<snip to the end>
2025-01-13T12:43:51.747+0100 info service@v0.120.0/service.go:261 Everything is ready. Begin running and processing data.
Next, we will configure the agent to send data to the newly created gateway.
2.2 Configure Agent
Exercise
Add the otlphttp exporter: The OTLP/HTTP Exporter is used to send data from the agent to the gateway using the OTLP/HTTP protocol.
Switch to your Agent terminal window.
Validate that the newly generated gateway-logs.out, gateway-metrics.out, and gateway-traces.out are present in the directory.
Open the agent.yaml file in your editor.
Add the otlphttp exporter configuration to the exporters: section:
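A minimal sketch, assuming the gateway's OTLP/HTTP receiver listens on port 5318 as configured in the previous section:

otlphttp:                              # OTLP/HTTP Exporter
  endpoint: "http://localhost:5318"    # Send data to the gateway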
Add a Batch Processor configuration: The Batch Processor will accept spans, metrics, or logs and place them into batches. Batching helps better compress the data and reduce the number of outgoing connections required to transmit the data. It is highly recommended to configure the batch processor on every collector.
Add the batch processor configuration to the processors: section:
batch:                                 # Processor Type
Update the pipelines:
Enable Hostmetrics Receiver:
Add hostmetrics to the metrics pipeline. The HostMetrics Receiver will generate host CPU metrics once per hour with the current configuration.
Enable Batch Processor:
Add the batch processor (after the resource/add_mode processor) to the traces, metrics, and logs pipelines.
Enable OTLPHTTP Exporter:
Add the otlphttp exporter to the traces, metrics, and logs pipelines.
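Putting these changes together, the pipelines section of agent.yaml should now look roughly like the sketch below (assuming the file exporter from the previous section is still attached to the traces pipeline):

pipelines:
  traces:
    receivers:
      - otlp
    processors:
      - memory_limiter
      - resourcedetection
      - resource/add_mode
      - batch
    exporters:
      - debug
      - file
      - otlphttp
  metrics:
    receivers:
      - hostmetrics                    # Host CPU metrics (hourly)
      - otlp
    processors:
      - memory_limiter
      - resourcedetection
      - resource/add_mode
      - batch
    exporters:
      - debug
      - otlphttp
  logs:
    receivers:
      - otlp
    processors:
      - memory_limiter
      - resourcedetection
      - resource/add_mode
      - batch
    exporters:
      - debug
      - otlphttp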
Start the Agent: In the Agent terminal window start the agent with the updated configuration:
../otelcol --config=agent.yaml
Verify CPU Metrics:
Check that when the agent starts, it immediately starts sending CPU metrics.
Both the agent and the gateway will display this activity in their debug output. The output should resemble the following snippet:
<snip>
NumberDataPoints #37
Data point attributes:
-> cpu: Str(cpu0)
-> state: Str(system)
StartTimestamp: 2024-12-09 14:18:28 +0000 UTC
Timestamp: 2025-01-15 15:27:51.319526 +0000 UTC
Value: 9637.660000
At this stage, the agent continues to collect CPU metrics once per hour or upon each restart and sends them to the gateway.
The gateway processes these metrics and exports them to a file named ./gateway-metrics.out. This file stores the exported metrics as part of the pipeline service.
Verify Data arrived at Gateway: To confirm that CPU metrics, specifically for cpu0, have successfully reached the gateway, we’ll inspect the gateway-metrics.out file using the jq command.
The following command filters and extracts the system.cpu.time metric, focusing on cpu0. It displays the metric’s state (e.g., user, system, idle, interrupt) along with the corresponding values.
Run the command below in the Tests terminal to check the system.cpu.time metric:
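One possible query is sketched below; the field names follow the OTLP/JSON layout written by the file exporter, so adjust if your output differs:

jq '.resourceMetrics[].scopeMetrics[].metrics[]
  | select(.name == "system.cpu.time")
  | .sum.dataPoints[]
  | select(any(.attributes[]; .key == "cpu" and .value.stringValue == "cpu0"))
  | {state: (.attributes[] | select(.key == "state") | .value.stringValue), value: .asDouble}' ./gateway-metrics.out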
In the Spans terminal window, run the following command to send 5 spans and validate the output of the agent and gateway debug logs:
../loadgen -count 5
2025-03-06T11:49:00.456Z info Traces {"otelcol.component.id": "debug", "otelcol.component.kind": "Exporter", "otelcol.signal": "traces", "resource spans": 1, "spans": 1}
2025-03-06T11:49:00.456Z info ResourceSpans #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.6.1
Resource attributes:
-> service.name: Str(cinema-service)
-> deployment.environment: Str(production)
-> host.name: Str(workshop-instance)
-> os.type: Str(linux)
-> otelcol.service.mode: Str(agent)
ScopeSpans #0
ScopeSpans SchemaURL:
InstrumentationScope cinema.library 1.0.0
InstrumentationScope attributes:
-> fintest.scope.attribute: Str(Starwars, LOTR)
Span #0
Trace ID : 97fb4e5b13400b5689e3306da7cff077
Parent ID :
ID : 413358465e5b4f15
Name : /movie-validator
Kind : Server
Start time : 2025-03-06 11:49:00.431915 +0000 UTC
End time : 2025-03-06 11:49:01.431915 +0000 UTC
Status code : Ok
Status message : Success
Attributes:
-> user.name: Str(George Lucas)
-> user.phone_number: Str(+1555-867-5309)
-> user.email: Str(george@deathstar.email)
-> user.password: Str(LOTR>StarWars1-2-3)
-> user.visa: Str(4111 1111 1111 1111)
-> user.amex: Str(3782 822463 10005)
-> user.mastercard: Str(5555 5555 5555 4444)
-> payment.amount: Double(87.01)
{"otelcol.component.id": "debug", "otelcol.component.kind": "Exporter", "otelcol.signal": "traces"}
Verify the Gateway has handled the spans: Once the gateway processes incoming spans, it writes the trace data to a file named gateway-traces.out. To confirm that the spans have been successfully handled, you can inspect this file.
Using the jq command, you can extract and display key details about each span, such as its spanId and its position in the file. We can also extract the resource attributes that the resourcedetection processor added to the spans.
jq -c '.resourceSpans[] as $resource | $resource.scopeSpans[].spans[] | "Span \(input_line_number) found with spanId \(.spanId), hostname \($resource.resource.attributes[] | select(.key == "host.name") | .value.stringValue), os \($resource.resource.attributes[] | select(.key == "os.type") | .value.stringValue)"' ./gateway-traces.out
"Span 1 found with spanId d71fe6316276f97d, hostname workshop-instance, os linux"
"Span 2 found with spanId e8d19489232f8c2a, hostname workshop-instance, os linux"
"Span 3 found with spanId 9dfaf22857a6bd05, hostname workshop-instance, os linux"
"Span 4 found with spanId c7f544a4b5fef5fc, hostname workshop-instance, os linux"
"Span 5 found with spanId 30bb49261315969d, hostname workshop-instance, os linux"
Important
Stop the agent and the gateway processes by pressing Ctrl-C in their respective terminals.
3. FileLog Setup
10 minutes
The FileLog Receiver in the OpenTelemetry Collector is used to ingest logs from files.
It monitors specified files for new log entries and streams those logs into the Collector for further processing or exporting. It is also useful for testing and development purposes.
For this part of the workshop, the loadgen will generate logs using random quotes:
lotrQuotes := []string{
	"One does not simply walk into Mordor.",
	"Even the smallest person can change the course of the future.",
	"All we have to decide is what to do with the time that is given us.",
	"There is some good in this world, and it's worth fighting for.",
}

starWarsQuotes := []string{
	"Do or do not, there is no try.",
	"The Force will be with you. Always.",
	"I find your lack of faith disturbing.",
	"In my experience, there is no such thing as luck.",
}
The FileLog receiver in the agent will read these log lines and send them to the gateway.
Exercise
In the Logs terminal window, change into the [WORKSHOP] directory and create a new subdirectory named 3-filelog.
Next, copy *.yaml from 2-gateway into 3-filelog.
Important
Change ALL terminal windows to the [WORKSHOP]/3-filelog directory.
Start the loadgen and this will begin writing lines to a file named quotes.log:
../loadgen -logs
Writing logs to quotes.log. Press Ctrl+C to stop.
.
├── agent.yaml
├── gateway.yaml
└── quotes.log
Subsections of 3. FileLog Setup
3.1 Configuration
Exercise
In the Agent terminal window edit the agent.yaml and configure the FileLog receiver.
Configure FileLog Receiver: The filelog receiver reads log data from a file and includes custom resource attributes in the log data:
filelog/quotes:                        # Receiver Type/Name
  include: ./quotes.log                # The file to read log data from
  include_file_path: true              # Include file path in the log data
  include_file_name: false             # Exclude file name from the log data
  resource:                            # Add custom resource attributes
    com.splunk.source: ./quotes.log    # Source of the log data
    com.splunk.sourcetype: quotes      # Source type of the log data
Update logs pipeline: Add the filelog/quotes receiver to the logs pipeline only:
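For reference, a sketch of the resulting logs pipeline (other components unchanged from the previous sections):

logs:
  receivers:
    - otlp
    - filelog/quotes                   # Reads quotes.log
  processors:
    - memory_limiter
    - resourcedetection
    - resource/add_mode
    - batch
  exporters:
    - debug
    - otlphttp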
Start the Gateway: In your Gateway terminal window start the gateway.
Start the Agent: In your Agent terminal window start the agent.
A continuous stream of log data from quotes.log will appear in the agent and gateway debug logs:
Timestamp: 1970-01-01 00:00:00 +0000 UTC
SeverityText:
SeverityNumber: Unspecified(0)
Body: Str(2025-03-06 15:18:32 [ERROR] - There is some good in this world, and it's worth fighting for. LOTR)
Attributes:
-> log.file.path: Str(quotes.log)
Trace ID:
Span ID:
Flags: 0
LogRecord #1
Stop the loadgen: In the Logs terminal window, stop the loadgen using Ctrl-C.
Verify the gateway: Check if the gateway has written a ./gateway-logs.out file.
At this point, your directory structure will appear as follows:
.
├── agent.out
├── agent.yaml
├── gateway-logs.out # Output from the logs pipeline
├── gateway-metrics.out # Output from the metrics pipeline
├── gateway-traces.out # Output from the traces pipeline
├── gateway.yaml
└── quotes.log # File containing Random log lines
Examine a log line: In gateway-logs.out compare a log line with the snippet below. Verify that the log entry includes the same attributes as we have seen in metrics and traces data previously:
{"resourceLogs":[{"resource":{"attributes":[{"key":"com.splunk.source","value":{"stringValue":"./quotes.log"}},{"key":"com.splunk.sourcetype","value":{"stringValue":"quotes"}},{"key":"host.name","value":{"stringValue":"workshop-instance"}},{"key":"os.type","value":{"stringValue":"linux"}},{"key":"otelcol.service.mode","value":{"stringValue":"gateway"}}]},"scopeLogs":[{"scope":{},"logRecords":[{"observedTimeUnixNano":"1741274312475540000","body":{"stringValue":"2025-03-06 15:18:32 [DEBUG] - All we have to decide is what to do with the time that is given us. LOTR"},"attributes":[{"key":"log.file.path","value":{"stringValue":"quotes.log"}}],"traceId":"","spanId":""},{"observedTimeUnixNano":"1741274312475560000","body":{"stringValue":"2025-03-06 15:18:32 [DEBUG] - Your focus determines your reality. SW"},"attributes":[{"key":"log.file.path","value":{"stringValue":"quotes.log"}}],"traceId":"","spanId":""}]}],"schemaUrl":"https://opentelemetry.io/schemas/1.6.1"}]}
{"resourceLogs":[{"resource":{"attributes":[{"key":"com.splunk.source","value":{"stringValue":"./quotes.log"}},{"key":"com.splunk.sourcetype","value":{"stringValue":"quotes"}},{"key":"host.name","value":{"stringValue":"workshop-instance"}},{"key":"os.type","value":{"stringValue":"linux"}},{"key":"otelcol.service.mode","value":{"stringValue":"gateway"}}]},"scopeLogs":[{"scope":{},"logRecords":[{"observedTimeUnixNano":"1741274312475540000","body":{"stringValue":"2025-03-06 15:18:32 [DEBUG] - All we have to decide is what to do with the time that is given us. LOTR"},"attributes":[{"key":"log.file.path","value":{"stringValue":"quotes.log"}}],"traceId":"","spanId":""},{"observedTimeUnixNano":"1741274312475560000","body":{"stringValue":"2025-03-06 15:18:32 [DEBUG] - Your focus determines your reality. SW"},"attributes":[{"key":"log.file.path","value":{"stringValue":"quotes.log"}}],"traceId":"","spanId":""}]}],"schemaUrl":"https://opentelemetry.io/schemas/1.6.1"}]}
You may also have noticed that every log line contains empty placeholders for "traceId":"" and "spanId":"".
The FileLog receiver will populate these fields only if they are not already present in the log line.
For example, if the log line is generated by an application instrumented with an OpenTelemetry instrumentation library, these fields will already be included and will not be overwritten.
Important
Stop the agent and the gateway processes by pressing Ctrl-C in their respective terminals.
4. Building In Resilience
10 minutes
The OpenTelemetry Collector’s FileStorage Extension enhances the resilience of your telemetry pipeline by providing reliable checkpointing, managing retries, and handling temporary failures effectively.
With this extension enabled, the OpenTelemetry Collector can store intermediate states on disk, preventing data loss during network disruptions and allowing it to resume operations seamlessly.
Note
This solution will work for metrics as long as the connection downtime is brief—up to 15 minutes. If the downtime exceeds this, Splunk Observability Cloud will drop data due to datapoints being out of order.
For logs, there are plans to implement a more enterprise-ready solution in one of the upcoming Splunk OpenTelemetry Collector releases.
Exercise
Inside the [WORKSHOP] directory, create a new subdirectory named 4-resilience.
Next, copy *.yaml from the 3-filelog directory into 4-resilience.
Important
Change ALL terminal windows to the [WORKSHOP]/4-resilience directory.
Your updated directory structure will now look like this:
.
├── agent.yaml
└── gateway.yaml
Subsections of 4. Building Resilience
4.1 File Storage Configuration
In this exercise, we will update the extensions: section of the agent.yaml file. This section is part of the OpenTelemetry configuration YAML and defines optional components that enhance or modify the OpenTelemetry Collector’s behavior.
While these components do not process telemetry data directly, they provide valuable capabilities and services to improve the Collector’s functionality.
Exercise
Update the agent.yaml: In the Agent terminal window, add the file_storage extension and name it checkpoint:
file_storage/checkpoint:               # Extension Type/Name
  directory: "./checkpoint-dir"        # Define directory
  create_directory: true               # Create directory
  timeout: 1s                          # Timeout for file operations
  compaction:                          # Compaction settings
    on_start: true                     # Start compaction at Collector startup
    directory: "./checkpoint-dir/tmp"  # Define compaction directory
    max_transaction_size: 65536        # Max. size limit before compaction occurs
Add file_storage to existing otlphttp exporter: Modify the otlphttp: exporter to configure retry and queuing mechanisms, ensuring data is retained and resent if failures occur:
otlphttp:
  endpoint: "http://localhost:5318"
  retry_on_failure:
    enabled: true                      # Enable retry on failure
  sending_queue:
    enabled: true                      # Enable sending queue
    num_consumers: 10                  # No. of consumers
    queue_size: 10000                  # Max. queue size
    storage: file_storage/checkpoint   # File storage extension
Update the services section: Add the file_storage/checkpoint extension to the existing extensions: section. This will cause the extension to be enabled:
service:
  extensions:
    - health_check
    - file_storage/checkpoint          # Enabled extensions for this collector
Update the metrics pipeline: For this exercise, we are going to remove the hostmetrics receiver from the metrics pipeline to reduce debug and log noise:
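After removing the receiver, the metrics pipeline would look roughly like this sketch:

metrics:
  receivers:
    - otlp                             # hostmetrics removed to reduce noise
  processors:
    - memory_limiter
    - resourcedetection
    - resource/add_mode
    - batch
  exporters:
    - debug
    - otlphttp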
Next, we will configure our environment to be ready for testing the File Storage configuration.
Exercise
Start the Gateway: In the Gateway terminal window navigate to the [WORKSHOP]/4-resilience directory and run:
../otelcol --config=gateway.yaml
Start the Agent: In the Agent terminal window navigate to the [WORKSHOP]/4-resilience directory and run:
../otelcol --config=agent.yaml
Send five test spans: In the Spans terminal window navigate to the [WORKSHOP]/4-resilience directory and run:
../loadgen -count 5
Both the agent and gateway should display debug logs, and the gateway should create a ./gateway-traces.out file.
If everything functions correctly, we can proceed with testing system resilience.
4.3 Simulate Failure
To assess the Agent’s resilience, we’ll simulate a temporary gateway outage and observe how the agent handles it:
Summary:
Send Traces to the Agent – Generate traffic by sending traces to the agent.
Stop the Gateway – This will trigger the agent to enter retry mode.
Restart the Gateway – The agent will recover traces from its persistent queue and forward them successfully. Without the persistent queue, these traces would have been lost permanently.
Exercise
Simulate a network failure: In the Gateway terminal stop the gateway with Ctrl-C and wait until the gateway console shows that it has stopped:
2025-01-28T13:24:32.785+0100 info service@v0.120.0/service.go:309 Shutdown complete.
Send traces: In the Spans terminal window send five more traces using the loadgen.
Notice that the agent’s retry mechanism is activated as it continuously attempts to resend the data. In the agent’s console output, you will see repeated messages similar to the following:
2025-01-28T14:22:47.020+0100 info internal/retry_sender.go:126 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "traces", "name": "otlphttp", "error": "failed to make an HTTP request: Post \"http://localhost:5318/v1/traces\": dial tcp 127.0.0.1:5318: connect: connection refused", "interval": "9.471474933s"}
Stop the Agent: In the Agent terminal window, use Ctrl-C to stop the agent. Wait until the agent’s console confirms it has stopped:
2025-01-28T14:40:28.702+0100 info extensions/extensions.go:66 Stopping extensions...
2025-01-28T14:40:28.702+0100 info service@v0.120.0/service.go:309 Shutdown complete.
Tip
Stopping the agent will halt its retry attempts and prevent any future retry activity.
If the agent runs for too long without successfully delivering data, it may begin dropping traces, depending on the retry configuration, to conserve memory. By stopping the agent, any metrics, traces, or logs still held in its sending queue are persisted to the file storage checkpoint before they can be dropped, ensuring they remain available for recovery.
This step is essential for clearly observing the recovery process when the agent is restarted.
4.4 Recovery
In this exercise, we’ll test how the OpenTelemetry Collector recovers from a network outage by restarting the Gateway. When the gateway becomes available again, the agent will resume sending data from its last checkpointed state, ensuring no data loss.
Exercise
Restart the Gateway: In the Gateway terminal window run:
../otelcol --config=gateway.yaml
Restart the Agent: In the Agent terminal window run:
../otelcol --config=agent.yaml
After the agent is up and running, the file_storage extension will detect buffered data in the checkpoint folder. It will start to dequeue the stored spans from the last checkpoint, ensuring no data is lost.
Exercise
Verify the Agent debug output: Note that the agent debug screen does NOT change; it still shows the following line, indicating that no new data is being exported.
2025-02-07T13:40:12.195+0100 info service@v0.120.0/service.go:253 Everything is ready. Begin running and processing data.
Watch the Gateway debug output: You should see from the gateway debug screen that it has started receiving the previously missed traces without requiring any additional action on your part.
2025-02-07T12:44:32.651+0100 info service@v0.120.0/service.go:253 Everything is ready. Begin running and processing data.
2025-02-07T12:47:46.721+0100 info Traces {"kind": "exporter", "data_type": "traces", "name": "debug", "resource spans": 4, "spans": 4}
2025-02-07T12:47:46.721+0100 info ResourceSpans #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.6.1
Resource attributes:
Check the gateway-traces.out file: Using jq, count the number of spans in the recreated gateway-traces.out. It should match the number you sent while the gateway was down.
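For example, a sketch of such a count (the file exporter writes one JSON object per line, so -s slurps all lines into an array first):

jq -s '[.[].resourceSpans[].scopeSpans[].spans[]] | length' ./gateway-traces.out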
Stop the agent and the gateway processes by pressing Ctrl-C in their respective terminals.
Conclusion
This exercise demonstrated how to enhance the resilience of the OpenTelemetry Collector by configuring the file_storage extension, enabling retry mechanisms for the otlphttp exporter, and using a file-backed queue for temporary data storage.
By implementing file-based checkpointing and queue persistence, you ensure the telemetry pipeline can gracefully recover from temporary interruptions, making it more robust and reliable for production environments.
5. Dropping Spans
10 minutes
In this section, we will explore how to use the Filter Processor to selectively drop spans based on certain conditions.
Specifically, we will drop traces based on the span name, which is commonly used to filter out unwanted spans such as health checks or internal communication traces. In this case, we will be filtering out spans whose name is "/_healthz", which are typically associated with health check requests and are usually quite noisy.
Exercise
Inside the [WORKSHOP] directory, create a new subdirectory named 5-dropping-spans.
Next, copy *.yaml from the 4-resilience directory into 5-dropping-spans.
Important
Change ALL terminal windows to the [WORKSHOP]/5-dropping-spans directory.
Your updated directory structure will now look like this:
.
├── agent.yaml
└── gateway.yaml
Next, we will configure the filter processor and the respective pipelines.
Subsections of 5. Dropping Spans
5.1 Configuration
Exercise
Switch to your Gateway terminal window and open the gateway.yaml file. Update the processors section with the following configuration:
Add a filter processor: Configure the gateway to exclude spans with the name /_healthz. The error_mode: ignore directive ensures that any errors encountered during filtering are ignored, allowing the pipeline to continue running smoothly. The traces section defines the filtering rules, specifically targeting spans named /_healthz for exclusion.
filter/health:                         # Defines a filter processor
  error_mode: ignore                   # Ignore errors
  traces:                              # Filtering rules for traces
    span:                              # Exclude spans named "/_healthz"
      - 'name == "/_healthz"'
Add the filter processor to the traces pipeline: Include the filter/health processor in the traces pipeline. For optimal performance, place the filter as early as possible—right after the memory_limiter and before the batch processor. Here’s how the configuration should look:
traces:
  receivers:
    - otlp
  processors:
    - memory_limiter
    - filter/health                    # Filters data based on rules
    - resource/add_mode
    - batch
  exporters:
    - debug
    - file/traces
This setup ensures that health check-related spans (/_healthz) are filtered out early in the pipeline, reducing unnecessary noise in your telemetry data.
Validate the agent configuration using otelbin.io. For reference, the traces: section of your pipelines will look similar to this:
To test your configuration, you’ll need to generate some trace data that includes a span named "/_healthz".
Exercise
Start the Gateway: In your Gateway terminal window start the gateway.
../otelcol --config=gateway.yaml
Start the Agent: In your Agent terminal window start the agent.
../otelcol --config=agent.yaml
Start the Loadgen: In the Spans terminal window run the loadgen with the flag to also send healthz spans along with base spans:
../loadgen -health -count 5
Verify agent.out: Using jq confirm the name of the spans received by the agent:
jq -c '.resourceSpans[].scopeSpans[].spans[] | "Span \(input_line_number) found with name \(.name)"' ./agent.out
"Span 1 found with name /movie-validator"
"Span 2 found with name /_healthz"
"Span 3 found with name /movie-validator"
"Span 4 found with name /_healthz"
"Span 5 found with name /movie-validator"
"Span 6 found with name /_healthz"
"Span 7 found with name /movie-validator"
"Span 8 found with name /_healthz"
"Span 9 found with name /movie-validator"
"Span 10 found with name /_healthz"
Check the Gateway Debug output: Using jq confirm the name of the spans received by the gateway:
jq -c '.resourceSpans[].scopeSpans[].spans[] | "Span \(input_line_number) found with name \(.name)"' ./gateway-traces.out
The gateway-traces.out file will not contain any spans named /_healthz.
"Span 1 found with name /movie-validator"
"Span 2 found with name /movie-validator"
"Span 3 found with name /movie-validator"
"Span 4 found with name /movie-validator"
"Span 5 found with name /movie-validator"
Tip
When using the Filter processor make sure you understand the look of your incoming data and test the configuration thoroughly. In general, use as specific a configuration as possible to lower the risk of the wrong data being dropped.
You can further extend this configuration to filter out spans based on different attributes, tags, or other criteria, making the OpenTelemetry Collector more customizable and efficient for your observability needs.
Important
Stop the agent and the gateway processes by pressing Ctrl-C in their respective terminals.
6. Redacting Sensitive Data
10 minutes
In this section, you’ll learn how to configure the OpenTelemetry Collector to remove specific tags and redact sensitive data from telemetry spans. This is crucial for protecting sensitive information such as credit card numbers, personal data, or other security-related details that must be anonymized before being processed or exported.
We’ll walk through configuring key processors in the OpenTelemetry Collector, including:
Redaction Processor: Ensures sensitive data is sanitized before being stored or transmitted.
Exercise
Inside the [WORKSHOP] directory, create a new subdirectory named 6-sensitive-data.
Next, copy *.yaml from the 5-dropping-spans directory into 6-sensitive-data.
Important
Change ALL terminal windows to the [WORKSHOP]/6-sensitive-data directory.
Your updated directory structure will now look like this:
.
├── agent.yaml
└── gateway.yaml
Subsections of 6. Sensitive Data
6.1 Configuration
In this step, we’ll modify agent.yaml to include the attributes and redaction processors. These processors will help ensure that sensitive data within span attributes is properly handled before being logged or exported.
Previously, you may have noticed that some span attributes displayed in the console contained personal and sensitive data. We’ll now configure the necessary processors to filter out and redact this information effectively.
Switch to your Agent terminal window and open the agent.yaml file in your editor. We’ll add two processors to enhance the security and privacy of your telemetry data: the Attributes Processor and the Redaction Processor.
Add an attributes Processor: The Attributes Processor allows you to modify span attributes (tags) by updating, deleting, or hashing their values. This is particularly useful for obfuscating sensitive information before it is exported.
In this step, we’ll:
Update the user.phone_number attribute to a static value ("UNKNOWN NUMBER").
Hash the user.email attribute to ensure the original email is not exposed.
Delete the user.password attribute to remove it entirely from the span.
attributes/update:
  actions:                             # Actions
    - key: user.phone_number           # Target key
      action: update                   # Update action
      value: "UNKNOWN NUMBER"          # New value
    - key: user.email                  # Target key
      action: hash                     # Hash the email value
    - key: user.password               # Target key
      action: delete                   # Delete the password
Add a redaction Processor: The Redaction Processor detects and redacts sensitive data in span attributes based on predefined patterns, such as credit card numbers or other personally identifiable information (PII).
In this step:
We set allow_all_keys: true to ensure all attributes are processed (if set to false, only explicitly allowed keys are retained).
We define blocked_values with regular expressions to detect and redact Visa and MasterCard credit card numbers.
The summary: debug option logs detailed information about the redaction process for debugging purposes.
redaction/redact:
  allow_all_keys: true                 # If false, only allowed keys will be retained
  blocked_values:                      # List of regex patterns to block
    - '\b4[0-9]{3}[\s-]?[0-9]{4}[\s-]?[0-9]{4}[\s-]?[0-9]{4}\b'        # Visa
    - '\b5[1-5][0-9]{2}[\s-]?[0-9]{4}[\s-]?[0-9]{4}[\s-]?[0-9]{4}\b'   # MasterCard
  summary: debug                       # Show debug details about redaction
Update the traces Pipeline: Integrate both processors into the traces pipeline. Make sure that you comment out the redaction processor at first (we will enable it later in a separate exercise):
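A sketch of the updated traces pipeline, with the redaction processor commented out for now (the exact processor ordering may differ in your file):

traces:
  receivers:
    - otlp
  processors:
    - memory_limiter
    - attributes/update                # Update, hash, and delete attributes
    # - redaction/redact               # Enabled in a later exercise
    - resourcedetection
    - resource/add_mode
    - batch
  exporters:
    - debug
    - file
    - otlphttp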
In this exercise, we will delete the user.password attribute, update the user.phone_number attribute, and hash the user.email value in the span data before it is exported by the agent.
Exercise
Start the Gateway: In your Gateway terminal window start the gateway.
../otelcol --config=gateway.yaml
Start the Agent: In your Agent terminal window start the agent.
../otelcol --config=agent.yaml
Start the Load Generator: In the Spans terminal window start the loadgen:
../loadgen -count 1
Check the debug output: For both the agent and gateway debug output, confirm that user.password has been removed, and both user.phone_number & user.email have been updated.
Check file output: Using jq, validate that user.password has been removed, and user.phone_number & user.email have been updated in gateway-traces.out:
jq '.resourceSpans[].scopeSpans[].spans[].attributes[] | select(.key == "user.password" or .key == "user.phone_number" or .key == "user.email") | {key: .key, value: .value.stringValue}' ./gateway-traces.out
Notice that the user.password attribute has been removed, and the user.phone_number & user.email values have been updated:
Enable the redaction processor: In the Agent terminal window stop the agent with Ctrl-C, then edit agent.yaml and uncomment the redaction/redact processor in the traces pipeline.
Start the Agent: In your Agent terminal window start the agent.
../otelcol --config=agent.yaml
Start the Load Generator: In the Spans terminal window start the loadgen:
../loadgen -count 1
Check the debug output: For both the agent and gateway, confirm the values for user.visa & user.mastercard have been updated. Notice that the user.amex attribute value was NOT redacted because a matching regex pattern was not added to blocked_values.
By including summary: debug in the redaction processor, the debug output will include summary information about which matching key values were redacted, along with the count of values that were masked.
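You can also verify the file output with a jq query similar to the earlier one, for example:

jq '.resourceSpans[].scopeSpans[].spans[].attributes[] | select(.key == "user.visa" or .key == "user.mastercard" or .key == "user.amex") | {key: .key, value: .value.stringValue}' ./gateway-traces.out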
These are just a few examples of how attributes and redaction processors can be configured to protect sensitive data.
Important
Stop the agent and the gateway processes by pressing Ctrl-C in their respective terminals.
7. Transform Data
10 minutes
The Transform Processor lets you modify telemetry data—logs, metrics, and traces—as it flows through the pipeline. Using the OpenTelemetry Transformation Language (OTTL), you can filter, enrich, and transform data on the fly without touching your application code.
In this exercise we’ll update agent.yaml to include a Transform Processor that will:
Filter log resource attributes.
Parse JSON structured log data into attributes.
Set log severity levels based on the log message body.
You may have noticed that in previous logs, fields like SeverityText and SeverityNumber were undefined (this is typical of the filelog receiver). However, the severity is embedded within the log body:
<snip>
SeverityText:
SeverityNumber: Unspecified(0)
Body: Str(2025-01-31 15:49:29 [WARN] - Do or do not, there is no try.)
</snip>
Logs often contain structured data encoded as JSON within the log body. Extracting these fields into attributes allows for better indexing, filtering, and querying. Instead of manually parsing JSON in downstream systems, OTTL enables automatic transformation at the telemetry pipeline level.
Exercise
Inside the [WORKSHOP] directory, create a new subdirectory named 7-transform-data.
Next, copy *.yaml from the 6-sensitive-data directory into 7-transform-data.
Important
Change ALL terminal windows to the [WORKSHOP]/7-transform-data directory.
Your updated directory structure will now look like this:
.
├── agent.yaml
└── gateway.yaml
Subsections of 7. Transform Data
7.1 Configuration
Exercise
Add a transform processor: Switch to your Agent terminal window and edit the agent.yaml and add the following transform processor:
transform/logs:                        # Processor Type/Name
  log_statements:                      # Log Processing Statements
    - context: resource                # Log Context
      statements:                      # List of attribute keys to keep
        - keep_keys(attributes, ["com.splunk.sourcetype", "host.name", "otelcol.service.mode"])
By using the context: resource key we are targeting the resource attributes of the logs (the resourceLogs section).
This configuration ensures that only the relevant resource attributes (com.splunk.sourcetype, host.name, otelcol.service.mode) are retained, improving log efficiency and reducing unnecessary metadata.
Adding a Context Block for Log Severity Mapping: To properly set the severity_text and severity_number fields of a log record, we add a log context block within log_statements. This configuration extracts the level value from the log body, maps it to severity_text, and assigns the corresponding severity_number based on the log level:
    - context: log                     # Log Context
      statements:                      # Transform Statements Array
        - set(cache, ParseJSON(body)) where IsMatch(body, "^\\{")   # Parse JSON log body into a cache object
        - flatten(cache, "")                                        # Flatten nested JSON structure
        - merge_maps(attributes, cache, "upsert")                   # Merge cache into attributes, updating existing keys
        - set(severity_text, attributes["level"])                   # Set severity_text from the "level" attribute
        - set(severity_number, 1) where severity_text == "TRACE"    # Map severity_text to severity_number
        - set(severity_number, 5) where severity_text == "DEBUG"
        - set(severity_number, 9) where severity_text == "INFO"
        - set(severity_number, 13) where severity_text == "WARN"
        - set(severity_number, 17) where severity_text == "ERROR"
        - set(severity_number, 21) where severity_text == "FATAL"
The merge_maps function is used to combine two maps (dictionaries) into one. In this case, it merges the cache object (containing parsed JSON data from the log body) into the attributes map.
Parameters:
attributes: The target map where the data will be merged.
cache: The source map containing the parsed JSON data.
"upsert": This mode ensures that if a key already exists in the attributes map, its value will be updated with the value from cache. If the key does not exist, it will be inserted.
This step is crucial because it ensures that all relevant fields from the log body (e.g., level, message, etc.) are added to the attributes map, making them available for further processing or exporting.
Summary of Key Transformations:
Parse JSON: Extracts structured data from the log body.
Flatten JSON: Converts nested JSON objects into a flat structure.
Merge Attributes: Integrates extracted data into log attributes.
Map Severity Text: Assigns severity_text from the log’s level attribute.
Assign Severity Numbers: Converts severity levels into standardized numerical values.
You should have a single transform processor containing two context blocks: one for resource and one for log.
This configuration ensures that log severity is correctly extracted, standardized, and structured for efficient processing.
Tip
This method of mapping all JSON fields to top-level attributes should only be used for testing and debugging OTTL. It will result in high cardinality in a production scenario.
Update the logs pipeline: Add the transform/logs: processor into the logs: pipeline:
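For reference, a sketch of the resulting logs pipeline (processor ordering may differ slightly in your file):

logs:
  receivers:
    - otlp
    - filelog/quotes
  processors:
    - memory_limiter
    - transform/logs                   # Transform processor for logs
    - resourcedetection
    - resource/add_mode
    - batch
  exporters:
    - debug
    - otlphttp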
Start the Load Generator: Open the Logs terminal window and run the loadgen.
Important
To ensure the logs are structured in JSON format, include the -json flag when starting the script.
../loadgen -logs -json -count 5
The loadgen will write 5 log lines to ./quotes.log in JSON format.
7.3 Test Transform Processor
This test verifies that the com.splunk.source and os.type metadata have been removed from the log resource attributes before being exported by the agent. Additionally, the test ensures that:
The log body is parsed to extract severity information.
SeverityText and SeverityNumber are set on the LogRecord.
JSON fields from the log body are promoted to log attributes.
This ensures proper metadata filtering, severity mapping, and structured log enrichment before export.
Exercise
Check the debug output: For both the agent and gateway, confirm that com.splunk.source and os.type have been removed:
For both the agent and gateway confirm that SeverityText and SeverityNumber in the LogRecord is now defined with the severity level from the log body. Confirm that the JSON fields from the body can be accessed as top-level log Attributes:
[{"severityText":"DEBUG","severityNumber":5,"body":"{\"level\":\"DEBUG\",\"message\":\"All we have to decide is what to do with the time that is given us.\",\"movie\":\"LOTR\",\"timestamp\":\"2025-03-07 11:56:29\"}"},{"severityText":"WARN","severityNumber":13,"body":"{\"level\":\"WARN\",\"message\":\"The Force will be with you. Always.\",\"movie\":\"SW\",\"timestamp\":\"2025-03-07 11:56:29\"}"},{"severityText":"ERROR","severityNumber":17,"body":"{\"level\":\"ERROR\",\"message\":\"One does not simply walk into Mordor.\",\"movie\":\"LOTR\",\"timestamp\":\"2025-03-07 11:56:29\"}"},{"severityText":"DEBUG","severityNumber":5,"body":"{\"level\":\"DEBUG\",\"message\":\"Do or do not, there is no try.\",\"movie\":\"SW\",\"timestamp\":\"2025-03-07 11:56:29\"}"}][{"severityText":"ERROR","severityNumber":17,"body":"{\"level\":\"ERROR\",\"message\":\"There is some good in this world, and it's worth fighting for.\",\"movie\":\"LOTR\",\"timestamp\":\"2025-03-07 11:56:29\"}"}]
Important
Stop the agent and the gateway processes by pressing Ctrl-C in their respective terminals.
8. Routing Data
10 minutes
The Routing Connector in OpenTelemetry is a powerful feature that allows you to direct data (traces, metrics, or logs) to different pipelines based on specific criteria. This is especially useful in scenarios where you want to apply different processing or exporting logic to subsets of your telemetry data.
For example, you might want to send production data to one exporter while directing test or development data to another. Similarly, you could route certain spans based on their attributes, such as service name, environment, or span name, to apply custom processing or storage logic.
Exercise
Inside the [WORKSHOP] directory, create a new subdirectory named 8-routing-data.
Next, copy *.yaml from the 7-transform-data directory into 8-routing-data.
Change all terminal windows to the [WORKSHOP]/8-routing-data directory.
Your updated directory structure will now look like this:
.
├── agent.yaml
└── gateway.yaml
Next, we will configure the routing connector and the respective pipelines.
Subsections of 8. Routing Data
8.1 Configure the Routing Connector
In this exercise, you will configure the Routing Connector in the gateway.yaml file. This setup enables the gateway to route traces based on the deployment.environment attribute in the spans you send. By implementing this, you can process and handle traces differently depending on their attributes.
Exercise
In OpenTelemetry configuration files, connectors have their own dedicated section, similar to receivers and processors.
Add the routing connector:
In the Gateway terminal window edit gateway.yaml and add the following below the connectors: section:
routing:
  default_pipelines: [traces/standard] # Default pipeline if no rule matches
  error_mode: ignore                   # Ignore errors in routing
  table:                               # Define routing rules
    # Routes spans to a target pipeline if the resourceSpan attribute matches the rule
    - statement: route() where attributes["deployment.environment"] == "security-applications"
      pipelines: [traces/security]     # Target pipeline
The rules above apply to traces, but this approach also applies to metrics and logs, allowing them to be routed based on attributes in resourceMetrics or resourceLogs.
Configure file exporters: The routing connector requires separate targets for routing. Add two file exporters, file/traces/security and file/traces/standard, to ensure data is directed correctly.
file/traces/standard:                    # Exporter for regular traces
  path: "./gateway-traces-standard.out"  # Path for saving trace data
  append: false                          # Overwrite the file each time
file/traces/security:                    # Exporter for security traces
  path: "./gateway-traces-security.out"  # Path for saving trace data
  append: false                          # Overwrite the file each time
With the routing configuration complete, the next step is to configure the pipelines to apply these routing rules.
8.2 Configuring the Pipelines
Exercise
Update the traces pipeline to use routing:
To enable routing, update the original traces: pipeline by using routing as the only exporter. This ensures all span data is sent through the routing connector for evaluation.
Remove all processors and replace them with an empty array ([]). These are now defined in the traces/standard and traces/security pipelines, as shown in the sketch below.
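A sketch of the updated traces pipeline in gateway.yaml:

traces:                                # Original traces pipeline
  receivers:
    - otlp                             # OTLP receiver
  processors: []                       # Processors moved to traces/standard and traces/security
  exporters:
    - routing                          # Route all spans through the routing connector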
Add both the standard and security traces pipelines:
Configure the Security pipeline: This pipeline will handle all spans that match the routing rule for security.
This uses routing as its receiver. Place it below the existing traces: pipeline:
traces/security:                       # New Security Traces/Spans Pipeline
  receivers:
    - routing                          # Receive data from the routing connector
  processors:
    - memory_limiter                   # Memory Limiter Processor
    - resource/add_mode                # Adds collector mode metadata
    - batch
  exporters:
    - debug                            # Debug Exporter
    - file/traces/security             # File Exporter for spans matching rule
Add the Standard pipeline: This pipeline processes all spans that do not match the routing rule.
This pipeline is also using routing as its receiver. Add this below the traces/security one:
traces/standard:                       # Default pipeline for unmatched spans
  receivers:
    - routing                          # Receive data from the routing connector
  processors:
    - memory_limiter                   # Memory Limiter Processor
    - resource/add_mode                # Adds collector mode metadata
    - batch
  exporters:
    - debug                            # Debug exporter
    - file/traces/standard             # File exporter for unmatched spans
Validate the gateway configuration using otelbin.io. For reference, the traces: section of your pipelines will look similar to this:
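If you cannot view the otelbin.io diagram, the sketch below shows the expected shape of the gateway's traces pipelines. The otlp receiver in the traces: pipeline is an assumption carried over from the earlier exercises; keep whatever receivers your existing configuration defines:

    traces:                       # Original pipeline, now feeding the routing connector
      receivers:
        - otlp                    # Assumed receiver from earlier sections
      processors: []              # Processors moved to the routed pipelines
      exporters:
        - routing                 # All spans are evaluated by the routing connector
    traces/security:
      receivers: [routing]
      processors: [memory_limiter, resource/add_mode, batch]
      exporters: [debug, file/traces/security]
    traces/standard:
      receivers: [routing]
      processors: [memory_limiter, resource/add_mode, batch]
      exporters: [debug, file/traces/standard]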
In this section, we will test the routing rule configured for the Gateway. The expected result is that a span generated by the loadgen with the security flag will be written to the gateway-traces-security.out file, while regular spans will be written to gateway-traces-standard.out.
Start the Gateway: In your Gateway terminal window start the gateway.
../otelcol --config gateway.yaml
Start the Agent: In your Agent terminal window start the agent.
../otelcol --config agent.yaml
Send a Regular Span: In the Spans terminal window send a regular span using the loadgen:
../loadgen -count 1
Both the agent and gateway will display debug information. The gateway will also generate a new gateway-traces-standard.out file, as this is now the designated destination for regular spans.
Tip
If you check gateway-traces-standard.out, it will contain the span sent by loadgen. You will also see an empty gateway-traces-security.out file, as the routing configuration creates output files immediately, even if no matching spans have been processed yet.
Send a Security Span: In the Spans terminal window send a security span using the security flag:
../loadgen -security -count 1
Again, both the agent and gateway should display debug information, including the span you just sent. This time, the gateway will write a line to the gateway-traces-security.out file, which is designated for spans where the deployment.environment resource attribute matches "security-applications".
The gateway-traces-standard.out should be unchanged.
You can repeat this scenario multiple times, and each trace will be written to its corresponding output file.
Important
Stop the agent and the gateway processes by pressing Ctrl-C in their respective terminals.
Conclusion
In this section, we successfully tested the routing connector in the gateway by sending different spans and verifying their destinations.
Regular spans were correctly routed to gateway-traces-standard.out, confirming that spans without a matching deployment.environment attribute follow the default pipeline.
Security-related spans were routed to gateway-traces-security.out, demonstrating that the routing rule based on "deployment.environment": "security-applications" works as expected.
By inspecting the output files, we confirmed that the OpenTelemetry Collector correctly evaluates span attributes and routes them to the appropriate destinations. This validates that routing rules can effectively separate and direct telemetry data for different use cases.
You can now extend this approach by defining additional routing rules to further categorize spans, metrics, and logs based on different attributes.
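For example, adding a second entry to the routing table would send spans from another environment to their own pipeline. Both the dev value and the traces/dev pipeline below are hypothetical and only illustrate the pattern:

    table:
      - statement: route() where attributes["deployment.environment"] == "security-applications"
        pipelines: [traces/security]
      - statement: route() where attributes["deployment.environment"] == "dev"   # Hypothetical value
        pipelines: [traces/dev]                                                  # Hypothetical pipeline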
Create metrics with Count Connector
10 minutes
In this section, we’ll explore how to use the Count Connector to extract attribute values from logs and convert them into meaningful metrics.
Specifically, we’ll use the Count Connector to track the number of “Star Wars” and “Lord of the Rings” quotes appearing in our logs, turning them into measurable data points.
Exercise
Inside the [WORKSHOP] directory, create a new subdirectory named 9-sum-count.
Next, copy all contents from the 8-routing-data directory into 9-sum-count.
After copying, remove any *.out and *.log files.
Change all terminal windows to the [WORKSHOP]/9-sum-count directory.
.
├── agent.yaml
└── gateway.yaml
Update the agent.yaml to change how frequently logs are read.
Find the filelog/quotes receiver in the agent.yaml and add a poll_interval attribute:
  filelog/quotes:                 # Receiver Type/Name
    poll_interval: 10s            # Only read every ten seconds
The reason for the delay is that the Count Connector in the OpenTelemetry Collector counts logs only within each processing interval; every time data is read, the count resets to zero for the next interval. With the Filelog receiver's default poll interval of 200ms, the Collector reads each line as soon as the loadgen writes it, producing counts of 1. The longer 10-second interval ensures there are multiple entries to count in each batch.
By omitting conditions, the Collector counts every log it reads in each interval, as shown below. However, it's best practice to let your backend handle running counts, since it can track them over a longer time period.
Exercise
Add the Count Connector
Include the Count Connector in the connectors section of your configuration and define the metrics counters we want to use:
connectors:
  count:
    logs:
      logs.full.count:
        description: "Running count of all logs read in interval"
      logs.sw.count:
        description: "StarWarsCount"
        conditions:
          - attributes["movie"] == "SW"
      logs.lotr.count:
        description: "LOTRCount"
        conditions:
          - attributes["movie"] == "LOTR"
      logs.error.count:
        description: "ErrorCount"
        conditions:
          - attributes["level"] == "ERROR"
Explanation of the Metrics Counters
logs.full.count: Tracks the total number of logs processed during each read interval. Since this metric has no filtering conditions, every log that passes through the system is included in the count.
logs.sw.count: Counts logs that contain a quote from a Star Wars movie.
logs.lotr.count: Counts logs that contain a quote from a Lord of the Rings movie.
logs.error.count: Represents a real-world scenario by counting logs with a severity level of ERROR for the read interval.
Configure the Count Connector in the pipelines: In the pipeline configuration below, the count connector is added as an exporter in the logs pipeline and as a receiver in the metrics pipeline.
We count logs based on their attributes. If your log data is stored in the log body instead of attributes, you’ll need to use a Transform processor in your pipeline to extract key/value pairs and add them as attributes.
In this workshop, we’ve already added merge_maps(attributes, cache, "upsert") in the 07-transform section. This ensures that all relevant data is included in the log attributes for processing.
When selecting fields to create attributes from, be mindful—adding all fields indiscriminately is generally not ideal for production environments. Instead, choose only the fields that are truly necessary to avoid unnecessary data clutter.
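As a reminder, the transform processor statements from that section follow roughly this shape. The processor name transform/logs and the ParseJSON step are illustrative; your 07-transform configuration remains the authoritative version:

processors:
  transform/logs:                                    # Illustrative processor name
    log_statements:
      - context: log
        statements:
          - set(cache, ParseJSON(body))              # Parse the JSON body into the cache map
          - merge_maps(attributes, cache, "upsert")  # Promote key/value pairs to log attributes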
Exercise
Validate the agent configuration using otelbin.io. For reference, the logs: and metrics: sections of your pipelines will look like this:
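If you cannot view the otelbin.io diagram, the wiring looks roughly like the sketch below. Only the count entries matter here; the other receivers, processors, and exporters are placeholders for whatever your existing agent configuration defines:

    logs:
      receivers: [filelog/quotes]             # Existing log receiver
      processors: [memory_limiter, batch]     # Placeholder for your existing processors
      exporters: [debug, otlphttp, count]     # count added as an exporter
    metrics:
      receivers: [otlp, count]                # count added as a receiver
      processors: [memory_limiter, batch]     # Placeholder for your existing processors
      exporters: [debug, otlphttp]            # Placeholder for your existing exporters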
Start the Gateway: In the Gateway terminal window navigate to the [WORKSHOP]/9-sum-count directory and run:
../otelcol --config=gateway.yaml
Start the Agent: In the Agent terminal window navigate to the [WORKSHOP]/9-sum-count directory and run:
../otelcol --config=agent.yaml
Send 12 Log Lines with the Loadgen: In the Spans terminal window navigate to the [WORKSHOP]/9-sum-count directory. Send 12 log lines; they should be read in two intervals. Do this with the following loadgen command:
../loadgen -logs -json -count 12
Both the agent and gateway will display debug information, showing they are processing data. Wait until the loadgen completes.
Verify metrics have been generated: As the logs are processed, the Agent generates metrics and forwards them to the Gateway, which then writes them to gateway-metrics.out.
To check if the metrics logs.full.count, logs.sw.count, logs.lotr.count, and logs.error.count are present in the output, run the following jq query:
jq '.resourceMetrics[].scopeMetrics[].metrics[]
| select(.name == "logs.full.count" or .name == "logs.sw.count" or .name == "logs.lotr.count" or .name == "logs.error.count")
| {name: .name, value: (.sum.dataPoints[0].asInt // "-")}' gateway-metrics.out
Note: logs.full.count is normally equal to logs.sw.count + logs.lotr.count, while logs.error.count will be a random number.
Important
Stop the agent and the gateway processes by pressing Ctrl-C in their respective terminals.
Create metrics with Sum Connector
10 minutes
In this section, we’ll explore how the Sum Connector can extract values from spans and convert them into metrics.
We’ll specifically use the credit card charges from our base spans and leverage the Sum Connector to retrieve the total charges as a metric.
The connector can be used to collect (sum) attribute values from spans, span events, metrics, data points, and log records. It captures each individual value, transforms it into a metric, and passes it along. However, it’s the backend’s job to use these metrics and their attributes for calculations and further processing.
Exercise
Switch to your Agent terminal window and open the agent.yaml file in your editor.
Add the Sum Connector: Include the Sum Connector in the connectors section of your configuration and define the metric it should create:
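The workshop's reference snippet is not reproduced here. Based on the description below and the contrib Sum connector's configuration format, a minimal sketch might look like this; the condition checking for "NULL" is illustrative:

connectors:
  sum:
    spans:
      user.card-charge:                            # Name of the metric to create
        source_attribute: payment.amount           # Span attribute whose value is summed
        conditions:
          - attributes["payment.amount"] != "NULL" # Illustrative guard for valid values
        attributes:
          - key: user.name                         # Carried over as a metric attribute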
In the example above, we check for the payment.amount attribute in spans. If it has a valid value, the Sum connector generates a metric called user.card-charge and includes the user.name as an attribute. This enables the backend to track and display a user’s total charges over an extended period, such as a billing cycle.
In the pipeline configuration below, the sum connector is added as an exporter in the traces pipeline and as a receiver in the metrics pipeline.
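As with the Count Connector, the wiring might look roughly like this sketch; entries other than sum are placeholders for your existing agent configuration:

    traces:
      receivers: [otlp]                       # Existing trace receiver
      processors: [memory_limiter, batch]     # Placeholder for your existing processors
      exporters: [debug, otlphttp, sum]       # sum added as an exporter
    metrics:
      receivers: [otlp, sum]                  # sum added as a receiver
      processors: [memory_limiter, batch]     # Placeholder for your existing processors
      exporters: [debug, otlphttp]            # Placeholder for your existing exporters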
Start the Gateway: In the Gateway terminal window navigate to the [WORKSHOP]/9-sum-count directory and run:
../otelcol --config=gateway.yaml
Start the Agent: In the Agent terminal window navigate to the [WORKSHOP]/9-sum-count directory and run:
../otelcol --config=agent.yaml
Start the Loadgen: In the Spans terminal window navigate to the [WORKSHOP]/9-sum-count directory. Send 8 spans with the following loadgen command:
../loadgen -count 8
Both the agent and gateway will display debug information, showing they are processing data. Wait until the loadgen completes.
Verify the metrics: While processing the spans, the Agent generated metrics and passed them on to the Gateway, which wrote them to gateway-metrics.out.
To confirm the presence of user.card-charge and verify that each data point carries a user.name attribute in the metrics output, run the following jq query:
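The original query is not shown here; one along the following lines should work, assuming the Sum connector exports user.card-charge as a sum metric whose data points carry an asDouble value and a user.name string attribute:

jq -r '.resourceMetrics[].scopeMetrics[].metrics[]
| select(.name == "user.card-charge")
| .sum.dataPoints[]
| [(.attributes[] | select(.key == "user.name") | .value.stringValue), (.asDouble | tostring)]
| join(" ")' gateway-metrics.out

Each data point should print as a user name followed by the charge recorded for that span, similar to the sample output below.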
George Lucas 67.49
Frodo Baggins 87.14
Thorin Oakenshield 90.98
Luke Skywalker 51.37
Luke Skywalker 65.56
Thorin Oakenshield 67.5
Thorin Oakenshield 66.66
Peter Jackson 94.39
Important
Stop the agent and the gateway processes by pressing Ctrl-C in their respective terminals.