In this workshop, you’ll get hands-on experience with the following:
Practice deploying the collector and instrumenting a .NET application with the Splunk distribution of OpenTelemetry .NET in Linux and Kubernetes environments.
Practice “dockerizing” a .NET application, running it in Docker, and then adding Splunk OpenTelemetry instrumentation.
Practice deploying the Splunk distro of the collector in a K8s environment using Helm. Then customize the collector config and troubleshoot an issue.
The workshop uses a simple .NET application to illustrate these concepts. Let’s get started!
Tip
The easiest way to navigate through this workshop is by using:
the left/right arrows (< | >) on the top right of this page
the left (◀️) and right (▶️) cursor keys on your keyboard
Subsections of Hands-On OpenTelemetry, Docker, and K8s
Connect to EC2 Instance
5 minutes
Connect to your EC2 Instance
We’ve prepared an Ubuntu Linux instance in AWS/EC2 for each attendee.
Using the IP address and password provided by your instructor, connect to your EC2 instance
using one of the methods below:
Mac OS / Linux
ssh splunk@<IP address>
Windows 10+
Use the OpenSSH client
Earlier versions of Windows
Use Putty
Deploy the OpenTelemetry Collector
10 minutes
Uninstall the OpenTelemetry Collector
Our EC2 instance may already have an older version of the Splunk Distribution of the OpenTelemetry Collector
installed. Before proceeding further, let's uninstall it using the following command:
curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh; sudo sh /tmp/splunk-otel-collector.sh --uninstall
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages will be REMOVED:
splunk-otel-collector*
0 upgraded, 0 newly installed, 1 to remove and 167 not upgraded.
After this operation, 766 MB disk space will be freed.
(Reading database ... 157441 files and directories currently installed.)
Removing splunk-otel-collector (0.92.0) ...
(Reading database ... 147373 files and directories currently installed.)
Purging configuration files for splunk-otel-collector (0.92.0) ...
Scanning processes...
Scanning candidates...
Scanning linux images...
Running kernel seems to be up-to-date.
Restarting services...
systemctl restart fail2ban.service falcon-sensor.service
Service restarts being deferred:
systemctl restart networkd-dispatcher.service
systemctl restart unattended-upgrades.service
No containers need to be restarted.
No user sessions are running outdated binaries.
No VM guests are running outdated hypervisor (qemu) binaries on this host.
Successfully removed the splunk-otel-collector package
Deploy the OpenTelemetry Collector
Let’s deploy the latest version of the Splunk Distribution of the OpenTelemetry Collector on our Linux EC2 instance.
We can do this by downloading the collector installer script using curl, and then running it with arguments that tell the collector which realm to report data into, which access token to use, and which deployment environment to report into (a sketch of the command is shown below).
A deployment environment in Splunk Observability Cloud is a distinct deployment of your system
or application that allows you to set up configurations that don’t overlap with configurations
in other deployments of the same application.
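The exact command isn't reproduced here, but the invocation looks roughly like the following sketch. The $REALM, $ACCESS_TOKEN, and $INSTANCE placeholders are yours to substitute, and the --deployment-environment flag is an assumption based on the installer script's documented options:

# download the installer script and run it in agent mode (sketch; substitute your own values)
curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \
sudo sh /tmp/splunk-otel-collector.sh --realm $REALM --mode agent --deployment-environment otel-$INSTANCE -- $ACCESS_TOKEN

# confirm the collector service started and review its startup log
sudo journalctl -u splunk-otel-collector --no-pager | tail -n 20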
Dec 20 00:13:14 derek-1 systemd[1]: Started Splunk OpenTelemetry Collector.
Dec 20 00:13:14 derek-1 otelcol[14465]: 2024/12/20 00:13:14 settings.go:483: Set config to /etc/otel/collector/agent_config.yaml
Dec 20 00:13:14 derek-1 otelcol[14465]: 2024/12/20 00:13:14 settings.go:539: Set memory limit to 460 MiB
Dec 20 00:13:14 derek-1 otelcol[14465]: 2024/12/20 00:13:14 settings.go:524: Set soft memory limit set to 460 MiB
Dec 20 00:13:14 derek-1 otelcol[14465]: 2024/12/20 00:13:14 settings.go:373: Set garbage collection target percentage (GOGC) to 400
Dec 20 00:13:14 derek-1 otelcol[14465]: 2024/12/20 00:13:14 settings.go:414: set "SPLUNK_LISTEN_INTERFACE" to "127.0.0.1"
etc.
Collector Configuration
Where do we find the configuration that is used by this collector?
It’s available in the /etc/otel/collector directory. Since we installed the
collector in agent mode, the collector configuration can be found in the
agent_config.yaml file.
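For a quick look at the active configuration:

cat /etc/otel/collector/agent_config.yaml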
Deploy a .NET Application
10 minutes
Prerequisites
Before deploying the application, we’ll need to install the .NET 8 SDK on our instance.
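The exact commands aren't shown here, but based on the apt output that follows, the install likely looked something like this sketch:

sudo apt-get update
sudo apt-get install -y dotnet-sdk-8.0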
Hit:1 http://us-west-1.ec2.archive.ubuntu.com/ubuntu jammy InRelease
Hit:2 http://us-west-1.ec2.archive.ubuntu.com/ubuntu jammy-updates InRelease
Hit:3 http://us-west-1.ec2.archive.ubuntu.com/ubuntu jammy-backports InRelease
Hit:4 http://security.ubuntu.com/ubuntu jammy-security InRelease
Ign:5 https://splunk.jfrog.io/splunk/otel-collector-deb release InRelease
Hit:6 https://splunk.jfrog.io/splunk/otel-collector-deb release Release
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
aspnetcore-runtime-8.0 aspnetcore-targeting-pack-8.0 dotnet-apphost-pack-8.0 dotnet-host-8.0 dotnet-hostfxr-8.0 dotnet-runtime-8.0 dotnet-targeting-pack-8.0 dotnet-templates-8.0 liblttng-ust-common1
liblttng-ust-ctl5 liblttng-ust1 netstandard-targeting-pack-2.1-8.0
The following NEW packages will be installed:
aspnetcore-runtime-8.0 aspnetcore-targeting-pack-8.0 dotnet-apphost-pack-8.0 dotnet-host-8.0 dotnet-hostfxr-8.0 dotnet-runtime-8.0 dotnet-sdk-8.0 dotnet-targeting-pack-8.0 dotnet-templates-8.0
liblttng-ust-common1 liblttng-ust-ctl5 liblttng-ust1 netstandard-targeting-pack-2.1-8.0
0 upgraded, 13 newly installed, 0 to remove and 0 not upgraded.
Need to get 138 MB of archives.
After this operation, 495 MB of additional disk space will be used.
etc.
We can build the application using the following command:
dotnet build
MSBuild version 17.8.5+b5265ef37 for .NET
Determining projects to restore...
All projects are up-to-date for restore.
helloworld -> /home/splunk/workshop/docker-k8s-otel/helloworld/bin/Debug/net8.0/helloworld.dll
Build succeeded.
    0 Warning(s)
    0 Error(s)

Time Elapsed 00:00:02.04
If that’s successful, we can run it as follows:
dotnet run
Building...
info: Microsoft.Hosting.Lifetime[14]
      Now listening on: http://localhost:8080
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /home/splunk/workshop/docker-k8s-otel/helloworld
Once it’s running, open a second SSH terminal to your Ubuntu instance and access the application using curl:
curl http://localhost:8080/hello
Hello, World!
You can also pass in your name:
curl http://localhost:8080/hello/Tom
Hello, Tom!
Press Ctrl + C to quit your Helloworld app before moving to the next step.
Next Steps
What are the three methods that we can use to instrument our application with OpenTelemetry?
How can we see what traces are being exported by the .NET application from our Linux instance?
Click here to see the answer
There are two ways we can do this:
We could add OTEL_TRACES_EXPORTER=otlp,console at the start of the dotnet run command, which ensures that traces are both written to the collector via OTLP and to the console.
OTEL_TRACES_EXPORTER=otlp,console dotnet run
Alternatively, we could add the debug exporter to the collector configuration and include it in the traces pipeline, which ensures the traces are written to the collector logs (a sketch follows below).
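For the second option, a minimal sketch of what that change to /etc/otel/collector/agent_config.yaml could look like; the verbosity setting and the exact contents of the traces pipeline are assumptions, so keep whatever exporters are already listed and simply append debug:

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      # keep the existing exporters and append debug
      exporters: [otlphttp, signalfx, debug]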
View your application in Splunk Observability Cloud
Now that the setup is complete, let’s confirm that traces are sent to Splunk Observability Cloud. Note that when the application is deployed for the first time, it may take a few minutes for the data to appear.
Navigate to APM, then use the Environment dropdown to select your environment (i.e. otel-instancename).
If everything was deployed correctly, you should see helloworld displayed in the list of services:
Click on Service Map on the right-hand side to view the service map.
Next, click on Traces on the right-hand side to see the traces captured for this application.
An individual trace should look like the following:
Press Ctrl + C to quit your Helloworld app before moving to the next step.
Dockerize the Application
15 minutes
Later on in this workshop, we’re going to deploy our .NET application into a Kubernetes cluster.
But how do we do that?
The first step is to create a Docker image for our application. This is known as
"dockerizing" an application, and the process begins with the creation of a Dockerfile.
But first, let’s define some key terms.
Key Terms
What is Docker?
“Docker provides the ability to package and run an application in a loosely isolated environment
called a container. The isolation and security lets you run many containers simultaneously on
a given host. Containers are lightweight and contain everything needed to run the application,
so you don’t need to rely on what’s installed on the host.”
“Containers are isolated processes for each of your app’s components. Each component
…runs in its own isolated environment,
completely isolated from everything else on your machine.”
“A container image is a standardized package that includes all of the files, binaries,
libraries, and configurations to run a container.”
Dockerfile
“A Dockerfile is a text-based document that’s used to create a container image. It provides
instructions to the image builder on the commands to run, files to copy, startup command, and more.”
Create a Dockerfile
Let’s create a file named Dockerfile in the /home/splunk/workshop/docker-k8s-otel/helloworld directory.
cd /home/splunk/workshop/docker-k8s-otel/helloworld
You can use vi or nano to create the file. We will show an example using vi:
vi Dockerfile
Copy and paste the following content into the newly opened file:
Press ‘i’ to enter into insert mode in vi before pasting the text below.
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
USER app
WORKDIR /app
EXPOSE 8080

FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["helloworld.csproj", "helloworld/"]
RUN dotnet restore "./helloworld/./helloworld.csproj"
WORKDIR "/src/helloworld"
COPY . .
RUN dotnet build "./helloworld.csproj" -c $BUILD_CONFIGURATION -o /app/build

FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "./helloworld.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "helloworld.dll"]
To save your changes in vi, press the esc key to enter command mode, then type :wq! followed by pressing the enter/return key.
What does all this mean? Let’s break it down.
Walking through the Dockerfile
We’ve used a multi-stage Dockerfile for this example, which separates the Docker image creation process into the following stages:
Base
Build
Publish
Final
While a multi-stage approach is more complex, it allows us to create a
lighter-weight runtime image for deployment. We’ll explain the purpose of
each of these stages below.
The Base Stage
The base stage defines the user that will
be running the app, the working directory, and exposes
the port that will be used to access the app.
It’s based off of Microsoft’s mcr.microsoft.com/dotnet/aspnet:8.0 image:
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
USER app
WORKDIR /app
EXPOSE 8080
Note that the mcr.microsoft.com/dotnet/aspnet:8.0 image includes the .NET runtime only,
rather than the SDK, so it's relatively lightweight. It's based on the Debian 12 Linux
distribution. You can find more information about the ASP.NET Core Runtime Docker images
in GitHub.
The Build Stage
The next stage of the Dockerfile is the build stage. For this stage, the
mcr.microsoft.com/dotnet/sdk:8.0 image is used, which is also based off of
Debian 12 but includes the full .NET SDK rather than just the runtime.
This stage copies the .csproj file to the build image, and then uses dotnet restore to
download any dependencies used by the application.
It then copies the application code to the build image and
uses dotnet build to build the project and its dependencies into a
set of .dll binaries:
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["helloworld.csproj", "helloworld/"]
RUN dotnet restore "./helloworld/./helloworld.csproj"
WORKDIR "/src/helloworld"
COPY . .
RUN dotnet build "./helloworld.csproj" -c $BUILD_CONFIGURATION -o /app/build
The Publish Stage
The third stage is publish, which is based on the build stage image rather than a Microsoft image. In this stage, dotnet publish is used to
package the application and its dependencies for deployment:
FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "./helloworld.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=false
The Final Stage
The fourth stage is our final stage, which is based on the base
stage image (which is lighter-weight than the build and publish stages). It copies the output from the publish stage image and
defines the entry point for our application:
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "helloworld.dll"]
Build a Docker Image
Now that we have the Dockerfile, we can use it to build a Docker image containing
our application:
docker build -t helloworld:1.0 .
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
Sending build context to Docker daemon 281.1kB
Step 1/19 : FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
8.0: Pulling from dotnet/aspnet
af302e5c37e9: Pull complete
91ab5e0aabf0: Pull complete
1c1e4530721e: Pull complete
1f39ca6dcc3a: Pull complete
ea20083aa801: Pull complete
64c242a4f561: Pull complete
Digest: sha256:587c1dd115e4d6707ff656d30ace5da9f49cec48e627a40bbe5d5b249adc3549
Status: Downloaded newer image for mcr.microsoft.com/dotnet/aspnet:8.0
---> 0ee5d7ddbc3b
Step 2/19 : USER app
etc.
This tells Docker to build an image using a tag of helloworld:1.0 using the Dockerfile in the current directory.
We can confirm it was created successfully with the following command:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
helloworld 1.0 db19077b9445 20 seconds ago 217MB
Test the Docker Image
Before proceeding, ensure the application we started before is no longer running on your instance.
We can run our application using the Docker image as follows:
Note: we’ve included the --network=host parameter to ensure our Docker container
is able to access resources on our instance, which is important later on when we need
our application to send data to the collector running on localhost.
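The run command itself is along these lines; the container name and detached mode are assumptions, while --network=host is the important part:

docker run --name helloworld --detach --network=host helloworld:1.0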
Let’s ensure that our Docker container is running:
docker ps | grep helloworld
CONTAINER ID   IMAGE            COMMAND                  CREATED      STATUS      PORTS   NAMES
5f5b9cd56ac5   helloworld:1.0   "dotnet helloworld.d…"   2 mins ago   Up 2 mins           helloworld
We can access our application as before:
curl http://localhost:8080/hello/Docker
Hello, Docker!
Congratulations, if you’ve made it this far, you’ve successfully Dockerized a .NET application.
Add Instrumentation to Dockerfile
10 minutes
Now that we’ve successfully Dockerized our application, let’s add in OpenTelemetry instrumentation.
This is similar to the steps we took when instrumenting the application running on Linux, but there
are some key differences to be aware of.
Update the Dockerfile
Let’s update the Dockerfile in the /home/splunk/workshop/docker-k8s-otel/helloworld directory.
After the .NET application is built in the Dockerfile, we want to:
Add dependencies needed to download and execute splunk-otel-dotnet-install.sh
Download the Splunk OTel .NET installer
Install the distribution
We can add the following to the build stage of the Dockerfile. Let’s open the Dockerfile in vi:
vi /home/splunk/workshop/docker-k8s-otel/helloworld/Dockerfile
Press the i key to enter edit mode in vi
Paste the lines marked with ‘NEW CODE’ into your Dockerfile in the build stage section:
# CODE ALREADY IN YOUR DOCKERFILE:
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["helloworld.csproj", "helloworld/"]
RUN dotnet restore "./helloworld/./helloworld.csproj"
WORKDIR "/src/helloworld"
COPY . .
RUN dotnet build "./helloworld.csproj" -c $BUILD_CONFIGURATION -o /app/build

# NEW CODE: add dependencies for splunk-otel-dotnet-install.sh
RUN apt-get update && \
    apt-get install -y unzip

# NEW CODE: download Splunk OTel .NET installer
RUN curl -sSfL https://github.com/signalfx/splunk-otel-dotnet/releases/latest/download/splunk-otel-dotnet-install.sh -O

# NEW CODE: install the distribution
RUN sh ./splunk-otel-dotnet-install.sh
Next, we’ll update the final stage of the Dockerfile with the following changes:
Copy the /root/.splunk-otel-dotnet/ directory from the build image to the final image
Copy the entrypoint.sh file as well
Set the OTEL_SERVICE_NAME and OTEL_RESOURCE_ATTRIBUTES environment variables
Set the ENTRYPOINT to entrypoint.sh
It’s easiest to simply replace the entire final stage with the following:
IMPORTANT replace $INSTANCE in your Dockerfile with your instance name,
which can be determined by running echo $INSTANCE.
# CODE ALREADY IN YOUR DOCKERFILE
FROM base AS final

# NEW CODE: Copy instrumentation file tree
WORKDIR "//home/app/.splunk-otel-dotnet"
COPY --from=build /root/.splunk-otel-dotnet/ .

# CODE ALREADY IN YOUR DOCKERFILE
WORKDIR /app
COPY --from=publish /app/publish .

# NEW CODE: copy the entrypoint.sh script
COPY entrypoint.sh .

# NEW CODE: set OpenTelemetry environment variables
ENV OTEL_SERVICE_NAME=helloworld
ENV OTEL_RESOURCE_ATTRIBUTES='deployment.environment=otel-$INSTANCE'

# NEW CODE: replace the prior ENTRYPOINT command with the following two lines
ENTRYPOINT ["sh", "entrypoint.sh"]
CMD ["dotnet", "helloworld.dll"]
To save your changes in vi, press the esc key to enter command mode, then type :wq! followed by pressing the enter/return key.
After all of these changes, the Dockerfile should look like the following:
IMPORTANT if you’re going to copy and paste this content into your own Dockerfile,
replace $INSTANCE in your Dockerfile with your instance name,
which can be determined by running echo $INSTANCE.
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
USER app
WORKDIR /app
EXPOSE 8080

FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["helloworld.csproj", "helloworld/"]
RUN dotnet restore "./helloworld/./helloworld.csproj"
WORKDIR "/src/helloworld"
COPY . .
RUN dotnet build "./helloworld.csproj" -c $BUILD_CONFIGURATION -o /app/build

# NEW CODE: add dependencies for splunk-otel-dotnet-install.sh
RUN apt-get update && \
    apt-get install -y unzip

# NEW CODE: download Splunk OTel .NET installer
RUN curl -sSfL https://github.com/signalfx/splunk-otel-dotnet/releases/latest/download/splunk-otel-dotnet-install.sh -O

# NEW CODE: install the distribution
RUN sh ./splunk-otel-dotnet-install.sh

FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "./helloworld.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=false

FROM base AS final

# NEW CODE: Copy instrumentation file tree
WORKDIR "//home/app/.splunk-otel-dotnet"
COPY --from=build /root/.splunk-otel-dotnet/ .

WORKDIR /app
COPY --from=publish /app/publish .

# NEW CODE: copy the entrypoint.sh script
COPY entrypoint.sh .

# NEW CODE: set OpenTelemetry environment variables
ENV OTEL_SERVICE_NAME=helloworld
ENV OTEL_RESOURCE_ATTRIBUTES='deployment.environment=otel-$INSTANCE'

# NEW CODE: replace the prior ENTRYPOINT command with the following two lines
ENTRYPOINT ["sh", "entrypoint.sh"]
CMD ["dotnet", "helloworld.dll"]
Create the entrypoint.sh file
We also need to create a file named entrypoint.sh in the /home/splunk/workshop/docker-k8s-otel/helloworld folder
with the following content:
vi /home/splunk/workshop/docker-k8s-otel/helloworld/entrypoint.sh
Then paste the following code into the newly created file:
#!/bin/sh

# Read in the file of environment settings
. /$HOME/.splunk-otel-dotnet/instrument.sh

# Then run the CMD
exec "$@"
To save your changes in vi, press the esc key to enter command mode, then type :wq! followed by pressing the enter/return key.
The entrypoint.sh script is required for sourcing environment variables from the instrument.sh script,
which is included with the instrumentation. This ensures the correct setup of environment variables
for each platform.
You may be wondering, why can’t we just include the following command in the Dockerfile to do this,
like we did when activating OpenTelemetry .NET instrumentation on our Linux host?
RUN . $HOME/.splunk-otel-dotnet/instrument.sh
The problem with this approach is that each Dockerfile RUN step runs a new container and a new shell.
If you try to set an environment variable in one shell, it will not be visible later on.
This problem is resolved by using an entry point script, as we’ve done here.
Refer to this Stack Overflow post
for further details on this issue.
Build the Docker Image
Let’s build a new Docker image that includes the OpenTelemetry .NET instrumentation:
docker build -t helloworld:1.1 .
Note: we’ve used a different version (1.1) to distinguish the image from our earlier version.
To clean up the older versions, run the following command to get the container id:
docker ps -a | grep helloworld
Then run the following command to delete the container:
docker rm <old container id> --force
Now we can get the container image id:
docker images | grep 1.0
Finally, we can run the following command to delete the old image:
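docker image rm <old image id>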
IMPORTANT once these files are copied, open /home/splunk/workshop/docker-k8s-otel/helloworld/Dockerfile with an editor and replace $INSTANCE in your Dockerfile with your instance name,
which can be determined by running echo $INSTANCE.
Introduction to Part 2 of the Workshop
In the next part of the workshop, we want to run the application in Kubernetes,
so we’ll need to deploy the Splunk distribution of the OpenTelemetry Collector
in our Kubernetes cluster.
Let’s define some key terms first.
Key Terms
What is Kubernetes?
“Kubernetes is a portable, extensible, open source platform for managing containerized
workloads and services, that facilitates both declarative configuration and automation.”
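The chart installation itself isn't reproduced here, but a sketch of the typical Helm commands for the splunk-otel-collector chart looks like the following. The realm, access token, cluster name, and environment values are placeholders based on values referenced elsewhere in this workshop; substitute your own:

# add the Splunk OpenTelemetry Collector chart repo and install the collector (sketch)
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm repo update

helm install splunk-otel-collector \
  --set="splunkObservability.realm=us1" \
  --set="splunkObservability.accessToken=***" \
  --set="clusterName=$INSTANCE-cluster" \
  --set="environment=otel-$INSTANCE" \
  splunk-otel-collector-chart/splunk-otel-collector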
NAME: splunk-otel-collector
LAST DEPLOYED: Fri Dec 20 01:01:43 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Splunk OpenTelemetry Collector is installed and configured to send data to Splunk Observability realm us1.
Confirm the Collector is Running
We can confirm whether the collector is running with the following command:
kubectl get pods
NAME READY STATUS RESTARTS AGE
splunk-otel-collector-agent-dkn88 1/1 Running 0 53s
splunk-otel-collector-agent-ksmh4 1/1 Running 0 53s
splunk-otel-collector-agent-lc2lf 1/1 Running 0 53s
splunk-otel-collector-k8s-cluster-receiver-dbf64995b-xgm9b 1/1 Running 0 53s
Confirm your K8s Cluster is in O11y Cloud
In Splunk Observability Cloud, navigate to Infrastructure -> Kubernetes -> Kubernetes Clusters,
and then search for your cluster name (which is $INSTANCE-cluster):
Deploy Application to K8s
15 minutes
Update the Dockerfile
With Kubernetes, environment variables are typically managed in the .yaml manifest files rather
than baking them into the Docker image. So let’s remove the following two environment variables from the Dockerfile:
vi /home/splunk/workshop/docker-k8s-otel/helloworld/Dockerfile
Then remove the following two environment variables:
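These are the two lines we added to the final stage of the Dockerfile earlier:

ENV OTEL_SERVICE_NAME=helloworld
ENV OTEL_RESOURCE_ATTRIBUTES='deployment.environment=otel-$INSTANCE'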
To save your changes in vi, press the esc key to enter command mode, then type :wq! followed by pressing the enter/return key.
Build a new Docker Image
Let’s build a new Docker image that excludes the environment variables:
cd /home/splunk/workshop/docker-k8s-otel/helloworld
docker build -t helloworld:1.2 .
Note: we’ve used a different version (1.2) to distinguish the image from our earlier version.
To clean up the older versions, run the following command to get the container id:
docker ps -a | grep helloworld
Then run the following command to delete the container:
docker rm <old container id> --force
Now we can get the container image id:
docker images | grep 1.1
Finally, we can run the following command to delete the old image:
docker image rm <old image id>
Import the Docker Image to Local Container Repository
Normally we'd push our Docker image to a repository such as Docker Hub.
But for this workshop, we'll push the Docker image to the local container
repository running on our EC2 instance at localhost:9999:
# Update the image tag
docker tag helloworld:1.2 localhost:9999/helloworld:1.2

# Import the image into the local repository
docker push localhost:9999/helloworld:1.2
Deploy the .NET Application
Hint: To enter edit mode in vi, press the ‘i’ key. To save changes, press the esc key to enter command mode, then type :wq! followed by pressing the enter/return key.
To deploy our .NET application to K8s, let’s create a file named deployment.yaml in /home/splunk:
The deployment.yaml file is a Kubernetes config file used to define a Deployment resource. This file is the cornerstone of managing applications in Kubernetes! The Deployment config defines the desired state, and Kubernetes then ensures the actual state matches it. This allows application pods to self-heal and also allows for easy updates or rollbacks. A minimal example is sketched below.
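A minimal sketch of what such a deployment.yaml could look like; the workshop's actual file may differ, the image tag matches what we push to the local repository below, and the label and port choices are assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - name: helloworld
          image: localhost:9999/helloworld:1.2
          ports:
            - containerPort: 8080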
Then, create a second file in the same directory named service.yaml:
A Service in Kubernetes is an abstraction layer that acts as a middleman: it gives you a stable IP address or DNS name for accessing your Pods, and it stays the same even as Pods are added, removed, or replaced over time. A sketch follows below.
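A minimal sketch of a matching service.yaml; the selector and port are assumptions that line up with the deployment sketch above:

apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  selector:
    app: helloworld
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080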
Then, create a third file in the same directory named ingress.yaml:
An Ingress is a Kubernetes API object that manages external access to services within a cluster, typically HTTP and HTTPS traffic. It acts as a set of rules for routing incoming connections to the correct internal services and pods, handling functions like load balancing, SSL/TLS termination, and name-based virtual hosting. A sketch follows below.
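A minimal sketch of an ingress.yaml that routes helloworld.localhost to the service; the ingress name and host come from later steps in this workshop, while the path rules are assumptions:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: helloworld-ingress
spec:
  rules:
    - host: helloworld.localhost
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: helloworld
                port:
                  number: 8080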
We can then use these manifest files to deploy our application:
cd /home/splunk
# create the deployment
kubectl apply -f deployment.yaml

# create the service
kubectl apply -f service.yaml

# create the ingress
kubectl apply -f ingress.yaml
deployment.apps/helloworld created
service/helloworld created
ingress.networking.k8s.io/helloworld-ingress created
Test the Application
Use the following command to access the application:
curl http://helloworld.localhost/hello/Kubernetes
Configure OpenTelemetry
The .NET OpenTelemetry instrumentation was already baked into the Docker image. But we need to set a few
environment variables to tell it where to send the data.
Add the following to deployment.yaml file you created earlier:
IMPORTANT replace $INSTANCE in the YAML below with your instance name,
which can be determined by running echo $INSTANCE.
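Based on the snippet shown later in this section, the environment variables added to the container spec look like the following (minus the OTEL_TRACES_EXPORTER variable, which comes later):

env:
  - name: PORT
    value: "8080"
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://$(NODE_IP):4318"
  - name: OTEL_SERVICE_NAME
    value: "helloworld"
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "deployment.environment=otel-$INSTANCE"

Re-apply the deployment with kubectl apply -f deployment.yaml before generating traffic.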
Then use the following command to generate some traffic:
curl http://helloworld.localhost/hello/Kubernetes
After a minute or so, you should see traces flowing into o11y cloud. But if you want to see your trace sooner, we have …
A Challenge For You
If you are a developer and just want to quickly grab the trace id or see console feedback, what environment variable could you add to the deployment.yaml file?
Click here to see the answer
If you recall our challenge from Section 4, Instrument a .NET Application with OpenTelemetry, we showed you a trick to write traces to the console using the OTEL_TRACES_EXPORTER environment variable. We can add this variable to our deployment.yaml, redeploy the application, and tail the logs from our helloworld app to grab the trace id and then find the trace in Splunk Observability Cloud. (In the next section of the workshop, we'll also walk through using the debug exporter, which is how you would typically debug your application in a K8s environment.)
First, open the deployment.yaml file in vi:
vi deployment.yaml
Then, add the OTEL_TRACES_EXPORTER environment variable:
env:
  - name: PORT
    value: "8080"
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://$(NODE_IP):4318"
  - name: OTEL_SERVICE_NAME
    value: "helloworld"
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "deployment.environment=YOURINSTANCE"
  # NEW VALUE HERE:
  - name: OTEL_TRACES_EXPORTER
    value: "otlp,console"
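To redeploy and tail the logs, something like the following works; the app=helloworld label selector is an assumption that matches the deployment sketch earlier, so adjust it to your pod's labels if needed:

kubectl apply -f deployment.yaml
kubectl logs -l app=helloworld -f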
Then, in your other terminal window, generate a trace with your curl command. You will see the trace id in the console in which you are tailing the logs. Copy the Activity.TraceId: value and paste it into the Trace search field in APM.
Customize the OpenTelemetry Collector Configuration
20 minutes
We deployed the Splunk Distribution of the OpenTelemetry Collector in our K8s cluster
using the default configuration. In this section, we’ll walk through several examples
showing how to customize the collector config.
Get the Collector Configuration
Before we customize the collector config, how do we determine what the current configuration
looks like?
In a Kubernetes environment, the collector configuration is stored using a Config Map.
We can see which config maps exist in our cluster with the following command:
kubectl get cm -l app=splunk-otel-collector
NAME DATA AGE
splunk-otel-collector-otel-k8s-cluster-receiver 1 3h37m
splunk-otel-collector-otel-agent 1 3h37m
Why are there two config maps? One is used by the collector agent daemonset that runs on each node, and the other by the k8s cluster receiver deployment.
We can then view the config map of the collector agent as follows:
kubectl describe cm splunk-otel-collector-otel-agent
Name: splunk-otel-collector-otel-agent
Namespace: default
Labels: app=splunk-otel-collector
app.kubernetes.io/instance=splunk-otel-collector
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=splunk-otel-collector
app.kubernetes.io/version=0.136.1
chart=splunk-otel-collector-0.136.0
helm.sh/chart=splunk-otel-collector-0.136.0
release=splunk-otel-collector
Annotations: meta.helm.sh/release-name: splunk-otel-collector
meta.helm.sh/release-namespace: default
Data
====
relay:
----
exporters:
otlphttp:
auth:
authenticator: headers_setter
metrics_endpoint: https://ingest.us1.signalfx.com/v2/datapoint/otlp
traces_endpoint: https://ingest.us1.signalfx.com/v2/trace/otlp
(followed by the rest of the collector config in yaml format)
How to Update the Collector Configuration in K8s
In our earlier example running the collector on a Linux instance,
the collector configuration was available in the /etc/otel/collector/agent_config.yaml file. If we
needed to make changes to the collector config in that case, we’d simply edit this file,
save the changes, and then restart the collector.
In K8s, things work a bit differently. Instead of modifying the agent_config.yaml directly, we’ll
instead customize the collector configuration by making changes to the values.yaml file used to deploy
the helm chart.
The values.yaml file in GitHub
describes the customization options that are available to us.
Let’s look at an example.
Add Infrastructure Events Monitoring
For our first example, let’s enable infrastructure events monitoring for our K8s cluster.
This will allow us to see Kubernetes events as part of the Events Feed section in charts.
The cluster receiver will be configured with a Smart Agent receiver using the kubernetes-events
monitor to send custom events. See Collect Kubernetes events
for further details.
This is done by adding the following line to the values.yaml file:
Hint: steps to open and save in vi are in previous steps.
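A sketch of the values.yaml addition; the exact option name is an assumption based on the chart's documented splunkObservability settings:

splunkObservability:
  infrastructureMonitoringEventsEnabled: true

Then apply the change with something like helm upgrade splunk-otel-collector -f values.yaml splunk-otel-collector-chart/splunk-otel-collector, which produces output like the following: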
Release "splunk-otel-collector" has been upgraded. Happy Helming!
NAME: splunk-otel-collector
LAST DEPLOYED: Fri Dec 20 01:17:03 2024
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
Splunk OpenTelemetry Collector is installed and configured to send data to Splunk Observability realm us1.
We can then view the config map and ensure the changes were applied:
kubectl describe cm splunk-otel-collector-otel-k8s-cluster-receiver
Ensure smartagent/kubernetes-events is now included in the cluster receiver config:
smartagent/kubernetes-events:
alwaysClusterReporter: true
type: kubernetes-events
whitelistedEvents:
- involvedObjectKind: Pod
reason: Created
- involvedObjectKind: Pod
reason: Unhealthy
- involvedObjectKind: Pod
reason: Failed
- involvedObjectKind: Job
reason: FailedCreate
Note that we specified the cluster receiver config map since that’s
where these particular changes get applied.
Add the Debug Exporter
Suppose we want to see the traces and logs that are sent to the collector, so we can
inspect them before sending them to Splunk. We can use the debug exporter for this purpose, which
can be helpful for troubleshooting OpenTelemetry-related issues.
Let’s add the debug exporter to the bottom of the values.yaml file as follows:
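A sketch of what we append to values.yaml; the agent.config block is the chart's mechanism for overriding the agent configuration, and (as we'll see shortly) listing only debug in the pipelines is what causes the problem we troubleshoot next:

agent:
  config:
    exporters:
      debug:
        verbosity: detailed
    service:
      pipelines:
        traces:
          exporters:
            - debug
        logs:
          exporters:
            - debug

Apply the change with another helm upgrade, which produces output like the following: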
Release "splunk-otel-collector" has been upgraded. Happy Helming!
NAME: splunk-otel-collector
LAST DEPLOYED: Fri Dec 20 01:32:03 2024
NAMESPACE: default
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
Splunk OpenTelemetry Collector is installed and configured to send data to Splunk Observability realm us1.
Exercise the application a few times using curl, then tail the agent collector logs with the
following command:
kubectl logs -l component=otel-collector-agent -f
You should see traces written to the agent collector logs such as the following:
If you return to Splunk Observability Cloud though, you’ll notice that traces and logs are
no longer being sent there by the application.
Why do you think that is? We’ll explore it in the next section.
Troubleshoot OpenTelemetry Collector Issues
20 minutes
In the previous section, we added the debug exporter to the collector configuration,
and made it part of the pipeline for traces and logs. We see the debug output
written to the agent collector logs as expected.
However, traces are no longer sent to o11y cloud. Let’s figure out why and fix it.
Review the Collector Config
Whenever a change to the collector config is made via a values.yaml file, it’s helpful
to review the actual configuration applied to the collector by looking at the config map:
kubectl describe cm splunk-otel-collector-otel-agent
Let’s review the pipelines for logs and traces in the agent collector config. They should look
like this:
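Here's a sketch of what you'll find; only the exporter lists are shown, with the rest of each pipeline omitted:

service:
  pipelines:
    logs:
      exporters:
        - debug
      # (receivers and processors omitted)
    traces:
      exporters:
        - debug
      # (receivers and processors omitted)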
Do you see the problem? Only the debug exporter is included in the traces and logs pipelines.
The otlphttp and signalfx exporters that were present in the traces pipeline configuration previously are gone.
This is why we no longer see traces in o11y cloud. And for the logs pipeline, the splunk_hec/platform_logs
exporter has been removed.
How did we know what specific exporters were included before? To find out,
we could have reverted our earlier customizations and then checked the config
map to see what was in the traces pipeline originally. Alternatively, we can refer
to the examples in the GitHub repo for splunk-otel-collector-chart
which shows us what default agent config is used by the Helm chart.
How did these exporters get removed?
Let's review the customizations we added to the values.yaml file in the previous section: we defined the debug exporter and listed it as the only exporter in the traces and logs pipelines.
When we applied the values.yaml file to the collector using helm upgrade, the
custom configuration got merged with the previous collector configuration.
When this happens, the sections of the yaml configuration that contain lists,
such as the list of exporters in the pipeline section, get replaced with what we
included in the values.yaml file (which was only the debug exporter).
Let’s Fix the Issue
So when customizing an existing pipeline, we need to fully redefine that part of the configuration.
Our values.yaml file should thus be updated as follows:
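A sketch of the corrected values.yaml: the full exporter lists for the traces and logs pipelines are redefined based on the defaults mentioned above (otlphttp and signalfx for traces, splunk_hec/platform_logs for logs), with debug appended. Check the chart's default agent config to confirm the exact lists for your version:

agent:
  config:
    exporters:
      debug:
        verbosity: detailed
    service:
      pipelines:
        traces:
          exporters:
            - otlphttp
            - signalfx
            - debug
        logs:
          exporters:
            - splunk_hec/platform_logs
            - debug

After another helm upgrade with the corrected values.yaml, traces and logs should flow to Splunk Observability Cloud again.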
The Splunk Distribution of OpenTelemetry .NET automatically exports logs enriched with tracing context
from applications that use Microsoft.Extensions.Logging for logging (which our sample app does).
Application logs are enriched with tracing metadata and then exported to a local instance of
the OpenTelemetry Collector in OTLP format.
Let’s take a closer look at the logs that were captured by the debug exporter to see if that’s happening. To tail the collector logs, we can use the following command:
kubectl logs -l component=otel-collector-agent -f
Once we’re tailing the logs, we can use curl to generate some more traffic. Then we should see
something like the following:
In this example, we can see that the Trace ID and Span ID were automatically written to the log output
by the OpenTelemetry .NET instrumentation. This allows us to correlate logs with traces in
Splunk Observability Cloud.
You might remember though that if we deploy the OpenTelemetry collector in a K8s cluster using Helm,
and we include the log collection option, then the OpenTelemetry collector will use the File Log receiver
to automatically capture any container logs.
This would result in duplicate logs being captured for our application. For example, in the following screenshot we
can see two log entries for each request made to our service:
How do we avoid this?
Avoiding Duplicate Logs in K8s
To avoid capturing duplicate logs, we can set the OTEL_LOGS_EXPORTER environment variable to none,
to tell the Splunk Distribution of OpenTelemetry .NET to avoid exporting logs to the collector using OTLP.
We can do this by adding the OTEL_LOGS_EXPORTER environment variable to the deployment.yaml file, as shown below, and then re-applying the deployment:
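In the container's env list, add:

- name: OTEL_LOGS_EXPORTER
  value: "none"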
# update the deployment
kubectl apply -f deployment.yaml
Setting the OTEL_LOGS_EXPORTER environment variable to none is straightforward. However, the Trace ID
and Span ID are not written to the stdout logs generated by the application,
which would prevent us from correlating logs with traces.
To resolve this, we will need to define a custom logger, such as the example defined in /home/splunk/workshop/docker-k8s-otel/helloworld/SplunkTelemetryConfigurator.cs.
We could include this in our application by updating the Program.cs file as follows:
Then we’ll build a new Docker image that includes the custom logging configuration:
cd /home/splunk/workshop/docker-k8s-otel/helloworld
docker build -t helloworld:1.3 .
And then we’ll import the updated image into our local container repository:
cd /home/splunk
# Tag the image
docker tag helloworld:1.3 localhost:9999/helloworld:1.3

# Import the image into our local container repo
docker push localhost:9999/helloworld:1.3
Finally, we'll need to update the deployment.yaml file to use the 1.3 version
of the container image:
# update the deployment
kubectl apply -f deployment.yaml
Now we can see that the duplicate log entries have been eliminated. And the
remaining log entries have been formatted as JSON, and include the span and trace IDs:
Summary
2 minutes
This workshop provided hands-on experience with the following concepts:
How to deploy the Splunk Distribution of the OpenTelemetry Collector on a Linux host.
How to instrument a .NET application with the Splunk Distribution of OpenTelemetry .NET.
How to “dockerize” a .NET application and instrument it with the Splunk Distribution of OpenTelemetry .NET.
How to deploy the Splunk Distribution of the OpenTelemetry Collector in a Kubernetes cluster using Helm.
How to customize the collector configuration and troubleshoot an issue.
To run this workshop on your own in the future, refer back to these instructions and use the Splunk4Rookies - Observability
workshop template in Splunk Show to provision an EC2 instance.