Deploy the Sample Application and Instrument with OpenTelemetry
15 minutes

At this point, we’ve deployed an OpenTelemetry collector in our K8s cluster, and it’s successfully collecting infrastructure metrics. The next step is to deploy a sample application and instrument it with OpenTelemetry to capture traces.
We’ll use a microservices-based application written in Python. To keep the workshop simple, we’ll focus on two services: a credit check service and a credit processor service.
Deploy the Application
To save time, we’ve already built Docker images for both of these services and published them to Docker Hub. We can deploy the credit check service in our K8s cluster with the following command:
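(The manifest path below is taken from the file referenced later in this section; your workshop environment may use a slightly different path.)

```bash
# Apply the Kubernetes manifest for the credit check service
# (manifest path assumed from the file referenced later in this section)
kubectl apply -f /home/splunk/workshop/tagging/creditcheckservice-py-with-tags/creditcheckservice-dockerhub.yaml
```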
Next, let’s deploy the credit processor service:
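(The manifest filename below is an assumption for illustration, following the same naming pattern as the credit check service.)

```bash
# Apply the Kubernetes manifest for the credit processor service
# (directory and filename are assumptions for illustration)
kubectl apply -f /home/splunk/workshop/tagging/creditprocessorservice-py/creditprocessorservice-dockerhub.yaml
```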
Finally, let’s deploy a load generator to generate traffic:
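(The manifest filename below is likewise an assumption for illustration.)

```bash
# Apply the Kubernetes manifest for the load generator
# (directory and filename are assumptions for illustration)
kubectl apply -f /home/splunk/workshop/tagging/loadgenerator/loadgenerator.yaml
```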
Explore the Application
We’ll provide an overview of the application in this section. If you’d like to see the complete source code for the application, refer to the Observability Workshop repository on GitHub.
OpenTelemetry Instrumentation
If we look at the Dockerfiles used to build the credit check and credit processor services, we can see that they’ve already been instrumented with OpenTelemetry. For example, let’s look at /home/splunk/workshop/tagging/creditcheckservice-py-with-tags/Dockerfile:
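(The sketch below is simplified; the base image, file layout, and entry point are assumptions, but the OpenTelemetry-related lines are the ones described next.)

```dockerfile
# Simplified sketch -- base image, file layout, and entry point are assumptions
FROM python:3.10-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Install OpenTelemetry instrumentation for supported packages found in the image
RUN splunk-py-trace-bootstrap

COPY . .

# splunk-py-trace wraps the start command so traces are captured automatically
CMD ["splunk-py-trace", "python", "main.py"]
```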
We can see that splunk-py-trace-bootstrap was included, which installs OpenTelemetry instrumentation
for supported packages used by our applications. We can also see that splunk-py-trace is used as part
of the command to start the application.
And if we review the /home/splunk/workshop/tagging/creditcheckservice-py-with-tags/requirements.txt file,
we can see that splunk-opentelemetry[all] was included in the list of packages.
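(Only the splunk-opentelemetry entry is shown here; the service’s own dependencies are omitted.)

```text
# Excerpt from requirements.txt -- application dependencies omitted
splunk-opentelemetry[all]
```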
Finally, if we review the Kubernetes manifest that we used to deploy this service (/home/splunk/workshop/tagging/creditcheckservice-py-with-tags/creditcheckservice-dockerhub.yaml),
we can see that environment variables were set in the container to tell OpenTelemetry
where to export OTLP data to:
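(The snippet below is a sketch; the service name and the node-IP-based endpoint are assumptions about how the collector agent is reached in this environment, but the variable names are the standard OpenTelemetry ones.)

```yaml
# Sketch of the container's environment variables -- exact values are assumptions
env:
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  # Standard OpenTelemetry SDK settings: where to send OTLP data and what to call the service
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://$(NODE_IP):4317"
  - name: OTEL_SERVICE_NAME
    value: "creditcheckservice"
```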
This is all that’s needed to instrument the service with OpenTelemetry!
What are Tags and Why are They Important?
We’ve captured several custom tags with our application, which we’ll explore shortly. Before we do that, let’s introduce the concept of tags and why they’re important.
What are tags?
Tags are key-value pairs that provide additional metadata about spans in a trace, allowing you to enrich the context of the spans you send to Splunk APM.
For example, a payment processing application would find it helpful to track:
- The payment type used (e.g. credit card, gift card, etc.)
- The ID of the customer who requested the payment
This way, if errors or performance issues occur while processing the payment, we have the context we need for troubleshooting.
While some tags can be added with the OpenTelemetry collector, the ones we’ll be working with in this workshop are more granular, and are added by application developers using the OpenTelemetry SDK.
Why are tags so important?
Tags are essential for an application to be truly observable. They add context to traces, helping us understand why some users get a great experience while others don’t. Powerful features in Splunk Observability Cloud also use tags to help you get to root cause quickly.
A note about terminology before we proceed: while we refer to tags in this workshop (the terminology used in Splunk Observability Cloud), OpenTelemetry uses the term attributes instead. So when you see tags mentioned throughout this workshop, you can treat them as synonymous with attributes.
How are tags captured?
To capture tags in a Python application, we start by adding an import statement for the trace module at the top of the /home/splunk/workshop/tagging/creditcheckservice-py-with-tags/main.py file:
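(The standard OpenTelemetry Python import shown below is the relevant line.)

```python
# Standard OpenTelemetry Python API for working with traces and spans
from opentelemetry import trace
```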
Next, we need to get a reference to the current span so we can add an attribute (aka tag) to it:
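(trace.get_current_span() and set_attribute() are the standard OpenTelemetry calls for this; the variable names below are illustrative.)

```python
# Get the span that is currently active and attach a tag (attribute) to it;
# credit_score stands in for a value computed by the service
current_span = trace.get_current_span()
current_span.set_attribute("credit.score", credit_score)
```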
That was pretty easy, right? We’ve captured a total of four tags in the credit check service, with the final result looking like this:
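(Only the two tag names that appear in the trace output later in this section are shown explicitly; the remaining two tags and the variable names are illustrative.)

```python
# Attach the credit check tags to the current span; variable names are illustrative
current_span = trace.get_current_span()
current_span.set_attribute("credit.score", credit_score)
current_span.set_attribute("credit.score.category", credit_score_category)
# ...the remaining two tags are set on the span in the same way
```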
Review Trace Data
Before looking at the trace data in Splunk Observability Cloud, let’s review what the debug exporter has captured by tailing the agent collector logs with the following command:
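(The label selector below assumes the collector was installed with the Splunk OpenTelemetry Collector Helm chart; adjust it to match your installation.)

```bash
# Tail the logs of the collector agent pods; the label selector is an assumption
kubectl logs -l component=otel-collector-agent -f
```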
Hint: use CTRL+C to stop tailing the logs.
You should see traces written to the agent collector logs such as the following:
Notice how the trace includes the tags (aka attributes) that we captured in the code, such as
credit.score and credit.score.category. We’ll use these in the next section, when
we analyze the traces in Splunk Observability Cloud to find the root cause of a performance issue.