Scenarios

  • Optimize Cloud Monitoring

    This scenario is for ITOps teams managing hybrid infrastructure who need to troubleshoot cloud-native performance issues. By correlating real-time metrics with logs, they can troubleshoot faster, improve MTTD/MTTR, and optimize costs.

  • Debug Problems in Microservices

    This scenario helps engineering teams identify and fix issues caused by planned and unplanned changes to their microservices-based applications.

  • Optimize End User Experiences

    Use Splunk Real User Monitoring (RUM) and Synthetics to get insight into end user experience, and proactively test scenarios to improve that experience.

  • Self-Service Observability

    This scenario helps platform engineering (or central tools) teams enable engineers with self-service observability tooling at scale, so developers and SREs can spend less time managing their toolchain and more time building and delivering cool software.

Last Modified Sep 19, 2024

Subsections of Scenarios

Optimize Cloud Monitoring

3 minutes   Author Tim Hard

The elasticity of cloud architectures means that monitoring artifacts must scale elastically as well, breaking the paradigm of purpose-built monitoring assets. As a result, administrative overhead, visibility gaps, and tech debt skyrocket while MTTR slows. This typically happens for three reasons:

  • Complex and Inefficient Data Management: Infrastructure data is scattered across multiple tools with inconsistent naming conventions, leading to fragmented views and poor metadata and labelling. Managing multiple agents and data flows adds to the complexity.
  • Inadequate Monitoring and Troubleshooting Experience: Slow data visualization and troubleshooting, cluttered with bespoke dashboards and manual correlations, are hindered further by the lack of monitoring tools for ephemeral technologies like Kubernetes and serverless functions.
  • Operational and Scaling Challenges: Manual onboarding, user management, and chargeback processes, along with the need for extensive data summarization, slow down operations and inflate administrative tasks, complicating data collection and scalability.

To address these challenges you need a way to:

  • Standardize Data Collection and Tags: Centralize monitoring with a single, open-source agent to apply uniform naming standards and ensure the metadata needed for visibility. Optimize data collection and use a monitoring-as-code approach for consistent collection and tagging.
  • Reuse Content Across Teams: Streamline new IT infrastructure onboarding and user management with templates and automation. Utilize out-of-the-box dashboards, alerts, and self-service tools to enable content reuse, ensuring uniform monitoring and reducing manual effort.
  • Improve Timeliness of Alerts: Utilize highly performant open source data collection, combined with real-time streaming-based data analytics and alerting, to enhance the timeliness of notifications. Automatically configured alerts for common problem patterns (AutoDetect) and minimal yet effective monitoring dashboards and alerts will ensure rapid response to emerging issues, minimizing potential disruptions.
  • Correlate Infrastructure Metrics and Logs: Achieve full monitoring coverage of all IT infrastructure by enabling seamless correlation between infrastructure metrics and logs. High-performing data visualization and a single source of truth for data, dashboards, and alerts will simplify the correlation process, allowing for more effective troubleshooting and analysis of the IT environment.

In this workshop, we’ll explore:

  • How to standardize data collection and tags using OpenTelemetry.
  • How to reuse content across teams.
  • How to improve timeliness of alerts.
  • How to correlate infrastructure metrics and logs.
Tip

The easiest way to navigate through this workshop is by using:

  • the left/right arrows (< | >) on the top right of this page
  • the left (◀️) and right (▶️) cursor keys on your keyboard
Last Modified Apr 3, 2024

Subsections of Optimize Cloud Monitoring

Getting Started

3 minutes   Author Tim Hard

During this technical Optimize Cloud Monitoring Workshop, you will build out an environment based on a lightweight Kubernetes1 cluster.

To simplify the workshop modules, a pre-configured AWS/EC2 instance is provided.

The instance is pre-configured with all the software required to deploy the Splunk OpenTelemetry Collector2 and the microservices-based OpenTelemetry Demo Application3 in Kubernetes, which has been instrumented using OpenTelemetry to send metrics, traces, spans and logs.

This workshop will introduce you to the benefits of standardized data collection, how content can be re-used across teams, correlating metrics and logs, and creating detectors to fire alerts. By the end of these technical workshops, you will have a good understanding of some of the key features and capabilities of the Splunk Observability Cloud.
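
Everything below is already installed on the workshop instance, so there is nothing for you to deploy by hand. For reference only, deploying the Splunk Distribution of the OpenTelemetry Collector to your own Kubernetes cluster is typically driven by the Splunk OpenTelemetry Collector Helm chart and a small values file along these lines (a minimal sketch; the cluster name, realm, and token below are placeholders, not workshop values):

# values.yaml (illustrative)
clusterName: my-workshop-cluster     # name used to tag all data from this cluster
splunkObservability:
  realm: us1                         # your Splunk Observability Cloud realm
  accessToken: <your-access-token>   # ingest token (placeholder)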

Here are the instructions on how to access your pre-configured AWS/EC2 instance

Splunk Architecture


  1. Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. ↩︎

  2. OpenTelemetry Collector offers a vendor-agnostic implementation on how to receive, process and export telemetry data. In addition, it removes the need to run, operate and maintain multiple agents/collectors to support open-source telemetry data formats (e.g. Jaeger, Prometheus, etc.) sending to multiple open-source or commercial back-ends. ↩︎

  3. The OpenTelemetry Demo Application is a microservice-based distributed system intended to illustrate the implementation of OpenTelemetry in a near real-world environment. ↩︎

Last Modified Apr 2, 2024

Subsections of 1. Getting Started

How to connect to your workshop environment

5 minutes   Author Tim Hard
  1. How to retrieve the IP address of the AWS/EC2 instance assigned to you.
  2. Connect to your instance using SSH, Putty1 or your web browser.
  3. Verify your connection to your AWS/EC2 cloud instance.
  4. Using Putty (Optional)
  5. Using Multipass (Optional)

1. AWS/EC2 IP Address

In preparation for the workshop, Splunk has prepared an Ubuntu Linux instance in AWS/EC2.

To get access to the instance that you will be using in the workshop please visit the URL to access the Google Sheet provided by the workshop leader.

Search for your AWS/EC2 instance by looking for your first and last name, as provided during registration for this workshop.

attendee spreadsheet

Find your allocated IP address, SSH command (for Mac OS, Linux and the latest Windows versions) and password to enable you to connect to your workshop instance.

It also has the Browser Access URL that you can use in case you cannot connect via SSH or Putty - see EC2 access via Web browser

Important

Please use SSH or Putty to gain access to your EC2 instance if possible and make a note of the IP address as you will need this during the workshop.

2. SSH (Mac OS/Linux)

Most attendees will be able to connect to the workshop by using SSH from their Mac or Linux device, or on Windows 10 and above.

To use SSH, open a terminal on your system and type ssh splunk@x.x.x.x (replacing x.x.x.x with the IP address found in Step #1).

ssh login

When prompted Are you sure you want to continue connecting (yes/no/[fingerprint])? please type yes.

ssh password

Enter the password provided in the Google Sheet from Step #1.

Upon successful login, you will be presented with the Splunk logo and the Linux prompt.

ssh connected

3. SSH (Windows 10 and above)

The procedure described above is the same on Windows 10, and the commands can be executed either in the Windows Command Prompt or PowerShell. However, Windows regards its SSH Client as an “optional feature”, which might need to be enabled.

You can verify if SSH is enabled by simply executing ssh

If you are shown a help text on how to use the SSH command (like shown in the screenshot below), you are all set.

Windows SSH enabled

If the result of executing the command looks something like the screenshot below, you will need to enable the “OpenSSH Client” feature manually.

Windows SSH disabled

To do that, open the “Settings” menu, and click on “Apps”. While in the “Apps & features” section, click on “Optional features”.

Windows Apps Settings

Here, you are presented with a list of installed features. At the top, you see a button with a plus icon to “Add a feature”. Click it. In the search input field, type “OpenSSH”, find the feature called “OpenSSH Client” (or “OpenSSH Client (Beta)”), click on it, and then click the Install button.

Windows Enable OpenSSH Client

Now you are set! In case you are not able to access the provided instance despite enabling the OpenSSH feature, please do not shy away from reaching out to the course instructor, either via chat or directly.

At this point, you are ready to continue and start the workshop.


4. Putty (For Windows Versions prior to Windows 10)

If you do not have SSH pre-installed or if you are on an older Windows system, the best option is to install Putty, which you can find here.

Important

If you cannot install Putty, please go to Web Browser (All).

Open Putty and enter in the Host Name (or IP address) field the IP address provided in the Google Sheet.

You can optionally save your settings by providing a name and pressing Save.

putty-2

To log in to your instance, click on the Open button as shown above.

If this is the first time connecting to your AWS/EC2 workshop instance, you will be presented with a security dialogue, please click Yes.

putty-3

Once connected, log in as splunk using the password provided in the Google Sheet.

Once you are connected successfully you should see a screen similar to the one below:

putty-4

At this point, you are ready to continue and start the workshop.


5. Web Browser (All)

If you are blocked from using SSH (Port 22) or unable to install Putty you may be able to connect to the workshop instance by using a web browser.

Note

This assumes that access to port 6501 is not restricted by your company’s firewall.

Open your web browser and type http://x.x.x.x:6501 (where x.x.x.x is the IP address from the Google Sheet).

http-6501

Once connected, log in as splunk using the password provided in the Google Sheet.

http-connect

Once you are connected successfully you should see a screen similar to the one below:

web login

Unlike when you are using regular SSH, copy and paste requires a few extra steps to complete when using a browser session. This is due to cross-browser restrictions.

When the workshop asks you to copy instructions into your terminal, please do the following:

Copy the instruction as normal, but when ready to paste it in the web terminal, choose Paste from the browser as shown below:

web paste 1

This will open a dialogue box asking for the text to be pasted into the web terminal:

web paste 3

Paste the text in the text box as shown, then press OK to complete the copy and paste process.

Note

Unlike a regular SSH connection, the web browser session has a 60-second timeout. Once it expires you will be disconnected, and a Connect button will be shown in the center of the web terminal.

Simply click the Connect button and you will be reconnected and will be able to continue.

web reconnect

At this point, you are ready to continue and start the workshop.


6. Multipass (All)

If you are unable to access AWS but want to install software locally, follow the instructions for using Multipass.

Last Modified Apr 4, 2024

Deploy OpenTelemetry Demo Application

10 minutes   Author Tim Hard

Introduction

For this workshop, we’ll be using the OpenTelemetry Demo Application running in Kubernetes. This application is for an online retailer and includes more than a dozen services written in many different languages. While metrics, traces, and logs are being collected from this application, this workshop is primarily focused on how Splunk Observability Cloud can be used to more efficiently monitor infrastructure.

Pre-requisites

Initial Steps

The initial setup can be completed by executing the following steps on the command line of your EC2 instance.

cd ~/workshop/optimize-cloud-monitoring && \
./deploy-application.sh

You’ll be asked to enter your favorite city. This will be used in the OpenTelemetry Collector configuration as a custom tag to show how easy it is to add additional context to your observability data.

Note

Custom tagging will be covered in more detail in the Standardize Data Collection section of this workshop.

Enter Favorite City

Your application should now be running and sending data to Splunk Observability Cloud. You’ll dig into the data in the next section.

Last Modified Apr 4, 2024

Standardize Data Collection

2 minutes   Author Tim Hard

Why Standards Matter

As cloud adoption grows, we often face requests to support new technologies within a diverse landscape, posing challenges in delivering timely content. Take, for instance, a team containerizing five workloads on AWS requiring EKS visibility. Usually, this involves assisting with integration setup, configuring metadata, and creating dashboards and alerts—a process that’s both time-consuming and increases administrative overhead and technical debt.

Splunk Observability Cloud was designed to handle customers with a diverse set of technical requirements and stacks – from monolithic to microservices architectures, from homegrown applications to Software-as-a-Service.

Splunk offers a native experience for OpenTelemetry, which means OTel is the preferred way to get data into Splunk. Between Splunk’s integrations and the OpenTelemetry community, there are a number of integrations available to easily collect from diverse infrastructure and applications. This includes both on-prem systems like VMware as well as guided integrations with cloud vendors, centralizing these hybrid environments.

Splunk Observability Cloud Integrations
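
As a rough illustration of how a single agent covers that diversity, one OpenTelemetry Collector configuration can combine several receivers. The fragment below is a hedged sketch: the receivers are real OpenTelemetry components, but the selection and settings are illustrative and not this workshop’s configuration.

receivers:
  hostmetrics:                       # host-level CPU, memory, and filesystem metrics
    scrapers:
      cpu:
      memory:
      filesystem:
  kubeletstats:                      # pod and container metrics from the kubelet
    auth_type: serviceAccount
    endpoint: "${env:K8S_NODE_NAME}:10250"   # node address, typically injected via the downward API
  prometheus:                        # scrape an application's existing Prometheus endpoint
    config:
      scrape_configs:
        - job_name: app-metrics
          static_configs:
            - targets: ["localhost:9090"]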

For someone like a Splunk admin, the OpenTelemetry Collector can additionally be deployed to a Splunk Universal Forwarder as a Technical Add-on. This enables fast roll-out and centralized configuration management using the Splunk Deployment Server. Let’s assume that the same team adopting Kubernetes is going to deploy a cluster for each one of our B2B customers. We’ll show you how to make a simple modification to the OpenTelemetry Collector to add the customerID, and then use mirrored dashboards to allow any of our SRE teams to easily see the customer they care about.

Last Modified Apr 4, 2024

Subsections of 2. Standardize Data Collection

What Are Tags?

3 minutes   Author Tim Hard

Tags are key-value pairs that provide additional metadata about metrics, spans in a trace, or logs allowing you to enrich the context of the data you send to Splunk Observability Cloud. Many tags are collected by default such as hostname or OS type. Custom tags can be used to provide environment or application-specific context. Examples of custom tags include:

Infrastructure specific attributes
  • What data center a host is in
  • What services are hosted on an instance
  • What team is responsible for a set of hosts
Application specific attributes
  • What Application Version is running
  • Feature flags or experimental identifiers
  • Tenant ID in multi-tenant applications
  • User ID
  • User role (e.g. admin, guest, subscriber)
  • User geographical location (e.g. country, city, region)
There are two ways to add tags to your data:
  • Add tags as OpenTelemetry attributes to metrics, traces, and logs when you send data to the Splunk Distribution of OpenTelemetry Collector. This option lets you add tags in bulk (see the sketch below).
  • Instrument your application to create span tags. This option gives you the most flexibility at the per-application level.
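
Below is a hedged sketch of the first option: an attributes processor entry in a Splunk Distribution of OpenTelemetry Collector configuration that adds the same tag to everything passing through a pipeline. The processor name, key, and value are illustrative placeholders, not part of this workshop’s configuration.

processors:
  attributes/add-team-tag:
    actions:
      - key: team.name                        # illustrative tag key
        value: payments                       # illustrative tag value
        action: insert                        # add the tag only if it is not already present

service:
  pipelines:
    traces:
      processors: [attributes/add-team-tag]   # fragment: receivers and exporters omitted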

Why are tags so important?

Tags are essential for an application to be truly observable. Tags add context to the traces to help us understand why some users get a great experience and others don’t. Powerful features in Splunk Observability Cloud utilize tags to help you jump quickly to the root cause.

Contextual Information: Tags provide additional context to the metrics, traces, and logs allowing developers and operators to understand the behavior and characteristics of infrastructure and traced operations.

Filtering and Aggregation: Tags enable filtering and aggregation of collected data. By attaching tags, users can filter and aggregate data based on specific criteria. This filtering and aggregation help in identifying patterns, diagnosing issues, and gaining insights into system behavior.

Correlation and Analysis: Tags facilitate correlation between metrics and other telemetry data, such as traces and logs. By including common identifiers or contextual information as tags, users can correlate metrics, traces, and logs enabling comprehensive analysis and troubleshooting of distributed systems.

Customization and Flexibility: OpenTelemetry allows developers to define custom tags based on their application requirements. This flexibility enables developers to capture domain-specific metadata or contextual information that is crucial for understanding the behavior of their applications.

Attributes vs. Tags

A note about terminology before we proceed. While this workshop is about tags, and this is the terminology we use in Splunk Observability Cloud, OpenTelemetry uses the term attributes instead. So when you see tags mentioned throughout this workshop, you can treat them as synonymous with attributes.

Last Modified Apr 4, 2024

Adding Context With Tags

3 minutes   Author Tim Hard

When you deployed the OpenTelemetry Demo Application in the Getting Started section of this workshop, you were asked to enter your favorite city. For this workshop, we’ll be using that to show the value of custom tags.

For this workshop, the OpenTelemetry Collector is pre-configured to use the city you provided as a custom tag called store.location, which will be used to emulate Kubernetes clusters running in different geographic locations. We’ll use this tag as a filter to show how you can use out-of-the-box integration dashboards to quickly create views for specific teams, applications, or other attributes of your environment, efficiently enabling content to be reused across teams without increasing technical debt.

Here is the OpenTelemetry Collector configuration used to add the store.location tag to all of the data sent to this collector. This means any metrics, traces, or logs will contain the store.location tag which can then be used to search, filter, or correlate this value.

Tip

If you’re interested in a deeper dive on the OpenTelemetry Collector, head over to the Self Service Observability workshop where you can get hands-on with configuring the collector or the OpenTelemetry Collector Ninja Workshop where you’ll dissect the inner workings of each collector component.

OpenTelemetry Collector Configuration

While this example uses a hard-coded value for the tag, parameterized values can also be used, allowing you to customize the tags dynamically based on the context of each host, application, or operation. This flexibility enables you to capture relevant metadata, user-specific details, or system parameters, providing a rich context for metrics, tracing, and log data while enhancing the observability of your distributed systems.
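
The exact configuration for this workshop is the one shown in the screenshot above. As a hedged sketch of the general pattern, a resource processor entry along these lines applies the tag to all signals, and the hard-coded city can be swapped for an environment variable so each deployment sets its own value (the variable name below is illustrative):

processors:
  resource/add-store-location:
    attributes:
      - key: store.location
        value: "${STORE_LOCATION}"   # or a hard-coded value such as "London"
        action: upsert               # insert the tag, or overwrite it if it already exists

The processor then has to be referenced in each pipeline (metrics, traces, and logs) whose data should carry the tag.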

Now that you have the appropriate context, which as we’ve established is critical to Observability, let’s head over to Splunk Observability Cloud and see how we can use the data we’ve just configured.

Last Modified Apr 4, 2024

Reuse Content Across Teams

3 minutes   Author Tim Hard

In today’s rapidly evolving technological landscape, where hybrid and cloud environments are becoming the norm, the need for effective monitoring and troubleshooting solutions has never been more critical. However, managing the elasticity and complexity of these modern infrastructures poses a significant challenge for teams across various industries. One of the primary pain points encountered in this endeavor is the inadequacy of existing monitoring and troubleshooting experiences.

Traditional monitoring approaches often fall short in addressing the intricacies of hybrid and cloud environments. Teams frequently encounter slow data visualization and troubleshooting processes, compounded by the clutter of bespoke yet similar dashboards and the manual correlation of data from disparate sources. This cumbersome workflow is made worse by the absence of monitoring tools tailored to ephemeral technologies such as containers, orchestrators like Kubernetes, and serverless functions.

Infrastructure Overview in Splunk Observability Cloud

In this section, we’ll cover how Splunk Observability Cloud provides out-of-the-box content for every integration. Not only do the out-of-the-box dashboards provide rich visibility into the infrastructure that is being monitored they can also be mirrored. This is important because it enables you to create standard dashboards for use by teams throughout your organization. This allows all teams to see any changes to the charts in the dashboard, and members of each team can set dashboard variables and filter customizations relevant to their requirements.

Last Modified Apr 4, 2024

Subsections of 3. Reuse Content Across Teams

Infrastructure Navigators

5 minutes   Author Tim Hard

Splunk Infrastructure Monitoring (IM) is a market-leading monitoring and observability service for hybrid cloud environments. Built on a patented streaming architecture, it provides a real-time solution for engineering teams to visualize and analyze performance across infrastructure, services, and applications in a fraction of the time and with greater accuracy than traditional solutions.

  • 300+ easy-to-use OOTB content: Pre-built navigators and dashboards deliver immediate visualizations of your entire environment so that you can interact with all your data in real time.
  • Kubernetes navigator: Provides an instant, comprehensive out-of-the-box hierarchical view of nodes, pods, and containers. Ramp up even the most novice Kubernetes user with easy-to-understand interactive cluster maps.
  • AutoDetect alerts and detectors: Automatically identify the most important metrics, out-of-the-box, to create alert conditions for detectors that accurately alert from the moment telemetry data is ingested, and use real-time alerting capabilities for important notifications in seconds.
  • Log views in dashboards: Combine log messages and real-time metrics on one page with common filters and time controls for faster in-context troubleshooting.
  • Metrics pipeline management: Control metrics volume at the point of ingest, without re-instrumentation, using a set of aggregation and data-dropping rules to store and analyze only the data you need. Reduce metrics volume and optimize observability spend (see the collector-side sketch after this list).
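
Metrics pipeline management itself is configured centrally in Splunk Observability Cloud at ingest time. A related way to shed unneeded data even earlier, at the collector, is the filter processor; the fragment below is a hedged sketch and the metric name pattern is a placeholder, not a recommendation:

processors:
  filter/drop-noisy-metrics:
    metrics:
      exclude:
        match_type: regexp
        metric_names:
          - "k8s\\.pod\\.network\\..*"   # drop metrics matching this pattern before export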

Infrastructure Overview

Exercise: Find your Kubernetes Cluster
  • From the Splunk Observability Cloud homepage, click the Infrastructure button -> Kubernetes -> K8s nodes.
  • First, use the k8s filter option to pick your cluster.
  • From the filter drop-down box, use the store.location value you entered when deploying the application.
  • You can then start typing the city you used, which should appear in the drop-down values. Select yours and make sure just the one for your workshop is highlighted with a blue tick.
  • Click the Apply Filter button to focus on your cluster.

Kubernetes Navigator

  • You should now have your Kubernetes Cluster visible
  • Here we can see all of the different components of the cluster (Nodes, Pods, etc), each of which has relevant metrics associated with it. On the right side, you can also see what services are running in the cluster.

Before moving to the next section, take some time to explore the Kubernetes Navigator to see the data that is available Out of the Box.

Last Modified Apr 4, 2024

Dashboard Cloning

5 minutes   Author Tim Hard

ITOps teams responsible for monitoring fleets of infrastructure frequently find themselves manually creating dashboards to visualize and analyze metrics, traces, and log data emanating from rapidly changing cloud-native workloads hosted in Kubernetes and serverless architectures, alongside existing on-premises systems. Moreover, due to the absence of a standardized troubleshooting workflow, teams often resort to creating numerous custom dashboards, each resembling the other in structure and content. As a result, administrative overhead skyrockets and MTTR slows.

To address this, you can use the out-of-the-box dashboards available in Splunk Observability Cloud for each and every integration. These dashboards are filterable and can be used for ad hoc troubleshooting or as a templated approach to getting users the information they need without having to start from scratch. Not only do the out-of-the-box dashboards provide rich visibility into the infrastructure that is being monitored they can also be cloned.

Exercise: Clone a Dashboard
  1. In Splunk Observability Cloud, click the Global Search button. (Global Search can be used to quickly find content.)
  2. Search for Pods and select K8s pods (Kubernetes).
  3. This will take you to the out-of-the-box Kubernetes Pods dashboard which we will use as a template for mirroring dashboards.
  4. In the upper right corner of the dashboard, click the Dashboard actions button (3 horizontal dots) -> click Save As…
  5. Enter a dashboard name (e.g. Kubernetes Pods Dashboard)
  6. Under Dashboard group search for your e-mail address and select it.
  7. Click Save

Note: Every Observability Cloud user who has set a password and logged in at least once gets a user dashboard group and a user dashboard. Your user dashboard group is your individual workspace within Observability Cloud.

Save Dashboard

After saving, you will be taken to the newly created dashboard in the Dashboard Group for your user. This is an example of cloning an out-of-the-box dashboard which can be further customized and enables users to quickly build role, application, or environment relevant views.

Custom dashboards are meant to be used by multiple people and usually represent a curated set of charts that you want to make accessible to a broad cross-section of your organization. They are typically organized by service, team, or environment.

Last Modified Apr 4, 2024

Dashboard Mirroring

5 minutes   Author Tim Hard

Not only do the out-of-the-box dashboards provide rich visibility into the infrastructure that is being monitored they can also be mirrored. This is important because it enables you to create standard dashboards for use by teams throughout your organization. This allows all teams to see any changes to the charts in the dashboard, and members of each team can set dashboard variables and filter customizations relevant to their requirements.

Exercise: Create a Mirrored Dashboard
  1. While on the Kubernetes Pods dashboard you created in the previous step, in the upper right corner of the dashboard click the Dashboard actions button (3 horizontal dots) -> click Add a mirror…. A configuration modal for the Dashboard Mirror will open.

    Mirror Dashboard Menu

  2. Under My dashboard group search for your e-mail address and select it.

  3. (Optional) Modify the dashboard name in Dashboard name override.

  4. (Optional) Add a dashboard description in Dashboard description override.

  5. Under Default filter overrides search for k8s.cluster.name and select the name of your Kubernetes cluster.

  6. Under Default filter overrides search for store.location and select the city you entered during the workshop setup.

    Mirror Dashboard Config

  7. Click Save

You will now be taken to the newly created dashboard which is a mirror of the Kubernetes Pods dashboard you created in the previous section. Any changes to the original dashboard will be reflected in this dashboard as well. This allows teams to have a consistent yet specific view of the systems they care about and any modifications or updates can be applied in a single location, significantly minimizing the effort needed when compared to updating each individual dashboard.

In the next section, you’ll add a new logs-based chart to the original dashboard and see how the dashboard mirror is automatically updated as well.

Last Modified Apr 4, 2024

Correlate Metrics and Logs

1 minute   Author Tim Hard

Correlating infrastructure metrics and logs is often a challenging task, primarily due to inconsistencies in naming conventions across various data sources, including hosts operating on different systems. However, leveraging the capabilities of OpenTelemetry can significantly simplify this process. With OpenTelemetry’s robust framework, which offers rich metadata and attribution, metrics, traces, and logs can seamlessly correlate using standardized field names. This automated correlation not only alleviates the burden of manual effort but also enhances the overall observability of the system.

By aligning metrics and logs based on common field names, teams gain deeper insights into system performance, enabling more efficient troubleshooting, proactive monitoring, and optimization of resources. In this workshop section, we’ll explore the importance of correlating metrics with logs and demonstrate how Splunk Observability Cloud empowers teams to unlock additional value from their observability data.

Log Observer
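
As a hedged sketch of how that standardization is typically achieved, the k8sattributes and resourcedetection processors in the OpenTelemetry Collector can be applied to both the metrics and logs pipelines, so fields such as k8s.cluster.name, k8s.namespace.name, and k8s.pod.name carry identical values on every signal. The fragment below is illustrative and not necessarily this workshop’s exact configuration.

processors:
  k8sattributes:                     # enrich data with Kubernetes metadata
    extract:
      metadata:
        - k8s.namespace.name
        - k8s.pod.name
        - k8s.node.name
  resourcedetection:                 # add host and cloud attributes
    detectors: [env, system]

service:
  pipelines:
    metrics:
      processors: [k8sattributes, resourcedetection]   # fragment: receivers and exporters omitted
    logs:
      processors: [k8sattributes, resourcedetection]

Because both pipelines share the same processors, a metric spike and the log lines behind it can be joined on the same field values without any manual mapping.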

Last Modified Apr 4, 2024

Subsections of 4. Correlate Metrics and Logs

Correlate Metrics and Logs

5 minutes   Author Tim Hard

In this section, we’ll dive into the seamless correlation of metrics and logs facilitated by the robust naming standards offered by OpenTelemetry. By harnessing the power of OpenTelemetry within Splunk Observability Cloud, we’ll demonstrate how troubleshooting issues becomes significantly more efficient for Site Reliability Engineers (SREs) and operators. With this integration, contextualizing data across various telemetry sources no longer demands manual effort to correlate information. Instead, SREs and operators gain immediate access to the pertinent context they need, allowing them to swiftly pinpoint and resolve issues, improving system reliability and performance.

Exercise: View pod logs

The Kubernetes Pods Dashboard you created in the previous section already includes a chart that contains all of the pod logs for your Kubernetes Cluster. The log entries are split by container in this stacked bar chart. To view specific log entries perform the following steps:

  1. On the Kubernetes Pods Dashboard click on one of the bar charts. A modal will open with the most recent log entries for the container you’ve selected.

    K8s pod logs

  2. Click one of the log entries.

    K8s pod log event

    Here we can see the entire log event with all of the fields and values. You can search for specific field names or values within the event itself using the Search for fields bar in the event.

  3. Enter the city you configured during the application deployment

    K8s pod log field search

    The event will now be filtered to the store.location field. This feature is great for exploring large log entries for specific fields and values unique to your environment or to search for keywords like Error or Failure.

  4. Close the event using the X in the upper right corner.

  5. Click the Chart actions (three horizontal dots) on the Pod log event rate chart

  6. Click View in Log Observer

View in Log Observer

This will take us to Log Observer. In the next section, you’ll create a chart based on log events and add it to the K8s Pod Dashboard you cloned in section 3.2 Dashboard Cloning. You’ll also see how this new chart is automatically added to the mirrored dashboard you created in section 3.3 Dashboard Mirroring.

Last Modified Nov 8, 2024

Create Log-based Chart

5 minutes   Author Tim Hard

In Log Observer, you can perform codeless queries on logs to detect the source of problems in your systems. You can also extract fields from logs to set up log processing rules and transform your data as it arrives or send data to Infinite Logging S3 buckets for future use. See What can I do with Log Observer? to learn more about Log Observer capabilities.

In this section, you’ll create a chart filtered to logs that include errors which will be added to the K8s Pod Dashboard you cloned in section 3.2 Dashboard Cloning.

Exercise: Create Log-based Chart

Because you drilled into Log Observer from the K8s Pod Dashboard in the previous section, the view will already be filtered to your cluster and store location using the k8s.cluster.name and store.location fields, and the bar chart is split by k8s.pod.name. To filter to only logs that contain errors, complete the following steps:

Log Observer can be filtered using Keywords or specific key-value pairs.

  1. In Log Observer click Add Filter along the top.

  2. Make sure you’ve selected Fields as the filter type and enter severity in the Find a field… search bar.

  3. Select severity from the fields list.

    You should now see a list of severities and the number of log entries for each.

  4. Under Top values, hover over Error and click the = button to apply the filter.

    Log Observer: Filter errors

    The dashboard will now be filtered to only log entries with a severity of Error and the bar chart will be split by the Kubernetes Pod that contains the errors. Next, you’ll save the chart on your Kubernetes Pods Dashboard.

  5. In the upper right corner of the Log Observer dashboard click Save.

  6. Select Save to Dashboard.

  7. In the Chart name field enter a name for your chart.

  8. (Optional) In the Chart description field enter a description for your chart.

    Log Observer: Save Chart Name

  9. Click Select Dashboard and search for the name of the Dashboard you cloned in section 3.2 Dashboard Cloning.

  10. Select the dashboard in the Dashboard Group for your email address.

    Log Observer: Select Dashboard

  11. Click OK

  12. For the Chart type select Log timeline

  13. Click Save and go to the dashboard

You will now be taken to your Kubernetes Pods Dashboard where you should see the chart you just created for pod errors.

Log Errors Chart

Because you updated the original Kubernetes Pods Dashboard, your mirrored dashboard will also include this chart as well! You can see this by clicking the mirrored version of your dashboard along the top of the Dashboard Group for your user.

Log Errors Chart

Now that you’ve seen how data can be reused across teams by cloning the dashboard, creating dashboard mirrors and how metrics can easily be correlated with logs, let’s take a look at how to create alerts so your teams can be notified when there is an issue with their infrastructure, services, or applications.

Last Modified Apr 4, 2024

Improve Timeliness of Alerts

1 minute   Author Tim Hard

When monitoring hybrid and cloud environments, ensuring timely alerts for critical infrastructure and applications poses a significant challenge. Typically, this involves crafting intricate queries, meticulously scheduling searches, and managing alerts across various monitoring solutions. Moreover, the proliferation of disparate alerts generated from identical data sources often results in unnecessary duplication, contributing to alert fatigue and noise within the monitoring ecosystem.

In this section, we’ll explore how Splunk Observability Cloud addresses these challenges by enabling the effortless creation of alert criteria. Leveraging its 10-second default data collection capability, alerts can be triggered swiftly, surpassing the timeliness achieved by traditional monitoring tools. This enhanced responsiveness not only reduces Mean Time to Detect (MTTD) but also accelerates Mean Time to Resolve (MTTR), ensuring that critical issues are promptly identified and remediated.
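
That timeliness starts at collection. As a hedged example (a fragment, not the workshop’s full configuration), the hostmetrics receiver in the OpenTelemetry Collector can be set to the same 10-second cadence:

receivers:
  hostmetrics:
    collection_interval: 10s   # matches the 10-second default collection interval
    scrapers:
      cpu:
      memory:
      load: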

Detector Dashboard

Last Modified Apr 2, 2024

Subsections of 5. Improve Timeliness of Alerts

Create Custom Detector

10 minutes   Author Tim Hard

Splunk Observability Cloud provides detectors, events, alerts, and notifications to keep you informed when certain criteria are met. There are a number of pre-built AutoDetect Detectors that automatically surface when common problem patterns occur, such as when an EC2 instance’s CPU utilization is expected to reach its limit. Additionally, you can create custom detectors if you want something more optimized or specific. For example, you may want a message sent to a Slack channel or to an email address for the Ops team that manages this Kubernetes cluster when memory utilization on their pods reaches 85%.

Exercise: Create Custom Detector

In this section, you’ll create a detector on Pod Memory Utilization which will trigger if utilization surpasses 85%.

  1. On the Kubernetes Pods Dashboard you cloned in section 3.2 Dashboard Cloning, click the Get Alerts button (bell icon) for the Memory usage (%) chart -> Click New detector from chart.

    New Detector from Chart

  2. In the Create detector, add your initials to the detector name.

    Create Detector: Update Detector Name

  3. Click Create alert rule.

    These conditions are expressed as one or more rules that trigger an alert when the conditions in the rules are met. Importantly, multiple rules can be included in the same detector configuration which minimizes the total number of alerts that need to be created and maintained. You can see which signal this detector will alert on by the bell icon in the Alert On column. In this case, this detector will alert on the Memory Utilization for the pods running in this Kubernetes cluster.

    Alert Signal

  4. Click Proceed To Alert Conditions.

    Many pre-built alert conditions can be applied to the metric you want to alert on. This could be as simple as a static threshold or something more complex, for example, is memory usage deviating from the historical baseline across any of your 50,000 containers?

    Alert Conditions

  5. Select Static Threshold.

  6. Click Proceed To Alert Settings.

    In this case, you want the alert to trigger if any pods exceed 85% memory utilization. Once you’ve set the alert condition, the configuration is back-tested against the historical data so you can confirm that the alert configuration is accurate, that is, whether the alert will trigger on the criteria you’ve defined. This is also a great way to confirm whether the alert generates too much noise.

    Alert Settings

  7. Enter 85 in the Threshold field.

  8. Click Proceed To Alert Message.

    Next, you can set the severity for this alert, you can include links to runbooks and short tips on how to respond, and you can customize the message that is included in the alert details. The message can include parameterized fields from the actual data, for example, in this case, you may want to include which Kubernetes node the pod is running on, or the store.location configured when you deployed the application, to provide additional context.

    Alert Message

  9. Click Proceed To Alert Recipients.

    You can choose where you want this alert to be sent when it triggers. This could be to a team, specific email addresses, or to other systems such as ServiceNow, Slack, Splunk On-Call or Splunk ITSI. You can also have the alert execute a webhook, which enables you to leverage automation or to integrate with many other systems such as homegrown ticketing tools. For the purpose of this workshop, do not include a recipient.

    Alert Recipients

  10. Click Proceed To Alert Activation.

    Activate Alert

  11. Click Activate Alert.

    Activate Alert Message

    You will receive a warning because no recipients were included in the Notification Policy for this detector. This warning can be dismissed.

  12. Click Save.

    Activate Alert Message

    You will be taken to your newly created detector where you can see any triggered alerts.

  13. In the upper right corner, Click Close to close the Detector.

The detector status and any triggered alerts will automatically be included in the chart because this detector was configured for this chart.

Alert Chart

Congratulations! You’ve successfully created a detector that will trigger if pod memory utilization exceeds 85%. After a few minutes, the detector should trigger some alerts. You can click the detector name in the chart to view the triggered alerts.

Last Modified Apr 4, 2024

Conclusion

1 minute  

Today you’ve seen how Splunk Observability Cloud can help you overcome many of the challenges you face monitoring hybrid and cloud environments. You’ve demonstrated how Splunk Observability Cloud streamlines operations with standardized data collection and tags, ensuring consistency across all IT infrastructure. The Unified Service Telemetry has been a game-changer, providing in-context metrics, logs, and trace data that make troubleshooting swift and efficient. By enabling the reuse of content across teams, you’re minimizing technical debt and bolstering the performance of your monitoring systems.

Happy Splunking!

Dancing Buttercup

Last Modified Apr 4, 2024

Debug Problems in Microservices

  • Tagging Workshop

    This workshop shows how tags can be used to reduce the time required for SREs to isolate issues across services, so they know which team to engage to troubleshoot the issue further, and can provide context to help engineering get a head start on debugging.

  • Profiling Workshop

    This workshop shows how Database Query Performance and AlwaysOn Profiling can be used to reduce the time required for engineers to debug problems in microservices.

Last Modified Sep 6, 2024

Subsections of Debug Problems in Microservices

Tagging Workshop

2 minutes   Author Derek Mitchell

Splunk Observability Cloud includes powerful features that dramatically reduce the time required for SREs to isolate issues across services, so they know which team to engage to troubleshoot the issue further, and can provide context to help engineering get a head start on debugging.

Unlocking these features requires tags to be included with your application traces. But how do you know which tags are the most valuable and how do you capture them?

In this workshop, we’ll explore:

  • What are tags and why are they such a critical part of making an application observable.
  • How to use OpenTelemetry to capture tags of interest from your application.
  • How to index tags in Splunk Observability Cloud and the differences between Troubleshooting MetricSets and Monitoring MetricSets.
  • How to utilize tags in Splunk Observability Cloud to find “unknown unknowns” using the Tag Spotlight and Dynamic Service Map features.
  • How to utilize tags for alerting and dashboards.

The workshop uses a simple microservices-based application to illustrate these concepts. Let’s get started!

Tip

The easiest way to navigate through this workshop is by using:

  • the left/right arrows (< | >) on the top right of this page
  • the left (◀️) and right (▶️) cursor keys on your keyboard
Last Modified Sep 6, 2024

Subsections of Tagging Workshop

Build the Sample Application

10 minutes  

Introduction

For this workshop, we’ll be using a microservices-based application. This application is for an online retailer and normally includes more than a dozen services. However, to keep the workshop simple, we’ll be focusing on two services used by the retailer as part of their payment processing workflow: the credit check service and the credit processor service.

Pre-requisites

You will start with an EC2 instance and perform some initial steps in order to get to the following state:

  • Deploy the Splunk distribution of the OpenTelemetry Collector
  • Build and deploy creditcheckservice and creditprocessorservice
  • Deploy a load generator to send traffic to the services

Initial Steps

The initial setup can be completed by executing the following steps on the command line of your EC2 instance:

cd workshop/tagging
./0-deploy-collector-with-services.sh
Java

There are implementations in multiple languages available for creditcheckservice. Run

./0-deploy-collector-with-services.sh java

to pick Java over Python.

View your application in Splunk Observability Cloud

Now that the setup is complete, let’s confirm that it’s sending data to Splunk Observability Cloud. Note that when the application is deployed for the first time, it may take a few minutes for the data to appear.

Navigate to APM, then use the Environment dropdown to select your environment (i.e. tagging-workshop-instancename).

If everything was deployed correctly, you should see creditprocessorservice and creditcheckservice displayed in the list of services:

APM Overview

Click on Explore on the right-hand side to view the service map. We can see that the creditcheckservice makes calls to the creditprocessorservice, with an average response time of at least 3 seconds:

Service Map

Next, click on Traces on the right-hand side to see the traces captured for this application. You’ll see that some traces run relatively fast (i.e. just a few milliseconds), whereas others take a few seconds.

Traces

If you toggle Errors only to on, you’ll also notice that some traces have errors:

Traces

Toggle Errors only back to off and sort the traces by duration, then click on one of the longer running traces. In this example, the trace took five seconds, and we can see that most of the time was spent calling the /runCreditCheck operation, which is part of the creditprocessorservice.

Long Running Trace

Currently, we don’t have enough details in our traces to understand why some requests finish in a few milliseconds, and others take several seconds. To provide the best possible customer experience, this will be critical for us to understand.

We also don’t have enough information to understand why some requests result in errors, and others don’t. For example, if we look at one of the error traces, we can see that the error occurs when the creditprocessorservice attempts to call another service named otherservice. But why do some requests result in a call to otherservice, and others don’t?

Trace with Errors

We’ll explore these questions and more in the workshop.

Last Modified Sep 6, 2024

What are Tags?

3 minutes  

To understand why some requests have errors or slow performance, we’ll need to add context to our traces. We’ll do this by adding tags. But first, let’s take a moment to discuss what tags are, and why they’re so important for observability.

What are tags?

Tags are key-value pairs that provide additional metadata about spans in a trace, allowing you to enrich the context of the spans you send to Splunk APM.

For example, a payment processing application would find it helpful to track:

  • The payment type used (e.g. credit card, gift card, etc.)
  • The ID of the customer that requested the payment

This way, if errors or performance issues occur while processing the payment, we have the context we need for troubleshooting.

While some tags can be added with the OpenTelemetry collector, the ones we’ll be working with in this workshop are more granular, and are added by application developers using the OpenTelemetry API.
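
For contrast, a collector-added tag usually applies to everything the collector processes, for example an environment name. A hedged sketch of that style is below (the key and value are illustrative); the granular, per-request tags this workshop focuses on will instead be set in application code in the next section.

processors:
  resource/environment:
    attributes:
      - key: deployment.environment
        value: tagging-workshop   # illustrative value
        action: upsert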

Attributes vs. Tags

A note about terminology before we proceed. While this workshop is about tags, and this is the terminology we use in Splunk Observability Cloud, OpenTelemetry uses the term attributes instead. So when you see tags mentioned throughout this workshop, you can treat them as synonymous with attributes.

Why are tags so important?

Tags are essential for an application to be truly observable. As we saw with our credit check service, some users are having a great experience: fast with no errors. But other users get a slow experience or encounter errors.

Tags add the context to the traces to help us understand why some users get a great experience and others don’t. And powerful features in Splunk Observability Cloud utilize tags to help you jump quickly to root cause.

Sneak Peek: Tag Spotlight

Tag Spotlight uses tags to discover trends that contribute to high latency or error rates:

Tag Spotlight Preview

The screenshot above provides an example of Tag Spotlight from another application.

Splunk has analyzed all of the tags included as part of traces that involve the payment service.

It tells us very quickly whether some tag values have more errors than others.

If we look at the version tag, we can see that version 350.10 of the service has a 100% error rate, whereas version 350.9 of the service has no errors at all:

Tag Spotlight Preview

We’ll be using Tag Spotlight with the credit check service later on in the workshop, once we’ve captured some tags of our own.

Last Modified Sep 6, 2024

Capture Tags with OpenTelemetry

15 minutes  

Please proceed to one of the subsections for Java or Python. Ask your instructor for the one used during the workshop!

Last Modified Sep 6, 2024

Subsections of 3. Capture Tags with OpenTelemetry

1. Capture Tags - Java

15 minutes  

Let’s add some tags to our traces, so we can find out why some customers receive a poor experience from our application.

Identify Useful Tags

We’ll start by reviewing the code for the creditCheck function of creditcheckservice (which can be found in the file /home/splunk/workshop/tagging/creditcheckservice-java/src/main/java/com/example/creditcheckservice/CreditCheckController.java):

@GetMapping("/check")
public ResponseEntity<String> creditCheck(@RequestParam("customernum") String customerNum) {
    // Get Credit Score
    int creditScore;
    try {
        String creditScoreUrl = "http://creditprocessorservice:8899/getScore?customernum=" + customerNum;
        creditScore = Integer.parseInt(restTemplate.getForObject(creditScoreUrl, String.class));
    } catch (HttpClientErrorException e) {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Error getting credit score");
    }

    String creditScoreCategory = getCreditCategoryFromScore(creditScore);

    // Run Credit Check
    String creditCheckUrl = "http://creditprocessorservice:8899/runCreditCheck?customernum=" + customerNum + "&score=" + creditScore;
    String checkResult;
    try {
        checkResult = restTemplate.getForObject(creditCheckUrl, String.class);
    } catch (HttpClientErrorException e) {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Error running credit check");
    }

    return ResponseEntity.ok(checkResult);
}

We can see that this function accepts a customer number as an input. This would be helpful to capture as part of a trace. What else would be helpful?

Well, the credit score returned for this customer by the creditprocessorservice may be interesting (we want to ensure we don’t capture any PII data though). It would also be helpful to capture the credit score category, and the credit check result.

Great, we’ve identified four tags to capture from this service that could help with our investigation. But how do we capture these?

Capture Tags

We start by adding OpenTelemetry imports to the top of the CreditCheckController.java file:

...
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.instrumentation.annotations.WithSpan;
import io.opentelemetry.instrumentation.annotations.SpanAttribute;

Next, we use the @WithSpan annotation to produce a span for creditCheck:

@GetMapping("/check")
@WithSpan // ADDED
public ResponseEntity<String> creditCheck(@RequestParam("customernum") String customerNum) {
    ...

We can now get a reference to the current span and add an attribute (aka tag) to it:

...
try {
    String creditScoreUrl = "http://creditprocessorservice:8899/getScore?customernum=" + customerNum;
    creditScore = Integer.parseInt(restTemplate.getForObject(creditScoreUrl, String.class));
} catch (HttpClientErrorException e) {
    return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Error getting credit score");
}
Span currentSpan = Span.current(); // ADDED
currentSpan.setAttribute("credit.score", creditScore); // ADDED
...

That was pretty easy, right? Let’s capture some more, with the final result looking like this:

@GetMapping("/check")
@WithSpan(kind=SpanKind.SERVER)
public ResponseEntity<String> creditCheck(@RequestParam("customernum")
                                          @SpanAttribute("customer.num")
                                          String customerNum) {
    // Get Credit Score
    int creditScore;
    try {
        String creditScoreUrl = "http://creditprocessorservice:8899/getScore?customernum=" + customerNum;
        creditScore = Integer.parseInt(restTemplate.getForObject(creditScoreUrl, String.class));
    } catch (HttpClientErrorException e) {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Error getting credit score");
    }
    Span currentSpan = Span.current();
    currentSpan.setAttribute("credit.score", creditScore);

    String creditScoreCategory = getCreditCategoryFromScore(creditScore);
    currentSpan.setAttribute("credit.score.category", creditScoreCategory);

    // Run Credit Check
    String creditCheckUrl = "http://creditprocessorservice:8899/runCreditCheck?customernum=" + customerNum + "&score=" + creditScore;
    String checkResult;
    try {
        checkResult = restTemplate.getForObject(creditCheckUrl, String.class);
    } catch (HttpClientErrorException e) {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Error running credit check");
    }
    currentSpan.setAttribute("credit.check.result", checkResult);

    return ResponseEntity.ok(checkResult);
}

Redeploy Service

Once these changes are made, let’s run the following script to rebuild the Docker image used for creditcheckservice and redeploy it to our Kubernetes cluster:

./5-redeploy-creditcheckservice.sh java

Confirm Tag is Captured Successfully

After a few minutes, return to Splunk Observability Cloud and load one of the latest traces to confirm that the tags were captured successfully (hint: sort by the timestamp to find the latest traces):

Trace with Attributes

Well done, you’ve leveled up your OpenTelemetry game and have added context to traces using tags.

Next, we’re ready to see how you can use these tags with Splunk Observability Cloud!

Last Modified Nov 18, 2024

2. Capture Tags - Python

15 minutes  

Let’s add some tags to our traces, so we can find out why some customers receive a poor experience from our application.

Identify Useful Tags

We’ll start by reviewing the code for the credit_check function of creditcheckservice (which can be found in the /home/splunk/workshop/tagging/creditcheckservice-py/main.py file):

@app.route('/check')
def credit_check():
    customerNum = request.args.get('customernum')

    # Get Credit Score
    creditScoreReq = requests.get("http://creditprocessorservice:8899/getScore?customernum=" + customerNum)
    creditScoreReq.raise_for_status()
    creditScore = int(creditScoreReq.text)

    creditScoreCategory = getCreditCategoryFromScore(creditScore)

    # Run Credit Check
    creditCheckReq = requests.get("http://creditprocessorservice:8899/runCreditCheck?customernum=" + str(customerNum) + "&score=" + str(creditScore))
    creditCheckReq.raise_for_status()
    checkResult = str(creditCheckReq.text)

    return checkResult

We can see that this function accepts a customer number as an input. This would be helpful to capture as part of a trace. What else would be helpful?

Well, the credit score returned for this customer by the creditprocessorservice may be interesting (we want to ensure we don’t capture any PII data though). It would also be helpful to capture the credit score category, and the credit check result.

Great, we’ve identified four tags to capture from this service that could help with our investigation. But how do we capture these?

Capture Tags

We start by importing the trace module, adding an import statement to the top of the creditcheckservice-py/main.py file:

import requests
from flask import Flask, request
from waitress import serve
from opentelemetry import trace  # <--- ADDED BY WORKSHOP
...

Next, we need to get a reference to the current span so we can add an attribute (aka tag) to it:

def credit_check():
    current_span = trace.get_current_span()  # <--- ADDED BY WORKSHOP
    customerNum = request.args.get('customernum')
    current_span.set_attribute("customer.num", customerNum)  # <--- ADDED BY WORKSHOP
...

That was pretty easy, right? Let’s capture some more, with the final result looking like this:

def credit_check():
    current_span = trace.get_current_span()  # <--- ADDED BY WORKSHOP
    customerNum = request.args.get('customernum')
    current_span.set_attribute("customer.num", customerNum)  # <--- ADDED BY WORKSHOP

    # Get Credit Score
    creditScoreReq = requests.get("http://creditprocessorservice:8899/getScore?customernum=" + customerNum)
    creditScoreReq.raise_for_status()
    creditScore = int(creditScoreReq.text)
    current_span.set_attribute("credit.score", creditScore)  # <--- ADDED BY WORKSHOP

    creditScoreCategory = getCreditCategoryFromScore(creditScore)
    current_span.set_attribute("credit.score.category", creditScoreCategory)  # <--- ADDED BY WORKSHOP

    # Run Credit Check
    creditCheckReq = requests.get("http://creditprocessorservice:8899/runCreditCheck?customernum=" + str(customerNum) + "&score=" + str(creditScore))
    creditCheckReq.raise_for_status()
    checkResult = str(creditCheckReq.text)
    current_span.set_attribute("credit.check.result", checkResult)  # <--- ADDED BY WORKSHOP

    return checkResult

Redeploy Service

Once these changes are made, let’s run the following script to rebuild the Docker image used for creditcheckservice and redeploy it to our Kubernetes cluster:

./5-redeploy-creditcheckservice.sh

Confirm Tags are Captured Successfully

After a few minutes, return to Splunk Observability Cloud and load one of the latest traces to confirm that the tags were captured successfully (hint: sort by the timestamp to find the latest traces):

Trace with Attributes

Well done, you’ve leveled up your OpenTelemetry game and have added context to traces using tags.

Next, we’re ready to see how you can use these tags with Splunk Observability Cloud!

Last Modified Nov 18, 2024

Explore Trace Data

5 minutes  

Now that we’ve captured several tags from our application, let’s explore some of the trace data we’ve captured that include this additional context, and see if we can identify what’s causing a poor user experience in some cases.

Use Trace Analyzer

Navigate to APM, then select Traces. This takes us to the Trace Analyzer, where we can add filters to search for traces of interest. For example, we can filter on traces where the credit score starts with 7:

Credit Check Starts with Seven

If you load one of these traces, you’ll see that the credit score indeed starts with seven.

We can apply similar filters for the customer number, credit score category, and credit check result.

Explore Traces With Errors

Let’s remove the credit score filter and toggle Errors only to on, which results in a list of only those traces where an error occurred:

Traces with Errors Only

Click on a few of these traces, and look at the tags we captured. Do you notice any patterns?

Next, toggle Errors only to off, and sort traces by duration. Look at a few of the slowest running traces, and compare them to the fastest running traces. Do you notice any patterns?

If you found a pattern that explains the slow performance and errors - great job! But keep in mind that this is a difficult way to troubleshoot, as it requires you to look through many traces and mentally keep track of what you saw, so you can identify a pattern.

Thankfully, Splunk Observability Cloud provides a more efficient way to do this, which we’ll explore next.

Last Modified Sep 6, 2024

Index Tags

5 minutes  

Index Tags

To use advanced features in Splunk Observability Cloud such as Tag Spotlight, we’ll need to first index one or more tags.

To do this, navigate to Settings -> APM MetricSets. Then click the + New MetricSet button.

Let’s index the credit.score.category tag by entering the following details (note: since everyone in the workshop is using the same organization, the instructor will do this step on your behalf):

Create Troubleshooting MetricSet

Click Start Analysis to proceed.

The tag will appear in the list of Pending MetricSets while analysis is performed.

Pending MetricSets

Once analysis is complete, click on the checkmark in the Actions column.

MetricSet Configuration Applied

How to choose tags for indexing

Why did we choose to index the credit.score.category tag and not the others?

To understand this, let’s review the primary use cases for tags:

  • Filtering
  • Grouping

Filtering

With the filtering use case, we can use the Trace Analyzer capability of Splunk Observability Cloud to filter on traces that match a particular tag value.

We saw an example of this earlier, when we filtered on traces where the credit score started with 7.

Or if a customer calls in to complain about slow service, we could use Trace Analyzer to locate all traces with that particular customer number.

Tags used for filtering use cases are generally high-cardinality, meaning that there could be thousands or even hundreds of thousands of unique values. In fact, Splunk Observability Cloud can handle an effectively infinite number of unique tag values! Filtering using these tags allows us to rapidly locate the traces of interest.

Note that we aren’t required to index tags to use them for filtering with Trace Analyzer.

Grouping

With the grouping use case, we can use Trace Analyzer to group traces by a particular tag.

But we can also go beyond this and surface trends for tags that we collect using the powerful Tag Spotlight feature in Splunk Observability Cloud, which we’ll see in action shortly.

Tags used for grouping use cases should be low to medium-cardinality, with hundreds of unique values.

For custom tags to be used with Tag Spotlight, they first need to be indexed.

We decided to index the credit.score.category tag because it has a few distinct values that would be useful for grouping. In contrast, the customer number and credit score tags have hundreds or thousands of unique values, and are more valuable for filtering use cases rather than grouping.
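If you’re ever unsure which category a tag falls into, a quick offline look at its cardinality can help you decide. Here is a minimal sketch, assuming you’ve exported a sample of span attributes to a CSV file; the file name and column names are hypothetical and just for illustration.

import csv

# Minimal cardinality check over an exported sample of span attributes.
# "span_attributes_sample.csv" and its columns are hypothetical placeholders.
candidate_tags = ["customer.num", "credit.score", "credit.score.category", "credit.check.result"]
distinct_values = {tag: set() for tag in candidate_tags}

with open("span_attributes_sample.csv", newline="") as f:
    for row in csv.DictReader(f):
        for tag in candidate_tags:
            value = row.get(tag)
            if value:
                distinct_values[tag].add(value)

for tag, values in distinct_values.items():
    # A handful of values (like credit.score.category) suits grouping and indexing;
    # thousands of values (like customer.num) are better left for filtering.
    print(f"{tag}: {len(values)} distinct values")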

Troubleshooting vs. Monitoring MetricSets

You may have noticed that, to index this tag, we created something called a Troubleshooting MetricSet. It’s named this way because a Troubleshooting MetricSet, or TMS, allows us to troubleshoot issues with this tag using features such as Tag Spotlight.

You may have also noticed that there’s another option which we didn’t choose called a Monitoring MetricSet (or MMS). Monitoring MetricSets go beyond troubleshooting and allow us to use tags for alerting and dashboards. We’ll explore this concept later in the workshop.

Last Modified Sep 6, 2024

Use Tags for Troubleshooting

5 minutes  

Using Tag Spotlight

Now that we’ve indexed the credit.score.category tag, we can use it with Tag Spotlight to troubleshoot our application.

Navigate to APM then click on Tag Spotlight on the right-hand side. Ensure the creditcheckservice service is selected from the Service drop-down (if not already selected).

With Tag Spotlight, we can see that 100% of requests that result in a credit score of impossible have an error, yet requests for all other credit score categories have no errors at all!

Tag Spotlight with Errors

This illustrates the power of Tag Spotlight! Finding this pattern would be time-consuming without it, as we’d have to manually look through hundreds of traces to identify the pattern (and even then, there’s no guarantee we’d find it).

We’ve looked at errors, but what about latency? Let’s click on Latency near the top of the screen to find out.

Here, we can see that requests with a poor credit score category are running slowly, with P50, P90, and P99 times of around 3 seconds, which is too long for our users to wait, and much slower than other requests.

We can also see that some requests with an exceptional credit score category are running slowly, with P99 times of around 5 seconds, though the P50 response time is relatively quick.

Tag Spotlight with Latency

Using Dynamic Service Maps

Now that we know the credit score category associated with the request can impact performance and error rates, let’s explore another feature that utilizes indexed tags: Dynamic Service Maps.

With Dynamic Service Maps, we can break down a particular service by a tag. For example, let’s click on APM, then click Explore to view the service map.

Click on creditcheckservice. Then, on the right-hand menu, click on the drop-down that says Breakdown, and select the credit.score.category tag.

At this point, the service map is updated dynamically, and we can see the performance of requests hitting creditcheckservice broken down by the credit score category:

Service Map Breakdown

This view makes it clear that performance for good and fair credit scores is excellent, while poor and exceptional scores are much slower, and impossible scores result in errors.

Summary

Tag Spotlight has uncovered several interesting patterns for the engineers that own this service to explore further:

  • Why are all the impossible credit score requests resulting in errors?
  • Why are all the poor credit score requests running slowly?
  • Why do some of the exceptional requests run slowly?

As an SRE, passing this context to the engineering team would be extremely helpful for their investigation, as it would allow them to track down the issue much more quickly than if we simply told them that the service was “sometimes slow”.

If you’re curious, have a look at the source code for the creditprocessorservice. You’ll see that requests with impossible, poor, and exceptional credit scores are handled differently, thus resulting in the differences in error rates and latency that we uncovered.

The behavior we saw with our application is typical for modern cloud-native applications, where different inputs passed to a service lead to different code paths, some of which result in slower performance or errors. For example, in a real credit check service, requests resulting in low credit scores may be sent to another downstream service to further evaluate risk, and may perform more slowly than requests resulting in higher scores, or encounter higher error rates.

Last Modified Sep 6, 2024

Use Tags for Monitoring

15 minutes  

Earlier, we created a Troubleshooting MetricSet on the credit.score.category tag, which allowed us to use Tag Spotlight with that tag and identify a pattern to explain why some users received a poor experience.

In this section of the workshop, we’ll explore a related concept: Monitoring MetricSets.

What are Monitoring MetricSets?

Monitoring MetricSets go beyond troubleshooting and allow us to use tags for alerting, dashboards and SLOs.

Create a Monitoring MetricSet

(note: your workshop instructor will do the following for you, but observe the steps)

Let’s navigate to Settings -> APM MetricSets, and click the edit button (i.e. the little pencil) beside the MetricSet for credit.score.category.

edit APM MetricSet

Check the box beside Also create Monitoring MetricSet, then click Start Analysis.

Monitoring MetricSet

The credit.score.category tag appears again as a Pending MetricSet. After a few moments, a checkmark should appear. Click this checkmark to enable the Pending MetricSet.

pending APM MetricSet

Using Monitoring MetricSets

This mechanism creates a new dimension from the tag on a number of APM metrics, and that dimension can then be used to filter those metrics. Important: To differentiate between the original tag and the copy, the dots in the tag name are replaced by underscores for the new dimension. So the metrics get a dimension named credit_score_category rather than credit.score.category.

Next, let’s explore how we can use this Monitoring MetricSet.

Last Modified Sep 6, 2024

Subsections of 7. Use Tags for Monitoring

Use Tags with Dashboards

5 minutes  

Dashboards

Navigate to Metric Finder, then type in the name of the tag, which is credit_score_category (remember that the dots in the tag name were replaced by underscores when the Monitoring MetricSet was created). You’ll see that multiple metrics include this tag as a dimension:

Metric Finder

By default, Splunk Observability Cloud calculates several metrics using the trace data it receives. See Learn about MetricSets in APM for more details.

By creating an MMS, credit_score_category was added as a dimension to these metrics, which means that this dimension can now be used for alerting and dashboards.
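If you like to keep chart definitions as code, the chart we’re about to build corresponds roughly to a SignalFlow program like the one in this sketch. SignalFlow is Splunk’s Python-like streaming analytics language; the program below is only an illustration using the metric and dimension names from this workshop (it is not the exact program the chart builder generates), and you could paste its contents into a chart’s SignalFlow editor.

# Rough SignalFlow sketch (held in a Python string) for charting p99 request
# duration, split per metric time series by the credit_score_category dimension
# that the Monitoring MetricSet created. Names follow this workshop's examples.
chart_program = """
duration_p99 = data(
    'service.request.duration.ns.p99',
    filter=filter('sf_environment', 'tagging-workshop-yourname')
           and filter('sf_service', 'creditcheckservice')
           and filter('sf_dimensionalized', 'true')
)
duration_p99.publish(label='p99 duration by credit score category')
"""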

To see how, let’s click on the metric named service.request.duration.ns.p99, which brings up the following chart:

Service Request Duration

Add filters for sf_environment, sf_service, and sf_dimensionalized. Then set the Extrapolation policy to Last value and the Display units to Nanosecond:

Chart with Seconds

With these settings, the chart allows us to visualize the service request duration by credit score category:

Duration by Credit Score

Now we can see the duration by credit score category. In my example, the red line represents the exceptional category, and we can see that the duration for these requests sometimes goes all the way up to 5 seconds.

The orange line represents the very good category, and has very fast response times.

The green line represents the poor category, and has response times between 2 and 3 seconds.

It may be useful to save this chart on a dashboard for future reference. To do this, click on the Save as… button and provide a name for the chart:

Save Chart As

When asked which dashboard to save the chart to, let’s create a new one named Credit Check Service - Your Name (substituting your actual name):

Save Chart As

Now we can see the chart on our dashboard, and can add more charts as needed to monitor our credit check service:

Credit Check Service Dashboard

Last Modified Sep 6, 2024

Use Tags with Alerting

3 minutes  

Alerts

It’s great that we have a dashboard to monitor the response times of the credit check service by credit score, but we don’t want to stare at a dashboard all day.

Let’s create an alert so we can be notified proactively if customers with exceptional credit scores encounter slow requests.

To create this alert, click on the little bell on the top right-hand corner of the chart, then select New detector from chart:

New Detector From Chart

Let’s call the detector Latency by Credit Score Category. Set the environment to your environment name (i.e. tagging-workshop-yourname) then select creditcheckservice as the service. Since we only want to look at performance for customers with exceptional credit scores, add a filter using the credit_score_category dimension and select exceptional:

Create New Detector

For the alert condition, we select “Sudden Change” instead of “Static threshold” to make the example more vivid.

Alert Condition: Sudden Change

We can then set the remainder of the alert details as we normally would. The key thing to remember here is that without capturing a tag with the credit score category and indexing it, we wouldn’t be able to alert at this granular level, but would instead be forced to bucket all customers together, regardless of their importance to the business.
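To make that concrete, a detector scoped this narrowly boils down to filtering the duration metric on the credit_score_category dimension. The sketch below is only an illustration: it uses a simple static threshold (5 seconds, expressed in nanoseconds) rather than the Sudden Change condition chosen in the wizard, and the names follow this workshop’s examples.

# Rough SignalFlow sketch (held in a Python string) for a detector that only
# watches requests in the exceptional credit score category. A static threshold
# is used purely for illustration; the wizard above uses Sudden Change instead.
detector_program = """
exceptional_p99 = data(
    'service.request.duration.ns.p99',
    filter=filter('sf_environment', 'tagging-workshop-yourname')
           and filter('sf_service', 'creditcheckservice')
           and filter('credit_score_category', 'exceptional')
)
detect(when(exceptional_p99 > 5000000000)).publish('Latency by Credit Score Category')
"""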

Unless you actually want to get notified, you do not need to finish this wizard. You can simply close it by clicking the X in the top right corner of the wizard pop-up.

Last Modified Sep 6, 2024

Use Tags with Service Level Objectives

10 minutes  

We can now use the Monitoring MetricSet we created together with Service Level Objectives, in a similar way to how we used it with dashboards and detectors/alerts. Before we do, let’s be clear about some key concepts:

Key Concepts of Service Level Monitoring

(skip if you know this)

  • Service level indicator (SLI): A quantitative measurement showing some health of a service, expressed as a metric or combination of metrics.
    Examples: Availability SLI: proportion of requests that resulted in a successful response. Performance SLI: proportion of requests that loaded in < 100 ms.
  • Service level objective (SLO): A target for an SLI and a compliance period over which that target must be met. An SLO contains 3 elements: an SLI, a target, and a compliance period. Compliance periods can be calendar, such as monthly, or rolling, such as the past 30 days.
    Examples: Availability SLI over a calendar period: our service must respond successfully to 95% of requests in a month. Performance SLI over a rolling period: our service must respond to 99% of requests in < 100 ms over a 7-day period.
  • Service level agreement (SLA): A contractual agreement that indicates service levels your users can expect from your organization. If an SLA is not met, there can be financial consequences.
    Example: A customer service SLA indicates that 90% of support requests received on a normal support day must have a response within 6 hours.
  • Error budget: A measurement of how your SLI performs relative to your SLO over a period of time. It measures the difference between actual and desired performance, determines how unreliable your service might be during this period, and serves as a signal when you need to take corrective action.
    Example: Our service can respond to 1% of requests in > 100 ms over a 7-day period.
  • Burn rate: A unitless measurement of how quickly a service consumes the error budget during the compliance window of the SLO. Burn rate makes the SLO and error budget actionable, showing service owners when a current incident is serious enough to page an on-call responder.
    Example: For an SLO with a 30-day compliance window, a constant burn rate of 1 means your error budget is used up in exactly 30 days.
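To make the error budget and burn rate definitions a little more concrete, here is a small back-of-the-envelope calculation. The traffic number is purely illustrative; the 99% target and 7-day window match the objective we’ll configure later in this section.

# Back-of-the-envelope error budget and burn rate, assuming a 99% latency
# objective over a 7-day compliance window and illustrative traffic numbers.
target = 0.99                    # SLO target: 99% of requests meet the latency goal
window_days = 7                  # compliance window
requests_in_window = 1_000_000   # assumed traffic, purely illustrative

error_budget = (1 - target) * requests_in_window
print(f"error budget: {error_budget:.0f} slow requests allowed over {window_days} days")

# Burn rate = observed bad-request fraction / budgeted bad-request fraction.
# A constant burn rate of 1 consumes the budget in exactly the window length.
observed_bad_fraction = 0.02     # e.g. 2% of requests are slower than the goal
burn_rate = observed_bad_fraction / (1 - target)
print(f"burn rate: {burn_rate:.1f} -> budget exhausted in {window_days / burn_rate:.1f} days")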

Creating a new Service Level Objective

There is an easy-to-follow wizard to create a new Service Level Objective (SLO). In the left navigation, follow the link “Detectors & SLOs”. From there, select the third tab, “SLOs”, and click the blue “Create SLO” button on the right.

Create new SLO

The wizard guides you through a few easy steps. If everything in the previous sections worked out, you will have no problems here.

In our case we want to use Service & endpoint as our Metric type instead of Custom metric. We filter the Environment down to the environment that we are using during this workshop (i.e. tagging-workshop-yourname) and select the creditcheckservice from the Service and endpoint list. Our Indicator type for this workshop will be Request latency and not Request success.

Now we can select our Filters. Since we are using Request latency as the Indicator type, and that is a metric of the APM service, we can filter on credit.score.category. Feel free to try out what happens when you set the Indicator type to Request success.

Today we are only interested in our exceptional credit scores. So please select that as the filter.

Choose Service or Metric for SLO

In the next step we define the objective we want to reach. For the Request latency type, we define the Target (%), the Latency (ms), and the Compliance Window. Please set these to 99, 100, and Last 7 days. This will give us a good idea of what we are already achieving.

The numbers may already come as a shock. Feel free to play around with them to see how well we achieve the objective and how much error budget we have left to burn.

Define Objective for SLO

The third step gives us the chance to alert (aka annoy) the people who should be aware of these SLOs so they can initiate countermeasures. These “people” can also be mechanisms like ITSM systems or webhooks that initiate automatic remediation steps.

Activate all categories you want to alert on and add recipients to the different alerts.

Define Alerting for SLO

The next step is just the naming of this SLO. Have your own naming convention ready for this. In our case we would name it creditcheckservice:score:exceptional:YOURNAME and click the Create button. Alternatively, you can cancel the wizard by clicking anything in the left navigation and confirming Discard changes.

Name and Save the SLO

And with that, we have (nearly) successfully created an SLO, including the alerting in case we might miss our goals.

Last Modified Sep 6, 2024

Summary

2 minutes  

In this workshop, we learned the following:

  • What tags are and why they are such a critical part of making an application observable.
  • How to use OpenTelemetry to capture tags of interest from your application.
  • How to index tags in Splunk Observability Cloud and the differences between Troubleshooting MetricSets and Monitoring MetricSets.
  • How to utilize tags in Splunk Observability Cloud to find “unknown unknowns” using the Tag Spotlight and Dynamic Service Map features.
  • How to utilize tags for dashboards, alerting and service level objectives.

Collecting tags aligned with the best practices shared in this workshop will let you get even more value from the data you’re sending to Splunk Observability Cloud. Now that you’ve completed this workshop, you have the knowledge you need to start collecting tags from your own applications!

To get started with capturing tags today, check out how to add tags in various supported languages, and then how to use them to create Troubleshooting MetricSets so they can be analyzed in Tag Spotlight. For more help, feel free to ask a Splunk Expert.

Tip for Workshop Facilitator(s)

Once the workshop is complete, remember to delete the APM MetricSet you created earlier for the credit.score.category tag.

Last Modified Sep 6, 2024

Profiling Workshop

2 minutes   Author Derek Mitchell

Service Maps and Traces are extremely valuable in determining what service an issue resides in. And related log data helps provide detail on why issues are occurring in that service.

But engineers sometimes need to go even deeper to debug a problem that’s occurring in one of their services.

This is where features such as Splunk’s AlwaysOn Profiling and Database Query Performance come in.

AlwaysOn Profiling continuously collects stack traces so that you can discover which lines in your code are consuming the most CPU and memory.

And Database Query Performance can quickly identify long-running, unoptimized, or heavy queries and mitigate issues they might be causing.

In this workshop, we’ll explore:

  • How to debug an application with several performance issues.
  • How to use Database Query Performance to find slow-running queries that impact application performance.
  • How to enable AlwaysOn Profiling and use it to find the code that consumes the most CPU and memory.
  • How to apply fixes based on findings from Splunk Observability Cloud and verify the result.

The workshop uses a Java-based application called The Door Game hosted in Kubernetes. Let’s get started!

Tip

The easiest way to navigate through this workshop is by using:

  • the left/right arrows (< | >) on the top right of this page
  • the left (◀️) and right (▶️) cursor keys on your keyboard
Last Modified Sep 6, 2024

Subsections of Profiling Workshop

Build the Sample Application

10 minutes  

Introduction

For this workshop, we’ll be using a Java-based application called The Door Game. It will be hosted in Kubernetes.

Pre-requisites

You will start with an EC2 instance and perform some initial steps in order to get to the following state:

  • Deploy the Splunk distribution of the OpenTelemetry Collector
  • Deploy the MySQL database container and populate data
  • Build and deploy the doorgame application container

Initial Steps

The initial setup can be completed by executing the following steps on the command line of your EC2 instance.

You’ll be asked to enter a name for your environment. Please use profiling-workshop-yourname (where yourname is replaced by your actual name).

cd workshop/profiling
./1-deploy-otel-collector.sh
./2-deploy-mysql.sh
./3-deploy-doorgame.sh

Let’s Play The Door Game

Now that the application is deployed, let’s play with it and generate some observability data.

Get the external IP address for your application instance using the following command:

kubectl describe svc doorgame | grep "LoadBalancer Ingress"

The output should look like the following:

LoadBalancer Ingress:     52.23.184.60

You should be able to access The Door Game application by pointing your browser to port 81 of the provided IP address. For example:

http://52.23.184.60:81

You should be met with The Door Game intro screen:

Door Game Welcome Screen

Click Let's Play to start the game:

Let’s Play

Did you notice that it took a long time after clicking Let's Play before we could actually start playing the game?

Let’s use Splunk Observability Cloud to determine why the application startup is so slow.

Last Modified Sep 6, 2024

Troubleshoot Game Startup

10 minutes  

Let’s use Splunk Observability Cloud to determine why the game started so slowly.

View your application in Splunk Observability Cloud

Note: when the application is deployed for the first time, it may take a few minutes for the data to appear.

Navigate to APM, then use the Environment dropdown to select your environment (i.e. profiling-workshop-yourname).

If everything was deployed correctly, you should see doorgame displayed in the list of services:

APM Overview

Click on Explore on the right-hand side to view the service map. We should see the doorgame application on the service map:

Service Map

Notice how the majority of the time is being spent in the MySQL database. We can get more details by clicking on Database Query Performance on the right-hand side.

Database Query Performance

This view shows the SQL queries that took the most time. Ensure that the Compare to dropdown is set to None, so we can focus on current performance.

We can see that one query in particular is taking a long time:

select * from doorgamedb.users, doorgamedb.organizations

(do you notice anything unusual about this query?)

Let’s troubleshoot further by clicking on one of the spikes in the latency graph. This brings up a list of example traces that include this slow query:

Traces with Slow Query

Click on one of the traces to see the details:

Trace with Slow Query

In the trace, we can see that the DoorGame.startNew operation took 25.8 seconds, and 17.6 seconds of this was associated with the slow SQL query we found earlier.

What did we accomplish?

To recap what we’ve done so far:

  • We’ve deployed our application and are able to access it successfully.
  • The application is sending traces to Splunk Observability Cloud successfully.
  • We started troubleshooting the slow application startup time, and found a slow SQL query that seems to be the root cause.

To troubleshoot further, it would be helpful to get deeper diagnostic data that tells us what’s happening inside our JVM, from both a memory (i.e. JVM heap) and CPU perspective. We’ll tackle that in the next section of the workshop.

Last Modified Sep 6, 2024

Enable AlwaysOn Profiling

20 minutes  

Let’s learn how to enable the memory and CPU profilers, verify their operation, and use the results in Splunk Observability Cloud to find out why our application startup is slow.

Update the application configuration

We will need to pass additional configuration arguments to the Splunk OpenTelemetry Java agent in order to enable both profilers. The configuration is documented here in detail, but for now we just need the following settings:

SPLUNK_PROFILER_ENABLED="true"
SPLUNK_PROFILER_MEMORY_ENABLED="true"

Since our application is deployed in Kubernetes, we can update the Kubernetes manifest file to set these environment variables. Open the doorgame/doorgame.yaml file for editing, and ensure the values of the following environment variables are set to “true”:

- name: SPLUNK_PROFILER_ENABLED
  value: "true"
- name: SPLUNK_PROFILER_MEMORY_ENABLED
  value: "true"

Next, let’s redeploy the Door Game application by running the following command:

cd workshop/profiling
kubectl apply -f doorgame/doorgame.yaml

After a few seconds, a new pod will be deployed with the updated application settings.

Confirm operation

To ensure the profiler is enabled, let’s review the application logs with the following command:

kubectl logs -l app=doorgame --tail=100 | grep JfrActivator

You should see a line in the application log output that shows the profiler is active:

[otel.javaagent 2024-02-05 19:01:12:416 +0000] [main] INFO com.splunk.opentelemetry.profiler.JfrActivator - Profiler is active.

This confirms that the profiler is enabled and sending data to the OpenTelemetry collector deployed in our Kubernetes cluster, which in turn sends profiling data to Splunk Observability Cloud.

Profiling in APM

Visit http://<your IP address>:81 and play a few more rounds of The Door Game.

Then head back to Splunk Observability Cloud, click on APM, and click on the doorgame service at the bottom of the screen.

Click on “Traces” on the right-hand side to load traces for this service. Filter on traces involving the doorgame service and the GET new-game operation (since we’re troubleshooting the game startup sequence):

New Game Traces

Selecting one of these traces brings up the following screen:

Trace with Call Stacks

You can see that the spans now include “Call Stacks”, which is a result of us enabling CPU and memory profiling earlier.

Click on the span named doorgame: SELECT doorgamedb, then click on CPU stack traces on the right-hand side:

Trace with CPU Call Stacks

This brings up the CPU call stacks captured by the profiler.

Let’s open the AlwaysOn Profiler to review the CPU stack trace in more detail. We can do this by clicking on the Span link beside View in AlwaysOn Profiler:

Flamegraph and table

The AlwaysOn Profiler includes both a table and a flamegraph. Take some time to explore this view by doing some of the following:

  • click a table item and notice the change in flamegraph
  • navigate the flamegraph by clicking on a stack frame to zoom in, and a parent frame to zoom out
  • add a search term like splunk or jetty to highlight some matching stack frames

Let’s have a closer look at the stack trace, starting with the DoorGame.startNew method (since we already know that it’s the slowest part of the request):

com.splunk.profiling.workshop.DoorGame.startNew(DoorGame.java:24)
com.splunk.profiling.workshop.UserData.loadUserData(UserData.java:33)
com.mysql.cj.jdbc.StatementImpl.executeQuery(StatementImpl.java:1168)
com.mysql.cj.NativeSession.execSQL(NativeSession.java:655)
com.mysql.cj.protocol.a.NativeProtocol.sendQueryString(NativeProtocol.java:998)
com.mysql.cj.protocol.a.NativeProtocol.sendQueryPacket(NativeProtocol.java:1065)
com.mysql.cj.protocol.a.NativeProtocol.readAllResults(NativeProtocol.java:1715)
com.mysql.cj.protocol.a.NativeProtocol.read(NativeProtocol.java:1661)
com.mysql.cj.protocol.a.TextResultsetReader.read(TextResultsetReader.java:48)
com.mysql.cj.protocol.a.TextResultsetReader.read(TextResultsetReader.java:87)
com.mysql.cj.protocol.a.NativeProtocol.read(NativeProtocol.java:1648)
com.mysql.cj.protocol.a.ResultsetRowReader.read(ResultsetRowReader.java:42)
com.mysql.cj.protocol.a.ResultsetRowReader.read(ResultsetRowReader.java:75)
com.mysql.cj.protocol.a.MultiPacketReader.readMessage(MultiPacketReader.java:44)
com.mysql.cj.protocol.a.MultiPacketReader.readMessage(MultiPacketReader.java:66)
com.mysql.cj.protocol.a.TimeTrackingPacketReader.readMessage(TimeTrackingPacketReader.java:41)
com.mysql.cj.protocol.a.TimeTrackingPacketReader.readMessage(TimeTrackingPacketReader.java:62)
com.mysql.cj.protocol.a.SimplePacketReader.readMessage(SimplePacketReader.java:45)
com.mysql.cj.protocol.a.SimplePacketReader.readMessage(SimplePacketReader.java:102)
com.mysql.cj.protocol.a.SimplePacketReader.readMessageLocal(SimplePacketReader.java:137)
com.mysql.cj.protocol.FullReadInputStream.readFully(FullReadInputStream.java:64)
java.io.FilterInputStream.read(Unknown Source:0)
sun.security.ssl.SSLSocketImpl$AppInputStream.read(Unknown Source:0)
sun.security.ssl.SSLSocketImpl.readApplicationRecord(Unknown Source:0)
sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(Unknown Source:0)
sun.security.ssl.SSLSocketInputRecord.readHeader(Unknown Source:0)
sun.security.ssl.SSLSocketInputRecord.read(Unknown Source:0)
java.net.SocketInputStream.read(Unknown Source:0)
java.net.SocketInputStream.read(Unknown Source:0)
java.lang.ThreadLocal.get(Unknown Source:0)

We can interpret the stack trace as follows:

  • When starting a new Door Game, a call is made to load user data.
  • This results in executing a SQL query to load the user data (which is related to the slow SQL query we saw earlier).
  • We then see calls to read data in from the database.

So, what does this all mean? It means that our application startup is slow since it’s spending time loading user data. In fact, the profiler has told us the exact line of code where this happens:

com.splunk.profiling.workshop.UserData.loadUserData(UserData.java:33)

Let’s open the corresponding source file (./doorgame/src/main/java/com/splunk/profiling/workshop/UserData.java) and look at this code in more detail:

public class UserData {

    static final String DB_URL = "jdbc:mysql://mysql/DoorGameDB";
    static final String USER = "root";
    static final String PASS = System.getenv("MYSQL_ROOT_PASSWORD");
    static final String SELECT_QUERY = "select * FROM DoorGameDB.Users, DoorGameDB.Organizations";

    HashMap<String, User> users;

    public UserData() {
        users = new HashMap<String, User>();
    }

    public void loadUserData() {

        // Load user data from the database and store it in a map
        Connection conn = null;
        Statement stmt = null;
        ResultSet rs = null;

        try{
            conn = DriverManager.getConnection(DB_URL, USER, PASS);
            stmt = conn.createStatement();
            rs = stmt.executeQuery(SELECT_QUERY);
            while (rs.next()) {
                User user = new User(rs.getString("UserId"), rs.getString("FirstName"), rs.getString("LastName"));
                users.put(rs.getString("UserId"), user);
            }

Here we can see the application logic in action. It establishes a connection to the database, then executes the SQL query we saw earlier:

select * FROM DoorGameDB.Users, DoorGameDB.Organizations

It then loops through each of the results, and loads each user into a HashMap object, which is a collection of User objects.

We have a good understanding of why the game startup sequence is so slow, but how do we fix it?

For more clues, let’s have a look at the other part of AlwaysOn Profiling: memory profiling. To do this, click on the Memory tab in AlwaysOn profiling:

Memory Profiling

At the top of this view, we can see how much heap memory our application is using, the heap memory allocation rate, and garbage collection activity.

We can see that our application is using about 400 MB out of the max 1 GB heap size, which seems excessive for such a simple application. We can also see that some garbage collection occurred, which caused our application to pause (and probably annoyed those wanting to play the Door Game).

At the bottom of the screen, we can see which methods in our Java application code are associated with the most heap memory usage. Click on the first item in the list to show the Memory Allocation Stack Traces associated with the java.util.Arrays.copyOf method specifically:

Memory Allocation Stack Traces

With help from the profiler, we can see that the loadUserData method not only consumes excessive CPU time, but it also consumes excessive memory when storing the user data in the HashMap collection object.

What did we accomplish?

We’ve come a long way already!

  • We learned how to enable the profiler in the Splunk OpenTelemetry Java instrumentation agent.
  • We learned how to verify in the agent output that the profiler is enabled.
  • We have explored several profiling related workflows in APM:
    • How to navigate to AlwaysOn Profiling from the troubleshooting view
    • How to explore the flamegraph and method call duration table through navigation and filtering
    • How to identify when a span has sampled call stacks associated with it
    • How to explore heap utilization and garbage collection activity
    • How to view memory allocation stack traces for a particular method

In the next section, we’ll apply a fix to our application to resolve the slow startup performance.

Last Modified Sep 6, 2024

Fix Application Startup Slowness

10 minutes  

In this section, we’ll use what we learned from the profiling data in Splunk Observability Cloud to resolve the slowness we saw when starting our application.

Examining the Source Code

Open the corresponding source file once again (./doorgame/src/main/java/com/splunk/profiling/workshop/UserData.java) and focus on the following code:

public class UserData {

    static final String DB_URL = "jdbc:mysql://mysql/DoorGameDB";
    static final String USER = "root";
    static final String PASS = System.getenv("MYSQL_ROOT_PASSWORD");
    static final String SELECT_QUERY = "select * FROM DoorGameDB.Users, DoorGameDB.Organizations";

    HashMap<String, User> users;

    public UserData() {
        users = new HashMap<String, User>();
    }

    public void loadUserData() {

        // Load user data from the database and store it in a map
        Connection conn = null;
        Statement stmt = null;
        ResultSet rs = null;

        try{
            conn = DriverManager.getConnection(DB_URL, USER, PASS);
            stmt = conn.createStatement();
            rs = stmt.executeQuery(SELECT_QUERY);
            while (rs.next()) {
                User user = new User(rs.getString("UserId"), rs.getString("FirstName"), rs.getString("LastName"));
                users.put(rs.getString("UserId"), user);
            }

After speaking with a database engineer, you discover that the SQL query being executed includes a cartesian join:

select * FROM DoorGameDB.Users, DoorGameDB.Organizations

Cartesian joins are notoriously slow and, in general, shouldn’t be used.

Upon further investigation, you discover that there are 10,000 rows in the user table, and 1,000 rows in the organization table. When we execute a cartesian join using both of these tables, we end up with 10,000 x 1,000 rows being returned, which is 10,000,000 rows!

Furthermore, the query ends up returning duplicate user data, since each record in the user table is repeated for each organization.

So when our code executes this query, it tries to load 10,000,000 user objects into the HashMap, which explains why it takes so long to execute, and why it consumes so much heap memory.

Let’s Fix That Bug

After consulting the engineer that originally wrote this code, we determined that the join with the Organizations table was inadvertent.

So when loading the users into the HashMap, we simply need to remove this table from the query.

Open the corresponding source file once again (./doorgame/src/main/java/com/splunk/profiling/workshop/UserData.java) and change the following line of code:

    static final String SELECT_QUERY = "select * FROM DoorGameDB.Users, DoorGameDB.Organizations";

to:

    static final String SELECT_QUERY = "select * FROM DoorGameDB.Users";

Now the method should perform much more quickly, and less memory should be used, as it’s loading the correct number of users into the HashMap (10,000 instead of 10,000,000).

Rebuild and Redeploy Application

Let’s test our changes by using the following commands to re-build and re-deploy the Door Game application:

cd workshop/profiling
./5-redeploy-doorgame.sh

Once the application has been redeployed successfully, visit The Door Game again to confirm that your fix is in place: http://<your IP address>:81

Clicking Let's Play should take us to the game more quickly now (though performance could still be improved):

Choose Door

Start the game a few more times, then return to Splunk Observability Cloud to confirm that the latency of the GET new-game operation has decreased.

What did we accomplish?

  • We discovered why our SQL query was so slow.
  • We applied a fix, then rebuilt and redeployed our application.
  • We confirmed that the application starts a new game more quickly.

In the next section, we’ll continue playing the game and fix any remaining performance issues that we find.

Last Modified Sep 6, 2024

Fix In Game Slowness

10 minutes  

Now that our game startup slowness has been resolved, let’s play several rounds of the Door Game and ensure the rest of the game performs quickly.

As you play the game, do you notice any other slowness? Let’s look at the data in Splunk Observability Cloud to put some numbers on what we’re seeing.

Review Game Performance in Splunk Observability Cloud

Navigate to APM then click on Traces on the right-hand side of the screen. Sort the traces by Duration in descending order:

Slow Traces

We can see that a few of the traces with an operation of GET /game/:uid/picked/:picked/outcome have a duration of just over five seconds. This explains why we’re still seeing some slowness when we play the app (note that the slowness is no longer on the game startup operation, GET /new-game, but rather a different operation used while actually playing the game).

Let’s click on one of the slow traces and take a closer look. Since profiling is still enabled, call stacks have been captured as part of this trace. Click on the child span in the waterfall view, then click CPU Stack Traces:

View Stack on Span

At the bottom of the call stack, we can see that the thread was busy sleeping:

com.splunk.profiling.workshop.ServiceMain$$Lambda$.handle(Unknown Source:0)
com.splunk.profiling.workshop.ServiceMain.lambda$main$(ServiceMain.java:34)
com.splunk.profiling.workshop.DoorGame.getOutcome(DoorGame.java:41)
com.splunk.profiling.workshop.DoorChecker.isWinner(DoorChecker.java:14)
com.splunk.profiling.workshop.DoorChecker.checkDoorTwo(DoorChecker.java:30)
com.splunk.profiling.workshop.DoorChecker.precheck(DoorChecker.java:36)
com.splunk.profiling.workshop.Util.sleep(Util.java:9)
java.util.concurrent.TimeUnit.sleep(Unknown Source:0)
java.lang.Thread.sleep(Unknown Source:0)
java.lang.Thread.sleep(Native Method:0)

The call stack tells us a story – reading from the bottom up, it lets us describe what is happening inside the service code. A developer, even one unfamiliar with the source code, should be able to look at this call stack to craft a narrative like:

We are getting the outcome of a game. We leverage the DoorChecker to see if something is the winner, but the check for door two somehow issues a precheck() that, for some reason, is deciding to sleep for a long time.

Our workshop application is left intentionally simple – a real-world service might see the thread being sampled during a database call or calling into an un-traced external service. It is also possible that a slow span is executing a complicated business process, in which case maybe none of the stack traces relate to each other at all.

The longer a method or process is, the greater chance we will have call stacks sampled during its execution.

Let’s Fix That Bug

By using the profiling tool, we were able to determine that our application is slow when issuing the DoorChecker.precheck() method from inside DoorChecker.checkDoorTwo(). Let’s open the doorgame/src/main/java/com/splunk/profiling/workshop/DoorChecker.java source file in our editor.

By quickly glancing through the file, we see that there are methods for checking each door, and all of them call precheck(). In a real service, we might be uncomfortable simply removing the precheck() call because there could be unseen/unaccounted side effects.

Down on line 29 we see the following:

    private boolean checkDoorTwo(GameInfo gameInfo) {
        precheck(2);
        return gameInfo.isWinner(2);
    }

    private void precheck(int doorNum) {
        long extra = (int)Math.pow(70, doorNum);
        sleep(300 + extra);
    }

With our developer hat on, we notice that the door number is zero based, so the first door is 0, the second is 1, and the 3rd is 2 (this is conventional). The extra value is used as extra/additional sleep time, and it is computed by taking 70^doorNum (Math.pow performs an exponent calculation). That’s odd, because this means:

  • door 0 => 70^0 => 1ms
  • door 1 => 70^1 => 70ms
  • door 2 => 70^2 => 4900ms

We’ve found the root cause of our slow bug! This also explains why the first two doors weren’t ever very slow.

We have a quick chat with our product manager and team lead, and we agree that the precheck() method must stay but that the extra padding isn’t required. Let’s remove the extra variable and make precheck now read like this:

    private void precheck(int doorNum) {
        sleep(300);
    }

Now all doors will have a consistent behavior. Save your work and then rebuild and redeploy the application using the following command:

cd workshop/profiling
./5-redeploy-doorgame.sh

Once the application has been redeployed successfully, visit The Door Game again to confirm that your fix is in place: http://<your IP address>:81

What did we accomplish?

  • We found another performance issue with our application that impacts game play.
  • We used the CPU call stacks included in the trace to understand application behavior.
  • We learned how the call stack can tell us a story and point us to suspect lines of code.
  • We identified the slow code and fixed it to make it faster.
Last Modified Sep 6, 2024

Summary

3 minutes  

In this workshop, we accomplished the following:

  • We deployed our application and captured traces with Splunk Observability Cloud.
  • We used Database Query Performance to find a slow-running query that impacted the game startup time.
  • We enabled AlwaysOn Profiling and used it to confirm which line of code was causing the increased startup time and memory usage.
  • We found another application performance issue and used AlwaysOn Profiling again to find the problematic line of code.
  • We applied fixes for both of these issues and verified the result using Splunk Observability Cloud.

Enabling AlwaysOn Profiling and utilizing Database Query Performance for your applications will let you get even more value from the data you’re sending to Splunk Observability Cloud.

Now that you’ve completed this workshop, you have the knowledge you need to start collecting deeper diagnostic data from your own applications!

To get started with Database Query Performance today, check out Monitor Database Query Performance.

And to get started with AlwaysOn Profiling, check out Introduction to AlwaysOn Profiling for Splunk APM

For more help, feel free to ask a Splunk Expert.

Last Modified Sep 6, 2024

Optimize End User Experiences

90 minutes   Author Sarah Ware

How can we use Splunk Observability to get insight into end user experience, and proactively test scenarios to improve that experience?

Sections:

  • Set up basic Synthetic tests to understand availability and performance ASAP
    • Uptime test
    • API test
    • Single page Browser test
  • Explore RUM to understand our real users
  • Write advanced Synthetics tests based on what we’ve learned about our users and what we need them to do
  • Customize dashboard charts to capture our KPIs, show trends, and show data in context of our events
  • Create Detectors to alert on our KPIs
Tip

Keep in mind throughout the workshop: how can I prioritize activities strategically to get the fastest time to value for my end users and for myself/ my developers?

Context

As a reminder, we need frontend performance monitoring to capture everything that goes into our end user experience. If we’re just monitoring the backend, we’re missing all of the other resources that are critical to our users’ success. Read What the Fastly Outage Can Teach Us About Observability for a real-world example. Click the image below to zoom in. What goes into the front end

References

Throughout this workshop, we will see references to resources to help further understand end user experience and how to optimize it. In addition to Splunk Docs for supported features and Lantern for tips and tricks, Google’s web.dev and Mozilla are great resources.

Remember that the specific libraries, platforms, and CDNs you use often also have their own specific resources. For example React, Wordpress, and Cloudflare all have their own tips to improve performance.

Last Modified Oct 10, 2024

Subsections of Optimize End User Experiences

Synthetics

Let’s quickly set up some tests in Synthetics to immediately start understanding our end user experience, without waiting for real users to interact with our app.

We can capture not only the performance and availability of our own apps and endpoints, but also those third parties we rely on any time of the day or night.

Tip

If you find that your tests are being bot-blocked, see the docs for tips on how to allow Synthetic testing. If you need to test something that is not accessible externally, see the private location instructions.

Last Modified Mar 19, 2024

Subsections of Synthetics

Uptime Test

5 minutes  

Introduction

The simplest way to keep an eye on endpoint availability is with an Uptime test. This lightweight test can run internally or externally around the world, as frequently as every minute. Because this is the easiest (and cheapest!) test to set up, and because it is ideal for monitoring the availability of your most critical endpoints and ports, let’s start here.

Pre-requisites

  • Publicly accessible HTTP(S) endpoint(s) to test
  • Access to Splunk Observability Cloud
Last Modified Apr 2, 2024

Subsections of 1. Uptime Test

Creating a test

  1. Open Synthetics Synthetics navigation item

  2. Click the Add new test button on the right side of the screen, then select Uptime and HTTP test. image

  3. Name your test with your team name (provided by your workshop instructor), your initials, and any other details you’d like to include, like geographic region.

  4. For now let’s test a GET request. Fill in the URL field. You can use one of your own, or one of ours like https://online-boutique-eu.splunko11y.com, https://online-boutique-us.splunko11y.com, or https://www.splunk.com.

  5. Click Try now to validate that the endpoint is accessible from the selected location before saving the test. Try now does not count against your subscription usage, so this is a good practice to make sure you’re not wasting real test runs on a misconfigured test. image

Tip

A common reason for Try now to fail is that there is a non-2xx response code. If that is expected, add a Validation for the correct response code.

  1. Add any additional validations needed, for example: response code, response header, and response size. Advanced settings for test configuration

  2. Add and remove any locations you’d like. Keep in mind where you expect your endpoint to be available.

  3. Change the frequency to test your more critical endpoints more often, up to one minute. image

  4. Make sure “Round-robin” is on so the test will run from one location at a time, rather than from all locations at once.

    • If an endpoint is highly critical, think about whether it is worth having all locations tested at the same time every single minute. If you have automations built in with a webhook from a detector, or if you have strict SLAs you need to track, this could be worth it to get as much coverage as possible. But if you are doing more manual investigation, or if this is a less critical endpoint, you could be wasting test runs that execute while an issue is being investigated.
    • Remember that your license is based on the number of test runs per month. Turning Round-robin off will multiply the number of test runs by the number of locations you have selected.
  5. When you are ready for the test to start running, make sure “Active” is on, then scroll down and click Submit to save the test configuration. submit button

Now the test will start running with your saved configuration. Take a water break, then we’ll look at the results!

Last Modified Nov 7, 2024

Understanding results

  1. From the Synthetics landing page, click into a test to see its summary view and play with the Performance KPIs chart filters to see how you can slice and dice your data. This is a good place to get started understanding trends. Later, we will see what custom charts look like, so you can tailor dashboards to the KPIs you care about most. KPI chart filters

    Workshop Question: Using the Performance KPIs chart

    What metrics are available? Is your data consistent across time and locations? Do certain locations run slower than others? Are there any spikes or failures?

  2. Click into a recent run either in the chart or in the table below. run results chart

  3. If there are failures, look at the response to see whether you need to add a response code assertion (302 is a common one), whether some authorization is needed, or whether different request headers should be added. Here we have information about this particular test run, including whether it succeeded or failed, the location, timestamp, and duration, in addition to the other Uptime test metrics. Click through to see the response, request, and connection info as well. uptime test result If you need to edit the test for it to run successfully, click the test name in the top left breadcrumb on this run result page, then click Edit test on the top right of the test overview page. Remember to scroll down and click Submit to save your changes after editing the test configuration.

  4. In addition to the test running successfully, there are other metrics to measure the health of your endpoints. For example, Time to First Byte (TTFB) is a great indicator of performance, and you can optimize TTFB to improve end user experience.

  5. Go back to the test overview page and change the Performance KPIs chart to display First Byte time. Once the test has run for a long enough time, expanding the time frame will draw the data points as lines to better see trends and anomalies, like in the example below. Performance KPIs for Uptime Tests

In the example above, we can see that TTFB varies consistently between locations. Knowing this, we can keep location in mind when reporting on metrics. We could also improve the experience, for example by serving users in those locations an endpoint hosted closer to them, which should reduce network latency. We can also see some slight variations in the results over time, but overall we already have a good idea of our baseline for this endpoint’s KPIs. When we have a baseline, we can alert on worsening metrics as well as visualize improvements.
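If you want a rough, local feel for these KPIs before the Synthetic test has accumulated much history, a quick check with Python’s requests library can approximate a single run. This is only a sketch: the elapsed property stops the clock once the response headers are parsed, so treat it as a TTFB approximation, and substitute your own endpoint for the workshop URL.

import requests

# Rough local approximation of a single uptime-style check: status code,
# a time-to-first-byte estimate, and response size.
url = "https://online-boutique-eu.splunko11y.com"  # or your own endpoint

resp = requests.get(url, timeout=30)

print("status code:", resp.status_code)                    # e.g. validate a 200
print("approx. TTFB:", resp.elapsed.total_seconds(), "s")  # measured until headers are parsed
print("response size:", len(resp.content), "bytes")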

Tip

We are not setting a detector on this test yet, to make sure it is running consistently and successfully. If you are testing a highly critical endpoint and want to be alerted on it ASAP (and have tolerance for potential alert noise), jump to Single Test Detectors.

Once you have your Uptime test running successfully, let’s move on to the next test type.

Last Modified Nov 6, 2024

API Test

5 minutes  

The API test provides a flexible way to check the functionality and performance of API endpoints. The shift toward API-first development has magnified the necessity to monitor the back-end services that provide your core front-end functionality.

Whether you’re interested in testing multi-step API interactions or you want to gain visibility into the performance of your endpoints, the API Test can help you accomplish your goals.

This exercise will walk through a multi-step test on the Spotify API. You can also use it as a reference to build tests on your own APIs or on those of your critical third parties.

API test result API test result

Last Modified Nov 7, 2024

Subsections of 2. API Test

Global Variables

Global variables allow us to use stored strings in multiple tests, so we only need to update them in one place.

View the global variable that we’ll use to perform our API test. Click on Global Variables under the cog icon. The global variable named env.encoded_auth is the one that we’ll use to build the Spotify API transaction.

global variables global variables

Last Modified Nov 7, 2024

Create new API test

Create a new API test by clicking on the Add new test button and select API test from the dropdown. Name the test using your team name, your initials, and Spotify API e.g. [Daisy] RWC - Spotify API

new API test new API test

Last Modified Nov 7, 2024

Authentication Request

Click on + Add requests and enter the request step name e.g. Authenticate with Spotify API.

placeholder placeholder

Expand the Request section. From the drop-down, change the request method to POST and enter the following URL:

https://accounts.spotify.com/api/token

In the Payload body section enter the following:

grant_type=client_credentials

Next, add two + Request headers with the following key/value pairings:

  • CONTENT-TYPE: application/x-www-form-urlencoded
  • AUTHORIZATION: Basic {{env.encoded_auth}}

Expand the Validation section and add the following extraction:

  • Extract from Response body JSON $.access_token as access_token

This will parse the JSON payload that is received from the Spotify API, extract the access token and store it as a custom variable.

Add request payload token Add request payload token
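If you would like to see what this step does outside of Synthetics, the same exchange can be reproduced with a short script. This is only a sketch: it assumes you have your own Spotify client ID and secret, whereas the workshop stores the already Base64-encoded pair in the env.encoded_auth global variable.

import base64
import requests

# Hypothetical credentials; the workshop instead provides the Base64-encoded
# client_id:client_secret pair via the env.encoded_auth global variable.
CLIENT_ID = "<your client id>"
CLIENT_SECRET = "<your client secret>"

# Equivalent of the {{env.encoded_auth}} value used in the AUTHORIZATION header
encoded_auth = base64.b64encode(f"{CLIENT_ID}:{CLIENT_SECRET}".encode()).decode()

response = requests.post(
    "https://accounts.spotify.com/api/token",
    headers={
        "Content-Type": "application/x-www-form-urlencoded",
        "Authorization": f"Basic {encoded_auth}",
    },
    data={"grant_type": "client_credentials"},
)
response.raise_for_status()

# Same extraction as the Validation step: JSON path $.access_token
access_token = response.json()["access_token"]

The only workshop-specific piece here is where the encoded credentials are stored; the token endpoint and client credentials flow are standard Spotify API behavior.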

Last Modified Nov 7, 2024

Search Request

Click on + Add Request to add the next step. Name the step Search for Tracks named “Up around the bend”.

Expand the Request section, change the request method to GET, and enter the following URL:

https://api.spotify.com/v1/search?q=Up%20around%20the%20bend&type=track&offset=0&limit=5

Next, add two request headers with the following key/value pairings:

  • CONTENT-TYPE: application/json
  • AUTHORIZATION: Bearer {{custom.access_token}}
    • This uses the custom variable we created in the previous step!

Add search request Add search request

Expand the Validation section and add the following extraction:

  • Extract from Response body JSON $.tracks.items[0].id as track.id

Add search payload Add search payload
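For reference, here is roughly what this step does behind the scenes. The sketch below assumes access_token holds the value extracted in the previous step (the {{custom.access_token}} variable).

import requests

# Value extracted by the previous step ({{custom.access_token}})
access_token = "<access token from the authentication step>"

response = requests.get(
    "https://api.spotify.com/v1/search",
    params={"q": "Up around the bend", "type": "track", "offset": 0, "limit": 5},
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {access_token}",
    },
)
response.raise_for_status()

# Same extraction as the Validation step: JSON path $.tracks.items[0].id
track_id = response.json()["tracks"]["items"][0]["id"]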

To validate the test before saving, scroll to the top and change the location as needed. Click Try now. See the docs for more information on the try now feature.

try now try now

When the validation is successful, click on < Return to test to return to the test configuration page, then click Save to save the API test.

Extra credit

Have more time to work on this test? Take a look at the Response Body in one of your run results. What additional steps would make this test more thorough? Edit the test, and use the Try now feature to validate any changes you make before you save the test.

Last Modified Nov 7, 2024

View results

Wait for a few minutes for the test to provision and run. Once you see the test has run successfully, click on the run to view the results:

API test result API test result

Resources

Last Modified Nov 7, 2024

Single Page Browser Test

5 minutes  

We have started testing our endpoints, now let’s test the front end browser experience.

Starting with a single page browser test will let us capture how first- and third-party resources impact how our end users experience our browser-based site. It also allows us to start to understand our user experience metrics before introducing the complexity of multiple steps in one test.

A page where your users commonly “land” is a good choice to start with a single page test. This could be your site homepage, a section main page, or any other high-traffic URL that is important to you and your end users.

  1. Click Create new test and select Browser test Create new browser test Create new browser test

  2. Include your team name and initials in the test name. Add to the Name and Custom properties to describe the scope of the test (like Desktop for device type). Then click + Edit steps Browser test content fields Browser test content fields

  3. Change the transaction label (top left) and step name (on the right) to something readable that describes the step. Add the URL you’d like to test. Your workshop instructor can provide you with a URL as well. In the below example, the transaction is “Home” and the step name is “Go to homepage”. Transaction and step label Transaction and step label

  4. To validate the test, change the location as needed and click Try now. See the docs for more information on the try now feature. browser test try now buttons browser test try now buttons

  5. Wait for the test validation to complete. If the test validation failed, double check your URL and test location and try again. With Try now you can see what the result of the test will be if it were saved and run as-is. Try Now browser test results Try Now browser test results

  6. Click < Return to test to continue the configuration. Return to test button Return to test button

  7. Edit the locations you want to use, keeping in mind any regional rules you have for your site.

    Browser test details Browser test details

  8. You can edit the Device and Frequency or leave them at their default values for now. Click Submit at the bottom of the form to save the test and start running it.

    browser test submit button browser test submit button

Bonus Exercise

Have a few spare seconds? Copy this test and change just the title and device type, and save. Now you have visibility into the end user experience on another device and connection speed!

While our Synthetic tests are running, let’s see how RUM is instrumented to start getting data from our real users.

Last Modified Nov 7, 2024

RUM

15 minutes  

With RUM instrumented, we will be able to better understand our end users, what they are doing, and what issues they are encountering.

This workshop walks through how our demo site is instrumented and how to interpret the data. If you already have a RUM license, this will help you understand how RUM works and how you can use it to optimize your end user experience.

Tip

Our Docs also contain guidance such as scenarios using Splunk RUM and demo applications to test out RUM for mobile apps.

Last Modified Apr 2, 2024

Subsections of RUM

Overview

The aim of this Splunk Real User Monitoring (RUM) workshop is to let you:

  • Shop for items on the Online Boutique to create traffic, and create RUM User Sessions1 that you can view in the Splunk Observability Suite.
  • See an overview of the performance of all your application(s) in the Application Summary Dashboard
  • Examine the performance of a specific website with RUM metrics.

In order to reach this goal, we will use an online boutique to order various products. While shopping on the online boutique you will create what is called a User Session.

You may encounter some issues with this web site, and you will use Splunk RUM to identify the issues, so they can be resolved by the developers.

The workshop host will provide you with a URL for an online boutique store that has RUM enabled.

Each of these Online Boutiques is also being visited by a few synthetic users; this allows us to generate more live data to be analyzed later.


  1. A RUM User session is a “recording” of a collection of user interactions on an application, basically collecting a website or app’s performance measured straight from the browser or Mobile App of the end user. To do this a small amount of JavaScript is embedded in each page. This script then collects data from each user as he or she explores the page, and transfers that data back for analysis. ↩︎

Last Modified Apr 2, 2024

RUM instrumentation in a browser app

  • Check the HEAD section of the Online-boutique webpage in your browser
  • Find the code that instruments RUM

1. Browse to the Online Boutique

Your workshop instructor will provide you with the Online Boutique URL that has RUM installed so that you can complete the next steps.

2. Inspecting the HTML source

The changes needed for RUM are placed in the <head> section of the host’s web page. Right click to view the page source or to inspect the code. Below is an example of the <head> section with RUM:

Online Boutique Online Boutique

This code enables RUM Tracing, Session Replay, and Custom Events to better understand performance in the context of user workflows:

  • The first part indicates where to download the Splunk OpenTelemetry JavaScript file from: https://cdn.signalfx.com/o11y-gdi-rum/latest/splunk-otel-web.js (this can also be hosted locally if required).
  • The next section defines where to send the traces, in the beacon URL: {beaconUrl: "https://rum-ingest.eu0.signalfx.com/v1/rum"
  • The RUM Access Token: rumAuth: "<redacted>".
  • Identification tags app and environment to identify the application in the Splunk RUM UI e.g. app: "online-boutique-us-store", environment: "online-boutique-us"} (these values will be different in your workshop)

The above lines 21 and 23-30 are all that is required to enable RUM on your website!

Lines 22 and 31-34 are optional if you want Session Replay instrumented.

Lines 36-39, var tracer=Provider.getTracer('appModuleLoader');, add a Custom Event for every page change, allowing you to better track your website conversions and usage. This may or may not be instrumented for this workshop.

Exercise

Time to shop! Take a minute to open the workshop store URL in as many browsers and devices as you’d like, shop around, add items to cart, checkout, and feel free to close the shopping browsers when you’re finished. Keep in mind this is a lightweight demo shop site, so don’t be alarmed if the cart doesn’t match the item you picked!

Last Modified May 16, 2024

RUM Landing Page

  • Visit the RUM landing page and and check the overview of the performance of all your RUM enabled applications with the Application Summary Dashboard (Both Mobile and Web based)

1. Visit the RUM Landing Page

Log in to Splunk Observability. From the left side menu bar, select RUM RUM-ico RUM-ico. This will bring you to the RUM Landing Page.

The goal of this page is to give you, in a single view, a clear indication of the health, performance, and potential errors found in your application(s), and to allow you to dive deeper into the User Session information collected from your web pages or apps. You will have a pane for each of your active RUM applications. (The view below is the default expanded view)

RUM-App-sum RUM-App-sum

If you have multiple applications (which will be the case when every attendee is using their own EC2 instance for the RUM workshop), the pane view may be automatically reduced by collapsing the panes, as shown below:

RUM-App-sum-collapsed RUM-App-sum-collapsed

You can expand a condensed RUM Application Summary View to the full dashboard by clicking on the small Browser RUM-browser RUM-browser or Mobile RUM-mobile RUM-mobile icon (depending on the type of application: Mobile or Browser based) on the left in front of the application’s name, highlighted by the red arrow.

First find the right application to use for the workshop:

If you are participating in a standalone RUM workshop, the workshop leader will tell you the name of the application to use. In the case of a combined workshop, it will follow the naming convention we used for IM and APM and use the EC2 node name as a unique id, like jmcj-store, shown as the last app in the screenshot above.

2. Configure the RUM Application Summary Dashboard Header Section

RUM Application Summary Dashboard consists of 6 major sections. The first is the selection header, where you can set/filter a number of options:

  • A drop down for the Time Window you’re reviewing (You are looking at the past 15 minutes by default)
  • A drop down to select the Environment1 you want to look at. This allows you to focus on just the subset of applications belonging to that environment, or Select all to view all available.
  • A drop down list with the various Apps being monitored. You can use the one provided by the workshop host or select your own. This will focus you on just one application.
  • A drop down to select the Source, Browser or Mobile applications to view. For the Workshop leave All selected.
  • A hamburger menu located at the right of the header allowing you to configure some settings of your Splunk RUM application. (We will visit this in a later section).

RUM-SummaryHeader RUM-SummaryHeader

For the workshop lets do a deeper dive into the Application Summary screen in the next section: Check Health Browser Application


A common application deployment pattern is to have multiple, distinct application environments that don’t interact directly with each other but that are all being monitored by Splunk APM or RUM: for instance, quality assurance (QA) and production environments, or multiple distinct deployments in different datacenters, regions or cloud providers.


  1. A deployment environment is a distinct deployment of your system or application that allows you to set up configurations that don’t overlap with configurations in other deployments of the same application. Separate deployment environments are often used for different stages of the development process, such as development, staging, and production. ↩︎

Last Modified Apr 2, 2024

Check Browser Applications health at a glance

  • Get familiar with the UI and options available from this landing page
  • Identify Page Views/JavaScript Errors and Request/Errors in a single view
  • Check the Web Vitals metrics and any Detectors that have fired in relation to your Browser Application

Application Summary Dashboard

1. Header Bar

As seen in the previous section, the RUM Application Summary Dashboard consists of 5 major sections.
The first section is the selection header, where you can collapse the pane via the RUM-browser RUM-browser Browser icon or the > in front of the application name, which is jmcj-store in the example below. It also provides access to the Application Overview page if you click the link with your application name.

Further, you can also open the Application Overview or App Health Dashboard via the triple dot trippleburger trippleburger menu on the right.

RUM-SummaryHeader RUM-SummaryHeader

For now, let’s look at the high level information we get on the application summary dashboard.

The RUM Application Summary Dashboard is focused on providing you with at a glance highlights of the status of your application.

2. Page Views / JavaScript Errors & Network Requests / Errors

The first section shows the Page Views / JavaScript Errors and Network Requests / Errors charts, which show the quantity and trend of these issues in your application. These could be JavaScript errors or failed network calls to back end services.

RUM-chart RUM-chart

In the example above you can see that there are no failed network calls in the Network chart, but in the Page View chart you can see that a number of pages do experience some errors. These are often not visible for regular users, but can seriously impact the performance of your web site.

You can see the count of the Page Views / Network Requests / Errors by hovering over the charts.

RUM-chart-clicked RUM-chart-clicked

3. JavaScript Errors

With the second section of the RUM Application Summary Dashboard we are showing you an overview of the JavaScript errors occurring in your application, along with a count of each error.

RUM-javascript RUM-javascript

In the example above you can see there are three JavaScript errors, one that appears 29 times in the selected time slot, and the other two each appear 12 times.

If you click on one of the errors a pop-out opens that will show a summary (below) of the errors over time, along with a Stack Trace of the JavaScript error, giving you an indication of where the problems occurred. (We will see this in more detail in one of the following sections)

RUM-javascript-chart RUM-javascript-chart

4. Web Vitals

The next section of the RUM Application Summary Dashboard is showing you Google’s Core Web Vitals, three metrics that are not only used by Google in its search ranking system, but also quantify end user experience in terms of loading, interactivity, and visual stability.

WEB-vitals WEB-vitals

As you can see our site is well behaved and scores Good for all three Metrics. These metrics can be used to identify the effect changes to your application have, and help you improve the performance of your site.

If you click on any of the Metrics shown in the Web Vitals pane you will be taken to the corresponding Tag Spotlight Dashboard. e.g. clicking on the Largest Contentful Paint (LCP) chartlet, you will be taken to a dashboard similar to the screen shot below, that gives you timeline and table views for how this metric has performed. This should allow you to spot trends and identify where the problem may be more common, such as an operating system, geolocation, or browser version.

WEB-vitals-tag WEB-vitals-tag

5. Most Recent Detectors

The final section of the RUM Application Summary Dashboard is focused on providing you an overview of recent detectors that have triggered for your application. We have created a detector for this screen shot but your pane will be empty for now. We will add some detectors to your site and make sure they are triggered in one of the next sections.

detectors detectors

In the screen shot you can see we have a critical alert for the RUM Aggregated View Detector, and a Count of how often this alert has triggered in the selected time window. If you happen to have an alert listed, you can click on the name of the Alert (shown as a blue link) and you will be taken to the Alert Overview page showing the details of the alert. (Note: this will move you away from the current page; please use the Back option of your browser to return to the overview page.)

alert alert


Exercise

Please take a few minutes to experiment with the RUM Application Summary Dashboard and the underlying chart and dashboards before going on to the next section.

Last Modified Apr 2, 2024

Analyzing RUM Metrics

  • See RUM Metrics and Session information in the RUM UI
  • See correlated APM traces in the RUM & APM UI

1. RUM Overview Pages

From your RUM Application Summary Dashboard you can see detailed information by opening the Application Overview Page via the triple dot trippleburger trippleburger menu on the right and selecting Open Application Overview, or by clicking the link with your application name, which is jmcj-rum-app in the example below.

RUM-SummaryHeader RUM-SummaryHeader

This will take you to the RUM Application Overview Page screen as shown below.

RUM app overview with UX metrics RUM app overview with UX metrics

2. RUM Browser Overview

2.1. Header

The RUM UI consists of five major sections. The first is the selection header, where you can set/filter a number of options:

  • A drop down for the time window you’re reviewing (You are looking at the past 15 minutes in this case)
  • A drop down to select the Comparison window (You are comparing current performance on a rolling window - in this case compared to 1 hour ago)
  • A drop down with the available Environments to view
  • A drop down list with the Various Web apps
  • Optionally a drop down to select Browser or Mobile metrics (Might not be available in your workshop)

RUM-Header RUM-Header

2.2. UX Metrics

By default, RUM prioritizes the metrics that most directly reflect the experience of the end user.

Additional Tags

All of the dashboard charts allow us to compare trends over time, create detectors, and click through to further diagnose issues.

First, we see page load and route change information, which can help us understand if something unexpected is impacting user traffic trends.

Page load and route change charts Page load and route change charts

Next, Google has defined Core Web Vitals to quantify the user experience as measured by loading, interactivity, and visual stability. Splunk RUM builds in Google’s thresholds into the UI, so you can easily see if your metrics are in an acceptable range.

Core Web Vitals charts Core Web Vitals charts

  • Largest Contentful Paint (LCP), measures loading performance. How long does it take for the largest block of content in the viewport to load? To provide a good user experience, LCP should occur within 2.5 seconds of when the page first starts loading.
  • First Input Delay (FID), measures interactivity. How long does it take to be able to interact with the app? To provide a good user experience, pages should have a FID of 100 milliseconds or less.
  • Cumulative Layout Shift (CLS), measures visual stability. How much does the content move around after the initial load? To provide a good user experience, pages should maintain a CLS of 0.1 or less.

Improving Web Vitals is a key component to optimizing your end user experience, so being able to quickly understand them and create detectors if they exceed a threshold is critical.

Google has some great resources if you want to learn more, for example the business impact of Core Web Vitals.

2.3. Front-end health

Common causes of frontend issues are JavaScript errors and long tasks, which can especially affect interactivity. Creating detectors on these indicators helps us investigate interactivity issues before our users report them, allowing us to build workarounds or roll back related releases faster if needed. Learn more about optimizing long tasks for better end user experience!

JS error charts JS error charts Long task charts Long task charts

2.4. Back-end health

Common back-end issues affecting user experience are network issues and resource requests. In this example, we clearly see a spike in Time To First Byte that lines up with a resource request spike, so we already have a good starting place to investigate.

Back-end health charts Back-end health charts

  • Time To First Byte (TTFB), measures how long it takes for a client’s browser to receive the first byte of the response from the server. The longer it takes for the server to process the request and send a response, the slower your visitors’ browser is at displaying your page.
Last Modified Apr 2, 2024

Analyzing RUM Tags in the Tag Spotlight view

  • Look into the Metrics views for the various endpoints and use the Tags sent via the Tag spotlight for deeper analysis

1. Find a URL for the Cart endpoint

From the RUM Overview page, select the URL for the Cart endpoint to dive deeper into the information available for this endpoint.

RUM-Cart2 RUM-Cart2

Once you have selected and clicked on the blue URL, you will find yourself in the Tag Spotlight overview

RUM-Tag RUM-Tag

Here you will see all of the tags that have been sent to Splunk RUM as part of the RUM traces. The tags displayed will be relevant to the overview that you have selected. These are generic Tags created automatically when the Trace was sent, and additional Tags you have added to the trace as part of the configuration of your website.

Additional Tags

We are already sending two additional tags; you have seen them defined in the Beacon URL that was added to your website: app: "[nodename]-store", environment: "[nodename]-workshop" in the first section of this workshop! You can add additional tags in a similar way.

In our example we have selected the Page Load view as shown here:

RUM-Header RUM-Header

You can select any of the following Tag views, each focused on a specific metric.

RUM-views RUM-views


2. Explore the information in the Tag Spotlight view

The Tag Spotlight is designed to help you identify problems, either through the chart view, where you may quickly identify outliers, or via the Tags.

In the Page Load view, if you look at the Browser, Browser Version & OS Name Tag views, you can see the various browser types and versions, as well as the underlying OS.

This makes it easy to identify problems related to specific browser or OS versions, as they would be highlighted.

RUM-Tag2 RUM-Tag2

In the above example you can see that Firefox had the slowest response, that various browser versions (Chrome) have different response times, and that Android devices responded slowly.

A further example is the regional Tags, which you can use to identify problems related to ISPs, locations, etc. Here you should be able to find the location you have been using to access the Online Boutique. Drill down by selecting the town or country you are accessing the Online Boutique from, clicking on the name as shown below (City of Amsterdam):

RUM-click RUM-click

This will select only the sessions relevant to the city selected as shown below:

RUM-Adam RUM-Adam

By selecting various Tags you build up a filter; you can see the current selection below.

RUM-Adam RUM-Adam

To clear the filter and see every trace click on Clear All at the top right of the page.

If the overview page is empty or shows RUM-Adam RUM-Adam, no traces have been received in the selected timeslot. You need to increase the time window at the top left. You can start with the Last 12 hours for example.

You can then use your mouse to select the time slot you want, as shown in the view below, and activate that time filter by clicking on the little spyglass icon.

RUM-time RUM-time

Last Modified Nov 7, 2024

Analyzing RUM Sessions

  • Dive into RUM Session information in the RUM UI
  • Identify JavaScript errors in the Span of a user interaction

1. Drill down in the Sessions

After you have analyzed the information and drilled down via the Tag Spotlight to a subset of the traces, you can view the actual session as it was run by the end-user’s browser.

You do this by clicking on the link User Sessions as shown below:

RUM-Header RUM-Header

This will give you a list of sessions that matched both the time filter and the subset selected in the Tag Profile.

Select one by clicking on the session ID. It is a good idea to select one with a long duration (preferably over 700 ms).

RUM-Header RUM-Header

Once you have selected the session, you will be taken to the session details page. As you are selecting a specific action that is part of the session, you will likely arrive somewhere in the middle of the session, at the moment of the interaction.

You can see that the URL you selected earlier is what we are focusing on in the waterfall.

RUM-Session-Tag RUM-Session-Tag

Scroll down a little bit on the page, so you see the end of the operation as shown below.

RUM-Session-info RUM-Session-info

You can see that we have received a few JavaScript Console errors that may not have been detected by, or visible to, the end users. To examine these in more detail, click on the middle one that says: Cannot read properties of undefined (reading ‘Prcie’)

RUM-Session-info RUM-Session-info

This will cause the page to expand and show the Span detail for this interaction. It will contain a detailed error.stack that you can pass on to the developer to solve the issue. You may have noticed when buying in the Online Boutique that the final total was always $0.00.

RUM-Session-info RUM-Session-info

Last Modified Nov 7, 2024

Advanced Synthetics

30 minutes  

Introduction

This workshop walks you through using the Chrome DevTools Recorder to create a synthetic test on a Splunk demonstration environment or on your own public website.

The exported JSON from the Chrome DevTools Recorder will then be used to create a Splunk Synthetic Monitoring Real Browser Test.

Pre-requisites

  • Google Chrome Browser installed
  • Publicly browser-accessible URL
  • Access to Splunk Observability Cloud

Supporting resources

  1. Lantern: advanced Selectors for multi-step browser tests
  2. Chrome for Developers DevTools Tips
  3. web.dev Core Web Vitals reference
Last Modified Apr 2, 2024

Subsections of Advanced Synthetics

Record a test

Write down a short user journey you want to test. Remember: smaller bites are easier to chew! In other words, get started with just a few steps. This makes the test easier not only to create and maintain, but also to understand and act on. Test the features essential to your users, like a support contact form, login widget, or date picker.

Note

Record the test in the same type of viewport that you want to run it. For example, if you want to run a test on a mobile viewport, narrow your browser width to mobile and refresh before starting the recording. This way you are capturing the correct elements that could change depending on responsive style rules.

Open your starting URL in Chrome Incognito. This is important so you’re not carrying cookies into the recording, which we won’t set up in the Synthetic test by default. If your workshop instructor does not have a custom URL, feel free to use https://online-boutique-eu.splunko11y.com or https://online-boutique-us.splunko11y.com, which are used in the examples below.

Open the Chrome DevTools Recorder

Next, open the Developer Tools (in the new tab that was opened above) by pressing Ctrl + Shift + I on Windows or Cmd + Option + I on a Mac, then select Recorder from the top-level menu or the More tools flyout menu.

Open Recorder Open Recorder

Note

Site elements might change depending on viewport width. Before recording, set your browser window to the correct width for the test you want to create (Desktop, Tablet, or Mobile). Change the DevTools “dock side” to pop out as a separate window if it helps.

Create a new recording

With the Recorder panel open in the DevTools window, click on the Create a new recording button to start.

Recorder Recorder

For the Recording Name use your initials to prefix the name of the recording e.g. <your initials> - <website name>. Click on Start Recording to start recording your actions.

Recording Name Recording Name

Now that we are recording, complete a few actions on the site. An example for our demo site is:

  • Click on Vintage Camera Lens
  • Click on Add to Cart
  • Click on Place Order
  • Click on End recording in the Recorder panel.

End Recording End Recording

Export the recording

Click on the Export button:

Export button Export button

Select JSON as the format, then click on Save

Export JSON Export JSON

Save JSON Save JSON

Congratulations! You have successfully created a recording using the Chrome DevTools Recorder. Next, we will use this recording to create a Real Browser Test in Splunk Synthetic Monitoring.


{
    "title": "RWC - Online Boutique",
    "steps": [
        {
            "type": "setViewport",
            "width": 1430,
            "height": 1016,
            "deviceScaleFactor": 1,
            "isMobile": false,
            "hasTouch": false,
            "isLandscape": false
        },
        {
            "type": "navigate",
            "url": "https://online-boutique-eu.splunko11y.com/",
            "assertedEvents": [
                {
                    "type": "navigation",
                    "url": "https://online-boutique-eu.splunko11y.com/",
                    "title": "Online Boutique"
                }
            ]
        },
        {
            "type": "click",
            "target": "main",
            "selectors": [
                [
                    "div:nth-of-type(2) > div:nth-of-type(2) a > div"
                ],
                [
                    "xpath//html/body/main/div/div/div[2]/div[2]/div/a/div"
                ],
                [
                    "pierce/div:nth-of-type(2) > div:nth-of-type(2) a > div"
                ]
            ],
            "offsetY": 170,
            "offsetX": 180,
            "assertedEvents": [
                {
                    "type": "navigation",
                    "url": "https://online-boutique-eu.splunko11y.com/product/66VCHSJNUP",
                    "title": ""
                }
            ]
        },
        {
            "type": "click",
            "target": "main",
            "selectors": [
                [
                    "aria/ADD TO CART"
                ],
                [
                    "button"
                ],
                [
                    "xpath//html/body/main/div[1]/div/div[2]/div/form/div/button"
                ],
                [
                    "pierce/button"
                ],
                [
                    "text/Add to Cart"
                ]
            ],
            "offsetY": 35.0078125,
            "offsetX": 46.4140625,
            "assertedEvents": [
                {
                    "type": "navigation",
                    "url": "https://online-boutique-eu.splunko11y.com/cart",
                    "title": ""
                }
            ]
        },
        {
            "type": "click",
            "target": "main",
            "selectors": [
                [
                    "aria/PLACE ORDER"
                ],
                [
                    "div > div > div.py-3 button"
                ],
                [
                    "xpath//html/body/main/div/div/div[4]/div/form/div[4]/button"
                ],
                [
                    "pierce/div > div > div.py-3 button"
                ],
                [
                    "text/Place order"
                ]
            ],
            "offsetY": 29.8125,
            "offsetX": 66.8203125,
            "assertedEvents": [
                {
                    "type": "navigation",
                    "url": "https://online-boutique-eu.splunko11y.com/cart/checkout",
                    "title": ""
                }
            ]
        }
    ]
}
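Before importing the export in the next section, you can sanity-check it with a few lines of code. This sketch assumes the recording was saved as recording.json (a hypothetical filename); it simply lists each step type along with its URL or first selector.

import json

# "recording.json" is a hypothetical name for the exported recording file
with open("recording.json") as f:
    recording = json.load(f)

print(recording["title"])
for step in recording["steps"]:
    # Each step has a type (setViewport, navigate, click, ...) plus its details
    detail = step.get("url") or ", ".join(s[0] for s in step.get("selectors", []))
    print(f"- {step['type']}: {detail}")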
Last Modified Oct 10, 2024

Create a Browser Test

In Splunk Observability Cloud, navigate to Synthetics and click on Add new test.

From the dropdown select Browser test.

Add new test Add new test

You will then be presented with the Browser test content configuration page.

New Test New Test

Last Modified Nov 8, 2024

Import JSON

To begin configuring our test, we need to import the JSON that we exported from the Chrome DevTools Recorder. To enable the Import button, we must first give our test a name e.g. [<your team name>] <your initials> - Online Boutique.

Browser edit form Browser edit form

Once the Import button is enabled, click on it and either drop the JSON file that you exported from the Chrome DevTools Recorder or upload the file.

Import JSON Import JSON

Once the JSON file has been uploaded, click on Continue to edit steps

Import complete message Import complete message

Before we make any edits to the test, let’s first configure the settings. Click on < Return to test

Return to test button in the browser test editor Return to test button in the browser test editor

Last Modified Nov 8, 2024

Test settings

The simple settings allow you to configure the basics of the test:

  • Name: The name of the test (e.g. RWC - Online Boutique).
  • Details:
    • Locations: The locations where the test will run from.
    • Device: Emulate different devices and connection speeds. Also, the viewport will be adjusted to match the chosen device.
    • Frequency: How often the test will run.
    • Round-robin: If multiple locations are selected, the test will run from one location at a time, rather than all locations at once.
    • Active: Set the test to active or inactive.

For this workshop, we will configure the locations that we wish to monitor from. Click in the Locations field and you will be presented with a list of global locations (over 50 in total).

Global Locations Global Locations

Select the following locations:

  • AWS - N. Virginia
  • AWS - London
  • AWS - Melbourne

Once complete, scroll down and click Submit to save the test.

The test will now be scheduled to run every 5 minutes from the 3 locations that we have selected. This does take a few minutes for the schedule to be created.

So while we wait for the test to be scheduled, click on Edit test so we can go through the Advanced settings.

Last Modified Nov 8, 2024

Advanced Test Settings

Click on Advanced. These settings are optional and can be used to further configure the test.

Note

In the case of this workshop, we will not be using any of these settings; this is for informational purposes only.

Advanced Settings Advanced Settings

  • Security:
    • TLS/SSL validation: When activated, this feature is used to enforce the validation of expired, invalid hostname, or untrusted issuer on SSL/TLS certificates.
    • Authentication: Add credentials to authenticate with sites that require additional security protocols, for example from within a corporate network. By using concealed global variables in the Authentication field, you create an additional layer of security for your credentials and simplify the ability to share credentials across checks.
  • Custom Content:
    • Custom headers: Specify custom headers to send with each request. For example, you can add a header in your request to filter out requests from analytics on the back end by sending a specific header in the requests. You can also use custom headers to set cookies.
    • Cookies: Set cookies in the browser before the test starts. For example, to prevent a popup modal from randomly appearing and interfering with your test, you can set cookies. Any cookies that are set will apply to the domain of the starting URL of the check. Splunk Synthetics Monitoring uses the public suffix list to determine the domain.
    • Host overrides: Add host override rules to reroute requests from one host to another. For example, you can create a host override to test an existing production site against page resources loaded from a development site or a specific CDN edge node.

See the Advanced Settings for Browser Tests section of the Docs for more information.

Next, we will edit the test steps to provide more meaningful names for each step.

Last Modified Nov 8, 2024

Edit test steps

To edit the steps, click on the + Edit steps or synthetic transactions button.

For each step, we are going to give a meaningful, readable name. That could look like:

  • Step 1: replace the text Go to URL with Go to Homepage
  • Step 2: enter Select Typewriter
  • Step 3: enter Add to Cart
  • Step 4: enter Place Order

Editing browser test step names Editing browser test step names

Note

If you’d like, group the test steps into Transactions and edit the transaction names as seen above. This is especially useful for Single Page Apps (SPAs), where the resource waterfall is not split by URL. We can also create charts and alerts based on transactions.

Click < Return to test to return to the test configuration page and click Save to save the test.

You will be returned to the test dashboard where you will see test results start to appear.

Browser KPIs chart Browser KPIs chart

Congratulations! You have successfully created a Real Browser Test in Splunk Synthetic Monitoring. Next, we will look into a test result in more detail.

Last Modified Nov 8, 2024

View test results

1. Click into a spike or failure in your test run results.

Spike in the browser test performance KPIs chart Spike in the browser test performance KPIs chart

2. What can you learn about this test run? If it failed, use the error message, filmstrip, video replay, and waterfall to understand what happened.

Single test run result, with an error message and screenshots Single test run result, with an error message and screenshots

3. What do you see in the resources? Make sure to click through all of the page (or transaction) tabs.

resources in the browser test waterfall, with a long request highlighted resources in the browser test waterfall, with a long request highlighted

Workshop Question

Do you see anything interesting? Common issues to find and fix include: unexpected response codes, duplicate requests, forgotten third parties, large or slow files, and long gaps between requests.

Want to learn more about specific performance improvements? Google and Mozilla have great resources to help understand what goes into frontend performance as well as in-depth details of how to optimize it.

Last Modified Nov 8, 2024

Frontend Dashboards

15 minutes  

Go to Dashboards and find the End User Experiences dashboard group.

Click the three dots on the top right to open the dashboard menu, select Save As, and include your team name and initials in the dashboard name.

Save to the dashboard group that matches your email address. Now you have your own copy of this dashboard to customize!

Dashboard save as Dashboard save as

Last Modified Oct 10, 2024

Subsections of Frontend Dashboards

Copying and editing charts

We have some good charts in our dashboard, but let’s add a few more.

  1. Go to Dashboards by clicking the dashboard icon on the left side of the screen. Find the Browser app health dashboard and scroll to the Largest Contentful Paint (LCP) chart. Click the chart actions icon to open the flyout menu, and click “Copy” to add this chart to your clipboard. copy chart copy chart

  2. Now you can continue to add any other charts to your clipboard by clicking the “add to clipboard” icon. copy chart inline icon copy chart inline icon

  3. When you have collected the charts you want on your dashboard, click the “create” icon on the top right. You might need to reload the page if you were looking at charts in another browser tab. create icon create icon

  4. Click the “Paste charts” menu option. paste charts paste charts

Now you are able to resize and edit the charts as you’d like!

Bonus: edit chart data

  1. Click the chart actions icon and select Open to edit the chart. chart actions menu chart actions menu

  2. Remove the existing Test signal. edit the test signal edit the test signal

  3. Click Add filter and type test: *yourInitials*. This will use a wildcard match so that all of the tests you have created that contain your initials (or any string you decide) will be pulled into the chart. add filter button add filter button

  4. Click into the functions to see how adding and removing dimensions changes how the data is displayed. For example, if you want all of your test location data rolled up, remove that dimension from the function. test signal functions test signal functions

  5. Change the chart name and description as appropriate, and click “Save and close” to commit your changes or just “Close” to cancel your changes. chart close buttons chart close buttons

Last Modified Apr 2, 2024

Events in context with chart data

Seeing the visualization of our KPIs is great. What’s better? KPIs in context with events! Overlaying events on a dashboard can help us more quickly understand if an event like a deployment caused a change in metrics, for better or worse.

  1. Your instructor will push a condition change to the workshop application. Click the event marker on any of your dashboard charts to see more details. condition change event condition change event

  2. In the dimensions, we can see more details about this specific event. If we click the event record, we can mark it for deletion if needed. event record event record event details event details

  3. We can also see a history of events in the event feed by clicking the icon on the top right of the screen, and selecting Event feed. event feed event feed

  4. Again, we can see details about recent events in this feed. event details event details

  5. We can also add new events in the GUI or via API; see the sketch after this list for an API example. To add a new event in the GUI, click the New event button. GUI add event button GUI add event button

  6. Name your event with your team name, initials, and what kind of event it is (deployment, campaign start, etc). Choose a timestamp, or leave as-is to use the current time, and click “Create”. create event form create event form

  7. Now, we need to make sure our new event is overlaid in this dashboard. Wait a minute or so (refresh the page if needed) and then search for the event in the Event overlay field. overlay event overlay event

  8. If your event is within the dashboard time window, you should now see it overlaid in your charts. Click “Save” to make sure your event overlay is saved to your dashboard! save dashboard save dashboard
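As mentioned in step 5, events can also be created via the API. Below is a minimal sketch of posting a custom event to the Splunk Observability (SignalFx) ingest endpoint. The realm, token, event type, and dimensions shown are placeholders you would replace with your own values; check the ingest API docs for the full event schema.

import time
import requests

REALM = "us1"                    # assumption: replace with your organization's realm (e.g. eu0)
INGEST_TOKEN = "<access token>"  # assumption: an org access token authorized for ingest

event = {
    "category": "USER_DEFINED",
    "eventType": "[team] ab - deployment",   # team name, initials, and event type
    "dimensions": {"service": "online-boutique", "environment": "workshop"},
    "timestamp": int(time.time() * 1000),    # epoch milliseconds; omit to use receipt time
}

# The events endpoint accepts a JSON array of events
response = requests.post(
    f"https://ingest.{REALM}.signalfx.com/v2/event",
    headers={"X-SF-Token": INGEST_TOKEN, "Content-Type": "application/json"},
    json=[event],
)
response.raise_for_status()

Once ingested, the event can be found in the Event overlay field just like one created in the GUI.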

Keep in mind

Want to add context to that bug ticket, or show your manager how your change improved app performance? Seeing observability data in context with events not only helps with troubleshooting, but also helps us communicate with other teams.

Last Modified Apr 2, 2024

Detectors

20 minutes  

After we have a good understanding of our performance baseline, we can start to create Detectors so that we receive alerts when our KPIs are unexpected. If we create detectors before understanding our baseline, we run the risk of generating unnecessary alert noise.

For RUM and Synthetics, we will explore how to create detectors:

  1. on a single Synthetic test
  2. on a single KPI in RUM
  3. on a dashboard chart

For more Detector resources, please see our Observability docs, Lantern, and consider an Education course if you’d like to go more in depth with instructor guidance.

Last Modified Apr 2, 2024

Subsections of Detectors

Test Detectors

Why would we want a detector on a single Synthetic test? Some examples:

  • The endpoint, API transaction, or browser journey is highly critical
  • We have deployed code changes and want to know if the resulting KPI is or is not as we expect
  • We need to temporarily keep a close eye on a specific change we are testing and don’t want to create a lot of noise, and will disable the detector later
  • We want to know about unexpected issues before a real user encounters them

  1. On the test overview page, click Create Detector on the top right. create detector on a single synthetic test create detector on a single synthetic test

  2. Name the detector with your team name and your initials and LCP (the signal we will eventually use), so that the instructor can better keep track of everyone’s progress.

  3. Change the signal to First byte time. change the signal change the signal

  4. Change the alert details, and see how the chart to the right shows the amount of alert events under those conditions. This is where you can decide how much alert noise you want to generate, based on how much your team tolerates. Play with the settings to see how they affect estimated alert noise. alert noise preview alert noise preview

  5. Now, change the signal to Largest contentful paint. This is a key web vital related to the user experience as it relates to loading time. Change the threshold to 2500ms. It’s okay if there is no sample alert event in the detector preview.

  6. Scroll down in this window to see the notification options, including severity and recipients. notification options notification options

  7. Click the notifications link to customize the alert subject, message, tip, and runbook link. notification customization dialog notification customization dialog

  8. When you are happy with the amount of alert noise this detector would generate, click Activate. activate the detector activate the detector

Last Modified Nov 8, 2024

RUM Detectors

Let’s say we want to know about an issue in production without waiting for a ticket from our support center. This is where creating detectors in RUM will be helpful for us.

  1. Go to the RUM overview of our App. Scroll to the LCP chart, click the chart menu icon, and click Create Detector. RUM LCP chart with action menu flyout RUM LCP chart with action menu flyout

  2. Rename the detector to include your team name and initials, and change the scope of the detector to App so we are not limited to a single URL or page. Change the threshold and sensitivity until there is at least one alert event in the time frame. RUM alert details RUM alert details

  3. Change the alert severity and add a recipient if you’d like, and click Activate to save the Detector.

Exercise

Now, your workshop instructor will change something on the website. How do you find out about the issue, and how do you investigate it?

Tip

Wait a few minutes, and take a look at the online store homepage in your browser. How is the experience in an incognito browser window? How is it different when you refresh the page?

Last Modified Nov 8, 2024

Chart Detectors

With our custom dashboard charts, we can create detectors focused directly on the data and conditions we care about. In building our charts, we also built signals that can trigger alerts.

Static detectors

For many KPIs, we have a static value in mind as a threshold.

  1. In your custom End User Experience dashboard, go to the “LCP - all tests” chart.

  2. Click the bell icon on the top right of the chart, and select “New detector from chart” new detector from chart new detector from chart

  3. Change the detector name to include your team name and initials, and adjust the alert details. Change the threshold to 2500 or 4000 and see how the alert noise preview changes. alert details alert details

  4. Change the severity, and add yourself as a recipient before you save this detector. Click Activate. activate the detector activate the detector

Advanced: Dynamic detectors

Sometimes we have metrics that vary naturally, so we want to create a more dynamic detector that isn’t limited by the static threshold we decide in the moment.

  1. To create dynamic detectors on your chart, click the link to the “old” detector wizard. click the link to open this detector in the old editor click the link to open this detector in the old editor

  2. Change the detector name to include your team name and initials, and click Create alert rule. name and create alert rule name and create alert rule

  3. Confirm the signal looks correct and proceed to Alert condition. alert signal details alert signal details

  4. Select the “Sudden Change” condition and proceed to Alert settings list of dynamic conditions list of dynamic conditions

  5. Play with the settings and see how the estimated alert noise is previewed in the chart above. Tune the settings, and change the advanced settings if you’d like, before proceeding to the Alert message. alert noise marked on chart based on settings alert noise marked on chart based on settings

  6. Customize the severity, runbook URL, any tips, and message payload before proceeding to add recipients. alert message options alert message options

  7. For the sake of this workshop, only add your own email address as recipient. This is where you would add other options like webhooks, ticketing systems, and Slack channels if it’s in your real environment. recipient options recipient options

  8. Finally, confirm the detector name before clicking Activate Alert Rule activate alert rule button activate alert rule button

Last Modified Apr 2, 2024

Summary

2 minutes  

In this workshop, we learned the following:

  • How to create simple synthetic tests so that we can quickly begin to understand the availability and performance of our application
  • How to understand what RUM shows us about the end user experience, including specific user sessions
  • How to write advanced synthetic browser tests to proactively test our most important user actions
  • How to visualize our frontend performance data in context with events on dashboards
  • How to set up detectors so we don’t have to wait to hear about issues from our end users
  • How all of the above, plus Splunk and Google’s resources, helps us optimize end user experience

There is a lot more we can do with front end performance monitoring. If you have extra time, be sure to play with the charts and detectors, and do some more synthetic testing. Remember our resources such as Lantern and Splunk Docs, and experiment with apps for Mobile RUM.

This is just the beginning! If you need more time to trial Splunk Observability, or have any other questions, reach out to a Splunk Expert.

Last Modified Oct 10, 2024

Self-Service Observability

1 minute   Author Bill Grant

Splunk Observability Cloud includes powerful features that help central platform teams that are responsible for creating consistency, standards and best practices in an organization.

This workshop will go through some of the ways to apply standardization in your Observability practice.

We will cover:

  • Collecting data with standards, and applying metadata at various points in the process
  • Managing costs, by reviewing metrics and applying metrics pipeline management to it
  • Configuring Observability-as-code, using Terraform and APIs

We will use a variety of scripts to demonstrate these examples. Be sure to pick a unique name, so your data won’t cross over with anyone else taking the workshop at the same time.

Let’s get started!

Tip

The easiest way to navigate through this workshop is by using:

  • the left/right arrows (< | >) on the top right of this page
  • the left (◀️) and right (▶️) cursor keys on your keyboard
Last Modified Apr 25, 2024

Subsections of Self-Service Observability

Background

3 minutes  

Background

Let’s review a few background concepts on OpenTelemetry before jumping into the details.

First we have the OpenTelemetry Collector, which lives on hosts or Kubernetes nodes. These collectors can collect local information (like CPU, disk, memory, etc.). They can also collect metrics from other sources like Prometheus (push or pull), databases, and other middleware.

OTel Diagram OTel Diagram Source: OTel Documentation

The way the OTel Collector collects and sends data is using pipelines. Pipelines are made up of:

  • Receivers: Collect telemetry from one or more sources; they are pull- or push-based.
  • Processors: Take data from receivers and modify or transform them. Unlike receivers and exporters, processors process data in a specific order.
  • Exporters: Send data to one or more observability backends or other destinations.

OTel Diagram OTel Diagram Source: OTel Documentation

The final piece is instrumented applications; they will send traces (spans), metrics, and logs.

By default the instrumentation is designed to send data to the local collector (on the host or Kubernetes node). This is desirable because we can then add metadata to it, like which pod or which node/host the application is running on.

Last Modified Apr 10, 2024

Collect Data with Standards

10 minutes  

Introduction

For this workshop, we’ll be doing things that only a central tools or administration team would do.

The workshop uses scripts to help with steps that aren’t part of the focus of this workshop, like how to change a Kubernetes app or start an application from a host.

Tip

It can be useful to review what the scripts are doing.

So along the way it is advised to run cat <filename> from time to time to see what that step just did.

The workshop won’t call this out, so do it when you are curious.

We’ll also be running some scripts to simulate data that we want to deal with.

A simplified version of the architecture (leaving aside the specifics of kubernetes) will look something like the following:

Architecture Architecture

  • The App sends metrics and traces to the OTel Collector
  • The OTel Collector also collects metrics of its own
  • The OTel Collector adds metadata to its own metrics and to data that passes through it
  • The OTel Gateway offers another opportunity to add metadata

Let’s start by deploying the gateway.

Last Modified Jul 12, 2024

Subsections of 2 Collect Data with Standards

Deploy Gateway

5 minutes  

Gateway

First we will deploy the OTel Gateway. The workshop instructor will deploy the gateway, but we will walk through the steps here if you wish to try this yourself on a second instance.

The steps:

  • Click the Data Management icon in the toolbar
  • Click the + Add integration button
  • Click Deploy the Splunk OpenTelemetry Collector button
  • Click Next
  • Select Linux
  • Change mode to Data forwarding (gateway)
  • Set the environment to prod
  • Choose the access token for this workshop
  • Click Next
  • Copy the installer script and run it in the provided linux environment.
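The copied command will look roughly like the following; the realm, mode, and token values here are placeholders, so always run the exact command generated in the UI:

curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \
  sudo sh /tmp/splunk-otel-collector.sh --realm <REALM> --mode gateway -- <ACCESS_TOKEN>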

Once our gateway is started we will notice… nothing. By default, the gateway doesn’t send any data of its own; it can be configured to do so, but it doesn’t out of the box.

We can review the config file with:

sudo cat /etc/otel/collector/splunk-otel-collector.conf

And see that the config being used is gateway_config.yaml.
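The relevant setting in that file is SPLUNK_CONFIG, which should point to the gateway configuration, for example:

SPLUNK_CONFIG=/etc/otel/collector/gateway_config.yaml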

Tip

Diagrams created with OTelBin.io. Click on them to see them in detail.

Each diagram tells us how the gateway handles a signal type:

  • Metrics (metrics Config diagram): The gateway will receive metrics over the otlp or signalfx protocols, and then send these metrics to Splunk Observability Cloud with the signalfx protocol.
    There is also a pipeline for prometheus metrics to be sent to Splunk. That pipeline is labeled internal and is meant for the collector’s own metrics. (In other words, if we want to receive prometheus data directly we should add it to the main pipeline.)
  • Traces (traces Config diagram): The gateway will receive traces over jaeger, otlp, sapm, or zipkin, and then send these traces to Splunk Observability Cloud with the sapm protocol.
  • Logs (logs Config diagram): The gateway will receive logs over otlp and then send these logs to 2 places: Splunk Enterprise (Cloud) for logs, and Splunk Observability Cloud for profiling data.
    There is also a pipeline labeled signalfx that sends signalfx data to Splunk Observability Cloud; these are events that can be used to annotate charts, and they also carry the process list.

We’re not going to see any host metrics, and we aren’t sending any other data through the gateway yet. But we do have the gateway’s internal metrics coming in.

You can find it by creating a new chart and adding a metric:

  • Click the + in the top-right
  • Click Chart
  • For the signal of Plot A, type otelcol_process_uptime
  • Add a filter with the + to the right, and type: host.id:<name of instance>

You should get a chart like the following: Chart of gateway

You can look at the Metric Finder to find other internal metrics to explore.
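A few other internal metrics you may come across (names as emitted by recent collector versions; your version may differ):

otelcol_process_memory_rss            # collector memory usage
otelcol_receiver_accepted_spans       # spans successfully received
otelcol_exporter_sent_metric_points   # metric data points sent onward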

Add Metadata

Before we deploy a collector (agent), let’s add some metadata onto metrics and traces with the gateway. That’s how we will know data is passing through it.

The attributes processor lets us add this metadata.

sudo vi /etc/otel/collector/agent_config.yaml

Here’s what we want to add to the processors section:

processors:
  attributes/gateway_config:
    actions:
      - key: gateway
        value: oac
        action: insert

And then to the pipelines (adding attributes/gateway_config to each):

service:
  pipelines:
    traces:
      receivers: [jaeger, otlp, smartagent/signalfx-forwarder, zipkin]
      processors:
      - memory_limiter
      - batch
      - resourcedetection
      - attributes/gateway_config
      #- resource/add_environment
      exporters: [sapm, signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp, signalfx]
    metrics:
      receivers: [hostmetrics, otlp, signalfx, smartagent/signalfx-forwarder]
      processors: [memory_limiter, batch, resourcedetection, attributes/gateway_config]
      exporters: [signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp]

And finally we need to restart the gateway:

sudo systemctl restart splunk-otel-collector.service

We can make sure it is still running fine by checking the status:

sudo systemctl status splunk-otel-collector.service 
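If the service fails to start (for example, after a YAML indentation mistake), the collector logs can help. On a systemd host they are typically available through journalctl:

sudo journalctl -u splunk-otel-collector.service -f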

Next

Next, let’s deploy a collector and then configure it to this gateway.

Last Modified Apr 10, 2024

Deploy Collector (Agent)

10 minutes  

Collector (Agent)

Now we will deploy a collector. At first this will be configured to go directly to the back-end, but we will change the configuration and restart the collector to use the gateway.

The steps:

  • Click the Data Management icon in the toolbar
  • Click the + Add integration button
  • Click Deploy the Splunk OpenTelemetry Collector button
  • Click Next
  • Select Linux
  • Leave the mode as Host monitoring (agent)
  • Set the environment to prod
  • Leave the rest as defaults
  • Choose the access token for this workshop
  • Click Next
  • Copy the installer script and run it in the provided linux environment.

This collector is sending host metrics, so you can find it in common navigators:

  • Click the Infrastructure icon in the toolbar
  • Click the EC2 panel under Amazon Web Services
  • The AWSUniqueId is the easiest thing to find; add a filter and look for it with a wildcard (e.g. i-0ba6575181cb05226*)

Chart of agent

We can also simply look at the cpu.utilization metric. Create a new chart to display it, filtered on the AWSUniqueId:

Chart 2 of agent

The reason we wanted to do this is so we can easily see the new dimension that gets added once we send the collector’s data through the gateway. You can click on the Data table to see the dimensions currently being sent:

Data Table

Next

Next we’ll reconfigure the collector to send to the gateway.

Last Modified Apr 10, 2024

Reconfigure Collector

10 minutes  

Reconfigure Collector

To reconfigure the collector we need to make these changes:

  • In agent_config.yaml
    • We need to adjust the signalfx exporter to use the gateway
    • The otlp exporter is already there, so we leave it alone
    • We need to change the pipelines to use otlp
  • In splunk-otel-collector.conf
    • We need to set the SPLUNK_GATEWAY_URL to the url provided by the instructor

See this docs page for more details.

The exporters will be the following:

exporters:
  # Metrics + Events
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    #api_url: "${SPLUNK_API_URL}"
    #ingest_url: "${SPLUNK_INGEST_URL}"
    # Use instead when sending to gateway
    api_url: "http://${SPLUNK_GATEWAY_URL}:6060"
    ingest_url: "http://${SPLUNK_GATEWAY_URL}:9943"
    sync_host_metadata: true
    correlation:
  # Send to gateway
  otlp:
    endpoint: "${SPLUNK_GATEWAY_URL}:4317"
    tls:
      insecure: true
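For context, these ports correspond to listeners in the default gateway configuration: 4317 is the OTLP gRPC receiver, 9943 is the signalfx receiver used for ingest, and 6060 is the http_forwarder extension that proxies API calls on to Splunk Observability Cloud. If your gateway listens on different ports, adjust these values to match.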

You can leave the other exporters as they are, but they won’t be used, as you will see in the pipelines.

The pipeline changes (note which items have been commented out and which have been uncommented):

service:
  pipelines:
    traces:
      receivers: [jaeger, otlp, smartagent/signalfx-forwarder, zipkin]
      processors:
      - memory_limiter
      - batch
      - resourcedetection
      #- resource/add_environment
      #exporters: [sapm, signalfx]
      # Use instead when sending to gateway
      exporters: [otlp, signalfx]
    metrics:
      receivers: [hostmetrics, otlp, signalfx, smartagent/signalfx-forwarder]
      processors: [memory_limiter, batch, resourcedetection]
      #exporters: [signalfx]
      # Use instead when sending to gateway
      exporters: [otlp]
    metrics/internal:
      receivers: [prometheus/internal]
      processors: [memory_limiter, batch, resourcedetection]
      # When sending to gateway, at least one metrics pipeline needs
      # to use signalfx exporter so host metadata gets emitted
      exporters: [signalfx]
    logs/signalfx:
      receivers: [signalfx, smartagent/processlist]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
    logs:
      receivers: [fluentforward, otlp]
      processors:
      - memory_limiter
      - batch
      - resourcedetection
      #- resource/add_environment
      #exporters: [splunk_hec, splunk_hec/profiling]
      # Use instead when sending to gateway
      exporters: [otlp]

And finally we can add the SPLUNK_GATEWAY_URL in splunk-otel-collector.conf, for example:

SPLUNK_GATEWAY_URL=gateway.splunk011y.com

Then we can restart the collector:

sudo systemctl restart splunk-otel-collector.service

And check the status:

sudo systemctl status splunk-otel-collector.service

And finally, see the new dimension on the metrics: New Dimension