Scenarios

  • Debug Problems in Microservices

    This scenario helps software developers make debugging problems in microservices easier, faster, and more cost-effective, and helps platform engineering teams roll out standardized tooling.

  • Optimize End User Experiences

    Use Splunk Real User Monitoring (RUM) and Synthetics to get insight into end user experience, and proactively test scenarios to improve that experience.

Subsections of Scenarios

Debug Problems in Microservices

  • Tagging Workshop

    This workshop shows how tags can be used to reduce the time required for SREs to isolate issues across services, so they know which team to engage to troubleshoot the issue further, and can provide context to help engineering get a head start on debugging.

  • Profiling Workshop

    This workshop shows how Database Query Performance and AlwaysOn Profiling can be used to reduce the time required for engineers to debug problems in microservices.

Subsections of Debug Problems in Microservices

Tagging Workshop

2 minutes   Author Derek Mitchell

Splunk Observability Cloud includes powerful features that dramatically reduce the time required for SREs to isolate issues across services, so they know which team to engage to troubleshoot the issue further, and can provide context to help engineering get a head start on debugging.

Unlocking these features requires tags to be included with your application traces. But how do you know which tags are the most valuable and how do you capture them?

In this workshop, we’ll explore:

  • What are tags and why are they such a critical part of making an application observable.
  • How to use OpenTelemetry to capture tags of interest from your application.
  • How to index tags in Splunk Observability Cloud and the differences between Troubleshooting MetricSets and Monitoring MetricSets.
  • How to utilize tags in Splunk Observability Cloud to find “unknown unknowns” using the Tag Spotlight and Dynamic Service Map features.
  • How to utilize tags for alerting and dashboards.

The workshop uses a simple microservices-based application to illustrate these concepts. Let’s get started!

Tip

The easiest way to navigate through this workshop is by using:

  • the left/right arrows (< | >) on the top right of this page
  • the left (◀️) and right (▶️) cursor keys on your keyboard

Subsections of Tagging Workshop

Build the Sample Application

10 minutes  

Introduction

For this workshop, we’ll be using a microservices-based application. This application is for an online retailer and normally includes more than a dozen services. However, to keep the workshop simple, we’ll be focusing on two services used by the retailer as part of their payment processing workflow: the credit check service and the credit processor service.

Pre-requisites

You will start with an EC2 instance and perform some initial steps in order to get to the following state:

  • Deploy the Splunk distribution of the OpenTelemetry Collector
  • Build and deploy creditcheckservice and creditprocessorservice
  • Deploy a load generator to send traffic to the services

Initial Steps

The initial setup can be completed by executing the following steps on the command line of your EC2 instance:

cd workshop/tagging
./0-deploy-collector-with-services.sh

Java

There are implementations in multiple languages available for creditcheckservice. Run

./0-deploy-collector-with-services.sh java

to pick Java over Python.

View your application in Splunk Observability Cloud

Now that the setup is complete, let’s confirm that it’s sending data to Splunk Observability Cloud. Note that when the application is deployed for the first time, it may take a few minutes for the data to appear.

Navigate to APM, then use the Environment dropdown to select your environment (i.e. tagging-workshop-yourname).

If everything was deployed correctly, you should see creditprocessorservice and creditcheckservice displayed in the list of services:

APM Overview

Click on Explore on the right-hand side to view the service map. We can see that the creditcheckservice makes calls to the creditprocessorservice, with an average response time of at least 3 seconds:

Service Map

Next, click on Traces on the right-hand side to see the traces captured for this application. You’ll see that some traces run relatively fast (i.e. just a few milliseconds), whereas others take a few seconds.

Traces

If you toggle Errors only to on, you’ll also notice that some traces have errors:

Traces

Toggle Errors only back to off and sort the traces by duration, then click on one of the longer running traces. In this example, the trace took five seconds, and we can see that most of the time was spent calling the /runCreditCheck operation, which is part of the creditprocessorservice.

Long Running Trace

Currently, we don’t have enough details in our traces to understand why some requests finish in a few milliseconds, and others take several seconds. To provide the best possible customer experience, this will be critical for us to understand.

We also don’t have enough information to understand why some requests result in errors, and others don’t. For example, if we look at one of the error traces, we can see that the error occurs when the creditprocessorservice attempts to call another service named otherservice. But why do some requests result in a call to otherservice, and others don’t?

Trace with Errors

We’ll explore these questions and more in the workshop.

What are Tags?

3 minutes  

To understand why some requests have errors or slow performance, we’ll need to add context to our traces. We’ll do this by adding tags. But first, let’s take a moment to discuss what tags are, and why they’re so important for observability.

What are tags?

Tags are key-value pairs that provide additional metadata about spans in a trace, allowing you to enrich the context of the spans you send to Splunk APM.

For example, a payment processing application would find it helpful to track:

  • The payment type used (e.g. credit card, gift card)
  • The ID of the customer that requested the payment

This way, if errors or performance issues occur while processing the payment, we have the context we need for troubleshooting.

While some tags can be added with the OpenTelemetry collector, the ones we’ll be working with in this workshop are more granular, and are added by application developers using the OpenTelemetry API.
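For example, here is how a developer might attach the two payment tags above as span attributes using the OpenTelemetry Java API. This is a minimal, hypothetical sketch (the class and variable names are illustrative, not from this workshop’s code); we’ll walk through this API in detail later in the workshop:

import io.opentelemetry.api.trace.Span;

public class PaymentHandler {

    // Hypothetical handler; names are illustrative only
    public void processPayment(String paymentType, String customerId) {
        // Span.current() returns the span created by auto-instrumentation,
        // or a no-op span if no instrumentation is active
        Span currentSpan = Span.current();
        currentSpan.setAttribute("payment.type", paymentType); // e.g. "credit card"
        currentSpan.setAttribute("customer.id", customerId);

        // ... payment processing logic ...
    }
}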

Attributes vs. Tags

A note about terminology before we proceed. While this workshop is about tags, and this is the terminology we use in Splunk Observability Cloud, OpenTelemetry uses the term attributes instead. So when you see tags mentioned throughout this workshop, you can treat them as synonymous with attributes.

Why are tags so important?

Tags are essential for an application to be truly observable. As we saw with our credit check service, some users are having a great experience: fast with no errors. But other users get a slow experience or encounter errors.

Tags add the context to the traces to help us understand why some users get a great experience and others don’t. And powerful features in Splunk Observability Cloud utilize tags to help you jump quickly to root cause.

Sneak Peek: Tag Spotlight

Tag Spotlight uses tags to discover trends that contribute to high latency or error rates:

Tag Spotlight Preview

The screenshot above provides an example of Tag Spotlight from another application.

Splunk has analyzed all of the tags included as part of traces that involve the payment service.

It tells us very quickly whether some tag values have more errors than others.

If we look at the version tag, we can see that version 350.10 of the service has a 100% error rate, whereas version 350.9 of the service has no errors at all:

Tag Spotlight Preview

We’ll be using Tag Spotlight with the credit check service later on in the workshop, once we’ve captured some tags of our own.

Capture Tags with OpenTelemetry

15 minutes  

Please proceed to one of the subsections for Java or Python. Ask your instructor for the one used during the workshop!

Subsections of Capture Tags with OpenTelemetry

1. Capture Tags - Java

15 minutes  

Let’s add some tags to our traces, so we can find out why some customers receive a poor experience from our application.

Identify Useful Tags

We’ll start by reviewing the code for the creditCheck function of creditcheckservice (which can be found in the file /home/splunk/workshop/tagging/creditcheckservice-java/src/main/java/com/example/creditcheckservice/CreditCheckController.java):

@GetMapping("/check")
public ResponseEntity<String> creditCheck(@RequestParam("customernum") String customerNum) {
    // Get Credit Score
    int creditScore;
    try {
        String creditScoreUrl = "http://creditprocessorservice:8899/getScore?customernum=" + customerNum;
        creditScore = Integer.parseInt(restTemplate.getForObject(creditScoreUrl, String.class));
    } catch (HttpClientErrorException e) {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Error getting credit score");
    }

    String creditScoreCategory = getCreditCategoryFromScore(creditScore);

    // Run Credit Check
    String creditCheckUrl = "http://creditprocessorservice:8899/runCreditCheck?customernum=" + customerNum + "&score=" + creditScore;
    String checkResult;
    try {
        checkResult = restTemplate.getForObject(creditCheckUrl, String.class);
    } catch (HttpClientErrorException e) {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Error running credit check");
    }

    return ResponseEntity.ok(checkResult);
}
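Note that creditCheck makes two sequential HTTP calls to creditprocessorservice (one to /getScore, one to /runCreditCheck). With auto-instrumentation in place, each call shows up as its own client span in the trace, which is why we could see the time spent in /runCreditCheck in the long-running trace earlier.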

We can see that this function accepts a customer number as an input. This would be helpful to capture as part of a trace. What else would be helpful?

Well, the credit score returned for this customer by the creditprocessorservice may be interesting (we want to ensure we don’t capture any PII data though). It would also be helpful to capture the credit score category, and the credit check result.
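The category comes from the getCreditCategoryFromScore helper, which isn’t shown in the excerpt above. A minimal sketch of what such a helper might look like, assuming standard FICO-style score bands and the category names used later in this workshop (the actual implementation may differ):

// Hypothetical sketch; the thresholds in CreditCheckController.java may differ
private String getCreditCategoryFromScore(int creditScore) {
    if (creditScore < 300 || creditScore > 850) return "impossible"; // outside the valid range
    if (creditScore >= 800) return "exceptional";
    if (creditScore >= 740) return "very good";
    if (creditScore >= 670) return "good";
    if (creditScore >= 580) return "fair";
    return "poor";
}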

Great, we’ve identified four tags to capture from this service that could help with our investigation. But how do we capture these?

Capture Tags

We start by adding OpenTelemetry imports to the top of the CreditCheckController.java file:

...
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.instrumentation.annotations.WithSpan;
import io.opentelemetry.instrumentation.annotations.SpanAttribute;

Next, we use the @WithSpan annotation to produce a span for creditCheck:

@GetMapping("/check")
@WithSpan // ADDED
public ResponseEntity<String> creditCheck(@RequestParam("customernum") String customerNum) {
    ...

We can now get a reference to the current span and add an attribute (aka tag) to it:

...
try {
    String creditScoreUrl = "http://creditprocessorservice:8899/getScore?customernum=" + customerNum;
    creditScore = Integer.parseInt(restTemplate.getForObject(creditScoreUrl, String.class));
} catch (HttpClientErrorException e) {
    return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Error getting credit score");
}
Span currentSpan = Span.current(); // ADDED
currentSpan.setAttribute("credit.score", creditScore); // ADDED
...
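Note that setAttribute is overloaded for String, long, double, and boolean values, so the int creditScore is stored as a numeric attribute (widened to long) rather than a string.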

That was pretty easy, right? Let’s capture some more, with the final result looking like this:

@GetMapping("/check")
@WithSpan(kind=SpanKind.SERVER)
public ResponseEntity<String> creditCheck(@RequestParam("customernum")
                                          @SpanAttribute("customer.num")
                                          String customerNum) {
    // Get Credit Score
    int creditScore;
    try {
        String creditScoreUrl = "http://creditprocessorservice:8899/getScore?customernum=" + customerNum;
        creditScore = Integer.parseInt(restTemplate.getForObject(creditScoreUrl, String.class));
    } catch (HttpClientErrorException e) {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Error getting credit score");
    }
    Span currentSpan = Span.current();
    currentSpan.setAttribute("credit.score", creditScore);

    String creditScoreCategory = getCreditCategoryFromScore(creditScore);
    currentSpan.setAttribute("credit.score.category", creditScoreCategory);

    // Run Credit Check
    String creditCheckUrl = "http://creditprocessorservice:8899/runCreditCheck?customernum=" + customerNum + "&score=" + creditScore;
    String checkResult;
    try {
        checkResult = restTemplate.getForObject(creditCheckUrl, String.class);
    } catch (HttpClientErrorException e) {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Error running credit check");
    }
    currentSpan.setAttribute("credit.check.result", checkResult);

    return ResponseEntity.ok(checkResult);
}
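Two details worth noting in the final version: the @SpanAttribute("customer.num") annotation records the annotated method parameter as a tag automatically, with no explicit setAttribute call, and @WithSpan(kind=SpanKind.SERVER) relies on the io.opentelemetry.api.trace.SpanKind import shown earlier.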

Redeploy Service

Once these changes are made, let’s run the following script to rebuild the Docker image used for creditcheckservice and redeploy it to our Kubernetes cluster:

./5-redeploy-creditcheckservice.sh java

Confirm Tag is Captured Successfully

After a few minutes, return to Splunk Observability Cloud and load one of the latest traces to confirm that the tags were captured successfully (hint: sort by the timestamp to find the latest traces):

Trace with Attributes

Well done, you’ve leveled up your OpenTelemetry game and have added context to traces using tags.

Next, we’re ready to see how you can use these tags with Splunk Observability Cloud!

2. Capture Tags - Python

15 minutes  

Let’s add some tags to our traces, so we can find out why some customers receive a poor experience from our application.

Identify Useful Tags

We’ll start by reviewing the code for the credit_check function of creditcheckservice (which can be found in the /home/splunk/workshop/tagging/creditcheckservice-py/main.py file):

@app.route('/check')
def credit_check():
    customerNum = request.args.get('customernum')

    # Get Credit Score
    creditScoreReq = requests.get("http://creditprocessorservice:8899/getScore?customernum=" + customerNum)
    creditScoreReq.raise_for_status()
    creditScore = int(creditScoreReq.text)

    creditScoreCategory = getCreditCategoryFromScore(creditScore)

    # Run Credit Check
    creditCheckReq = requests.get("http://creditprocessorservice:8899/runCreditCheck?customernum=" + str(customerNum) + "&score=" + str(creditScore))
    creditCheckReq.raise_for_status()
    checkResult = str(creditCheckReq.text)

    return checkResult

We can see that this function accepts a customer number as an input. This would be helpful to capture as part of a trace. What else would be helpful?

Well, the credit score returned for this customer by the creditprocessorservice may be interesting (we want to ensure we don’t capture any PII data though). It would also be helpful to capture the credit score category, and the credit check result.

Great, we’ve identified four tags to capture from this service that could help with our investigation. But how do we capture these?

Capture Tags

We start by importing the trace module, adding an import statement to the top of the creditcheckservice-py/main.py file:

import requests
from flask import Flask, request
from waitress import serve
from opentelemetry import trace  # <--- ADDED BY WORKSHOP
...

Next, we need to get a reference to the current span so we can add an attribute (aka tag) to it:

def credit_check():
    current_span = trace.get_current_span()  # <--- ADDED BY WORKSHOP
    customerNum = request.args.get('customernum')
    current_span.set_attribute("customer.num", customerNum)  # <--- ADDED BY WORKSHOP
...

That was pretty easy, right? Let’s capture some more, with the final result looking like this:

def credit_check():
    current_span = trace.get_current_span()  # <--- ADDED BY WORKSHOP
    customerNum = request.args.get('customernum')
    current_span.set_attribute("customer.num", customerNum)  # <--- ADDED BY WORKSHOP

    # Get Credit Score
    creditScoreReq = requests.get("http://creditprocessorservice:8899/getScore?customernum=" + customerNum)
    creditScoreReq.raise_for_status()
    creditScore = int(creditScoreReq.text)
    current_span.set_attribute("credit.score", creditScore)  # <--- ADDED BY WORKSHOP

    creditScoreCategory = getCreditCategoryFromScore(creditScore)
    current_span.set_attribute("credit.score.category", creditScoreCategory)  # <--- ADDED BY WORKSHOP

    # Run Credit Check
    creditCheckReq = requests.get("http://creditprocessorservice:8899/runCreditCheck?customernum=" + str(customerNum) + "&score=" + str(creditScore))
    creditCheckReq.raise_for_status()
    checkResult = str(creditCheckReq.text)
    current_span.set_attribute("credit.check.result", checkResult)  # <--- ADDED BY WORKSHOP

    return checkResult
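As in the Java version, set_attribute accepts strings, booleans, integers, and floats, so creditScore is stored as a numeric attribute while checkResult is stored as a string.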

Redeploy Service

Once these changes are made, let’s run the following script to rebuild the Docker image used for creditcheckservice and redeploy it to our Kubernetes cluster:

./5-redeploy-creditcheckservice.sh

Confirm Tag is Captured Successfully

After a few minutes, return to Splunk Observability Cloud and load one of the latest traces to confirm that the tags were captured successfully (hint: sort by the timestamp to find the latest traces):

Trace with Attributes

Well done, you’ve leveled up your OpenTelemetry game and have added context to traces using tags.

Next, we’re ready to see how you can use these tags with Splunk Observability Cloud!

Explore Trace Data

5 minutes  

Now that we’ve captured several tags from our application, let’s explore some of the trace data we’ve captured that include this additional context, and see if we can identify what’s causing a poor user experience in some cases.

Use Trace Analyzer

Navigate to APM, then select Traces. This takes us to the Trace Analyzer, where we can add filters to search for traces of interest. For example, we can filter on traces where the credit score starts with 7:

Credit Check Starts with Seven

If you load one of these traces, you’ll see that the credit score indeed starts with seven.

We can apply similar filters for the customer number, credit score category, and credit check result.

Explore Traces With Errors

Let’s remove the credit score filter and toggle Errors only to on, which results in a list of only those traces where an error occurred:

Traces with Errors Only

Click on a few of these traces, and look at the tags we captured. Do you notice any patterns?

Next, toggle Errors only to off, and sort traces by duration. Look at a few of the slowest running traces, and compare them to the fastest running traces. Do you notice any patterns?

If you found a pattern that explains the slow performance and errors - great job! But keep in mind that this is a difficult way to troubleshoot, as it requires you to look through many traces and mentally keep track of what you saw, so you can identify a pattern.

Thankfully, Splunk Observability Cloud provides a more efficient way to do this, which we’ll explore next.

Index Tags

5 minutes  

Index Tags

To use advanced features in Splunk Observability Cloud such as Tag Spotlight, we’ll need to first index one or more tags.

To do this, navigate to Settings -> APM MetricSets. Then click the + New MetricSet button.

Let’s index the credit.score.category tag by entering the following details (note: since everyone in the workshop is using the same organization, the instructor will do this step on your behalf):

Create Troubleshooting MetricSet

Click Start Analysis to proceed.

The tag will appear in the list of Pending MetricSets while analysis is performed.

Pending MetricSets

Once analysis is complete, click on the checkmark in the Actions column.

MetricSet Configuration Applied

How to choose tags for indexing

Why did we choose to index the credit.score.category tag and not the others?

To understand this, let’s review the primary use cases for tags:

  • Filtering
  • Grouping

Filtering

With the filtering use case, we can use the Trace Analyzer capability of Splunk Observability Cloud to filter on traces that match a particular tag value.

We saw an example of this earlier, when we filtered on traces where the credit score started with 7.

Or if a customer calls in to complain about slow service, we could use Trace Analyzer to locate all traces with that particular customer number.

Tags used for filtering use cases are generally high-cardinality, meaning that there could be thousands or even hundreds of thousands of unique values. In fact, Splunk Observability Cloud can handle an effectively infinite number of unique tag values! Filtering using these tags allows us to rapidly locate the traces of interest.

Note that we aren’t required to index tags to use them for filtering with Trace Analyzer.

Grouping

With the grouping use case, we can use Trace Analyzer to group traces by a particular tag.

But we can also go beyond this and surface trends for tags that we collect using the powerful Tag Spotlight feature in Splunk Observability Cloud, which we’ll see in action shortly.

Tags used for grouping use cases should be of low to medium cardinality, typically with no more than a few hundred unique values.

For custom tags to be used with Tag Spotlight, they first need to be indexed.

We decided to index the credit.score.category tag because it has a few distinct values that would be useful for grouping. In contrast, the customer number and credit score tags have hundreds or thousands of unique values, and are more valuable for filtering use cases rather than grouping.

Troubleshooting vs. Monitoring MetricSets

You may have noticed that, to index this tag, we created something called a Troubleshooting MetricSet. It’s named this way because a Troubleshooting MetricSet, or TMS, allows us to troubleshoot issues with this tag using features such as Tag Spotlight.

You may have also noticed that there’s another option which we didn’t choose called a Monitoring MetricSet (or MMS). Monitoring MetricSets go beyond troubleshooting and allow us to use tags for alerting and dashboards. We’ll explore this concept later in the workshop.

Use Tags for Troubleshooting

5 minutes  

Using Tag Spotlight

Now that we’ve indexed the credit.score.category tag, we can use it with Tag Spotlight to troubleshoot our application.

Navigate to APM then click on Tag Spotlight on the right-hand side. Ensure the creditcheckservice service is selected from the Service drop-down (if not already selected).

With Tag Spotlight, we can see that 100% of the requests that result in a credit score of impossible have an error, yet requests for all other credit score categories have no errors at all!

Tag Spotlight with Errors

This illustrates the power of Tag Spotlight! Finding this pattern would be time-consuming without it, as we’d have to manually look through hundreds of traces to identify the pattern (and even then, there’s no guarantee we’d find it).

We’ve looked at errors, but what about latency? Let’s click on Latency near the top of the screen to find out.

Here, we can see that requests with a poor credit score are running slowly, with P50, P90, and P99 times of around 3 seconds, which is too long for our users to wait, and much slower than other requests.

We can also see that some requests with an exceptional credit score are running slowly, with P99 times of around 5 seconds, though the P50 response time is relatively quick.

Tag Spotlight with Latency

Using Dynamic Service Maps

Now that we know the credit score category associated with the request can impact performance and error rates, let’s explore another feature that utilizes indexed tags: Dynamic Service Maps.

With Dynamic Service Maps, we can break down a particular service by a tag. For example, let’s click on APM, then click Explore to view the service map.

Click on creditcheckservice. Then, on the right-hand menu, click on the drop-down that says Breakdown, and select the credit.score.category tag.

At this point, the service map is updated dynamically, and we can see the performance of requests hitting creditcheckservice broken down by the credit score category:

Service Map Breakdown

This view makes it clear that performance for good and fair credit scores is excellent, while poor and exceptional scores are much slower, and impossible scores result in errors.

Summary

Tag Spotlight has uncovered several interesting patterns for the engineers that own this service to explore further:

  • Why are all the impossible credit score requests resulting in error?
  • Why are all the poor credit score requests running slowly?
  • Why do some of the exceptional requests run slowly?

As an SRE, passing this context to the engineering team would be extremely helpful for their investigation, as it would allow them to track down the issue much more quickly than if we simply told them that the service was “sometimes slow”.

If you’re curious, have a look at the source code for the creditprocessorservice. You’ll see that requests with impossible, poor, and exceptional credit scores are handled differently, thus resulting in the differences in error rates and latency that we uncovered.

The behavior we saw with our application is typical for modern cloud-native applications, where different inputs passed to a service lead to different code paths, some of which result in slower performance or errors. For example, in a real credit check service, requests resulting in low credit scores may be sent to another downstream service to further evaluate risk, and may perform more slowly than requests resulting in higher scores, or encounter higher error rates.
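To illustrate that pattern in code (a hypothetical sketch only, not the actual creditprocessorservice implementation), a service handler with input-dependent code paths might look like this:

// Hypothetical illustration of input-dependent code paths;
// the real creditprocessorservice differs
public class CreditCheckPaths {

    public String runCreditCheck(String category) throws InterruptedException {
        switch (category) {
            case "impossible":
                // invalid input: this path always fails
                throw new IllegalArgumentException("credit score out of range");
            case "poor":
            case "exceptional":
                // extra downstream risk evaluation: the slow path
                Thread.sleep(3000);
                return "credit check complete";
            default:
                // good, fair, very good: the fast path
                return "credit check complete";
        }
    }
}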

Use Tags for Monitoring

15 minutes  

Earlier, we created a Troubleshooting MetricSet on the credit.score.category tag, which allowed us to use Tag Spotlight with that tag and identify a pattern to explain why some users received a poor experience.

In this section of the workshop, we’ll explore a related concept: Monitoring MetricSets.

What are Monitoring MetricSets?

Monitoring MetricSets go beyond troubleshooting and allow us to use tags for alerting, dashboards and SLOs.

Create a Monitoring MetricSet

(note: your workshop instructor will do the following for you, but observe the steps)

Let’s navigate to Settings -> APM MetricSets, and click the edit button (i.e. the little pencil) beside the MetricSet for credit.score.category.

edit APM MetricSet

Check the box beside Also create Monitoring MetricSet, then click Start Analysis.

Monitoring MetricSet

The credit.score.category tag appears again as a Pending MetricSet. After a few moments, a checkmark should appear. Click this checkmark to enable the Pending MetricSet.

pending APM MetricSet

Using Monitoring MetricSets

This mechanism creates a new dimension from the tag on a number of APM metrics, which can then be used to filter those metrics by the values of that new dimension. Important: To differentiate between the original tag and the copy, the dots in the tag name are replaced by underscores for the new dimension. The metrics therefore get a dimension named credit_score_category, not credit.score.category.

Next, let’s explore how we can use this Monitoring MetricSet.

Subsections of Use Tags for Monitoring

Use Tags with Dashboards

5 minutes  

Dashboards

Navigate to Metric Finder, then type in the name of the tag, which is credit_score_category (remember that the dots in the tag name were replaced by underscores when the Monitoring MetricSet was created). You’ll see that multiple metrics include this tag as a dimension:

Metric Finder

By default, Splunk Observability Cloud calculates several metrics using the trace data it receives. See Learn about MetricSets in APM for more details.

By creating an MMS, credit_score_category was added as a dimension to these metrics, which means that this dimension can now be used for alerting and dashboards.

To see how, let’s click on the metric named service.request.duration.ns.p99, which brings up the following chart:

Service Request Duration

Add filters for sf_environment, sf_service, and sf_dimensionalized. Then set the Extrapolation policy to Last value and the Display units to Nanosecond:

Chart with Seconds

With these settings, the chart allows us to visualize the service request duration by credit score category:

Duration by Credit Score

Now we can see the duration by credit score category. In my example, the red line represents the exceptional category, and we can see that the duration for these requests sometimes goes all the way up to 5 seconds.

The orange line represents the very good category, and has very fast response times.

The green line represents the poor category, and has response times between 2 and 3 seconds.

It may be useful to save this chart on a dashboard for future reference. To do this, click on the Save as… button and provide a name for the chart:

Save Chart As

When asked which dashboard to save the chart to, let’s create a new one named Credit Check Service - Your Name (substituting your actual name):

Save Chart As

Now we can see the chart on our dashboard, and can add more charts as needed to monitor our credit check service:

Credit Check Service Dashboard

Use Tags with Alerting

3 minutes  

Alerts

It’s great that we have a dashboard to monitor the response times of the credit check service by credit score, but we don’t want to stare at a dashboard all day.

Let’s create an alert so we can be notified proactively if customers with exceptional credit scores encounter slow requests.

To create this alert, click on the little bell on the top right-hand corner of the chart, then select New detector from chart:

New Detector From Chart

Let’s call the detector Latency by Credit Score Category. Set the environment to your environment name (i.e. tagging-workshop-yourname) then select creditcheckservice as the service. Since we only want to look at performance for customers with exceptional credit scores, add a filter using the credit_score_category dimension and select exceptional:

Create New Detector

For the alert condition, instead of “Static threshold”, we want to select “Sudden Change” to make the example more vivid.

Alert Condition: Sudden Change

We can then set the remainder of the alert details as we normally would. The key thing to remember here is that without capturing a tag with the credit score category and indexing it, we wouldn’t be able to alert at this granular level, but would instead be forced to bucket all customers together, regardless of their importance to the business.

Unless you want to be notified, you do not need to finish this wizard. You can just close it by clicking the X in the top right corner of the wizard pop-up.

Use Tags with Service Level Objectives

10 minutes  

We can now use the Monitoring MetricSet we created together with Service Level Objectives, in a similar way to how we used it with dashboards and detectors/alerts before. First, let’s be clear about some key concepts:

Key Concepts of Service Level Monitoring

(skip if you know this)

  • Service level indicator (SLI): An SLI is a quantitative measurement showing some health of a service, expressed as a metric or combination of metrics.
    Examples: Availability SLI: proportion of requests that resulted in a successful response. Performance SLI: proportion of requests that loaded in < 100 ms.

  • Service level objective (SLO): An SLO defines a target for an SLI and a compliance period over which that target must be met. An SLO contains 3 elements: an SLI, a target, and a compliance period. Compliance periods can be calendar, such as monthly, or rolling, such as past 30 days.
    Examples: Availability SLI over a calendar period: our service must respond successfully to 95% of requests in a month. Performance SLI over a rolling period: our service must respond to 99% of requests in < 100 ms over a 7-day period.

  • Service level agreement (SLA): An SLA is a contractual agreement that indicates service levels your users can expect from your organization. If an SLA is not met, there can be financial consequences.
    Example: A customer service SLA indicates that 90% of support requests received on a normal support day must have a response within 6 hours.

  • Error budget: A measurement of how your SLI performs relative to your SLO over a period of time. Error budget measures the difference between actual and desired performance. It determines how unreliable your service might be during this period and serves as a signal when you need to take corrective action.
    Example: Our service can respond to 1% of requests in > 100 ms over a 7-day period.

  • Burn rate: A unitless measurement of how quickly a service consumes the error budget during the compliance window of the SLO. Burn rate makes the SLO and error budget actionable, showing service owners when a current incident is serious enough to page an on-call responder.
    Example: For an SLO with a 30-day compliance window, a constant burn rate of 1 means your error budget is used up in exactly 30 days.
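To make burn rate concrete with a quick worked example: for an SLO with a 30-day compliance window, a constant burn rate of 2 exhausts the error budget in 30 / 2 = 15 days, while a constant burn rate of 0.5 consumes only half the budget by the end of the window.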

Creating a new Service Level Objective

There is an easy-to-follow wizard to create a new Service Level Objective (SLO). In the left navigation, follow the link “Detectors & SLOs”. From there, select the third tab, “SLOs”, and click the blue “Create SLO” button on the right.

Create new SLO

The wizard guides you through a few easy steps. If everything in the previous sections worked out, you will have no problems here. ;)

In our case we want to use Service & endpoint as our Metric type instead of Custom metric. We filter the Environment down to the environment that we are using during this workshop (i.e. tagging-workshop-yourname) and select the creditcheckservice from the Service and endpoint list. Our Indicator type for this workshop will be Request latency and not Request success.

Now we can select our Filters. Since we are using Request latency as the Indicator type, and that is a metric of the APM service, we can filter on credit.score.category. Feel free to try out what happens when you set the Indicator type to Request success.

Today we are only interested in our exceptional credit scores. So please select that as the filter.

Choose Service or Metric for SLO

In the next step we define the objective we want to reach. For the Request latency type, we define the Target (%), the Latency (ms) and the Compliance Window. Please set these to 99, 100 and Last 7 days. This will give us a good idea of what we are achieving already.

The results here may come as a shock. Feel free to play around with the numbers to see how well we achieve the objective and how much error budget we have left to burn.

Define Objective for SLO

The third step gives us the chance to alert (aka annoy) people who should be aware of these SLOs so they can initiate countermeasures. These “people” can also be mechanisms like ITSM systems or webhooks that initiate automatic remediation steps.

Activate all categories you want to alert on and add recipients to the different alerts.

Define Alerting for SLO

The next step is only the naming for this SLO. Have your own naming convention ready for this. In our case we would just name it creditcheckservice:score:exceptional:YOURNAME and click the Create button. Alternatively, you can cancel the wizard by clicking anything in the left navigation and confirming to Discard changes.

Name and Save the SLO

And with that, we have (nearly) successfully created an SLO, including the alerting in case we miss our goals.

Summary

2 minutes  

In this workshop, we learned the following:

  • What are tags and why are they such a critical part of making an application observable?
  • How to use OpenTelemetry to capture tags of interest from your application.
  • How to index tags in Splunk Observability Cloud and the differences between Troubleshooting MetricSets and Monitoring MetricSets.
  • How to utilize tags in Splunk Observability Cloud to find “unknown unknowns” using the Tag Spotlight and Dynamic Service Map features.
  • How to utilize tags for dashboards, alerting and service level objectives.

Collecting tags aligned with the best practices shared in this workshop will let you get even more value from the data you’re sending to Splunk Observability Cloud. Now that you’ve completed this workshop, you have the knowledge you need to start collecting tags from your own applications!

To get started with capturing tags today, check out how to add tags in various supported languages, and then how to use them to create Troubleshooting MetricSets so they can be analyzed in Tag Spotlight. For more help, feel free to ask a Splunk Expert.

Tip for Workshop Facilitator(s)

Once the workshop is complete, remember to delete the APM MetricSet you created earlier for the credit.score.category tag.

Profiling Workshop

2 minutes   Author Derek Mitchell

Service Maps and Traces are extremely valuable in determining what service an issue resides in. And related log data helps provide detail on why issues are occurring in that service.

But engineers sometimes need to go even deeper to debug a problem that’s occurring in one of their services.

This is where features such as Splunk’s AlwaysOn Profiling and Database Query Performance come in.

AlwaysOn Profiling continuously collects stack traces so that you can discover which lines in your code are consuming the most CPU and memory.

And Database Query Performance can quickly identify long-running, unoptimized, or heavy queries and mitigate issues they might be causing.

In this workshop, we’ll explore:

  • How to debug an application with several performance issues.
  • How to use Database Query Performance to find slow-running queries that impact application performance.
  • How to enable AlwaysOn Profiling and use it to find the code that consumes the most CPU and memory.
  • How to apply fixes based on findings from Splunk Observability Cloud and verify the result.

The workshop uses a Java-based application called The Door Game hosted in Kubernetes. Let’s get started!

Tip

The easiest way to navigate through this workshop is by using:

  • the left/right arrows (< | >) on the top right of this page
  • the left (◀️) and right (▶️) cursor keys on your keyboard

Subsections of Profiling Workshop

Build the Sample Application

10 minutes  

Introduction

For this workshop, we’ll be using a Java-based application called The Door Game. It will be hosted in Kubernetes.

Pre-requisites

You will start with an EC2 instance and perform some initial steps in order to get to the following state:

  • Deploy the Splunk distribution of the OpenTelemetry Collector
  • Deploy the MySQL database container and populate data
  • Build and deploy the doorgame application container

Initial Steps

The initial setup can be completed by executing the following steps on the command line of your EC2 instance.

You’ll be asked to enter a name for your environment. Please use profiling-workshop-yourname (where yourname is replaced by your actual name).

cd workshop/profiling
./1-deploy-otel-collector.sh
./2-deploy-mysql.sh
./3-deploy-doorgame.sh

Let’s Play The Door Game

Now that the application is deployed, let’s play with it and generate some observability data.

Get the external IP address for your application instance using the following command:

kubectl describe svc doorgame | grep "LoadBalancer Ingress"

The output should look like the following:

LoadBalancer Ingress:     52.23.184.60
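If the LoadBalancer Ingress line comes back empty, the load balancer is likely still provisioning; wait a minute or two and re-run the command.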

You should be able to access The Door Game application by pointing your browser to port 81 of the provided IP address. For example:

http://52.23.184.60:81

You should be met with The Door Game intro screen:

Door Game Welcome Screen

Click Let's Play to start the game:

Let’s Play

Did you notice that it took a long time after clicking Let's Play before we could actually start playing the game?

Let’s use Splunk Observability Cloud to determine why the application startup is so slow.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Troubleshoot Game Startup

10 minutes  

Let’s use Splunk Observability Cloud to determine why the game started so slowly.

View your application in Splunk Observability Cloud

Note: when the application is deployed for the first time, it may take a few minutes for the data to appear.

Navigate to APM, then use the Environment dropdown to select your environment (i.e. profiling-workshop-yourname).

If everything was deployed correctly, you should see doorgame displayed in the list of services:

APM Overview

Click on Explore on the right-hand side to view the service map. We should see the doorgame application on the service map:

Service Map

Notice how the majority of the time is being spent in the MySQL database. We can get more details by clicking on Database Query Performance on the right-hand side.

Database Query Performance

This view shows the SQL queries that took the most time. Ensure that the Compare to dropdown is set to None, so we can focus on current performance.

We can see that one query in particular is taking a long time:

select * from doorgamedb.users, doorgamedb.organizations

(do you notice anything unusual about this query?)

Let’s troubleshoot further by clicking on one of the spikes in the latency graph. This brings up a list of example traces that include this slow query:

Traces with Slow Query

Click on one of the traces to see the details:

Trace with Slow Query

In the trace, we can see that the DoorGame.startNew operation took 25.8 seconds, and 17.6 seconds of this was associated with the slow SQL query we found earlier.

What did we accomplish?

To recap what we’ve done so far:

  • We’ve deployed our application and are able to access it successfully.
  • The application is sending traces to Splunk Observability Cloud successfully.
  • We started troubleshooting the slow application startup time, and found a slow SQL query that seems to be the root cause.

To troubleshoot further, it would be helpful to get deeper diagnostic data that tells us what’s happening inside our JVM, from both a memory (i.e. JVM heap) and CPU perspective. We’ll tackle that in the next section of the workshop.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Enable AlwaysOn Profiling

20 minutes  

Let’s learn how to enable the memory and CPU profilers, verify their operation, and use the results in Splunk Observability Cloud to find out why our application startup is slow.

Update the application configuration

We will need to pass additional configuration arguments to the Splunk OpenTelemetry Java agent in order to enable both profilers. The configuration is documented here in detail, but for now we just need the following settings:

SPLUNK_PROFILER_ENABLED="true"
SPLUNK_PROFILER_MEMORY_ENABLED="true"

Since our application is deployed in Kubernetes, we can update the Kubernetes manifest file to set these environment variables. Open the doorgame/doorgame.yaml file for editing, and ensure the values of the following environment variables are set to “true”:

- name: SPLUNK_PROFILER_ENABLED
  value: "true"
- name: SPLUNK_PROFILER_MEMORY_ENABLED
  value: "true"

Next, let’s redeploy the Door Game application by running the following command:

cd workshop/profiling
kubectl apply -f doorgame/doorgame.yaml

After a few seconds, a new pod will be deployed with the updated application settings.

Confirm operation

To ensure the profiler is enabled, let’s review the application logs with the following command:

kubectl logs -l app=doorgame --tail=100 | grep JfrActivator

You should see a line in the application log output that shows the profiler is active:

[otel.javaagent 2024-02-05 19:01:12:416 +0000] [main] INFO com.splunk.opentelemetry.profiler.JfrActivator - Profiler is active.

This confirms that the profiler is enabled and sending data to the OpenTelemetry collector deployed in our Kubernetes cluster, which in turn sends profiling data to Splunk Observability Cloud.

Profiling in APM

Visit http://<your IP address>:81 and play a few more rounds of The Door Game.

Then head back to Splunk Observability Cloud, click on APM, and click on the doorgame service at the bottom of the screen.

Click on “Traces” on the right-hand side to load traces for this service. Filter on traces involving the doorgame service and the GET /new-game operation (since we’re troubleshooting the game startup sequence):

New Game Traces

Selecting one of these traces brings up the following screen:

Trace with Call Stacks

You can see that the spans now include “Call Stacks”, which is a result of us enabling CPU and memory profiling earlier.

Click on the span named doorgame: SELECT doorgamedb, then click on CPU stack traces on the right-hand side:

Trace with CPU Call Stacks

This brings up the CPU call stacks captured by the profiler.

Let’s open the AlwaysOn Profiler to review the CPU stack trace in more detail. We can do this by clicking on the Span link beside View in AlwaysOn Profiler:

Flamegraph and table

The AlwaysOn Profiler includes both a table and a flamegraph. Take some time to explore this view by doing some of the following:

  • click a table item and notice the change in flamegraph
  • navigate the flamegraph by clicking on a stack frame to zoom in, and a parent frame to zoom out
  • add a search term like splunk or jetty to highlight some matching stack frames

Let’s have a closer look at the stack trace, starting with the DoorGame.startNew method (since we already know that it’s the slowest part of the request):

com.splunk.profiling.workshop.DoorGame.startNew(DoorGame.java:24)
com.splunk.profiling.workshop.UserData.loadUserData(UserData.java:33)
com.mysql.cj.jdbc.StatementImpl.executeQuery(StatementImpl.java:1168)
com.mysql.cj.NativeSession.execSQL(NativeSession.java:655)
com.mysql.cj.protocol.a.NativeProtocol.sendQueryString(NativeProtocol.java:998)
com.mysql.cj.protocol.a.NativeProtocol.sendQueryPacket(NativeProtocol.java:1065)
com.mysql.cj.protocol.a.NativeProtocol.readAllResults(NativeProtocol.java:1715)
com.mysql.cj.protocol.a.NativeProtocol.read(NativeProtocol.java:1661)
com.mysql.cj.protocol.a.TextResultsetReader.read(TextResultsetReader.java:48)
com.mysql.cj.protocol.a.TextResultsetReader.read(TextResultsetReader.java:87)
com.mysql.cj.protocol.a.NativeProtocol.read(NativeProtocol.java:1648)
com.mysql.cj.protocol.a.ResultsetRowReader.read(ResultsetRowReader.java:42)
com.mysql.cj.protocol.a.ResultsetRowReader.read(ResultsetRowReader.java:75)
com.mysql.cj.protocol.a.MultiPacketReader.readMessage(MultiPacketReader.java:44)
com.mysql.cj.protocol.a.MultiPacketReader.readMessage(MultiPacketReader.java:66)
com.mysql.cj.protocol.a.TimeTrackingPacketReader.readMessage(TimeTrackingPacketReader.java:41)
com.mysql.cj.protocol.a.TimeTrackingPacketReader.readMessage(TimeTrackingPacketReader.java:62)
com.mysql.cj.protocol.a.SimplePacketReader.readMessage(SimplePacketReader.java:45)
com.mysql.cj.protocol.a.SimplePacketReader.readMessage(SimplePacketReader.java:102)
com.mysql.cj.protocol.a.SimplePacketReader.readMessageLocal(SimplePacketReader.java:137)
com.mysql.cj.protocol.FullReadInputStream.readFully(FullReadInputStream.java:64)
java.io.FilterInputStream.read(Unknown Source:0)
sun.security.ssl.SSLSocketImpl$AppInputStream.read(Unknown Source:0)
sun.security.ssl.SSLSocketImpl.readApplicationRecord(Unknown Source:0)
sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(Unknown Source:0)
sun.security.ssl.SSLSocketInputRecord.readHeader(Unknown Source:0)
sun.security.ssl.SSLSocketInputRecord.read(Unknown Source:0)
java.net.SocketInputStream.read(Unknown Source:0)
java.net.SocketInputStream.read(Unknown Source:0)
java.lang.ThreadLocal.get(Unknown Source:0)

We can interpret the stack trace as follows:

  • When starting a new Door Game, a call is made to load user data.
  • This results in executing a SQL query to load the user data (which is related to the slow SQL query we saw earlier).
  • We then see calls to read data in from the database.

So, what does this all mean? It means that our application startup is slow since it’s spending time loading user data. In fact, the profiler has told us the exact line of code where this happens:

com.splunk.profiling.workshop.UserData.loadUserData(UserData.java:33)

Let’s open the corresponding source file (./doorgame/src/main/java/com/splunk/profiling/workshop/UserData.java) and look at this code in more detail:

public class UserData {

    static final String DB_URL = "jdbc:mysql://mysql/DoorGameDB";
    static final String USER = "root";
    static final String PASS = System.getenv("MYSQL_ROOT_PASSWORD");
    static final String SELECT_QUERY = "select * FROM DoorGameDB.Users, DoorGameDB.Organizations";

    HashMap<String, User> users;

    public UserData() {
        users = new HashMap<String, User>();
    }

    public void loadUserData() {

        // Load user data from the database and store it in a map
        Connection conn = null;
        Statement stmt = null;
        ResultSet rs = null;

        try{
            conn = DriverManager.getConnection(DB_URL, USER, PASS);
            stmt = conn.createStatement();
            rs = stmt.executeQuery(SELECT_QUERY);
            while (rs.next()) {
                User user = new User(rs.getString("UserId"), rs.getString("FirstName"), rs.getString("LastName"));
                users.put(rs.getString("UserId"), user);
            }

Here we can see the application logic in action. It establishes a connection to the database, then executes the SQL query we saw earlier:

select * FROM DoorGameDB.Users, DoorGameDB.Organizations

It then loops through each of the results, and loads each user into a HashMap object, which is a collection of User objects.

We have a good understanding of why the game startup sequence is so slow, but how do we fix it?

For more clues, let’s have a look at the other part of AlwaysOn Profiling: memory profiling. To do this, click on the Memory tab in AlwaysOn Profiling:

Memory Profiling

At the top of this view, we can see how much heap memory our application is using, the heap memory allocation rate, and garbage collection activity.

We can see that our application is using about 400 MB out of the max 1 GB heap size, which seems excessive for such a simple application. We can also see that some garbage collection occurred, which caused our application to pause (and probably annoyed those wanting to play the Door Game).

At the bottom of the screen, we can see which methods in our Java application code are associated with the most heap memory usage. Click on the first item in the list to show the Memory Allocation Stack Traces associated with the java.util.Arrays.copyOf method specifically:

Memory Allocation Stack Traces

With help from the profiler, we can see that the loadUserData method not only consumes excessive CPU time, but it also consumes excessive memory when storing the user data in the HashMap collection object.

What did we accomplish?

We’ve come a long way already!

  • We learned how to enable the profiler in the Splunk OpenTelemetry Java instrumentation agent.
  • We learned how to verify in the agent output that the profiler is enabled.
  • We have explored several profiling-related workflows in APM:
    • How to navigate to AlwaysOn Profiling from the troubleshooting view
    • How to explore the flamegraph and method call duration table through navigation and filtering
    • How to identify when a span has sampled call stacks associated with it
    • How to explore heap utilization and garbage collection activity
    • How to view memory allocation stack traces for a particular method

In the next section, we’ll apply a fix to our application to resolve the slow startup performance.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Fix Application Startup Slowness

10 minutes  

In this section, we’ll use what we learned from the profiling data in Splunk Observability Cloud to resolve the slowness we saw when starting our application.

Examining the Source Code

Open the corresponding source file once again (./doorgame/src/main/java/com/splunk/profiling/workshop/UserData.java) and focus on the following code:

public class UserData {

    static final String DB_URL = "jdbc:mysql://mysql/DoorGameDB";
    static final String USER = "root";
    static final String PASS = System.getenv("MYSQL_ROOT_PASSWORD");
    static final String SELECT_QUERY = "select * FROM DoorGameDB.Users, DoorGameDB.Organizations";

    HashMap<String, User> users;

    public UserData() {
        users = new HashMap<String, User>();
    }

    public void loadUserData() {

        // Load user data from the database and store it in a map
        Connection conn = null;
        Statement stmt = null;
        ResultSet rs = null;

        try{
            conn = DriverManager.getConnection(DB_URL, USER, PASS);
            stmt = conn.createStatement();
            rs = stmt.executeQuery(SELECT_QUERY);
            while (rs.next()) {
                User user = new User(rs.getString("UserId"), rs.getString("FirstName"), rs.getString("LastName"));
                users.put(rs.getString("UserId"), user);
            }

After speaking with a database engineer, you discover that the SQL query being executed includes a cartesian join:

select * FROM DoorGameDB.Users, DoorGameDB.Organizations

Cartesian joins are notoriously slow and, in general, shouldn’t be used.

Upon further investigation, you discover that there are 10,000 rows in the user table, and 1,000 rows in the organization table. When we execute a cartesian join using both of these tables, we end up with 10,000 x 1,000 rows being returned, which is 10,000,000 rows!

Furthermore, the query ends up returning duplicate user data, since each record in the user table is repeated for each organization.

So when our code executes this query, it tries to load 10,000,000 user objects into the HashMap, which explains why it takes so long to execute, and why it consumes so much heap memory.

Let’s Fix That Bug

After consulting the engineer that originally wrote this code, we determined that the join with the Organizations table was inadvertent.

So when loading the users into the HashMap, we simply need to remove this table from the query.

Open the corresponding source file once again (./doorgame/src/main/java/com/splunk/profiling/workshop/UserData.java) and change the following line of code:

    static final String SELECT_QUERY = "select * FROM DoorGameDB.Users, DoorGameDB.Organizations";

to:

    static final String SELECT_QUERY = "select * FROM DoorGameDB.Users";

Now the method should perform much more quickly, and less memory should be used, as it’s loading the correct number of users into the HashMap (10,000 instead of 10,000,000).

Rebuild and Redeploy Application

Let’s test our changes by using the following commands to re-build and re-deploy the Door Game application:

cd workshop/profiling
./5-redeploy-doorgame.sh

Once the application has been redeployed successfully, visit The Door Game again to confirm that your fix is in place: http://<your IP address>:81

Clicking Let's Play should take us to the game more quickly now (though performance could still be improved):

Choose Door

Start the game a few more times, then return to Splunk Observability Cloud to confirm that the latency of the GET /new-game operation has decreased.

What did we accomplish?

  • We discovered why our SQL query was so slow.
  • We applied a fix, then rebuilt and redeployed our application.
  • We confirmed that the application starts a new game more quickly.

In the next section, we’ll continue playing the game and fix any remaining performance issues that we find.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Fix In Game Slowness

10 minutes  

Now that our game startup slowness has been resolved, let’s play several rounds of the Door Game and ensure the rest of the game performs quickly.

As you play the game, do you notice any other slowness? Let’s look at the data in Splunk Observability Cloud to put some numbers on what we’re seeing.

Review Game Performance in Splunk Observability Cloud

Navigate to APM then click on Traces on the right-hand side of the screen. Sort the traces by Duration in descending order:

Slow Traces

We can see that a few of the traces with an operation of GET /game/:uid/picked/:picked/outcome have a duration of just over five seconds. This explains why we’re still seeing some slowness when we play the app (note that the slowness is no longer on the game startup operation, GET /new-game, but rather a different operation used while actually playing the game).

Let’s click on one of the slow traces and take a closer look. Since profiling is still enabled, call stacks have been captured as part of this trace. Click on the child span in the waterfall view, then click CPU Stack Traces:

View Stack on Span

At the bottom of the call stack, we can see that the thread was busy sleeping:

com.splunk.profiling.workshop.ServiceMain$$Lambda$.handle(Unknown Source:0)
com.splunk.profiling.workshop.ServiceMain.lambda$main$(ServiceMain.java:34)
com.splunk.profiling.workshop.DoorGame.getOutcome(DoorGame.java:41)
com.splunk.profiling.workshop.DoorChecker.isWinner(DoorChecker.java:14)
com.splunk.profiling.workshop.DoorChecker.checkDoorTwo(DoorChecker.java:30)
com.splunk.profiling.workshop.DoorChecker.precheck(DoorChecker.java:36)
com.splunk.profiling.workshop.Util.sleep(Util.java:9)
java.util.concurrent.TimeUnit.sleep(Unknown Source:0)
java.lang.Thread.sleep(Unknown Source:0)
java.lang.Thread.sleep(Native Method:0)

The call stack tells us a story – reading from the bottom up, it lets us describe what is happening inside the service code. A developer, even one unfamiliar with the source code, should be able to look at this call stack to craft a narrative like:

We are getting the outcome of a game. We leverage the DoorChecker to see if something is the winner, but the check for door two somehow issues a precheck() that, for some reason, is deciding to sleep for a long time.

Our workshop application is left intentionally simple – a real-world service might see the thread being sampled during a database call or calling into an un-traced external service. It is also possible that a slow span is executing a complicated business process, in which case maybe none of the stack traces relate to each other at all.

The longer a method or process runs, the greater the chance that call stacks will be sampled during its execution.

Let’s Fix That Bug

By using the profiling tool, we were able to determine that our application is slow when calling the DoorChecker.precheck() method from inside DoorChecker.checkDoorTwo(). Let’s open the doorgame/src/main/java/com/splunk/profiling/workshop/DoorChecker.java source file in our editor.

By quickly glancing through the file, we see that there are methods for checking each door, and all of them call precheck(). In a real service, we might be uncomfortable simply removing the precheck() call because there could be unseen/unaccounted side effects.

Down on line 29 we see the following:

    private boolean checkDoorTwo(GameInfo gameInfo) {
        precheck(2);
        return gameInfo.isWinner(2);
    }

    private void precheck(int doorNum) {
        long extra = (int)Math.pow(70, doorNum);
        sleep(300 + extra);
    }

With our developer hat on, we notice that the door number is zero based, so the first door is 0, the second is 1, and the 3rd is 2 (this is conventional). The extra value is used as extra/additional sleep time, and it is computed by taking 70^doorNum (Math.pow performs an exponent calculation). That’s odd, because this means:

  • door 0 => 70^0 => 1ms
  • door 1 => 70^1 => 70ms
  • door 2 => 70^2 => 4900ms

We’ve found the root cause of our slow bug! This also explains why the first two doors weren’t ever very slow.

We have a quick chat with our product manager and team lead, and we agree that the precheck() method must stay but that the extra padding isn’t required. Let’s remove the extra variable and make precheck now read like this:

    private void precheck(int doorNum) {
        sleep(300);
    }

Now all doors will have a consistent behavior. Save your work and then rebuild and redeploy the application using the following command:

cd workshop/profiling
./5-redeploy-doorgame.sh

Once the application has been redeployed successfully, visit The Door Game again to confirm that your fix is in place: http://<your IP address>:81

What did we accomplish?

  • We found another performance issue with our application that impacts game play.
  • We used the CPU call stacks included in the trace to understand application behavior.
  • We learned how the call stack can tell us a story and point us to suspect lines of code.
  • We identified the slow code and fixed it to make it faster.
Last Modified By Snehangshu Karmakar   Jan 27, 2025

Summary

3 minutes  

In this workshop, we accomplished the following:

  • We deployed our application and captured traces with Splunk Observability Cloud.
  • We used Database Query Performance to find a slow-running query that impacted the game startup time.
  • We enabled AlwaysOn Profiling and used it to confirm which line of code was causing the increased startup time and memory usage.
  • We found another application performance issue and used AlwaysOn Profiling again to find the problematic line of code.
  • We applied fixes for both of these issues and verified the result using Splunk Observability Cloud.

Enabling AlwaysOn Profiling and utilizing Database Query Performance for your applications will let you get even more value from the data you’re sending to Splunk Observability Cloud.

Now that you’ve completed this workshop, you have the knowledge you need to start collecting deeper diagnostic data from your own applications!

To get started with Database Query Performance today, check out Monitor Database Query Performance.

And to get started with AlwaysOn Profiling, check out Introduction to AlwaysOn Profiling for Splunk APM

For more help, feel free to ask a Splunk Expert.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Optimize End User Experiences

90 minutes   Author Sarah Ware

How can we use Splunk Observability to get insight into end user experience, and proactively test scenarios to improve that experience?

Sections:

  • Set up basic Synthetic tests to understand availability and performance ASAP
    • Uptime test
    • API test
    • Single page Browser test
  • Explore RUM to understand our real users
  • Write advanced Synthetics tests based on what we’ve learned about our users and what we need them to do
  • Customize dashboard charts to capture our KPIs, show trends, and show data in context of our events
  • Create Detectors to alert on our KPIs
Tip

Keep in mind throughout the workshop: how can I prioritize activities strategically to get the fastest time to value for my end users and for myself/ my developers?

Context

As a reminder, we need frontend performance monitoring to capture everything that goes into our end user experience. If we’re just monitoring the backend, we’re missing all of the other resources that are critical to our users’ success. Read What the Fastly Outage Can Teach Us About Observability for a real-world example. Click the image below to zoom in. What goes into the front end

References

Throughout this workshop, we will see references to resources to help further understand end user experience and how to optimize it. In addition to Splunk Docs for supported features and Lantern for tips and tricks, Google’s web.dev and Mozilla are great resources.

Remember that the specific libraries, platforms, and CDNs you use often also have their own specific resources. For example React, Wordpress, and Cloudflare all have their own tips to improve performance.

Author Sarah Ware   Last Modified By Snehangshu Karmakar   Jan 27, 2025

Subsections of Optimize End User Experiences

Synthetics

Let’s quickly set up some tests in Synthetics to immediately start understanding our end user experience, without waiting for real users to interact with our app.

We can capture not only the performance and availability of our own apps and endpoints, but also those third parties we rely on any time of the day or night.

Tip

If you find that your tests are being bot-blocked, see the docs for tips on how to allow Synthetic testing. If you need to test something that is not accessible externally, see the private location instructions.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Subsections of Synthetics

Uptime Test

5 minutes  

Introduction

The simplest way to keep an eye on endpoint availability is with an Uptime test. This lightweight test can run internally or externally around the world, as frequently as every minute. Because this is the easiest (and cheapest!) test to set up, and because it is ideal for monitoring the availability of your most critical endpoints and ports, let’s start here.

Pre-requisites

  • Publicly accessible HTTP(S) endpoint(s) to test
  • Access to Splunk Observability Cloud
Last Modified By Snehangshu Karmakar   Jan 27, 2025

Subsections of Uptime Test

Creating a test

  1. Open Synthetics. o11y nav with Synthetics icon highlighted

  2. Click the Add new test button on the right side of the screen, then select Uptime and HTTP test. image

  3. Name your test with your team name (provided by your workshop instructor), your initials, and any other details you’d like to include, like geographic region.

  4. For now let’s test a GET request. Fill in the URL field. You can use one of your own, or one of ours like https://online-boutique-eu.splunko11y.com, https://online-boutique-us.splunko11y.com, or https://www.splunk.com.

  5. Click Try now to validate that the endpoint is accessible from the selected location before saving the test. Try now does not count against your subscription usage, so this is a good practice to make sure you’re not wasting real test runs on a misconfigured test. image

    Tip

    A common reason for Try now to fail is that there is a non-2xx response code. If that is expected, add a Validation for the correct response code.

  6. Add any additional validations needed, for example: response code, response header, and response size. Advanced settings for test configuration

  7. Add and remove any locations you’d like. Keep in mind where you expect your endpoint to be available.

  8. Change the frequency to test your more critical endpoints more often, up to one minute. image

  9. Make sure “Round-robin” is on so the test will run from one location at a time, rather than from all locations at once.

    • If an endpoint is highly critical, think about whether it is worth having all locations tested at the same time every single minute. If you have automations built in with a webhook from a detector, or if you have strict SLAs you need to track, this could be worth it to have as much coverage as possible. But if you are doing more manual investigation, or if this is a less critical endpoint, you could be wasting test runs that execute while an issue is being investigated.
    • Remember that your license is based on the number of test runs per month. Turning Round-robin off will multiply the number of test runs by the number of locations you have selected (see the sketch after this list for the arithmetic).
  10. When you are ready for the test to start running, make sure “Active” is on, then scroll down and click Submit to save the test configuration. image
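
To make the run-count trade-off concrete, here is a rough sketch of the arithmetic in TypeScript (our own illustration with made-up numbers, not a product formula):

// Rough monthly run-count arithmetic for an Uptime test (illustrative only).
// With round-robin on, each scheduled run executes from one location;
// with it off, every selected location runs the test every time.
function monthlyTestRuns(frequencyMinutes: number, locations: number, roundRobin: boolean): number {
  const runsPerDay = (24 * 60) / frequencyMinutes;
  return runsPerDay * 30 * (roundRobin ? 1 : locations);
}

console.log(monthlyTestRuns(1, 5, true));  // 43,200 runs/month, one location at a time
console.log(monthlyTestRuns(1, 5, false)); // 216,000 runs/month across all five locations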

Now the test will start running with your saved configuration. Take a water break, then we’ll look at the results!

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Understanding results

  1. Click into a test summary view and play with the Performance KPIs chart filters to see how you can slice and dice your data. This is a good place to get started understanding trends. Later, we will see what custom charts look like, so you can tailor dashboards to the KPIs you care about most. KPI chart filters

    Workshop Question: Using the Performance KPIs chart

    What metrics are available? Is your data consistent across time and locations? Do certain locations run slower than others? Are there any spikes or failures?

  2. Click into a recent run either in the chart or in the table below. image

  3. If there are failures, look at the response to see whether you need to add a response code assertion (302 is a common one), whether some authorization is needed, or whether different request headers should be added. Here we have information about this particular test run, including whether it succeeded or failed, the location, timestamp, and duration, in addition to the other Uptime test metrics. Click through to see the response, request, and connection info as well. image If you need to edit the test for it to run successfully, click the test name in the top left breadcrumb on this run result page, then click Edit test on the top right of the test overview page. Remember to scroll down and click Submit to save your changes after editing the test configuration.

  4. In addition to the test running successfully, there are other metrics to measure the health of your endpoints. For example, Time to First Byte (TTFB) is a great indicator of performance, and you can optimize TTFB to improve end user experience (see the timing sketch after this list).

  5. Go back to the test overview page and change the Performance KPIs chart to display First Byte time, and change the interval if needed to better see trends in the data. Performance KPIs for Uptime Tests In the example above, we can see that TTFB varies consistently between locations. Knowing this, we can keep location in mind when reporting on metrics. We could also improve the experience, for example by serving users in those locations an endpoint hosted closer to them, which should reduce network latency. We can also see some slight variations in the results over time, but overall we already have a good idea of our baseline for this endpoint’s KPIs. When we have a baseline, we can alert on worsening metrics as well as visualize improvements.

    Tip

    We are not setting a detector on this test yet, to make sure it is running consistently and successfully. If you are testing a highly critical endpoint and want to be alerted on it ASAP (and have tolerance for potential alert noise), jump to Single Test Detectors.
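
For intuition about what a First Byte measurement captures, the browser’s standard Navigation Timing API exposes the same idea. The following generic TypeScript snippet (not a Synthetics feature, just an illustration) computes TTFB for the current page:

// Generic browser-side TTFB calculation using the Navigation Timing API.
// Splunk Synthetics computes its own First Byte metric; this is only for intuition.
const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
if (nav) {
  // Time from when the request was sent until the first response byte arrived
  const ttfb = nav.responseStart - nav.requestStart;
  console.log(`TTFB: ${ttfb.toFixed(0)} ms`);
}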

Once you have your Uptime test running successfully, let’s move on to the next test type.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

API Test

5 minutes  

The API test provides a flexible way to check the functionality and performance of API endpoints. The shift toward API-first development has magnified the necessity to monitor the back-end services that provide your core front-end functionality.

Whether you’re interested in testing multi-step API interactions or you want to gain visibility into the performance of your endpoints, the API Test can help you accomplish your goals.

This exercise will walk through a multi-step test on the Spotify API. You can also use it as a reference to build tests on your own APIs or on those of your critical third parties.

API test result

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Subsections of API Test

Global Variables

Global variables allow us to use stored strings in multiple tests, so we only need to update them in one place.

View the global variable that we’ll use to perform our API test. Click on Global Variables under the cog icon. The global variable named env.encoded_auth is the one that we’ll use to build the Spotify API transaction.

placeholder
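
While the workshop provides this variable for you, a value like this is conventionally the Base64 encoding of client_id:client_secret, as used by Spotify’s client-credentials flow. A hypothetical TypeScript sketch (the credentials below are placeholders, not workshop values):

// Hypothetical construction of a value like env.encoded_auth:
// Base64("client_id:client_secret"), per Spotify's client-credentials flow.
const clientId = "your-spotify-client-id";         // placeholder
const clientSecret = "your-spotify-client-secret"; // placeholder

const encodedAuth = Buffer.from(`${clientId}:${clientSecret}`).toString("base64");
console.log(`AUTHORIZATION: Basic ${encodedAuth}`);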

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Create new API test

Create a new API test by clicking on the Add new test button and select API test from the dropdown. Name the test using your team name, your initials, and Spotify API e.g. [Daisy] RWC - Spotify API

placeholder

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Authentication Request

Click on + Add requests and enter the request step name e.g. Authenticate with Spotify API.

placeholder

Expand the Request section, change the request method to POST using the drop-down, and enter the following URL:

https://accounts.spotify.com/api/token

In the Payload body section enter the following:

grant_type=client_credentials

Next, add two + Request headers with the following key/value pairings:

  • CONTENT-TYPE: application/x-www-form-urlencoded
  • AUTHORIZATION: Basic {{env.encoded_auth}}

Expand the Validation section and add the following extraction:

  • Extract from Response body JSON $.access_token as access_token

This will parse the JSON payload that is received from the Spotify API, extract the access token and store it as a custom variable.

Add payload token

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Search Request

Click on + Add Request to add the next step. Name the step Search for Tracks named “Up around the bend”.

Expand the Request section and change the request method to GET and enter the following URL:

https://api.spotify.com/v1/search?q=Up%20around%20the%20bend&type=track&offset=0&limit=5

Next, add two request headers with the following key/value pairings:

  • CONTENT-TYPE: application/json
  • AUTHORIZATION: Bearer {{custom.access_token}}
    • This uses the custom variable we created in the previous step!

Add search request

Expand the Validation section and add the following extraction:

  • Extract from Response body JSON $.tracks.items[0].id as track.id

Add search payload
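
If it helps to see the whole transaction outside the Synthetics UI, here is a rough TypeScript equivalent of the two request steps (the endpoints, headers, and extractions are the ones configured above; encodedAuth stands in for the env.encoded_auth global variable, and error handling is omitted):

// Sketch of the two-step Spotify transaction the API test performs.
async function runSpotifyTransaction(encodedAuth: string): Promise<string> {
  // Step 1: authenticate and extract access_token (the test's first extraction)
  const tokenResp = await fetch("https://accounts.spotify.com/api/token", {
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
      "Authorization": `Basic ${encodedAuth}`,
    },
    body: "grant_type=client_credentials",
  });
  const { access_token } = (await tokenResp.json()) as { access_token: string };

  // Step 2: search for the track, then extract the first result's id,
  // mirroring the $.tracks.items[0].id extraction
  const searchResp = await fetch(
    "https://api.spotify.com/v1/search?q=Up%20around%20the%20bend&type=track&offset=0&limit=5",
    { headers: { "Authorization": `Bearer ${access_token}` } },
  );
  const results = (await searchResp.json()) as { tracks: { items: { id: string }[] } };
  return results.tracks.items[0].id;
}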

To validate the test before saving, change the location as needed and click Try now. See the docs for more information on the try now feature.

try now

When the validation is successful, click on < Return to test to return to the test configuration page. Then click Save to save the API test.

Extra credit

Have more time to work on this test? Take a look at the Response Body in one of your run results. What additional steps would make this test more thorough? Edit the test, and use the Try now feature to validate any changes you make before you save the test.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

View results

Wait for a few minutes for the test to provision and run. Once you see the test has run successfully, click on the run to view the results:

API test result

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Single Page Browser Test

5 minutes  

We have started testing our endpoints, now let’s test the front end browser experience.

Starting with a single page browser test will let us capture how first- and third-party resources impact how our end users experience our browser-based site. It also allows us to start to understand our user experience metrics before introducing the complexity of multiple steps in one test.

A page where your users commonly “land” is a good choice to start with a single page test. This could be your site homepage, a section main page, or any other high-traffic URL that is important to you and your end users.

  1. Click Add new test and select Browser test

  2. Include your team name and initials in the test name. Add to the Name and Custom properties to describe the scope of the test (like Desktop for device type). Then click + Edit steps

  3. Change the transaction label (top left) and step name (on the right) to something readable that describes the step. Add the URL you’d like to test. Your workshop instructor can provide you with a URL as well. In the below example, the transaction is “Home” and the step name is “Go to homepage”.

    Transaction and step label

  4. To validate the test, change the location as needed and click Try now. See the docs for more information on the try now feature.

  5. Wait for the test validation to complete. If the test validation failed, double check your URL and test location and try again. With Try now you can see what the result of the test will be if it were saved and run as-is.

    Try Now browser test results

  6. Click < Return to test to continue the configuration.

  7. Edit the locations you want to use, keeping in mind any regional rules you have for your site.

  8. You can edit the Device and Frequency or leave them at their default values for now. Click Submit to save the test and start running it.

Bonus Exercise

Have a few spare seconds? Copy this test and change just the title and device type, and save. Now you have visibility into the end user experience on another device and connection speed!

While our Synthetic tests are running, let’s see how RUM is instrumented to start getting data from our real users.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

RUM

15 minutes  

With RUM instrumented, we will be able to better understand our end users, what they are doing, and what issues they are encountering.

This workshop walks through how our demo site is instrumented and how to interpret the data. If you already have a RUM license, this will help you understand how RUM works and how you can use it to optimize your end user experience.

Tip

Our Docs also contain guidance such as scenarios using Splunk RUM and demo applications to test out RUM for mobile apps.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Subsections of RUM

Overview

The aim of this Splunk Real User Monitoring (RUM) workshop is to let you:

  • Shop for items on the Online Boutique to create traffic, and create RUM User Sessions¹ that you can view in the Splunk Observability Suite.
  • See an overview of the performance of all your application(s) in the Application Summary Dashboard
  • Examine the performance of a specific website with RUM metrics.

In order to reach this goal, we will use an online boutique to order various products. While shopping on the online boutique you will create what is called a User Session.

You may encounter some issues with this web site, and you will use Splunk RUM to identify the issues, so they can be resolved by the developers.

The workshop host will provide you with a URL for an online boutique store that has RUM enabled.

Each of these Online Boutiques is also being visited by a few synthetic users; this will allow us to generate more live data to be analyzed later.


  1. A RUM User session is a “recording” of a collection of user interactions on an application, basically collecting a website or app’s performance measured straight from the browser or Mobile App of the end user. To do this a small amount of JavaScript is embedded in each page. This script then collects data from each user as he or she explores the page, and transfers that data back for analysis. ↩︎

Last Modified By Snehangshu Karmakar   Jan 27, 2025

RUM instrumentation in a browser app

  • Check the <head> section of the Online-boutique webpage in your browser
  • Find the code that instruments RUM

1. Browse to the Online Boutique

Open the frontend demo site in your browser (EU or US).

2. Inspecting the HTML source

The changes needed for RUM are placed in the <head> section of the host’s web page. Right click to view the page source or to inspect the code. Below is an example of the <head> section with RUM:

 4<!DOCTYPE html>
 5<html lang="en">
 6
 7<head>
 8    <meta charset="UTF-8">
 9    <meta name="viewport" content="width=device-width, initial-scale=1.0, shrink-to-fit=no">
10    <meta http-equiv="X-UA-Compatible" content="ie=edge">
11    <title>Online Boutique</title>
12    <link href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.1/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-WskhaSGFgHYWDcbwN70/dfYBj47jz9qbsMId/iRN3ewGhXQFZCSftd1LZCfmhktB"
13        crossorigin="anonymous">
14    <link href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700" rel="stylesheet">
15    <link rel="stylesheet" type="text/css" href="/static/styles/styles.css">
16    <link rel="stylesheet" type="text/css" href="/static/styles/cart.css">
17    <link rel="stylesheet" type="text/css" href="/static/styles/order.css">
18    <link rel='shortcut icon' type='image/x-icon' href='/static/favicon.ico' />
19 
20 
21    <script src="https://cdn.signalfx.com/o11y-gdi-rum/latest/splunk-otel-web.js" crossorigin="anonymous"></script>
22    <script src="https://cdn.signalfx.com/o11y-gdi-rum/latest/splunk-otel-web-session-recorder.js" crossorigin="anonymous"></script>
23    <script>
24        window.SplunkRum && window.SplunkRum.init({ 
25            beaconUrl: 'https://rum-ingest.eu0.signalfx.com/v1/rum',
26            rumAuth: '<redacted>',
27            app: 'online-boutique-eu-store',
28            version: '1',
29            environment: 'online-boutique-eu'
30        });
31        SplunkSessionRecorder.init({
32            beaconUrl: 'https://rum-ingest.eu0.signalfx.com/v1/rumreplay',
33            rumAuth: '<redacted>'
34        });
35    </script>
36    <script>
37        const Provider = SplunkRum.provider; 
38        var tracer=Provider.getTracer('appModuleLoader');
39    </script>
40</head>
41
42<body>

This code enables RUM Tracing, Session Replay, and Custom Events to better understand performance in the context of user workflows:

  • The first part indicates where to download the Splunk OpenTelemetry JavaScript file from: https://cdn.signalfx.com/o11y-gdi-rum/latest/splunk-otel-web.js (this can also be hosted locally if required).
  • The next section defines where to send the traces via the beacon URL: beaconUrl: 'https://rum-ingest.eu0.signalfx.com/v1/rum'.
  • The RUM access token: rumAuth: '<redacted>'.
  • Identification tags app and environment to identify the application in the Splunk RUM UI, e.g.:
      <script> 
          window.SplunkRum && window.SplunkRum.init({
              beaconUrl: 'https://rum-ingest.eu0.signalfx.com/v1/rum',
              rumAuth: '<redacted>',
              app: 'online-boutique-eu-store',
              version: '1',
              environment: 'online-boutique-eu'
          });
          SplunkSessionRecorder.init({
              beaconUrl: 'https://rum-ingest.eu0.signalfx.com/v1/rumreplay',
              rumAuth: '<redacted>'
          });
      </script>

The above lines 21 and 23-30 are all that is required to enable RUM on your website!

Lines 22 and 31-34 are optional if you want Session Replay instrumented.

Lines 36-39 (var tracer=Provider.getTracer('appModuleLoader');) add a Custom Event for every page change, allowing you to better track your website conversions and usage. This may or may not be instrumented for this workshop.
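
To illustrate what you can do with a tracer like that, here is a hypothetical sketch of recording a custom workflow span (the span and attribute names are ours, not part of the demo site):

// Hypothetical custom span using the tracer created in the <head> snippet above.
const tracer = (window as any).SplunkRum.provider.getTracer("appModuleLoader");

function trackCheckout(cartSize: number): void {
  const span = tracer.startSpan("checkout"); // shows up as a custom span in RUM
  span.setAttribute("cart.size", cartSize);  // extra context for later analysis
  try {
    // ... run the checkout logic here ...
  } finally {
    span.end(); // always end the span so it is reported
  }
}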

Exercise

Time to shop! Take a minute to open the workshop store URL in as many browsers and devices as you’d like, shop around, add items to cart, checkout, and feel free to close the shopping browsers when you’re finished. Keep in mind this is a lightweight demo shop site, so don’t be alarmed if the cart doesn’t match the item you picked!

Last Modified By Snehangshu Karmakar   Jan 27, 2025

RUM Landing Page

  • Visit the RUM landing page and check the overview of the performance of all your RUM enabled applications with the Application Summary Dashboard (both Mobile and Web based)

1. Visit the RUM Landing Page

Log in to Splunk Observability. From the left side menu bar select RUM RUM-ico. This will bring you to the RUM Landing Page.

The goal of this page is to give you, in a single view, a clear indication of the health, performance, and potential errors found in your application(s), and to allow you to dive deeper into the information about your User Sessions collected from your web page/app. You will have a pane for each of your active RUM applications. (The view below is the default expanded view.)

RUM-App-sum

If you have multiple applications (which will be the case when every attendee is using their own ec2 instance for the RUM workshop), the pane view may be automatically reduced by collapsing the panes as shown below:

RUM-App-sum-collapsed

You can expand a condensed RUM Application Summary View to the full dashboard by clicking on the small browser RUM-browser or mobile RUM-mobile icon (depending on the type of application: Mobile or Browser based) on the left, in front of the application’s name, highlighted by the red arrow.

First find the right application to use for the workshop:

If you are participating in a standalone RUM workshop, the workshop leader will tell you the name of the application to use; in the case of a combined workshop, it will follow the naming convention we used for IM and APM and use the ec2 node name as a unique ID, like jmcj-store, shown as the last app in the screenshot above.

2. Configure the RUM Application Summary Dashboard Header Section

The RUM Application Summary Dashboard consists of 6 major sections. The first is the selection header, where you can set/filter a number of options:

  • A drop down for the Time Window you’re reviewing (You are looking at the past 15 minutes by default)
  • A drop down to select the Environment¹ you want to look at. This allows you to focus on just the subset of applications belonging to that environment, or Select all to view all available.
  • A drop down list with the various Apps being monitored. You can use the one provided by the workshop host or select your own. This will focus you on just one application.
  • A drop down to select the Source, Browser or Mobile applications to view. For the Workshop leave All selected.
  • A hamburger menu located at the right of the header allowing you to configure some settings of your Splunk RUM application. (We will visit this in a later section).

RUM-SummaryHeader RUM-SummaryHeader

For the workshop, let's do a deeper dive into the Application Summary screen in the next section: Check Browser Applications health at a glance.


A common application deployment pattern is to have multiple, distinct application environments that don’t interact directly with each other but that are all being monitored by Splunk APM or RUM: for instance, quality assurance (QA) and production environments, or multiple distinct deployments in different datacenters, regions or cloud providers.


  1. A deployment environment is a distinct deployment of your system or application that allows you to set up configurations that don’t overlap with configurations in other deployments of the same application. Separate deployment environments are often used for different stages of the development process, such as development, staging, and production. ↩︎

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Check Browser Applications health at a glance

  • Get familiar with the UI and options available from this landing page
  • Identify Page Views/JavaScript Errors and Request/Errors in a single view
  • Check the Web Vitals metrics and any Detector that has fired in relation to your Browser Application

Application Summary Dashboard

1. Header Bar

As seen in the previous section, the RUM Application Summary Dashboard consists of five major sections.
The first section is the selection header, where you can collapse the pane via the RUM-browser RUM-browser Browser icon or the > in front of the application name (jmcj-store in the example below). It also provides access to the Application Overview page if you click the link with your application name.

Further, you can also open the Application Overview or App Health Dashboard via the triple dot trippleburger trippleburger menu on the right.

RUM-SummaryHeader RUM-SummaryHeader

For now, let’s look at the high level information we get on the application summary dashboard.

The RUM Application Summary Dashboard is focused on providing you with at a glance highlights of the status of your application.

2. Page Views / JavaScript Errors & Network Requests / Errors

The Page Views / JavaScript Errors and Network Requests / Errors charts in this first section show the quantity and trend of these issues in your application. These could be JavaScript errors, or failed network calls to back-end services.

RUM-chart RUM-chart

In the example above you can see that there are no failed network calls in the Network chart, but in the Page View chart you can see that a number of pages do experience some errors. These are often not visible to regular users, but can seriously impact the performance of your website.

You can see the count of the Page Views / Network Requests / Errors by hovering over the charts.

RUM-chart-clicked RUM-chart-clicked

3. JavaScript Errors

With the second section of the RUM Application Summary Dashboard we are showing you an overview of the JavaScript errors occurring in your application, along with a count of each error.

RUM-javascript RUM-javascript

In the example above you can see there are three JavaScript errors, one that appears 29 times in the selected time slot, and the other two each appear 12 times.

If you click on one of the errors, a pop-out opens that shows a summary (below) of the errors over time, along with a Stack Trace of the JavaScript error, giving you an indication of where the problems occurred. (We will see this in more detail in one of the following sections.)

RUM-javascript-chart RUM-javascript-chart

4. Web Vitals

The next section of the RUM Application Summary Dashboard is showing you Google’s Core Web Vitals, three metrics that are not only used by Google in its search ranking system, but also quantify end user experience in terms of loading, interactivity, and visual stability.

WEB-vitals WEB-vitals

As you can see, our site is well behaved and scores Good for all three metrics. These metrics can be used to identify the effect that changes to your application have, and help you improve the performance of your site.

If you click on any of the metrics shown in the Web Vitals pane you will be taken to the corresponding Tag Spotlight Dashboard. For example, clicking on the Largest Contentful Paint (LCP) chartlet takes you to a dashboard similar to the screenshot below, which gives you timeline and table views of how this metric has performed. This should allow you to spot trends and identify where the problem may be more common, such as a particular operating system, geolocation, or browser version.

WEB-vitals-tag WEB-vitals-tag

5. Most Recent Detectors

The final section of the RUM Application Summary Dashboard is focused on providing you an overview of recent detectors that have triggered for your application. We have created a detector for this screenshot, but your pane will be empty for now. We will add some detectors to your site and make sure they are triggered in one of the next sections.

detectors detectors

In the screenshot you can see we have a critical alert for the RUM Aggregated View Detector, along with a Count of how often this alert has triggered in the selected time window. If you happen to have an alert listed, you can click on the name of the alert (shown as a blue link) and you will be taken to the Alert Overview page showing the details of the alert. (Note: this will move you away from the current page; use the Back option of your browser to return to the overview page.)

alert alert


Exercise

Please take a few minutes to experiment with the RUM Application Summary Dashboard and the underlying chart and dashboards before going on to the next section.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Analyzing RUM Metrics

  • See RUM Metrics and Session information in the RUM UI
  • See correlated APM traces in the RUM & APM UI

1. RUM Overview Pages

From your RUM Application Summary Dashboard you can see detailed information by opening the Application Overview Page via the triple dot trippleburger trippleburger menu on the right and selecting Open Application Overview, or by clicking the link with your application name, which is jmcj-rum-app in the example below.

RUM-SummaryHeader RUM-SummaryHeader

This will take you to the RUM Application Overview Page screen as shown below.

RUM app overview with UX metrics RUM app overview with UX metrics

2. RUM Browser Overview

2.1. Header

The RUM UI consists of five major sections. The first is the selection header, where you can set/filter a number of options:

  • A drop down for the time window you’re reviewing (You are looking at the past 15 minutes in this case)
  • A drop down to select the Comparison window (You are comparing current performance on a rolling window - in this case compared to 1 hour ago)
  • A drop down with the available Environments to view
  • A drop down list with the Various Web apps
  • Optionally a drop down to select Browser or Mobile metrics (Might not be available in your workshop)

RUM-Header RUM-Header

2.2. UX Metrics

By default, RUM prioritizes the metrics that most directly reflect the experience of the end user.

All of the dashboard charts allow us to compare trends over time, create detectors, and click through to further diagnose issues.

First, we see page load and route change information, which can help us understand if something unexpected is impacting user traffic trends.

Page load and route change charts Page load and route change charts

Next, Google has defined Core Web Vitals to quantify the user experience as measured by loading, interactivity, and visual stability. Splunk RUM builds in Google’s thresholds into the UI, so you can easily see if your metrics are in an acceptable range.

Core Web Vitals charts Core Web Vitals charts

  • Largest Contentful Paint (LCP), measures loading performance. How long does it take for the largest block of content in the viewport to load? To provide a good user experience, LCP should occur within 2.5 seconds of when the page first starts loading.
  • First Input Delay (FID), measures interactivity. How long does it take to be able to interact with the app? To provide a good user experience, pages should have a FID of 100 milliseconds or less.
  • Cumulative Layout Shift (CLS), measures visual stability. How much does the content move around after the initial load? To provide a good user experience, pages should maintain a CLS of 0.1 or less.

Improving Web Vitals is a key component to optimizing your end user experience, so being able to quickly understand them and create detectors if they exceed a threshold is critical.
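
If you want to see where a metric like LCP comes from, you can watch it in your own browser's console; a minimal sketch using the standard PerformanceObserver web API (the same browser data RUM agents build on):

// Log each Largest Contentful Paint candidate for the current page.
// Paste into the browser console; this is a standard web API, not Splunk-specific.
new PerformanceObserver(function (list) {
  list.getEntries().forEach(function (entry) {
    // startTime is the candidate's render time in ms since navigation start
    console.log('LCP candidate (ms):', entry.startTime, entry.element);
  });
}).observe({ type: 'largest-contentful-paint', buffered: true });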

Google has some great resources if you want to learn more, for example the business impact of Core Web Vitals.

2.3. Front-end health

Common causes of frontend issues are JavaScript errors and long tasks, which can especially affect interactivity. Creating detectors on these indicators helps us investigate interactivity issues before our users report them, allowing us to build workarounds or roll back related releases faster if needed. Learn more about optimizing long tasks for better end user experience!

JS error charts JS error charts Long task charts Long task charts

2.4. Back-end health

Common back-end issues affecting user experience are network issues and resource requests. In this example, we clearly see a spike in Time To First Byte that lines up with a resource request spike, so we already have a good starting place to investigate.

Back-end health charts Back-end health charts

  • Time To First Byte (TTFB), measures how long it takes for a client’s browser to receive the first byte of the response from the server. The longer it takes for the server to process the request and send a response, the slower your visitors’ browser is at displaying your page.
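
You can inspect TTFB for any page yourself; a minimal sketch using the standard Navigation Timing API in the browser console:

// Time To First Byte for the current page, from Navigation Timing.
// responseStart is the time in ms from the start of navigation until
// the first byte of the server's response arrives.
var nav = performance.getEntriesByType('navigation')[0];
if (nav) {
  console.log('TTFB (ms):', nav.responseStart);
}
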
Last Modified By Snehangshu Karmakar   Jan 27, 2025

Analyzing RUM Tags in the Tag Spotlight view

  • Look into the Metrics views for the various endpoints and use the Tags sent via the Tag spotlight for deeper analysis

1. Find a URL for the Cart endpoint

From the RUM Overview page, please select the URL for the Cart endpoint to dive deeper into the information available for this endpoint.

RUM-Cart2 RUM-Cart2

Once you have selected and clicked on the blue URL, you will find yourself in the Tag Spotlight overview.

RUM-Tag RUM-Tag

Here you will see all of the tags that have been sent to Splunk RUM as part of the RUM traces. The tags displayed will be relevant to the overview that you have selected. These are generic Tags created automatically when the Trace was sent, and additional Tags you have added to the trace as part of the configuration of your website.

Additional Tags

We are already sending two additional tags; you have seen them defined in the beacon URL that was added to your website in the first section of this workshop: app: "[nodename]-store", environment: "[nodename]-workshop". You can add additional tags in a similar way.
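
One way to add tags of your own is through the agent's global attributes; a minimal sketch, assuming the splunk-otel-web agent from the instrumentation section supports the globalAttributes init option (the extra attribute names below are made up for illustration):

// Sketch only: the same init call as before, plus illustrative extra tags.
SplunkRum.init({
  beaconUrl: 'https://rum-ingest.eu0.signalfx.com/v1/rum',
  rumAuth: '<redacted>',
  app: '[nodename]-store',
  environment: '[nodename]-workshop',
  // globalAttributes are attached to every RUM span as additional tags
  globalAttributes: {
    'app.version': '1.4.0',   // hypothetical release tag
    'enduser.tier': 'beta'    // hypothetical audience tag
  }
});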

In our example we have selected the Document Load Latency view as shown here:

RUM-Header RUM-Header

You can select any of the following Tag views, each focused on a specific metric.

RUM-views RUM-views


2. Explore the information in the Tag Spotlight view

The Tag Spotlight is designed to help you identify problems, either through the chart view, where you may quickly identify outliers, or via the Tags.

In the Document Load Latency view, if you look at the Browser, Browser Version & OS Name Tag views, you can see the various browser types and versions, as well as the underlying OS.

This makes it easy to identify problems related to specific browser or OS versions, as they would be highlighted.

RUM-Tag2 RUM-Tag2

In the above example you can see that Firefox had the slowest response, that various browser versions (Chrome) have different response times, and that the Android devices responded slowly.

A further example is the regional Tags that you can use to identify problems related to ISPs, locations, etc. Here you should be able to find the location you have been using to access the Online Boutique. Drill down by selecting the town or country you are accessing the Online Boutique from, clicking on the name as shown below (City of Amsterdam):

RUM-click RUM-click

This will select only the traces relevant to the city selected as shown below:

RUM-Adam RUM-Adam

By selecting the various Tags you build up a filter; you can see the current selection below.

RUM-Adam RUM-Adam

To clear the filter and see every trace click on Clear All at the top right of the page.

If the overview page is empty or shows RUM-Adam RUM-Adam, no traces have been received in the selected time slot. You need to increase the time window at the top left; you can start with the Last 12 hours, for example.

You can then use your mouse to select the time slot you want, as shown in the view below, and activate that time filter by clicking on the little spyglass icon.

RUM-time RUM-time

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Analyzing RUM Sessions

  • Dive into RUM Session information in the RUM UI
  • Identify JavaScript errors in the span of a user interaction

1. Again select the cart URL

After you have focused the time slot with the time selector, you need to reselect the cart URL from the Url Name view, as shown below:

RUM-Cart-3 RUM-Cart-3

In the example above we selected http://34.246.124.162:81/cart

2. Drill down in the Sessions

After you have analyzed the information and drilled down via the Tag Spotlight to a subset of the traces, you can view the actual session as it was run by the end-user’s browser.

You do this by clicking on the link User Sessions as shown below:

RUM-Header RUM-Header

This will give you a list of sessions that matched both the time filter and the subset selected in the Tag Profile.

Select one by clicking on the session ID. It is a good idea to select one that has the longest duration (preferably over 500 ms).

RUM-Header RUM-Header

Once you have selected the session, you will be taken to the session details page. As you are selecting a specific action that is part of the session, you will likely arrive somewhere in the middle of the session, at the moment of the interaction.

You can see the URL https://online-boutique-eu.splunko11y.com/cart, the one you selected earlier, is where we are focusing on in the session stream.

RUM-Session-Tag RUM-Session-Tag

Scroll down a little bit on the page, so you see the end of the operation as shown below.

RUM-Session-info RUM-Session-info

You can see that we have received a few JavaScript Console errors that may not have been detected or visible to the end users. To examine these in more detail, click on the middle one that says: Cannot read properties of undefined (reading 'Prcie')

RUM-Session-info RUM-Session-info

This will cause the page to expand and show the Span detail for this interaction. It will contain a detailed error.stack that you can pass on to the developer to solve the issue. You may have noticed when buying in the Online Boutique that the final total was always $0.00.

RUM-Session-info RUM-Session-info

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Advanced Synthetics

30 minutes  

Introduction

This workshop walks you through using the Chrome DevTools Recorder to create a synthetic test on a Splunk demonstration environment or on your own public website.

The exported JSON from the Chrome DevTools Recorder will then be used to create a Splunk Synthetic Monitoring Real Browser Test.

Pre-requisites

  • Google Chrome Browser installed
  • Publicly browser-accessible URL
  • Access to Splunk Observability Cloud

Supporting resources

  1. Lantern: advanced Selectors for multi-step browser tests
  2. Chrome for Developers DevTools Tips
  3. web.dev Core Web Vitals reference
Last Modified By Snehangshu Karmakar   Jan 27, 2025

Subsections of Advanced Synthetics

Record a test

Write down a short user journey you want to test. Remember: smaller bites are easier to chew! In other words, get started with just a few steps. This makes the test not only easier to create and maintain, but also easier to understand and act on. Test the features essential to your users, like a support contact form, login widget, or date picker.

Note

Record the test in the same type of viewport that you want to run it in. For example, if you want to run a test on a mobile viewport, narrow your browser width to mobile size and refresh before starting the recording. This way you capture the correct elements, which could change depending on responsive style rules.

Open your starting URL in Chrome Incognito. This is important so you're not carrying cookies into the recording, which we won't set up in the Synthetic test by default. If your workshop instructor does not have a custom URL, feel free to use https://online-boutique-eu.splunko11y.com or https://online-boutique-us.splunko11y.com, which are used in the examples below.

Open the Chrome DevTools Recorder

Next, open the Developer Tools (in the new tab that was opened above) by pressing Ctrl + Shift + I on Windows or Cmd + Option + I on a Mac, then select Recorder from the top-level menu or the More tools flyout menu.

Open Recorder Open Recorder

Note

Site elements might change depending on viewport width. Before recording, set your browser window to the correct width for the test you want to create (Desktop, Tablet, or Mobile). Change the DevTools “dock side” to pop out as a separate window if it helps.

Create a new recording

With the Recorder panel open in the DevTools window, click on the Create a new recording button to start.

Recorder Recorder

For the Recording Name use your initials to prefix the name of the recording e.g. <your initials> - <website name>. Click on Start Recording to start recording your actions.

Recording Name Recording Name

Now that we are recording, complete a few actions on the site. An example for our demo site is:

  • Click on Vintage Camera Lens
  • Click on Add to Cart
  • Click on Place Order
  • Click on End recording in the Recorder panel.

End Recording End Recording

Export the recording

Click on the Export button:

Export button Export button

Select JSON as the format, then click on Save

Export JSON Export JSON

Save JSON Save JSON

Congratulations! You have successfully created a recording using the Chrome DevTools Recorder. Next, we will use this recording to create a Real Browser Test in Splunk Synthetic Monitoring.
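
For reference, the exported JSON for the demo journey above looks like this: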


{
    "title": "RWC - Online Boutique",
    "steps": [
        {
            "type": "setViewport",
            "width": 1430,
            "height": 1016,
            "deviceScaleFactor": 1,
            "isMobile": false,
            "hasTouch": false,
            "isLandscape": false
        },
        {
            "type": "navigate",
            "url": "https://online-boutique-eu.splunko11y.com/",
            "assertedEvents": [
                {
                    "type": "navigation",
                    "url": "https://online-boutique-eu.splunko11y.com/",
                    "title": "Online Boutique"
                }
            ]
        },
        {
            "type": "click",
            "target": "main",
            "selectors": [
                [
                    "div:nth-of-type(2) > div:nth-of-type(2) a > div"
                ],
                [
                    "xpath//html/body/main/div/div/div[2]/div[2]/div/a/div"
                ],
                [
                    "pierce/div:nth-of-type(2) > div:nth-of-type(2) a > div"
                ]
            ],
            "offsetY": 170,
            "offsetX": 180,
            "assertedEvents": [
                {
                    "type": "navigation",
                    "url": "https://online-boutique-eu.splunko11y.com/product/66VCHSJNUP",
                    "title": ""
                }
            ]
        },
        {
            "type": "click",
            "target": "main",
            "selectors": [
                [
                    "aria/ADD TO CART"
                ],
                [
                    "button"
                ],
                [
                    "xpath//html/body/main/div[1]/div/div[2]/div/form/div/button"
                ],
                [
                    "pierce/button"
                ],
                [
                    "text/Add to Cart"
                ]
            ],
            "offsetY": 35.0078125,
            "offsetX": 46.4140625,
            "assertedEvents": [
                {
                    "type": "navigation",
                    "url": "https://online-boutique-eu.splunko11y.com/cart",
                    "title": ""
                }
            ]
        },
        {
            "type": "click",
            "target": "main",
            "selectors": [
                [
                    "aria/PLACE ORDER"
                ],
                [
                    "div > div > div.py-3 button"
                ],
                [
                    "xpath//html/body/main/div/div/div[4]/div/form/div[4]/button"
                ],
                [
                    "pierce/div > div > div.py-3 button"
                ],
                [
                    "text/Place order"
                ]
            ],
            "offsetY": 29.8125,
            "offsetX": 66.8203125,
            "assertedEvents": [
                {
                    "type": "navigation",
                    "url": "https://online-boutique-eu.splunko11y.com/cart/checkout",
                    "title": ""
                }
            ]
        }
    ]
}
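
If you'd like to sanity-check a recording outside of Chrome before importing it, the export can be replayed locally with Google's @puppeteer/replay library; a minimal sketch, assuming Node.js and the recording saved as recording.json:

// Replay a Chrome DevTools Recorder export locally.
// Requires: npm install puppeteer @puppeteer/replay
const fs = require('fs');
const { parse, createRunner } = require('@puppeteer/replay');

(async () => {
  // Parse the exported JSON into a typed user flow.
  const flow = parse(JSON.parse(fs.readFileSync('recording.json', 'utf8')));
  // Run the flow with the default Puppeteer-based runner.
  const runner = await createRunner(flow);
  await runner.run();
  console.log('Replay finished without errors');
})();
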
Last Modified By Snehangshu Karmakar   Jan 27, 2025

Create a Browser Test

In Splunk Observability Cloud, navigate to Synthetics and click on Add new test.

From the dropdown select Browser test.

Add new test Add new test

You will then be presented with the Browser test content configuration page.

New Test New Test

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Import JSON

To begin configuring our test, we need to import the JSON that we exported from the Chrome DevTools Recorder. To enable the Import button, we must first give our test a name e.g. [<your team name>] <your initials> - Online Boutique.

Import Import

Once the Import button is enabled, click on it and either drop the JSON file that you exported from the Chrome DevTools Recorder or upload the file.

Import JSON Import JSON

Once the JSON file has been uploaded, click on Continue to edit steps

Import Complete Import Complete

Edit Steps Edit Steps

Before we make any edits to the test, let's first configure the settings. Click on < Return to test

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Test settings

The simple settings allow you to configure the basics of the test:

  • Name: The name of the test (e.g. RWC - Online Boutique).
  • Details:
    • Locations: The locations where the test will run from.
    • Device: Emulate different devices and connection speeds. Also, the viewport will be adjusted to match the chosen device.
    • Frequency: How often the test will run.
    • Round-robin: If multiple locations are selected, the test will run from one location at a time, rather than all locations at once.
    • Active: Set the test to active or inactive.

For this workshop, we will configure the locations that we wish to monitor from. Click in the Locations field and you will be presented with a list of global locations (over 50 in total).

Global Locations Global Locations

Select the following locations:

  • AWS - N. Virginia
  • AWS - London
  • AWS - Melbourne

Once complete, scroll down and click Submit to save the test.

The test will now be scheduled to run every 5 minutes from the 3 locations that we have selected. This does take a few minutes for the schedule to be created.

So while we wait for the test to be scheduled, click on Edit test so we can go through the Advanced settings.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Advanced Test Settings

Click on Advanced. These settings are optional and can be used to further configure the test.

Note

In the case of this workshop, we will not be using any of these settings; this is for informational purposes only.

Advanced Settings Advanced Settings

  • Security:
    • TLS/SSL validation: When activated, this feature is used to enforce the validation of expired, invalid hostname, or untrusted issuer on SSL/TLS certificates.
    • Authentication: Add credentials to authenticate with sites that require additional security protocols, for example from within a corporate network. By using concealed global variables in the Authentication field, you create an additional layer of security for your credentials and simplify the ability to share credentials across checks.
  • Custom Content:
    • Custom headers: Specify custom headers to send with each request. For example, you can add a header in your request to filter out requests from analytics on the back end by sending a specific header in the requests. You can also use custom headers to set cookies.
    • Cookies: Set cookies in the browser before the test starts. For example, to prevent a popup modal from randomly appearing and interfering with your test, you can set cookies. Any cookies that are set will apply to the domain of the starting URL of the check. Splunk Synthetics Monitoring uses the public suffix list to determine the domain.
    • Host overrides: Add host override rules to reroute requests from one host to another. For example, you can create a host override to test an existing production site against page resources loaded from a development site or a specific CDN edge node.

Next, we will edit the test steps to provide more meaningful names for each step.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Edit test steps

To edit the steps, click on the + Edit steps or synthetic transactions button. From here, we are going to give each step a meaningful, readable name. That could look like:

  • Step 1: replace the text Go to URL with Go to Homepage.
  • Step 2: enter the text Select Typewriter.
  • Step 3: enter Add to Cart.
  • Step 4: enter Place Order.

Step names Step names

Note

If you’d like, group the test steps into Transactions and edit the transaction names. This is especially useful for Single Page Apps (SPAs), where the resource waterfall is not split by URL. We can also create charts and alerts based on transactions.

Click < Return to test to return to the test configuration page and click Save to save the test.

You will be returned to the test dashboard where you will see test results start to appear.

Scatterplot Scatterplot

Congratulations! You have successfully created a Real Browser Test in Splunk Synthetic Monitoring. Next, we will look into a test result in more detail.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

View test results

1. Click into a spike or failure in your test run results.

image image

2. What can you learn about this test run? If it failed, use the error message, filmstrip, video replay, and waterfall to understand what happened.

Single test run result Single test run result

3. What do you see in the resources? Make sure to click through all of the page (or transaction) tabs.

image image

Workshop Question

Do you see anything interesting? Common issues to find and fix include: unexpected response codes, duplicate requests, forgotten third parties, large or slow files, and long gaps between requests.

Want to learn more about specific performance improvements? Google and Mozilla have great resources to help understand what goes into frontend performance as well as in-depth details of how to optimize it.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Frontend Dashboards

15 minutes  

Go to Dashboards and find the End User Experience dashboard group.

Click the three dots on the top right to open the dashboard menu, select Save As, and include your team name and initials in the dashboard name.

Save to the dashboard group that matches your email address. Now you have your own copy of this dashboard to customize!

Dashboard save as Dashboard save as

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Subsections of Frontend Dashboards

Copying and editing charts

We have some good charts in our dashboard, but let’s add a few more.

  1. Go to Dashboards by clicking the dashboard icon on the left side of the screen. Find the Browser app health dashboard and scroll to the Largest Contentful Paint (LCP) chart. Click the chart actions icon to open the flyout menu, and click “Copy” to add this chart to your clipboard. copy chart copy chart

  2. Now you can continue to add any other charts to your clipboard by clicking the “add to clipboard” icon. copy chart inline icon copy chart inline icon

  3. When you have collected the charts you want on your dashboard, click the “create” icon on the top right. You might need to reload the page if you were looking at charts in another browser tab. create icon create icon

  4. Click the “Paste charts” menu option. paste charts paste charts

Now you are able to resize and edit the charts as you’d like!

Bonus: edit chart data

  1. Click the chart actions icon and select Open to edit the chart. chart actions menu chart actions menu

  2. Remove the existing Test signal. edit the test signal edit the test signal

  3. Click Add filter and type test: *yourInitials*. This will use a wildcard match so that all of the tests you have created that contain your initials (or any string you decide) will be pulled into the chart. add filter button add filter button

  4. Click into the functions to see how adding and removing dimensions changes how the data is displayed. For example, if you want all of your test location data rolled up, remove that dimension from the function. test signal functions test signal functions

  5. Change the chart name and description as appropriate, and click “Save and close” to commit your changes or just “Close” to cancel your changes. chart close buttons chart close buttons

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Events in context with chart data

Seeing the visualization of our KPIs is great. What’s better? KPIs in context with events! Overlaying events on a dashboard can help us more quickly understand if an event like a deployment caused a change in metrics, for better or worse.

  1. Your instructor will push a condition change to the workshop application. Click the event marker on any of your dashboard charts to see more details. condition change event condition change event

  2. In the dimensions, we can see more details about this specific event. If we click the event record, we can mark it for deletion if needed. event record event record event details event details

  3. We can also see a history of events in the event feed by clicking the icon on the top right of the screen, and selecting Event feed. event feed event feed

  4. Again, we can see details about recent events in this feed. event details event details

  5. We can also add new events in the GUI or via API (see the sketch after this list for the API route). To add a new event in the GUI, click the New event button. GUI add event button GUI add event button

  6. Name your event with your team name, initials, and what kind of event it is (deployment, campaign start, etc). Choose a timestamp, or leave as-is to use the current time, and click “Create”. create event form create event form

  7. Now, we need to make sure our new event is overlaid in this dashboard. Wait a minute or so (refresh the page if needed) and then search for the event in the Event overlay field. overlay event overlay event

  8. If your event is within the dashboard time window, you should now see it overlaid in your charts. Click “Save” to make sure your event overlay is saved to your dashboard! save dashboard save dashboard
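
For the API route mentioned above, here is a minimal sketch of what posting a custom event can look like, assuming the SignalFx v2 event ingest endpoint (Node.js 18+; the realm, token, and dimension values are placeholders to replace with your own):

// Sketch only: send a custom event to Splunk Observability Cloud.
const event = [{
  category: 'USER_DEFINED',
  eventType: 'team-sw-deployment',   // team name, initials, and event kind
  dimensions: { service: 'online-boutique', version: '1.4.0' },  // placeholders
  timestamp: Date.now()              // ms since epoch; omit to use "now"
}];

fetch('https://ingest.<realm>.signalfx.com/v2/event', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-SF-TOKEN': '<your ingest token>'
  },
  body: JSON.stringify(event)
}).then(function (res) { console.log('Ingest response:', res.status); });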

Keep in mind

Want to add context to that bug ticket, or show your manager how your change improved app performance? Seeing observability data in context with events not only helps with troubleshooting, but also helps us communicate with other teams.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Optimize End User Experiences

90 minutes   Author Sarah Ware

How can we use Splunk Observability to get insight into end user experience, and proactively test scenarios to improve that experience?

Sections:

  • Set up basic Synthetic tests to understand availability and performance ASAP
    • Uptime test
    • API test
    • Single page Browser test
  • Explore RUM to understand our real users
  • Write advanced Synthetics tests based on what we’ve learned about our users and what we need them to do
  • Customize dashboard charts to capture our KPIs, show trends, and show data in context of our events
  • Create Detectors to alert on our KPIs
Tip

Keep in mind throughout the workshop: how can I prioritize activities strategically to get the fastest time to value for my end users and for myself/ my developers?

Context

As a reminder, we need frontend performance monitoring to capture everything that goes into our end user experience. If we’re just monitoring the backend, we’re missing all of the other resources that are critical to our users’ success. Read What the Fastly Outage Can Teach Us About Observability for a real world example. Click the image below to zoom in. What goes into the front end What goes into the front end

References

Throughout this workshop, we will see references to resources to help further understand end user experience and how to optimize it. In addition to Splunk Docs for supported features and Lantern for tips and tricks, Google’s web.dev and Mozilla are great resources.

Remember that the specific libraries, platforms, and CDNs you use often also have their own specific resources. For example React, Wordpress, and Cloudflare all have their own tips to improve performance.

Author Sarah Ware   Last Modified By Snehangshu Karmakar   Jan 27, 2025

Subsections of Optimize End User Experiences

Synthetics

Let’s quickly set up some tests in Synthetics to immediately start understanding our end user experience, without waiting for real users to interact with our app.

We can capture not only the performance and availability of our own apps and endpoints, but also those third parties we rely on any time of the day or night.

Tip

If you find that your tests are being bot-blocked, see the docs for tips on how to allow Synthetic testing. If you need to test something that is not accessible externally, see the private location instructions.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Subsections of Synthetics

Uptime Test

5 minutes  

Introduction

The simplest way to keep an eye on endpoint availability is with an Uptime test. This lightweight test can run internally or externally around the world, as frequently as every minute. Because this is the easiest (and cheapest!) test to set up, and because this is ideal for monitoring availability of your most critical endpoints and ports, let's start here.

Pre-requisites

  • Publicly accessible HTTP(S) endpoint(s) to test
  • Access to Splunk Observability Cloud
Last Modified By Snehangshu Karmakar   Jan 27, 2025

Subsections of Uptime Test

Creating a test

  1. Open Synthetics Synthetics navigation item Synthetics navigation item

  2. Click the Add new test button on the right side of the screen, then select Uptime and HTTP test. image image

  3. Name your test with your team name (provided by your workshop instructor), your initials, and any other details you’d like to include, like geographic region.

  4. For now let’s test a GET request. Fill in the URL field. You can use one of your own, or one of ours like https://online-boutique-eu.splunko11y.com, https://online-boutique-us.splunko11y.com, or https://www.splunk.com.

  5. Click Try now to validate that the endpoint is accessible from the selected location before saving the test. Try now does not count against your subscription usage, so this is a good practice to make sure you're not wasting real test runs on a misconfigured test. image image

Tip

A common reason for Try now to fail is that there is a non-2xx response code. If that is expected, add a Validation for the correct response code.

  6. Add any additional validations needed, for example: response code, response header, and response size. Advanced settings for test configuration Advanced settings for test configuration

  7. Add and remove any locations you’d like. Keep in mind where you expect your endpoint to be available.

  8. Change the frequency to test your more critical endpoints more often, up to one minute. image image

  9. Make sure “Round-robin” is on so the test will run from one location at a time, rather than from all locations at once.

    • If an endpoint is highly critical, think about if it is worth it to have all locations tested at the same time every single minute. If you have automations built in with a webhook from a detector, or if you have strict SLAs you need to track, this could be worth it to have as much coverage as possible. But if you are doing more manual investigation, or if this is a less critical endpoint, you could be wasting test runs that are executing while an issue is being investigated.
    • Remember that your license is based on the number of test runs per month. Turning Round-robin off will multiply the number of test runs by the number of locations you have selected.
  10. When you are ready for the test to start running, make sure “Active” is on, then scroll down and click Submit to save the test configuration. submit button submit button

Now the test will start running with your saved configuration. Take a water break, then we’ll look at the results!

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Understanding results

  1. From the Synthetics landing page, click into a test to see its summary view and play with the Performance KPIs chart filters to see how you can slice and dice your data. This is a good place to get started understanding trends. Later, we will see what custom charts look like, so you can tailor dashboards to the KPIs you care about most. KPI chart filters KPI chart filters

    Workshop Question: Using the Performance KPIs chart

    What metrics are available? Is your data consistent across time and locations? Do certain locations run slower than others? Are there any spikes or failures?

  2. Click into a recent run either in the chart or in the table below. run results chart run results chart

  3. If there are failures, look at the response to see if you need to add a response code assertion (302 is a common one), whether some authorization is needed, or whether different request headers should be added. Here we have information about this particular test run, including whether it succeeded or failed, the location, timestamp, and duration, in addition to the other Uptime test metrics. Click through to see the response, request, and connection info as well. uptime test result uptime test result If you need to edit the test for it to run successfully, click the test name in the top left breadcrumb on this run result page, then click Edit test on the top right of the test overview page. Remember to scroll down and click Submit to save your changes after editing the test configuration.

  4. In addition to the test running successfully, there are other metrics to measure the health of your endpoints. For example, Time to First Byte (TTFB) is a great indicator of performance, and you can optimize TTFB to improve end user experience.

  5. Go back to the test overview page and change the Performance KPIs chart to display First Byte time. Once the test has run for a long enough time, expanding the time frame will draw the data points as lines to better see trends and anomalies, like in the example below. Performance KPIs for Uptime Tests Performance KPIs for Uptime Tests

In the example above, we can see that TTFB varies consistently between locations. Knowing this, we can keep location in mind when reporting on metrics. We could also improve the experience, for example by serving users in those locations an endpoint hosted closer to them, which should reduce network latency. We can also see some slight variations in the results over time, but overall we already have a good idea of our baseline for this endpoint’s KPIs. When we have a baseline, we can alert on worsening metrics as well as visualize improvements.

Tip

We are not setting a detector on this test yet, to make sure it is running consistently and successfully. If you are testing a highly critical endpoint and want to be alerted on it ASAP (and have tolerance for potential alert noise), jump to Single Test Detectors.

Once you have your Uptime test running successfully, let’s move on to the next test type.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

API Test

5 minutes  

The API test provides a flexible way to check the functionality and performance of API endpoints. The shift toward API-first development has magnified the necessity to monitor the back-end services that provide your core front-end functionality.

Whether you’re interested in testing multi-step API interactions or you want to gain visibility into the performance of your endpoints, the API Test can help you accomplish your goals.

This exercise will walk through a multi-step test on the Spotify API. You can also use it as a reference to build tests on your own APIs or on those of your critical third parties.

API test result API test result

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Subsections of API Test

Global Variables

Global variables allow us to use stored strings in multiple tests, so we only need to update them in one place.

View the global variable that we’ll use to perform our API test. Click on Global Variables under the cog icon. The global variable named env.encoded_auth will be the one that we’ll use to build the Spotify API transaction.

global variables global variables

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Create new API test

Create a new API test by clicking on the Add new test button and select API test from the dropdown. Name the test using your team name, your initials, and Spotify API e.g. [Daisy] RWC - Spotify API

new API test new API test

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Authentication Request

Click on + Add requests and enter the request step name e.g. Authenticate with Spotify API.

placeholder placeholder

Expand the Request section, from the drop-down change the request method to POST and enter the following URL:

https://accounts.spotify.com/api/token

In the Payload body section enter the following:

grant_type=client_credentials

Next, add two + Request headers with the following key/value pairings:

  • CONTENT-TYPE: application/x-www-form-urlencoded
  • AUTHORIZATION: Basic {{env.encoded_auth}}

Expand the Validation section and add the following extraction:

  • Extract from Response body JSON $.access_token as access_token

This will parse the JSON payload that is received from the Spotify API, extract the access token and store it as a custom variable.

Add request payload token Add request payload token
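
To understand exactly what this step does, the same token exchange can be reproduced outside Synthetics; a minimal sketch in JavaScript (Node.js 18+), with placeholder credentials:

// Reproduce the auth step: exchange client credentials for an access token.
// env.encoded_auth in the test stores exactly this kind of base64 string.
const encodedAuth = Buffer.from('<client_id>:<client_secret>').toString('base64');

(async () => {
  const res = await fetch('https://accounts.spotify.com/api/token', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      'Authorization': 'Basic ' + encodedAuth
    },
    body: 'grant_type=client_credentials'
  });
  const body = await res.json();
  // The test's extraction $.access_token corresponds to:
  console.log('access_token:', body.access_token);
})();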

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Search Request

Click on + Add Request to add the next step. Name the step Search for Tracks named “Up around the bend”.

Expand the Request section and change the request method to GET and enter the following URL:

https://api.spotify.com/v1/search?q=Up%20around%20the%20bend&type=track&offset=0&limit=5

Next, add two request headers with the following key/value pairings:

  • CONTENT-TYPE: application/json
  • AUTHORIZATION: Bearer {{custom.access_token}}
    • This uses the custom variable we created in the previous step!

Add search request Add search request

Expand the Validation section and add the following extraction:

  • Extract from Response body JSON $.tracks.items[0].id as track.id

Add search payload Add search payload
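
For reference, the equivalent of this search step outside Synthetics; a minimal sketch continuing from the token obtained in the previous step (accessToken stands in for the test's custom.access_token variable):

// Reproduce the search step, authenticating with the extracted token.
(async () => {
  const res = await fetch(
    'https://api.spotify.com/v1/search?q=Up%20around%20the%20bend&type=track&offset=0&limit=5',
    {
      headers: {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer ' + accessToken
      }
    }
  );
  const data = await res.json();
  // The test's extraction $.tracks.items[0].id corresponds to:
  console.log('track.id:', data.tracks.items[0].id);
})();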

To validate the test before saving, scroll to the top and change the location as needed. Click Try now. See the docs for more information on the try now feature.

try now try now

When the validation is successful, click on < Return to test to return to the test configuration page. And then click Save to save the API test.

Extra credit

Have more time to work on this test? Take a look at the Response Body in one of your run results. What additional steps would make this test more thorough? Edit the test, and use the Try now feature to validate any changes you make before you save the test.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

View results

Wait for a few minutes for the test to provision and run. Once you see the test has run successfully, click on the run to view the results:

API test result API test result


Last Modified By Snehangshu Karmakar   Jan 27, 2025

Single Page Browser Test

5 minutes  

We have started testing our endpoints, now let’s test the front end browser experience.

Starting with a single page browser test will let us capture how first- and third-party resources impact how our end users experience our browser-based site. It also allows us to start to understand our user experience metrics before introducing the complexity of multiple steps in one test.

A page where your users commonly “land” is a good choice to start with a single page test. This could be your site homepage, a section main page, or any other high-traffic URL that is important to you and your end users.

  1. Click Create new test and select Browser test Create new browser test Create new browser test

  2. Include your team name and initials in the test name. Add to the Name and Custom properties to describe the scope of the test (like Desktop for device type). Then click + Edit steps Browser test content fields Browser test content fields

  3. Change the transaction label (top left) and step name (on the right) to something readable that describes the step. Add the URL you’d like to test. Your workshop instructor can provide you with a URL as well. In the below example, the transaction is “Home” and the step name is “Go to homepage”. Transaction and step label Transaction and step label

  4. To validate the test, change the location as needed and click Try now. See the docs for more information on the try now feature. browser test try now buttons browser test try now buttons

  5. Wait for the test validation to complete. If the test validation failed, double check your URL and test location and try again. With Try now you can see what the result of the test would be if it were saved and run as-is. Try Now browser test results Try Now browser test results

  6. Click < Return to test to continue the configuration. Return to test button Return to test button

  7. Edit the locations you want to use, keeping in mind any regional rules you have for your site.

    Browser test details Browser test details

  8. You can edit the Device and Frequency or leave them at their default values for now. Click Submit at the bottom of the form to save the test and start running it.

    browser test submit button browser test submit button

Bonus Exercise

Have a few spare seconds? Copy this test and change just the title and device type, and save. Now you have visibility into the end user experience on another device and connection speed!

While our Synthetic tests are running, let’s see how RUM is instrumented to start getting data from our real users.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

RUM

15 minutes  

With RUM instrumented, we will be able to better understand our end users, what they are doing, and what issues they are encountering.

This workshop walks through how our demo site is instrumented and how to interpret the data. If you already have a RUM license, this will help you understand how RUM works and how you can use it to optimize your end user experience.

Tip

Our Docs also contain guidance such as scenarios using Splunk RUM and demo applications to test out RUM for mobile apps.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Subsections of RUM

Overview

The aim of this Splunk Real User Monitoring (RUM) workshop is to let you:

  • Shop for items on the Online Boutique to create traffic, and create RUM User Sessions1 that you can view in the Splunk Observability Suite.
  • See an overview of the performance of all your application(s) in the Application Summary Dashboard
  • Examine the performance of a specific website with RUM metrics.

In order to reach this goal, we will use an online boutique to order various products. While shopping on the online boutique you will create what is called a User Session.

You may encounter some issues with this web site, and you will use Splunk RUM to identify the issues, so they can be resolved by the developers.

The workshop host will provide you with a URL for an online boutique store that has RUM enabled.

Each of these Online Boutiques are also being visited by a few synthetic users; this will allow us to generate more live data to be analyzed later.


  1. A RUM User session is a “recording” of a collection of user interactions on an application, basically collecting a website or app’s performance measured straight from the browser or Mobile App of the end user. To do this, a small amount of JavaScript is embedded in each page. This script then collects data from each user as he or she explores the page, and transfers that data back for analysis. ↩︎

Last Modified By Snehangshu Karmakar   Jan 27, 2025

RUM instrumentation in a browser app

  • Check the HEAD section of the Online-boutique webpage in your browser
  • Find the code that instruments RUM

1. Browse to the Online Boutique

Your workshop instructor will provide you with the Online Boutique URL that has RUM installed so that you can complete the next steps.

2. Inspecting the HTML source

The changes needed for RUM are placed in the <head> section of the host’s web page. Right click to view the page source or to inspect the code. Below is an example of the <head> section with RUM:

Online Boutique Online Boutique

This code enables RUM Tracing, Session Replay, and Custom Events to better understand performance in the context of user workflows:

  • The first part indicates where to download the Splunk OpenTelemetry JavaScript file from: https://cdn.signalfx.com/o11y-gdi-rum/latest/splunk-otel-web.js (this can also be hosted locally if required).
  • The next section defines the location to send the traces to in the beacon URL: {beaconUrl: "https://rum-ingest.eu0.signalfx.com/v1/rum"
  • The RUM Access Token: rumAuth: "<redacted>".
  • Identification tags app and environment to identify the app in the Splunk RUM UI, e.g. app: "online-boutique-us-store", environment: "online-boutique-us"} (these values will be different in your workshop)

The above lines 21 and 23-30 are all that is required to enable RUM on your website!

Lines 22 and 31-34 are only needed if you want Session Replay instrumented.

Lines 36-39 (var tracer=Provider.getTracer('appModuleLoader');) add a Custom Event for every page change, allowing you to better track your website conversions and usage. This may or may not be instrumented for this workshop.
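
Because the screenshot can be hard to read, here is a minimal sketch of what such a <head> snippet can look like (the token and app names are placeholders; your workshop page will differ):

<script src="https://cdn.signalfx.com/o11y-gdi-rum/latest/splunk-otel-web.js" crossorigin="anonymous"></script>
<script>
  // Start RUM tracing: where to send traces, the RUM token, and the
  // app/environment tags used to find this app in the Splunk RUM UI.
  SplunkRum.init({
    beaconUrl: 'https://rum-ingest.eu0.signalfx.com/v1/rum',
    rumAuth: '<redacted>',
    app: 'online-boutique-us-store',
    environment: 'online-boutique-us'
  });
  // Optional: Session Replay (loaded from its own agent script, omitted here).
  SplunkSessionRecorder.init({
    beaconUrl: 'https://rum-ingest.eu0.signalfx.com/v1/rumreplay',
    rumAuth: '<redacted>'
  });
</script>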

Exercise

Time to shop! Take a minute to open the workshop store URL in as many browsers and devices as you’d like, shop around, add items to cart, checkout, and feel free to close the shopping browsers when you’re finished. Keep in mind this is a lightweight demo shop site, so don’t be alarmed if the cart doesn’t match the item you picked!

Last Modified By Snehangshu Karmakar   Jan 27, 2025

RUM Landing Page

  • Visit the RUM landing page and and check the overview of the performance of all your RUM enabled applications with the Application Summary Dashboard (Both Mobile and Web based)

1. Visit the RUM Landing Page

Login into Splunk Observability. From the left side menu bar select RUM RUM-ico RUM-ico. This will bring you to your the RUM Landing Page.

The goal of this page is to give you in a single page, a clear indication of the health, performance and potential errors found in your application(s) and allow you to dive deeper into the information about your User Sessions collected from your web page/App. You will have a pane for each of your active RUM applications. (The view below is the default expanded view)

RUM-App-sum RUM-App-sum

If you have multiple applications, (which will be the case when every attendee is using their own ec2 instance for the RUM workshop), the pane view may be automatically reduced by collapsing the panes as shown below:

RUM-App-sum-collapsed RUM-App-sum-collapsed

You can expanded a condensed RUM Application Summary View to the full dashboard by clicking on the small browser RUM-browser RUM-browser or Mobile RUM-mobile RUM-mobileicon. (Depending on the type of application: Mobile or Browser based) on the left in front of the applications name, highlighted by the red arrow.

First find the right application to use for the workshop:

If you are participating in a stand alone RUM workshop, the workshop leader will tell you the name of the application to use, in the case of a combined workshop, it will follow the naming convention we used for IM and APM and use the ec2 node name as a unique id like jmcj-store as shown as the last app in the screenshot above.

2. Configure the RUM Application Summary Dashboard Header Section

RUM Application Summary Dashboard consists of 6 major sections. The first is the selection header, where you can set/filter a number of options:

  • A drop down for the Time Window you’re reviewing (You are looking at the past 15 minutes by default)
  • A drop down to select the Environment1 you want to look at. This allows you to focus on just the subset of applications belonging to that environment, or Select all to view all available.
  • A drop down list with the various Apps being monitored. You can use the one provided by the workshop host or select your own. This will focus you on just one application.
  • A drop down to select the Source, Browser or Mobile applications to view. For the Workshop leave All selected.
  • A hamburger menu located at the right of the header allowing you to configure some settings of your Splunk RUM application. (We will visit this in a later section).

RUM-SummaryHeader RUM-SummaryHeader

For the workshop lets do a deeper dive into the Application Summary screen in the next section: Check Health Browser Application


A common application deployment pattern is to have multiple, distinct application environments that don’t interact directly with each other but that are all being monitored by Splunk APM or RUM: for instance, quality assurance (QA) and production environments, or multiple distinct deployments in different datacenters, regions or cloud providers.


  1. A deployment environment is a distinct deployment of your system or application that allows you to set up configurations that don’t overlap with configurations in other deployments of the same application. Separate deployment environments are often used for different stages of the development process, such as development, staging, and production. ↩︎

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Check Browser Applications health at a glance

  • Get familiar with the UI and options available from this landing page
  • Identify Page Views/JavaScript Errors and Request/Errors in a single view
    Check the Web Vitals metrics and any Detector that has fired for in relation to your Browser Application

Application Summary Dashboard

1.Header Bar

As seen in the previous section the RUM Application Summary Dashboard consists of 5 major sections.
The first section is the selection header, where you can collapse the Pane via the RUM-browser RUM-browser Browser icon or the > in front of the application name, which is jmcj-store in the example below. It also provides access to the Application Overview page if you click the link with your application name which is jmcj-store in the example below.

Further, you can also open the Application Overview or App Health Dashboard via the triple dot trippleburger trippleburger menu on the right.

RUM-SummaryHeader RUM-SummaryHeader

For now, let’s look at the high level information we get on the application summary dashboard.

The RUM Application Summary Dashboard is focused on providing you with at a glance highlights of the status of your application.

2. Page Views / JavaScript Errors & Network Requests / Errors

The first section's Page Views / JavaScript Errors and Network Requests / Errors charts show the quantity and trend of these issues in your application, whether they are JavaScript errors or failed network calls to back-end services.

RUM-chart RUM-chart

In the example above you can see that there are no failed network calls in the Network chart, but the Page Views chart shows that a number of pages do experience some errors. These are often not visible to regular users, but can seriously impact the performance of your website.

You can see the count of the Page Views / Network Requests / Errors by hovering over the charts.

RUM-chart-clicked RUM-chart-clicked

3. JavaScript Errors

The second section of the RUM Application Summary Dashboard shows an overview of the JavaScript errors occurring in your application, along with a count of each error.

RUM-javascript RUM-javascript

In the example above you can see there are three JavaScript errors, one that appears 29 times in the selected time slot, and the other two each appear 12 times.

If you click on one of the errors, a pop-out opens that shows a summary (below) of the errors over time, along with a Stack Trace of the JavaScript error, giving you an indication of where the problems occurred. (We will see this in more detail in one of the following sections.)

RUM-javascript-chart RUM-javascript-chart

4. Web Vitals

The next section of the RUM Application Summary Dashboard shows Google's Core Web Vitals: three metrics that are not only used by Google in its search ranking system, but also quantify end user experience in terms of loading, interactivity, and visual stability.

WEB-vitals WEB-vitals

As you can see, our site is well behaved and scores Good for all three metrics. These metrics can be used to identify the effect of changes to your application and help you improve the performance of your site.

If you click on any of the metrics shown in the Web Vitals pane, you will be taken to the corresponding Tag Spotlight dashboard. For example, clicking on the Largest Contentful Paint (LCP) chartlet takes you to a dashboard similar to the screenshot below, with timeline and table views of how this metric has performed. This should allow you to spot trends and identify where the problem is more common, such as a particular operating system, geolocation, or browser version.

WEB-vitals-tag WEB-vitals-tag

5. Most Recent Detectors

The final section of the RUM Application Summary Dashboard provides an overview of recent detectors that have triggered for your application. We have created a detector for this screenshot, but your pane will be empty for now. We will add some detectors to your site and make sure they are triggered in one of the next sections.

detectors detectors

In the screenshot you can see we have a critical alert for the RUM Aggregated View Detector, along with a Count of how often this alert has triggered in the selected time window. If you happen to have an alert listed, you can click on the name of the alert (shown as a blue link) to be taken to the Alert Overview page showing the details of the alert. (Note: this will move you away from the current page; please use your browser's Back button to return to the overview page.)

alert alert


Exercise

Please take a few minutes to experiment with the RUM Application Summary Dashboard and the underlying chart and dashboards before going on to the next section.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Analyzing RUM Metrics

  • See RUM Metrics and Session information in the RUM UI
  • See correlated APM traces in the RUM & APM UI

1. RUM Overview Pages

From your RUM Application Summary Dashboard you can see detailed information by opening the Application Overview page: either select Open Application Overview from the triple dot trippleburger trippleburger menu on the right, or click the link with your application name, which is jmcj-rum-app in the example below.

RUM-SummaryHeader RUM-SummaryHeader

This will take you to the RUM Application Overview Page screen as shown below.

RUM app overview with UX metrics RUM app overview with UX metrics

2. RUM Browser Overview

2.1. Header

The RUM UI consists of five major sections. The first is the selection header, where you can set/filter a number of options:

  • A drop down for the time window you’re reviewing (You are looking at the past 15 minutes in this case)
  • A drop down to select the Comparison window (You are comparing current performance on a rolling window - in this case compared to 1 hour ago)
  • A drop down with the available Environments to view
  • A drop down list with the various web apps
  • Optionally a drop down to select Browser or Mobile metrics (Might not be available in your workshop)

RUM-Header RUM-Header

2.2. UX Metrics

By default, RUM prioritizes the metrics that most directly reflect the experience of the end user.

All of the dashboard charts allow us to compare trends over time, create detectors, and click through to further diagnose issues.

First, we see page load and route change information, which can help us understand if something unexpected is impacting user traffic trends.

Page load and route change charts Page load and route change charts

Next, Google has defined Core Web Vitals to quantify the user experience as measured by loading, interactivity, and visual stability. Splunk RUM builds in Google’s thresholds into the UI, so you can easily see if your metrics are in an acceptable range.

Core Web Vitals charts Core Web Vitals charts

  • Largest Contentful Paint (LCP), measures loading performance. How long does it take for the largest block of content in the viewport to load? To provide a good user experience, LCP should occur within 2.5 seconds of when the page first starts loading.
  • First Input Delay (FID), measures interactivity. How long does it take to be able to interact with the app? To provide a good user experience, pages should have a FID of 100 milliseconds or less.
  • Cumulative Layout Shift (CLS), measures visual stability. How much does the content move around after the initial load? To provide a good user experience, pages should maintain a CLS of 0.1 or less.

Improving Web Vitals is a key component to optimizing your end user experience, so being able to quickly understand them and create detectors if they exceed a threshold is critical.

Google has some great resources if you want to learn more, for example the business impact of Core Web Vitals.
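If you'd like to see these same measurements raw, Google's open-source web-vitals library logs them in any page. This is a minimal sketch assuming the v3 API of that library (function names changed across major versions); it is independent of Splunk RUM and only useful for comparing numbers:

// Logs each Core Web Vital once its value is known.
import { onCLS, onFID, onLCP } from 'web-vitals';

function report({ name, value, rating }) {
  // rating is 'good', 'needs-improvement', or 'poor', based on the same
  // Google thresholds that Splunk RUM builds into its UI.
  console.log(`${name}: ${value.toFixed(2)} (${rating})`);
}

onLCP(report);  // loading performance, good below 2500 ms
onFID(report);  // interactivity, good below 100 ms
onCLS(report);  // visual stability, good below 0.1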

2.3. Front-end health

Common causes of frontend issues are JavaScript errors and long tasks, which can especially affect interactivity. Creating detectors on these indicators helps us investigate interactivity issues before our users report them, allowing us to build workarounds or roll back related releases faster if needed. Learn more about optimizing long tasks for better end user experience!

JS error charts JS error charts Long task charts Long task charts

2.4. Back-end health

Common back-end issues affecting user experience are network issues and resource requests. In this example, we clearly see a spike in Time To First Byte that lines up with a resource request spike, so we already have a good starting place to investigate.

Back-end health charts Back-end health charts

  • Time To First Byte (TTFB), measures how long it takes for a client’s browser to receive the first byte of the response from the server. The longer it takes for the server to process the request and send a response, the slower your visitors’ browser is at displaying your page.
Last Modified By Snehangshu Karmakar   Jan 27, 2025

Analyzing RUM Tags in the Tag Spotlight view

  • Look into the metrics views for the various endpoints and use the tags sent via Tag Spotlight for deeper analysis

1. Find a URL for the Cart endpoint

From the RUM Overview page, please select the URL for the Cart endpoint to dive deeper into the information available for this endpoint.

RUM-Cart2 RUM-Cart2

Once you have selected and clicked on the blue URL, you will find yourself in the Tag Spotlight overview.

RUM-Tag RUM-Tag

Here you will see all of the tags that have been sent to Splunk RUM as part of the RUM traces. The tags displayed will be relevant to the overview that you have selected. These are generic Tags created automatically when the Trace was sent, and additional Tags you have added to the trace as part of the configuration of your website.

Additional Tags

We are already sending two additional tags; you have seen them defined in the Beacon url that was added to your website in the first section of this workshop: app: "[nodename]-store", environment: "[nodename]-workshop". You can add further tags in a similar way, as in the sketch below.
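As an illustration, here is a sketch of what a Beacon initialization with extra tags can look like using the Splunk Browser RUM agent. The realm, token, and tag values are placeholders, and option names vary between agent versions, so treat this as an outline rather than a drop-in snippet:

// Assumes the Splunk Browser RUM agent (splunk-otel-web.js) is already
// loaded on the page; <realm> and the token are placeholders.
SplunkRum.init({
  beaconUrl: 'https://rum-ingest.<realm>.signalfx.com/v1/rum',
  rumAuth: '<RUM access token>',
  app: '[nodename]-store',            // the "app" tag you see in Tag Spotlight
  environment: '[nodename]-workshop', // the "environment" tag
  globalAttributes: {                 // additional tags attached to every span
    'store.region': 'eu',             // hypothetical example tag
    'app.version': '1.4.0'            // hypothetical example tag
  }
});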

In our example we have selected the Page Load view as shown here:

RUM-Header RUM-Header

You can select any of the following Tag views, each focused on a specific metric.

RUM-views RUM-views


2. Explore the information in the Tag Spotlight view

Tag Spotlight is designed to help you identify problems, either through the chart view, where you can quickly identify outliers, or via the tags.

In the Page Load view, if you look at the Browser, Browser Version & OS Name tag views, you can see the various browser types and versions, as well as the underlying OS.

This makes it easy to identify problems related to specific browser or OS versions, as they would be highlighted.

RUM-Tag2 RUM-Tag2

In the above example you can see that Firefox had the slowest response, that various browser versions (Chrome) have different response times, and that the Android devices responded slowly.

A further example is the regional tags, which you can use to identify problems related to ISPs, locations, and so on. Here you should be able to find the location you have been using to access the Online Boutique. Drill down by selecting the town or country you are accessing the Online Boutique from, clicking on the name as shown below (City of Amsterdam):

RUM-click RUM-click

This will select only the sessions relevant to the city selected as shown below:

RUM-Adam RUM-Adam

By selecting the various tags you build up a filter; you can see the current selection below.

RUM-Adam RUM-Adam

To clear the filter and see every trace click on Clear All at the top right of the page.

If the overview page is empty or shows RUM-Adam RUM-Adam, no traces have been received in the selected time slot. You need to increase the time window at the top left; you can start with the Last 12 hours, for example.

You can then use your mouse to select the time slot you want, as shown in the view below, and activate that time filter by clicking on the little spyglass icon.

RUM-time RUM-time

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Analyzing RUM Sessions

  • Dive into RUM Session information in the RUM UI
  • Identify JavaScript errors in the span of a user interaction

1. Drill down in the Sessions

After you have analyzed the information and drilled down via the Tag Spotlight to a subset of the traces, you can view the actual session as it was run by the end-user’s browser.

You do this by clicking on the link User Sessions as shown below:

RUM-Header RUM-Header

This will give you a list of sessions that matched both the time filter and the subset selected via Tag Spotlight.

Select one by clicking on the session ID. It is a good idea to select one that has the longest duration (preferably over 700 ms).

RUM-Header RUM-Header

Once you have selected the session, you will be taken to the session details page. As you are selecting a specific action that is part of the session, you will likely arrive somewhere in the middle of the session, at the moment of the interaction.

You can see that the URL you selected earlier is where the waterfall is focused.

RUM-Session-Tag RUM-Session-Tag

Scroll down a little bit on the page, so you see the end of the operation as shown below.

RUM-Session-info RUM-Session-info

You can see that we have received a few JavaScript Console errors that may not have been detected by, or visible to, the end users. To examine these in more detail, click on the middle one that says: Cannot read properties of undefined (reading 'Prcie').

RUM-Session-info RUM-Session-info

This will cause the page to expand and show the span detail for this interaction. It will contain a detailed error.stack that you can pass on to the developer to solve the issue. You may have noticed when buying in the Online Boutique that the final total was always $0.00; this error is the reason.

RUM-Session-info RUM-Session-info
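To make the failure mode concrete, here is a hypothetical sketch of the kind of front-end bug that produces this exact console error and the $0.00 total; the store's real code is not shown in this workshop:

// Reading ".Prcie" (a typo for ".Price") on an object that is undefined
// throws: "Cannot read properties of undefined (reading 'Prcie')".
function cartTotal(cart) {
  let total = 0;
  for (const item of cart.items) {
    // item.pricing was never populated, so it is undefined and the
    // property read below throws a TypeError.
    total += item.pricing.Prcie * item.quantity;
  }
  return total;
}

// An upstream try/catch swallows the error and falls back to 0, which is
// why end users only ever see a harmless-looking $0.00 total while the
// error still surfaces in RUM as a JavaScript console error.
function displayTotal(cart) {
  try {
    return `$${cartTotal(cart).toFixed(2)}`;
  } catch (err) {
    console.error(err);
    return '$0.00';
  }
}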

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Advanced Synthetics

30 minutes  

Introduction

This workshop walks you through using the Chrome DevTools Recorder to create a synthetic test on a Splunk demonstration environment or on your own public website.

The exported JSON from the Chrome DevTools Recorder will then be used to create a Splunk Synthetic Monitoring Real Browser Test.

Pre-requisites

  • Google Chrome Browser installed
  • Publicly browser-accessible URL
  • Access to Splunk Observability Cloud

Supporting resources

  1. Lantern: advanced Selectors for multi-step browser tests
  2. Chrome for Developers DevTools Tips
  3. web.dev Core Web Vitals reference
Last Modified By Snehangshu Karmakar   Jan 27, 2025

Subsections of Advanced Synthetics

Record a test

Write down a short user journey you want to test. Remember: smaller bites are easier to chew! In other words, get started with just a few steps. This makes the test easier not only to create and maintain, but also to understand and act on the results. Test the features essential to your users, like a support contact form, login widget, or date picker.

Note

Record the test in the same type of viewport that you want to run it. For example, if you want to run a test on a mobile viewport, narrow your browser width to mobile and refresh before starting the recording. This way you are capturing the correct elements that could change depending on responsive style rules.

Open your starting URL in Chrome Incognito. This is important so you're not carrying cookies into the recording, which we won't set up in the Synthetic test by default. If your workshop instructor does not have a custom URL, feel free to use https://online-boutique-eu.splunko11y.com or https://online-boutique-us.splunko11y.com, which are used in the examples below.

Open the Chrome DevTools Recorder

Next, open the Developer Tools (in the new tab that was opened above) by pressing Ctrl + Shift + I on Windows or Cmd + Option + I on a Mac, then select Recorder from the top-level menu or the More tools flyout menu.

Open Recorder Open Recorder

Note

Site elements might change depending on viewport width. Before recording, set your browser window to the correct width for the test you want to create (Desktop, Tablet, or Mobile). Change the DevTools “dock side” to pop out as a separate window if it helps.

Create a new recording

With the Recorder panel open in the DevTools window, click on the Create a new recording button to start.

Recorder Recorder

For the Recording Name use your initials to prefix the name of the recording e.g. <your initials> - <website name>. Click on Start Recording to start recording your actions.

Recording Name Recording Name

Now that we are recording, complete a few actions on the site. An example for our demo site is:

  • Click on Vintage Camera Lens
  • Click on Add to Cart
  • Click on Place Order
  • Click on End recording in the Recorder panel.

End Recording End Recording

Export the recording

Click on the Export button:

Export button Export button

Select JSON as the format, then click on Save

Export JSON Export JSON

Save JSON Save JSON

Congratulations! You have successfully created a recording using the Chrome DevTools Recorder. Next, we will use this recording to create a Real Browser Test in Splunk Synthetic Monitoring.


{
    "title": "RWC - Online Boutique",
    "steps": [
        {
            "type": "setViewport",
            "width": 1430,
            "height": 1016,
            "deviceScaleFactor": 1,
            "isMobile": false,
            "hasTouch": false,
            "isLandscape": false
        },
        {
            "type": "navigate",
            "url": "https://online-boutique-eu.splunko11y.com/",
            "assertedEvents": [
                {
                    "type": "navigation",
                    "url": "https://online-boutique-eu.splunko11y.com/",
                    "title": "Online Boutique"
                }
            ]
        },
        {
            "type": "click",
            "target": "main",
            "selectors": [
                [
                    "div:nth-of-type(2) > div:nth-of-type(2) a > div"
                ],
                [
                    "xpath//html/body/main/div/div/div[2]/div[2]/div/a/div"
                ],
                [
                    "pierce/div:nth-of-type(2) > div:nth-of-type(2) a > div"
                ]
            ],
            "offsetY": 170,
            "offsetX": 180,
            "assertedEvents": [
                {
                    "type": "navigation",
                    "url": "https://online-boutique-eu.splunko11y.com/product/66VCHSJNUP",
                    "title": ""
                }
            ]
        },
        {
            "type": "click",
            "target": "main",
            "selectors": [
                [
                    "aria/ADD TO CART"
                ],
                [
                    "button"
                ],
                [
                    "xpath//html/body/main/div[1]/div/div[2]/div/form/div/button"
                ],
                [
                    "pierce/button"
                ],
                [
                    "text/Add to Cart"
                ]
            ],
            "offsetY": 35.0078125,
            "offsetX": 46.4140625,
            "assertedEvents": [
                {
                    "type": "navigation",
                    "url": "https://online-boutique-eu.splunko11y.com/cart",
                    "title": ""
                }
            ]
        },
        {
            "type": "click",
            "target": "main",
            "selectors": [
                [
                    "aria/PLACE ORDER"
                ],
                [
                    "div > div > div.py-3 button"
                ],
                [
                    "xpath//html/body/main/div/div/div[4]/div/form/div[4]/button"
                ],
                [
                    "pierce/div > div > div.py-3 button"
                ],
                [
                    "text/Place order"
                ]
            ],
            "offsetY": 29.8125,
            "offsetX": 66.8203125,
            "assertedEvents": [
                {
                    "type": "navigation",
                    "url": "https://online-boutique-eu.splunko11y.com/cart/checkout",
                    "title": ""
                }
            ]
        }
    ]
}
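Before importing the JSON into Splunk, you can optionally replay it locally to confirm the recording is self-contained. This is a sketch using Google's @puppeteer/replay library (install puppeteer and @puppeteer/replay first); the file name is whatever you chose when saving, and the script assumes an ES module context:

// Replays each recorded step in a local Chromium launched by Puppeteer.
import { readFileSync } from 'node:fs';
import { createRunner, parse } from '@puppeteer/replay';

const recording = parse(JSON.parse(readFileSync('recording.json', 'utf8')));
const runner = await createRunner(recording);
await runner.run();   // throws if any step (click, navigation) fails
console.log('Recording replayed successfully');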
Last Modified By Snehangshu Karmakar   Jan 27, 2025

Create a Browser Test

In Splunk Observability Cloud, navigate to Synthetics and click on Add new test.

From the dropdown select Browser test.

Add new test Add new test

You will then be presented with the Browser test content configuration page.

New Test New Test

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Import JSON

To begin configuring our test, we need to import the JSON that we exported from the Chrome DevTools Recorder. To enable the Import button, we must first give our test a name e.g. [<your team name>] <your initials> - Online Boutique.

Browser edit form Browser edit form

Once the Import button is enabled, click on it and either drop the JSON file that you exported from the Chrome DevTools Recorder or upload the file.

Import JSON Import JSON

Once the JSON file has been uploaded, click on Continue to edit steps

Import complete message Import complete message

Before we make any edits to the test, let's first configure the settings. Click on < Return to test.

Return to test button in the browser test editor Return to test button in the browser test editor

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Test settings

The simple settings allow you to configure the basics of the test:

  • Name: The name of the test (e.g. RWC - Online Boutique).
  • Details:
    • Locations: The locations where the test will run from.
    • Device: Emulate different devices and connection speeds. Also, the viewport will be adjusted to match the chosen device.
    • Frequency: How often the test will run.
    • Round-robin: If multiple locations are selected, the test will run from one location at a time, rather than all locations at once.
    • Active: Set the test to active or inactive.

For this workshop, we will configure the locations that we wish to monitor from. Click in the Locations field and you will be presented with a list of global locations (over 50 in total).

Global Locations Global Locations

Select the following locations:

  • AWS - N. Virginia
  • AWS - London
  • AWS - Melbourne

Once complete, scroll down and click on Submit to save the test.

The test will now be scheduled to run every 5 minutes from the 3 locations that we have selected. This does take a few minutes for the schedule to be created.

So while we wait for the test to be scheduled, click on Edit test so we can go through the Advanced settings.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Advanced Test Settings

Click on Advanced. These settings are optional and can be used to further configure the test.

Note

In the case of this workshop, we will not be using any of these settings; this is for informational purposes only.

Advanced Settings Advanced Settings

  • Security:
    • TLS/SSL validation: When activated, this feature is used to enforce the validation of expired, invalid hostname, or untrusted issuer on SSL/TLS certificates.
    • Authentication: Add credentials to authenticate with sites that require additional security protocols, for example from within a corporate network. By using concealed global variables in the Authentication field, you create an additional layer of security for your credentials and simplify the ability to share credentials across checks.
  • Custom Content:
    • Custom headers: Specify custom headers to send with each request. For example, you can add a header in your request to filter out requests from analytics on the back end by sending a specific header in the requests. You can also use custom headers to set cookies.
    • Cookies: Set cookies in the browser before the test starts. For example, to prevent a popup modal from randomly appearing and interfering with your test, you can set cookies. Any cookies that are set will apply to the domain of the starting URL of the check. Splunk Synthetics Monitoring uses the public suffix list to determine the domain.
    • Host overrides: Add host override rules to reroute requests from one host to another. For example, you can create a host override to test an existing production site against page resources loaded from a development site or a specific CDN edge node.

See the Advanced Settings for Browser Tests section of the Docs for more information.

Next, we will edit the test steps to provide more meaningful names for each step.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Edit test steps

To edit the steps, click on the + Edit steps or synthetic transactions button. From here, we are going to give each step a meaningful, readable name. That could look like:

  • Step 1 replace the text Go to URL with Go to Homepage
  • Step 2 enter the text Select Vintage Camera Lens.
  • Step 3 enter Add to Cart.
  • Step 4 enter Place Order.

Editing browser test step names Editing browser test step names

Note

If you’d like, group the test steps into Transactions and edit the transaction names as seen above. This is especially useful for Single Page Apps (SPAs), where the resource waterfall is not split by URL. We can also create charts and alerts based on transactions.

Click < Return to test to return to the test configuration page and click Save to save the test.

You will be returned to the test dashboard where you will see test results start to appear.

Browser KPIs chart Browser KPIs chart

Congratulations! You have successfully created a Real Browser Test in Splunk Synthetic Monitoring. Next, we will look into a test result in more detail.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

View test results

1. Click into a spike or failure in your test run results.

Spike in the browser test performance KPIs chart Spike in the browser test performance KPIs chart

2. What can you learn about this test run? If it failed, use the error message, filmstrip, video replay, and waterfall to understand what happened.

Single test run result, with an error message and screenshots Single test run result, with an error message and screenshots

3. What do you see in the resources? Make sure to click through all of the page (or transaction) tabs.

resources in the browser test waterfall, with a long request highlighted resources in the browser test waterfall, with a long request highlighted

Workshop Question

Do you see anything interesting? Common issues to find and fix include: unexpected response codes, duplicate requests, forgotten third parties, large or slow files, and long gaps between requests.

Want to learn more about specific performance improvements? Google and Mozilla have great resources to help understand what goes into frontend performance as well as in-depth details of how to optimize it.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Frontend Dashboards

15 minutes  

Go to Dashboards and find the End User Experiences dashboard group.

Click the three dots on the top right to open the dashboard menu, select Save As, and include your team name and initials in the dashboard name.

Save to the dashboard group that matches your email address. Now you have your own copy of this dashboard to customize!

Dashboard save as Dashboard save as

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Subsections of Frontend Dashboards

Copying and editing charts

We have some good charts in our dashboard, but let’s add a few more.

  1. Go to Dashboards by clicking the dashboard icon on the left side of the screen. Find the Browser app health dashboard and scroll to the Largest Contentful Paint (LCP) chart. Click the chart actions icon to open the flyout menu, and click “Copy” to add this chart to your clipboard. copy chart copy chart

  2. Now you can continue to add any other charts to your clipboard by clicking the “add to clipboard” icon. copy chart inline icon copy chart inline icon

  3. When you have collected the charts you want on your dashboard, click the “create” icon on the top right. You might need to reload the page if you were looking at charts in another browser tab. create icon create icon

  4. Click the “Paste charts” menu option. paste charts paste charts

Now you are able to resize and edit the charts as you’d like!

Bonus: edit chart data

  1. Click the chart actions icon and select Open to edit the chart. chart actions menu chart actions menu

  2. Remove the existing Test signal. edit the test signal edit the test signal

  3. Click Add filter and type test: *yourInitials*. This will use a wildcard match so that all of the tests you have created that contain your initials (or any string you decide) will be pulled into the chart. add filter button add filter button

  4. Click into the functions to see how adding and removing dimensions changes how the data is displayed. For example, if you want all of your test location data rolled up, remove that dimension from the function. test signal functions test signal functions

  5. Change the chart name and description as appropriate, and click “Save and close” to commit your changes or just “Close” to cancel your changes. chart close buttons chart close buttons

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Events in context with chart data

Seeing the visualization of our KPIs is great. What’s better? KPIs in context with events! Overlaying events on a dashboard can help us more quickly understand if an event like a deployment caused a change in metrics, for better or worse.

  1. Your instructor will push a condition change to the workshop application. Click the event marker on any of your dashboard charts to see more details. condition change event condition change event

  2. In the dimensions, we can see more details about this specific event. If we click the event record, we can mark it for deletion if needed. event record event record event details event details

  3. We can also see a history of events in the event feed by clicking the icon on the top right of the screen, and selecting Event feed. event feed event feed

  4. Again, we can see details about recent events in this feed. event details event details

  5. We can also add new events in the GUI or via API (see the sketch after this list). To add a new event in the GUI, click the New event button. GUI add event button GUI add event button

  6. Name your event with your team name, initials, and what kind of event it is (deployment, campaign start, etc). Choose a timestamp, or leave as-is to use the current time, and click “Create”. create event form create event form

  7. Now, we need to make sure our new event is overlaid in this dashboard. Wait a minute or so (refresh the page if needed) and then search for the event in the Event overlay field. overlay event overlay event

  8. If your event is within the dashboard time window, you should now see it overlaid in your charts. Click “Save” to make sure your event overlay is saved to your dashboard! save dashboard save dashboard
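For reference, here is a sketch of what adding a similar event via the API can look like, using the ingest events endpoint. The realm, token, and all event values are placeholders for your own organization, and the snippet assumes an ES module context with top-level await:

// POSTs one custom event; <realm> and the token are placeholders.
const events = [{
  category: 'USER_DEFINED',
  eventType: 'team1-jd-deployment',   // team name, initials, kind of event
  dimensions: { service: 'online-boutique', version: '1.4.0' },
  timestamp: Date.now()               // milliseconds; omit to default to now
}];

const res = await fetch('https://ingest.<realm>.signalfx.com/v2/event', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-SF-Token': '<ingest access token>'
  },
  body: JSON.stringify(events)
});
console.log(res.status);              // 200 means the event was accepted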

Keep in mind

Want to add context to that bug ticket, or show your manager how your change improved app performance? Seeing observability data in context with events not only helps with troubleshooting, but also helps us communicate with other teams.

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Detectors

20 minutes  

After we have a good understanding of our performance baseline, we can start to create Detectors so that we receive alerts when our KPIs are unexpected. If we create detectors before understanding our baseline, we run the risk of generating unnecessary alert noise.

For RUM and Synthetics, we will explore how to create detectors:

  1. on a single Synthetic test
  2. on a single KPI in RUM
  3. on a dashboard chart

For more Detector resources, please see our Observability docs, Lantern, and consider an Education course if you’d like to go more in depth with instructor guidance.
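Everything we build in the detector wizards below can also be managed as code through the detectors API, which is handy once you have settled on thresholds. This is only a sketch: the realm, token, and the metric name 'synthetics.lcp.ms' are placeholders rather than values from this workshop, and the UI generates the real SignalFlow program for you.

// Creates a static-threshold detector via POST /v2/detector.
const detector = {
  name: 'team1-jd LCP',
  programText: [
    "lcp = data('synthetics.lcp.ms').mean().publish(label='LCP')",
    "detect(when(lcp > 2500)).publish('LCP above 2.5s')"
  ].join('\n'),
  rules: [{
    detectLabel: 'LCP above 2.5s',    // must match the published detect label
    severity: 'Major',
    notifications: [{ type: 'Email', email: 'you@example.com' }]
  }]
};

const res = await fetch('https://api.<realm>.signalfx.com/v2/detector', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-SF-Token': '<API access token>'
  },
  body: JSON.stringify(detector)
});
console.log(res.status);              // 200 means the detector was created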

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Subsections of Detectors

Test Detectors

Why would we want a detector on a single Synthetic test? Some examples:

  • The endpoint, API transaction, or browser journey is highly critical
  • We have deployed code changes and want to know if the resulting KPI is or is not as we expect
  • We need to temporarily keep a close eye on a specific change we are testing and don’t want to create a lot of noise, and will disable the detector later
  • We want to know about unexpected issues before a real user encounters them
  1. On the test overview page, click Create Detector on the top right. create detector on a single synthetic test create detector on a single synthetic test

  2. Name the detector with your team name and your initials and LCP (the signal we will eventually use), so that the instructor can better keep track of everyone’s progress.

  3. Change the signal to First byte time. change the signal change the signal

  4. Change the alert details, and see how the chart to the right shows the amount of alert events under those conditions. This is where you can decide how much alert noise you want to generate, based on how much your team tolerates. Play with the settings to see how they affect estimated alert noise. alert noise preview alert noise preview

  5. Now, change the signal to Largest contentful paint. This is a key web vital related to the user experience as it relates to loading time. Change the threshold to 2500ms. It’s okay if there is no sample alert event in the detector preview.

  6. Scroll down in this window to see the notification options, including severity and recipients. notification options notification options

  7. Click the notifications link to customize the alert subject, message, tip, and runbook link. notification customization dialog notification customization dialog

  8. When you are happy with the amount of alert noise this detector would generate, click Activate. activate the detector activate the detector

Last Modified By Snehangshu Karmakar   Jan 27, 2025

RUM Detectors

Let’s say we want to know about an issue in production without waiting for a ticket from our support center. This is where creating detectors in RUM will be helpful for us.

  1. Go to the RUM overview of our App. Scroll to the LCP chart, click the chart menu icon, and click Create Detector. RUM LCP chart with action menu flyout RUM LCP chart with action menu flyout

  2. Rename the detector to include your team name and initials, and change the scope of the detector to App so we are not limited to a single URL or page. Change the threshold and sensitivity until there is at least one alert event in the time frame. RUM alert details RUM alert details

  3. Change the alert severity and add a recipient if you’d like, and click Activate to save the Detector.

Exercise

Now, your workshop instructor will change something on the website. How do you find out about the issue, and how do you investigate it?

Tip

Wait a few minutes, and take a look at the online store homepage in your browser. How is the experience in an incognito browser window? How is it different when you refresh the page?

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Chart Detectors

With our custom dashboard charts, we can create detectors focused directly on the data and conditions we care about. In building our charts, we also built signals that can trigger alerts.

Static detectors

For many KPIs, we have a static value in mind as a threshold.

  1. In your custom End User Experience dashboard, go to the “LCP - all tests” chart.

  2. Click the bell icon on the top right of the chart, and select “New detector from chart” new detector from chart new detector from chart

  3. Change the detector name to include your team name and initials, and adjust the alert details. Change the threshold to 2500 or 4000 and see how the alert noise preview changes. alert details alert details

  4. Change the severity, and add yourself as a recipient before you save this detector. Click Activate. activate the detector activate the detector

Advanced: Dynamic detectors

Sometimes we have metrics that vary naturally, so we want to create a more dynamic detector that isn’t limited by the static threshold we decide in the moment.

  1. To create dynamic detectors on your chart, click the link to the “old” detector wizard. click the link to open this detector in the old editor click the link to open this detector in the old editor

  2. Change the detector name to include your team name and initials, and click Create alert rule. name and create alert rule name and create alert rule

  3. Confirm the signal looks correct and proceed to Alert condition. alert signal details alert signal details

  4. Select the “Sudden Change” condition and proceed to Alert settings list of dynamic conditions list of dynamic conditions

  5. Play with the settings and see how the estimated alert noise is previewed in the chart above. Tune the settings, and change the advanced settings if you’d like, before proceeding to the Alert message. alert noise marked on chart based on settings alert noise marked on chart based on settings

  6. Customize the severity, runbook URL, any tips, and message payload before proceeding to add recipients. alert message options alert message options

  7. For the sake of this workshop, only add your own email address as recipient. This is where you would add other options like webhooks, ticketing systems, and Slack channels if it’s in your real environment. recipient options recipient options

  8. Finally, confirm the detector name before clicking Activate Alert Rule activate alert rule button activate alert rule button

Last Modified By Snehangshu Karmakar   Jan 27, 2025

Summary

2 minutes  

In this workshop, we learned the following:

  • How to create simple synthetic tests so that we can quickly begin to understand the availability and performance of our application
  • How to understand what RUM shows us about the end user experience, including specific user sessions
  • How to write advanced synthetic browser tests to proactively test our most important user actions
  • How to visualize our frontend performance data in context with events on dashboards
  • How to set up detectors so we don’t have to wait to hear about issues from our end users
  • How all of the above, plus Splunk and Google’s resources, helps us optimize end user experience

There is a lot more we can do with frontend performance monitoring. If you have extra time, be sure to play with the charts and detectors, and do some more synthetic testing. Remember our resources such as Lantern and Splunk Docs, and experiment with apps for Mobile RUM.

This is just the beginning! If you need more time to trial Splunk Observability, or have any other questions, reach out to a Splunk Expert.

Last Modified By Snehangshu Karmakar   Jan 27, 2025
