Logs
20 minutes

Persona
Remaining in your back-end developer role, you need to inspect the logs from your application to determine the root cause of the issue.
You’ve now navigated directly from an APM trace into Logs using the Related Content link. Logs is Splunk Observability Cloud’s no-code interface for exploring and analyzing log data.
The key advantage, just as with the RUM and APM integration, is that you’re viewing your logs in the context of your previous actions. In this case, that context includes the matching time range (1) from the trace and a filter (2) automatically applied to the trace_id.
This view will include all the log lines from all services that participated in the back-end transaction started by the end-user interaction with the Online Boutique.
Even in a small application such as our Online Boutique, the sheer volume of logs can make it hard to spot the specific log lines that matter to the incident we are investigating.
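The reason the trace_id filter works is that instrumented services emit structured log lines carrying the active trace context. The following is a minimal sketch of that pattern in Python, assuming a JSON log format; the field names, service name, and ID values are illustrative (in a real service an instrumentation library such as OpenTelemetry injects the IDs automatically):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a JSON object with trace context attached."""

    def format(self, record):
        return json.dumps({
            "severity": record.levelname,
            "message": record.getMessage(),
            # Hypothetical field names; real instrumentation injects the
            # active trace/span IDs, here we pass them explicitly via `extra`.
            "trace_id": getattr(record, "trace_id", None),
            "span_id": getattr(record, "span_id", None),
        })

logger = logging.getLogger("paymentservice")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Every line logged with the same trace_id can later be found
# with a single trace_id filter in the logs UI.
logger.info("charge request received",
            extra={"trace_id": "4bf92f3577b34da6", "span_id": "00f067aa0ba902b7"})
```

Because every participating service stamps its log lines with the same trace_id, one filter pulls together the full cross-service story of a single transaction.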
Before we go any further, let’s quickly recap what we have done so far and why we are here, in terms of the 3 pillars of Observability:
| Metrics | Traces | Logs |
|---|---|---|
| Do I have a problem? | Where is the problem? | What is the problem? |
We discovered that two versions were deployed, v350.9 and v350.10, and that the error rate was 100% for v350.10. The failing v350.10 caused multiple retries and a long delay in the response back from the Online Boutique checkout.

Next, we will look at the log entries in detail.
To narrow the results to the failing service, filter on hostname: "paymentservice-xxxx" (in case there is a rare error from a different service in the list too).

Based on the message, what would you tell the development team to do to resolve the issue?
The development team needs to rebuild and redeploy the container with a valid API Token, or roll back to v350.9.
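A common way to prevent this class of failure from surfacing as downstream retries and timeouts is to validate the token at startup and fail fast with a clear log line. This is a hypothetical sketch, not the Online Boutique’s actual code; the environment variable name and the validation rule are assumptions:

```python
import logging
import os
import sys

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("paymentservice")

def check_api_token(token):
    """Hypothetical sanity check: reject missing or placeholder tokens."""
    return bool(token) and token != "invalid" and len(token) >= 16

def main():
    # Assumed variable name for illustration only.
    token = os.environ.get("PAYMENT_API_TOKEN", "")
    if not check_api_token(token):
        # Failing fast with an explicit log line makes the root cause
        # obvious in the logs, instead of appearing downstream as
        # retries and a slow checkout.
        log.error("invalid payment API token; refusing to start")
        sys.exit(1)
    log.info("payment API token accepted; service starting")
```

Had v350.10 failed its startup check like this, the deployment would have been rejected immediately rather than degrading the checkout experience for end users.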
You have successfully used Splunk Observability Cloud to understand why you experienced a poor user experience whilst shopping at the Online Boutique. You used RUM, APM and logs to understand what happened in your service landscape and subsequently found the underlying cause, all based on the 3 pillars of Observability: metrics, traces and logs.
You also learned how to use Splunk’s intelligent tagging and analysis with Tag Spotlight to detect patterns in your applications’ behavior, and how to use the full-stack correlation power of Related Content to move quickly between the different components whilst keeping the context of the issue.
In the next part of the workshop, we will move from problem-finding mode into mitigation, prevention and process improvement mode.