Ingest Processor for Observability Cloud

Author: Tim Hard

As infrastructure and application environments become increasingly complex, the volume of data they generate continues to grow significantly. This growth in data volume and variety makes it harder to gain actionable insights and slows problem identification and troubleshooting. Additionally, the cost of storing and accessing this data can skyrocket. Many data sources, particularly logs and events, provide critical visibility into system operations. In most cases, however, only a few details from these extensive logs are actually needed for effective monitoring and alerting.

Common Challenges:

  • Increasing complexity of infrastructure and application environments.
  • Significant growth in data volume generated by these environments.
  • Challenges in gaining actionable insights from large volumes of data.
  • High costs associated with storing and accessing extensive data.
  • Logs and events provide critical visibility but often contain only a few essential details.

To address these challenges, Splunk Ingest Processor provides a powerful new feature: the ability to convert log events into metrics. Metrics are more efficient to store and process, allowing for faster identification of issues and thereby reducing Mean Time to Detection (MTTD). When the original logs or events must be retained, they can be stored in cheaper storage solutions such as S3, reducing the overall cost of data ingestion and of the computation required to search them.

Solution:

  • Convert log events into metrics where possible.
  • Retain original logs or events in cheaper storage solutions if needed.
  • Utilize federated search for accessing and analyzing retained logs.

Outcomes:

  • Metrics are more efficient to store and process.
  • Faster identification of problems, reducing Mean Time to Detection (MTTD).
  • Lower overall data ingestion and computation costs.
  • Enhanced monitoring efficiency and resource optimization.
  • Maintain high visibility into system operations with reduced operational costs.

In this workshop you’ll have the opportunity to get hands-on with Ingest Processor and Splunk Observability Cloud to see how they can be used to address the challenges outlined above.


Getting Started

During this technical Ingest Processor[1] for Splunk Observability Cloud workshop, you will have the opportunity to get hands-on with Ingest Processor in Splunk Enterprise Cloud.

To simplify the workshop modules, a pre-configured Splunk Enterprise Cloud instance is provided.

The instance is pre-configured with all the requirements for creating an Ingest Processor pipeline.

This workshop will introduce you to the benefits of using Ingest Processor to convert verbose logs to metrics and send those metrics to Splunk Observability Cloud. By the end of these technical workshops, you will have a good understanding of some key features and capabilities of Ingest Processor in Splunk Enterprise Cloud and the value of using Splunk Observability Cloud as a destination within an Ingest Processor pipeline.

Here are the instructions on how to access your pre-configured Splunk Enterprise Cloud instance.

Splunk Ingest Processor Architecture


[1] Ingest Processor is a data processing capability that works within your Splunk Cloud Platform deployment. Use the Ingest Processor to configure data flows, control data format, apply transformation rules prior to indexing, and route data to destinations.


How to connect to your workshop environment

  1. How to retrieve the URL for your Splunk Enterprise Cloud instances.
  2. How to access the Splunk Observability Cloud workshop organization.

1. Splunk Cloud Instances

There are three instances that will be used throughout this workshop, all of which have already been provisioned for you:

  1. Splunk Enterprise Cloud
  2. Splunk Ingest Processor (SCS Tenant)
  3. Splunk Observability Cloud

The Splunk Enterprise Cloud and Ingest Processor instances are hosted in Splunk Show. If you were invited to the workshop, you should have received an email with an invite to the event in Splunk Show or a link to the event will have been provided at the beginning of the workshop.

Log in to Splunk Show using your splunk.com credentials. You should see the event for this workshop. Open the event to see the instance details for your Splunk Cloud and Ingest Processor instances.

Note

Take note of the User ID provided in your Splunk Show event details. This number will be included in the sourcetype that you will use for searching and filtering the Kubernetes data. Because this is a shared environment, use only the User ID you were provided so that other participants’ data is not affected.

Splunk Show Instance Information

2. Splunk Observability Cloud Instances

You should also have received an email to access the Splunk Observability Cloud workshop organization (you may need to check your spam folder). If you have not received the email, let your workshop instructor know. To access the environment, click the Join Now button.

Splunk Observability Cloud Invitation

Important

If you access the event before the workshop start time, your instances may not be available yet. Don’t worry, they will be provided once the workshop begins.

Additionally, you have been invited to a Splunk Observability Cloud workshop organization. The invitation includes a link to the environment. If you don’t have a Splunk Observability Cloud account already, you will be asked to create one. If you already have one, log in and you will see the workshop organization in your list of available organizations.


How Ingest Processor Works

System architecture

The primary components of the Ingest Processor solution are the Ingest Processor service and the SPL2 pipelines that support data processing. The following diagram provides an overview of how these components work together:

Splunk Ingest Processor Architecture

Ingest Processor service

The Ingest Processor service is a cloud service hosted by Splunk. It is part of the data management experience, which is a set of services that fulfill a variety of data ingest and processing use cases.

You can use the Ingest Processor service to do the following:

  • Create and apply SPL2 pipelines that determine how each Ingest Processor processes and routes the data that it receives.
  • Define source types to identify the kind of data that you want to process and determine how the Ingest Processor breaks and merges that data into distinct events.
  • Create connections to the destinations that you want your Ingest Processor to send processed data to.

Pipelines

A pipeline is a set of data processing instructions written in SPL2. When you create a pipeline, you write a specialized SPL2 statement that specifies which data to process, how to process it, and where to send the results. When you apply a pipeline, the Ingest Processor uses those instructions to process all the data that it receives from data sources such as Splunk forwarders, HTTP clients, and logging agents.

Each pipeline selects and works with a subset of all the data that the Ingest Processor receives. For example, you can create a pipeline that selects events with the source type cisco_syslog from the incoming data, and then sends them to a specified index in Splunk Cloud Platform. This subset of selected data is called a partition. For more information, see Partitions.
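
To make the example concrete, here is a minimal sketch of such a pipeline, assuming the cisco_syslog source type and a hypothetical network_logs index (when you build a pipeline in the UI, the partition condition and destination bindings are generated for you):

/* Minimal sketch: select cisco_syslog events and route them to a target index */
$pipeline =
| from $source
| where sourcetype == "cisco_syslog"
| eval index = "network_logs"
| into $destination;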

The Ingest Processor solution supports only the commands and functions that are part of the IngestProcessor profile. For information about the specific SPL2 commands and functions that you can use to write pipelines for Ingest Processor, see Ingest Processor pipeline syntax. For a summary of how the IngestProcessor profile supports different commands and functions compared to other SPL2 profiles, see the SPL2 Search Reference.


Create an Ingest Pipeline

Scenario Overview

In this scenario you will be playing the role of a Splunk Admin responsible for managing your organization’s Splunk Enterprise Cloud environment. You recently worked with an internal application team on instrumenting their Kubernetes environment with Splunk APM and Infrastructure Monitoring using OpenTelemetry to monitor their critical microservice applications.

The logs from the Kubernetes environment are also being collected and sent to Splunk Enterprise Cloud. These logs include:

  • Pod logs (application logs)
  • Kubernetes Events
  • Kubernetes Cluster Logs
    • Control Plane Node logs
    • Worker Node logs
    • Audit Logs

As a Splunk Admin you want to ensure that the data you are collecting is optimized, so it can be analyzed in the most efficient way possible. Taking this approach accelerates troubleshooting and ensures efficient license utilization.

One way to accomplish this is to use Ingest Processor to convert verbose logs to metrics, with Splunk Observability Cloud as the destination for those metrics. Not only does this make collecting the logs more efficient, it also lets you use the newly created metrics in Splunk Observability Cloud, where they can be correlated with Splunk APM data (traces) and Splunk Infrastructure Monitoring data for additional troubleshooting context. Because Splunk Observability Cloud uses a streaming metrics pipeline, the metrics can be alerted on in real time, speeding up problem identification. Additionally, you can use the Metrics Pipeline Management functionality to further optimize the data by aggregating metrics, dropping unnecessary fields, and archiving less important or unneeded metrics.

In the next step you’ll create an Ingest Processor Pipeline which will convert Kubernetes Audit Logs to metrics that will be sent to Observability Cloud.


Login to Splunk Cloud

In this section you will create an Ingest Pipeline which will convert Kubernetes Audit Logs to metrics that are sent to the Splunk Observability Cloud workshop organization. Before getting started, you will need to access the Splunk Cloud and Ingest Processor SCS Tenant environments provided in the Splunk Show event details.

Prerequisite: Login to Splunk Enterprise Cloud

1. Open the Ingest Processor Cloud Stack URL provided in the Splunk Show event details.

Splunk Cloud Instance Details

2. In the Connection info click on the Stack URL link to open your Splunk Cloud stack.

Splunk Cloud Connection Details

3. Use the admin username and password to log in to Splunk Cloud.

Splunk Cloud Login

4. After logging in, if prompted, accept the Terms of Service and click OK

Splunk Cloud Login

5. Navigate back to the Splunk Show event details and select the Ingest Processor SCS Tenant

Ingest Processor Connection Details

6. Click on the Console URL to access the Ingest Processor SCS Tenant

Note

Single Sign-On (SSO) is configured between the Splunk Data Management service (‘SCS Tenant’) and Splunk Cloud environments, so if you have already logged in to your Splunk Cloud stack you should automatically be logged in to the Splunk Data Management service. If you are prompted for credentials, use the credentials provided in the Splunk Show event (listed under the ‘Splunk Cloud Stack’ section).


Review Kubernetes Audit Logs

In this section you will review the Kubernetes Audit Logs that are being collected. You can see that the events are quite verbose, which can make charting them inefficient. To address this, you will create an Ingest Pipeline in Ingest Processor that converts these events to metrics and sends them to Splunk Observability Cloud. This will allow you to chart the events much more efficiently and take advantage of the real-time streaming metrics in Splunk Observability Cloud.

Exercise: Create Ingest Pipeline

1. Open your Ingest Processor Cloud Stack instance using the URL provided in the Splunk Show workshop details.

2. Navigate to Apps → Search and Reporting

Search and Reporting

3. In the search bar, enter the following SPL search string.

Note

Make sure to replace USER_ID with the User ID provided in your Splunk Show instance information.

### Replace USER_ID with the User ID provided in your Splunk Show instance information
index=main sourcetype="kube:apiserver:audit:USER_ID"

4. Press Enter or click the green magnifying glass to run the search.

Kubernetes Audit Log

Note

You should now see the Kubernetes Audit Logs for your environment. Notice that the events are fairly verbose. Explore the available fields and start to think about which details would be good candidates for metrics and dimensions. Ask yourself: What fields would I like to chart, and how would I like to be able to filter, group, or split those fields?
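
If you want a quick way to eyeball candidate fields, a search along these lines lists a handful of events with a few promising fields (a sketch that assumes Splunk’s automatic JSON field extraction for this sourcetype; replace USER_ID as before):

index=main sourcetype="kube:apiserver:audit:USER_ID"
| head 5
| table _time, verb, user.username, objectRef.resource, responseStatus.code

Fields such as verb and responseStatus.code are natural dimension candidates because they have a small set of values you would want to filter, group, or split by.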


Create an Ingest Pipeline

In this section you will create an Ingest Pipeline which will convert Kubernetes Audit Logs to metrics that are sent to the Splunk Observability Cloud workshop organization.

Exercise: Create Ingest Pipeline

1. Open the Ingest Processor SCS Tenant using the connection details provided in the Splunk Show event.

Launch Splunk Cloud Platform

Note

When you open the Ingest Processor SCS Tenant, if you are taken to a welcome page, click on Launch under Splunk Cloud Platform to be taken to the Data Management page where you will configure the Ingest Pipeline.

Launch Splunk Cloud Platform

2. From the Splunk Data Management console select Pipelines → New pipeline → Ingest Processor pipeline.

New Ingest Processor Pipeline

3. In the Get started step of the Ingest Processor configuration page select Blank Pipeline and click Next.

Blank Ingest Processor Pipeline

4. In the Define your pipeline’s partition step of the Ingest Processor configuration page select Partition by sourcetype. Select the = (equals) operator and enter kube:apiserver:audit:USER_ID (be sure to replace USER_ID with the User ID you were assigned) for the value. Click Apply.

Add Partition

5. Click Next

6. In the Add sample data step of the Ingest Processor configuration page select Capture new snapshot. Enter k8s_audit_USER_ID (Be sure to replace USER_ID with the User ID you were assigned) for the Snapshot name and click Capture.

Capture Snapshot

7. Make sure your newly created snapshot (k8s_audit_USER_ID) is selected and then click Next.

Configure Snapshot Sourcetype

8. In the Select a metrics destination step of the Ingest Processor configuration page select show_o11y_org. Click Next.

Metrics Destination

9. In the Select a data destination step of the Ingest Processor configuration page select splunk_indexer. Under Specify how you want your events to be routed to an index select Default. Click Done.

Event Routing

10. In the Pipeline search field replace the default search with the following.

Note

Replace UNIQUE_FIELD in the metric name with a unique value (such as your initials) which will be used to identify your metric in Observability Cloud.

/*A valid SPL2 statement for a pipeline must start with "$pipeline", and include "from $source" and "into $destination".*/
/* Import logs_to_metrics */
import logs_to_metrics from /splunk/ingest/commands
$pipeline =
| from $source
| thru [
        //define the metric name, type, and value for the Kubernetes Events
        //
        // REPLACE UNIQUE_FIELD WITH YOUR INITIALS
        //
        | logs_to_metrics name="k8s_audit_UNIQUE_FIELD" metrictype="counter" value=1 time=_time
        | into $metrics_destination
    ]
| eval index = "kube_logs"
| into $destination;

New to SPL2?

Here is a breakdown of what the SPL2 query is doing:

  • First, you are importing the built-in logs_to_metrics command which will be used to convert the kubernetes events to metrics.
  • You’re using the source data, which you can see on the right is any event from the kube:apiserver:audit sourcetype.
  • Now, you use the thru command which writes the source dataset to the following command, in this case logs_to_metrics.
  • You can see that the metric name (k8s_audit), metric type (counter), value, and timestamp are all provided for the metric. You’re using a value of 1 for this metric because you want to count the number of times the event occurs.
  • Next, you choose the destination for the metric using the into $metrics_destination command, which is your Splunk Observability Cloud organization.
  • Finally, you send the raw log events to another destination, in this case another index, so they are retained if you ever need to access them.
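
As an aside, the same pattern supports more selective metrics. The sketch below (not part of this workshop’s exercise) counts only failed requests; it assumes responseStatus.code compares numerically in your data and uses a hypothetical metric name:

/* Hypothetical variation: emit a counter only for audit events with failure response codes */
import logs_to_metrics from /splunk/ingest/commands
$pipeline =
| from $source
| thru [
        | where _raw.responseStatus.code >= 400
        | logs_to_metrics name="k8s_audit_errors_UNIQUE_FIELD" metrictype="counter" value=1 time=_time
        | into $metrics_destination
    ]
| eval index = "kube_logs"
| into $destination;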

11. In the upper-right corner click the Preview button or press CTRL+Enter (CMD+Enter on Mac). From the Previewing $pipeline dropdown select $metrics_destination. Confirm you are seeing a preview of the metrics that will be sent to Splunk Observability Cloud.

Preview Pipeline

12. In the upper-right corner click the Save pipeline button. Enter Kubernetes Audit Logs2Metrics USER_ID for your pipeline name and click Save.

Save Pipeline Dialog

13. After clicking Save, you will be asked whether you would like to apply the newly created pipeline. Click Yes, apply.

Apply Pipeline Dialog

Note

The Ingest Pipeline should now be sending metrics to Splunk Observability Cloud. Keep this tab open as it will be used again in the next section.

In the next step you’ll confirm the pipeline is working by viewing the metrics you just created in Splunk Observability Cloud.


Confirm Metrics in Observability Cloud

Now that an Ingest Pipeline has been configured to convert Kubernetes Audit Logs into metrics and send them to Splunk Observability Cloud, the metrics should be available. To confirm the metrics are being collected, complete the following steps:

Exercise: Confirm Metrics in Splunk Observability Cloud

1. Log in to the Splunk Observability Cloud organization you were invited to for the workshop. In the upper-right corner, click the + Icon → Chart to create a new chart.

Create New Chart

2. In the Plot Editor of the newly created chart enter the metric name you used while configuring the Ingest Pipeline.

Review Metric

Info

You should see the metric you created in the Ingest Pipeline. Keep this tab open as it will be used again in the next section.

In the next step you will update the ingest pipeline to add dimensions to the metric, so you have additional context for alerting and troubleshooting.


Update Pipeline and Visualize Metrics

Context Matters

In the previous section, you reviewed the raw Kubernetes audit logs and created an Ingest Processor Pipeline to convert them to metrics and send those metrics to Splunk Observability Cloud.

Now that this pipeline is defined, you are collecting the new metrics in Splunk Observability Cloud. This is a great start; however, you will only see a single metric showing the total number of Kubernetes audit events for a given time period. It would be much more valuable to add dimensions so that you can split the metric by the event type, user, response status, and so on.

In this section you will update the Ingest Processor Pipeline to add dimensions from the Kubernetes audit logs to the metrics that are being collected. This will allow you to further filter, group, visualize, and alert on specific aspects of the audit logs. After updating the metric, you will create a new dashboard showing the status of the different types of actions associated with the logs.


Update Ingest Pipeline

Exercise: Update Ingest Pipeline

1. Navigate back to the configuration page for the Ingest Pipeline you created in the previous step.

Ingest Pipeline

2. To add dimensions to the metric from the raw Kubernetes audit logs, update the SPL2 query you created for the pipeline by replacing the logs_to_metrics portion of the query with the following:

Note

Be sure to update the metric name field (name="k8s_audit_UNIQUE_FIELD") to the name you provided in the original pipeline.

| logs_to_metrics name="k8s_audit_UNIQUE_FIELD" metrictype="counter" value=1 time=_time dimensions={"level": _raw.level, "response_status": _raw.responseStatus.code, "namespace": _raw.objectRef.namespace, "resource": _raw.objectRef.resource, "user": _raw.user.username, "action": _raw.verb}

Note

Using the dimensions field in the SPL2 query you can add dimensions from the raw events to the metrics that will be sent to Splunk Observability Cloud. In this case you are adding the event response status, namespace, Kubernetes resource, user, and verb (action that was performed). These dimensions can be used to create more granular dashboards and alerts.

You should consider adding any common tags across your services so that you can take advantage of context propagation and related content in Splunk Observability Cloud.
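
Before previewing, you can sanity-check that those JSON paths actually exist in the raw events with a quick search back in Search and Reporting (a sketch; spath extracts the JSON fields from _raw, and USER_ID should be replaced as before):

index=main sourcetype="kube:apiserver:audit:USER_ID"
| spath
| stats count by verb, objectRef.namespace, objectRef.resource, responseStatus.code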

The updated pipeline should now be the following:

/*A valid SPL2 statement for a pipeline must start with "$pipeline", and include "from $source" and "into $destination".*/
/* Import logs_to_metrics */
import logs_to_metrics from /splunk/ingest/commands
$pipeline =
| from $source
| thru [
        //define the metric name, type, and value for the Kubernetes Events
        //
        // REPLACE UNIQUE_FIELD WITH YOUR INITIALS
        //
        | logs_to_metrics name="k8s_audit_UNIQUE_FIELD" metrictype="counter" value=1 time=_time dimensions={"level": _raw.level, "response_status": _raw.responseStatus.code, "namespace": _raw.objectRef.namespace, "resource": _raw.objectRef.resource, "user": _raw.user.username, "action": _raw.verb}
        | into $metrics_destination
    ]
| eval index = "kube_logs"
| into $destination;

3. In the upper-right corner click the Preview button or press CTRL+Enter (CMD+Enter on Mac). From the Previewing $pipeline dropdown select $metrics_destination. Confirm you are seeing a preview of the metrics that will be sent to Splunk Observability Cloud.

Ingest Pipeline Dimensions

4. Confirm you are seeing the dimensions in the dimensions column of the preview table. You can view the entire dimensions object by clicking into the table.

Ingest Pipeline Dimensions Review

5. In the upper-right corner click the Save pipeline button. On the “You are editing an active pipeline” modal, click Save.

Save Updated Pipeline

Note

Because this pipeline is already active, the changes you made will take effect immediately. Your metric should now be split into multiple metric time series using the dimensions you added.

In the next step you will create a visualization using different dimensions from the Kubernetes audit events.


Visualize Kubernetes Audit Event Metrics

Now that your metric has dimensions, you will create a chart showing the health of different Kubernetes actions using the action dimension (derived from the event verb).

Exercise: Visualize Kubernetes Audit Event Metrics

1. If you closed the chart you created in the previous section, in the upper-right corner, click the + Icon → Chart to create a new chart.

Create New Chart

2. In the Plot Editor of the newly created chart enter k8s_audit* in the metric name field. You will use a wildcard here so that you can see all the metrics that are being ingested.

Review Metric Review Metric

3. Notice the change from one metric to many metric time series; the change occurred when you updated the pipeline to include the dimensions. Now that these metrics are available, let’s adjust the chart to show whether any of our actions have errors associated with them.

Metric Timeseries

First you’ll filter the Kubernetes events to only those that were not successful, using the HTTP response code available in the response_status field. We only want events with a response code of 409, which indicates a conflict (for example, trying to create a resource that already exists), or 503, which indicates that the API was unresponsive for the request.

4. In the plot editor of your chart click Add filter, use response_status for the field, and select 409.0 and 503.0 for the values.

Next, you’ll add a function to the chart which will calculate the total number of events grouped by the resource, action, and response status. This will allow us to see exactly which actions and the associated resources had errors. Now we are only looking at Kubernetes events that were not successful.

5. Click Add analytics → Sum → Sum:Aggregation and add resource, action, and response_status in the Group by field.

Add Metric Filters

6. Using the chart type buttons along the top, change the chart to a heatmap. Next to the Plot editor, click Chart options. In the Group by section select response_status, then action. Change the Color threshold from Auto to Fixed. Click the blue + button to add another threshold. Change the down arrow to yellow and the middle to orange; leave the up arrow red. Enter 5 for the middle threshold and 20 for the upper threshold.

Configure Thresholds

7. In the upper right corner of the chart click the blue Save as… button. Enter a name for your chart (for example: Kubernetes Audit Logs - Conflicts and Failures).

Chart Name

8. On the Choose a dashboard dialog, select New dashboard.

New Dashboard

9. Enter a name for your dashboard that includes your initials, so you can easily find it later. Click Save.

New Dashboard Name

10. Make sure the new dashboard you just created is selected and click Ok.

Save New Dashboard

You should now be taken to your new Kubernetes Audit Events dashboard with the chart you created. You can add new charts from other metrics in your environment, such as application errors and response times from the applications running in the Kubernetes cluster, or other Kubernetes metrics such as pod phase and pod memory utilization, giving you a correlated view of your Kubernetes environment from cluster events to application health.

Audit Dashboard

Make a copy of this chart using the three dots ... in the top right of your chart’s visualization box.

Copy chart button

Paste it into the same dashboard you’ve been working in using the + icon in the top right of the UI.

Paste chart into dashboard

Click into your pasted chart and change the visualization to a Column chart.

Change to column chart visualization

Change the Sum aggregation’s Group by to just resource and namespace (our filters already narrow the data to the problem response codes).

Group chart by resource and namespace

In Chart options, change the title to Kubernetes Audit Logs - Conflicts by Namespace.

Change chart title

Click Save and close

Save and close chart

Exercise: Create a detector based on Kubernetes Audit Logs

On your Conflicts by Namespace chart, click the little bell icon and select New detector from chart.

Bell icon to create detector

Choose a name and click Create alert rule

Enter name for alert rule

For Alert condition click Static Threshold and click Proceed to Alert Settings

Select static threshold condition

Enter a Threshold of 20

Enter threshold value

We won’t choose any recipients for this alert, so click into Activate and choose Activate Alert Rule and Save.

Activate alert rule and save

Click Save one final time in the top right to save your detector

Final save for detector

Navigate back to your dashboard and you will see a detector associated with the chart, denoted by a lit-up bell icon on the chart.

Detector bell icon on chart

Exercise: Visualize your time series data in Splunk Cloud - Dashboard Studio

Now that our time series metrics are ingested into the Splunk Observability Cloud data store, we can easily visualize them in Splunk Cloud!

In your Splunk Cloud instance browse to Dashboards and select Create New Dashboard

Create new dashboard in Splunk Cloud

Choose a dashboard title and permissions, select Dashboard Studio, and choose any layout mode. Click Create.

Dashboard title and layout options

In Dashboard Studio click the chart icon and choose Column

Select column chart in Dashboard Studio

In Select data source, choose Create Splunk Observability Cloud metric search.

Choose observability cloud metric search as data source

Choose a name for your new data source and click the Content Import link under Search for metric or metadata.

Copy and paste the URL for your chart into the Content URL field

Paste chart URL and import

Click Import

Chart imported to dashboard

Chart visible in dashboard

Size your chart to your dashboard

Resize chart in dashboard

Expand Interactions on the right side of your chart’s Configuration and click Add Interaction.

Expand interactions and add interaction

Copy the URL from your dashboard in Splunk Observability

Apply interaction settings

In On click choose Link to custom URL and add the URL for your dashboard in Splunk Observability Cloud for easy navigation back to the source data. Also choose Open in new tab for friendly navigation.

Interaction added

Click Save in the top right to save your Dashboard.

Save dashboard in Splunk Cloud

Highlight and click a Column or name in your chart

Click column or name in chart

You will be told you are navigating back to Splunk Observability. Click Continue

Continue navigation to Splunk Observability

You’ve now navigated back to your corresponding Splunk Observability dashboard from Splunk Cloud.


Conclusion

In this workshop, you walked through the entire process of optimizing Kubernetes log management by converting detailed log events into actionable metrics using Splunk Ingest Processor pipelines. You started by defining a pipeline that efficiently converts Kubernetes audit logs into metrics, drastically reducing the data volume while retaining critical information. You then routed the raw log events to a separate index for retention and deeper analysis; cheaper storage such as S3 is another option for long-term retention.

Kubernetes Audit Event

Next, you enhanced these metrics by adding key dimensions from the raw events, enabling you to drill down into specific actions and resources. You created a chart that filtered the metrics to focus on errors, breaking them out by resource and action. This allowed you to pinpoint exactly where issues were occurring in real time.

Ingest Pipeline

The real-time architecture of Splunk Observability Cloud means that these metrics can trigger alerts the moment an issue is detected, significantly reducing the Mean Time to Detection (MTTD). Additionally, you saw how this chart can be easily saved to new or existing dashboards, ensuring ongoing visibility and monitoring of critical metrics.

Audit Dashboard

The value behind this approach is clear: by converting logs to metrics using Ingest Processor, you not only streamline data processing and reduce storage costs but also gain the ability to monitor and respond to issues in real-time using Splunk Observability Cloud. This results in faster problem resolution, improved system reliability, and more efficient resource utilization, all while maintaining the ability to retain and access the original logs for compliance or deeper analysis.

Happy Splunking!

Dancing Buttercup