Create an Ingest Pipeline

Scenario Overview

In this scenario you will be playing the role of a Splunk Admin responsible for managing your organization's Splunk Enterprise Cloud environment. You recently worked with an internal application team on instrumenting their Kubernetes environment with Splunk APM and Infrastructure Monitoring using OpenTelemetry to monitor their critical microservice applications.

The logs from the Kubernetes environment are also being collected and sent to Splunk Enterprise Cloud. These logs include:

  • Pod logs (application logs)
  • Kubernetes Events
  • Kubernetes Cluster Logs
    • Control Plane Node logs
    • Worker Node logs
    • Audit Logs

As a Splunk Admin you want to ensure that the data you are collecting is optimized, so it can be analyzed in the most efficient way possible. Taking this approach accelerates troubleshooting and ensures efficient license utilization.

One way to accomplish this is to use Ingest Processor to convert robust logs into metrics and send those metrics to Splunk Observability Cloud. Not only does this make collecting the logs more efficient, but the newly created metrics can also be correlated in Splunk Observability Cloud with Splunk APM data (traces) and Splunk Infrastructure Monitoring data, providing additional troubleshooting context. Because Splunk Observability Cloud uses a streaming metrics pipeline, the metrics can be alerted on in real time, speeding up problem identification. Additionally, you can use the Metrics Pipeline Management functionality to further optimize the data by aggregating metrics, dropping unnecessary fields, and archiving less important or unneeded metrics.

In the next step you’ll create an Ingest Processor Pipeline which will convert Kubernetes Audit Logs to metrics that will be sent to Observability Cloud.


Subsections of 3. Create an Ingest Pipeline

Login to Splunk Cloud

In this section you will create an Ingest Pipeline which will convert Kubernetes Audit Logs to metrics which are sent to the Splunk Observability Cloud workshop organization. Before getting started you will need to access the Splunk Cloud and Ingest Processor SCS Tenant environments provided in the Splunk Show event details.

Prerequisite: Log in to Splunk Enterprise Cloud

1. Open the Ingest Processor Cloud Stack URL provided in the Splunk Show event details.

Splunk Cloud Instance Details

2. In the Connection info, click on the Stack URL link to open your Splunk Cloud stack.

Splunk Cloud Connection Details

3. Use the admin username and password to log in to Splunk Cloud.

Splunk Cloud Login

4. After logging in, if prompted, accept the Terms of Service and click OK.

Splunk Cloud Login

5. Navigate back to the Splunk Show event details and select the Ingest Processor SCS Tenant.

Ingest Processor Connection Details

6. Click on the Console URL to access the Ingest Processor SCS Tenant.

Note

Single Sign-On (SSO) is configured between the Splunk Data Management service (‘SCS Tenant’) and Splunk Cloud environments, so if you have already logged in to your Splunk Cloud stack you should automatically be logged in to the Splunk Data Management service. If you are prompted for credentials, use the credentials provided in the Splunk Show event (listed under the ‘Splunk Cloud Stack’ section).


Review Kubernetes Audit Logs

In this section you will review the Kubernetes Audit Logs that are being collected. You can see that the events are quite robust, which can make charting them inefficient. To address this, you will create an Ingest Pipeline in Ingest Processor that will convert these events to metrics that will be sent to Splunk Observability Cloud. This will allow you to chart the events much more efficiently and take advantage of the real-time streaming metrics in Splunk Observability Cloud.

Exercise: Review Kubernetes Audit Logs

1. Open your Ingest Processor Cloud Stack instance using the URL provided in the Splunk Show workshop details.

2. Navigate to Apps → Search and Reporting.

Search and Reporting

3. In the search bar, enter the following SPL search string.

Note

Make sure to replace USER_ID with the User ID provided in your Splunk Show instance information.

``` Replace USER_ID with the User ID provided in your Splunk Show instance information ```
index=main sourcetype="kube:apiserver:audit:USER_ID"

4. Press Enter or click the green magnifying glass to run the search.

Kubernetes Audit Log

Note

You should now see the Kubernetes Audit Logs for your environment. Notice that the events are fairly robust. Explore the available fields and start to think about what information would be good candidates for metrics and dimensions. Ask yourself: What fields would I like to chart, and how would I like to be able to filter, group, or split those fields?
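
A quick way to identify candidate metrics and dimensions is to count events by one or two fields. The search below is only a sketch: the verb and objectRef.resource field names are assumptions based on the standard Kubernetes audit event schema, so adjust them to match the fields actually extracted from your events, and replace USER_ID as before.

index=main sourcetype="kube:apiserver:audit:USER_ID"
``` verb and objectRef.resource are assumed field names; adjust to match your data ```
| stats count by verb, objectRef.resource
| sort - count

Fields that appear consistently and have a manageable number of distinct values are usually the best candidates for metric dimensions.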


Create an Ingest Pipeline

In this section you will create an Ingest Pipeline which will convert Kubernetes Audit Logs to metrics which are sent to the Splunk Observability Cloud workshop organization.

Exercise: Create Ingest Pipeline

1. Open the Ingest Processor SCS Tenant using the connection details provided in the Splunk Show event.

Launch Splunk Cloud Platform

Note

When you open the Ingest Processor SCS Tenant, if you are taken to a welcome page, click on Launch under Splunk Cloud Platform to be taken to the Data Management page where you will configure the Ingest Pipeline.

Launch Splunk Cloud Platform

2. From the Splunk Data Management console select Pipelines → New pipeline → Ingest Processor pipeline.

New Ingest Processor Pipeline

3. In the Get started step of the Ingest Processor configuration page select Blank Pipeline and click Next.

Blank Ingest Processor Pipeline

4. In the Define your pipeline’s partition step of the Ingest Processor configuration page, select Partition by sourcetype. Select the = (equals) operator and enter kube:apiserver:audit:USER_ID (be sure to replace USER_ID with the User ID you were assigned) for the value. Click Apply.

Add Partition

5. Click Next.

6. In the Add sample data step of the Ingest Processor configuration page select Capture new snapshot. Enter k8s_audit_USER_ID (Be sure to replace USER_ID with the User ID you were assigned) for the Snapshot name and click Capture.

Capture Snapshot

7. Make sure your newly created snapshot (k8s_audit_USER_ID) is selected and then click Next.

Configure Snapshot Sourcetype

8. In the Select a metrics destination step of the Ingest Processor configuration page select show_o11y_org. Click Next.

Metrics Destination

9. In the Select a data destination step of the Ingest Processor configuration page select splunk_indexer. Under Specify how you want your events to be routed to an index select Default. Click Done.

Event Routing

10. In the Pipeline search field replace the default search with the following.

Note

Replace UNIQUE_FIELD in the metric name with a unique value (such as your initials) which will be used to identify your metric in Observability Cloud.

/*A valid SPL2 statement for a pipeline must start with "$pipeline", and include "from $source" and "into $destination".*/
/* Import logs_to_metrics */
import logs_to_metrics from /splunk/ingest/commands
$pipeline =
| from $source
| thru [
        //define the metric name, type, and value for the Kubernetes Events
        //
        // REPLACE UNIQUE_FIELD WITH YOUR INITIALS
        //
        | logs_to_metrics name="k8s_audit_UNIQUE_FIELD" metrictype="counter" value=1 time=_time
        | into $metrics_destination
    ]
| eval index = "kube_logs"
| into $destination;

New to SPL2?

Here is a breakdown of what the SPL2 query is doing:

  • First, you import the built-in logs_to_metrics command, which will be used to convert the Kubernetes events to metrics.
  • You’re using the source data, which you can see on the right is any event from the kube:apiserver:audit sourcetype.
  • Next, the thru command writes the source dataset to the command that follows, in this case logs_to_metrics.
  • The metric name (k8s_audit), metric type (counter), value, and timestamp are all provided for the metric. You’re using a value of 1 because you want to count the number of times the event occurs.
  • Then you choose the destination for the metric using the into $metrics_destination command, which is your Splunk Observability Cloud organization.
  • Finally, you send the raw log events to another destination, in this case another index, so they are retained if you ever need to access them.
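
To make the required structure explicit, here is the bare skeleton described by the comment at the top of the pipeline: every pipeline statement must start with $pipeline, read from $source, and end in $destination. This is an illustrative sketch only; it performs no processing and you do not need to enter it for this exercise.

/* Minimal SPL2 pipeline skeleton, shown for orientation only:
   start with "$pipeline", read from $source, and write into $destination. */
$pipeline =
| from $source
| into $destination;

Everything in the pipeline you pasted above (the import, the thru branch that emits metrics, and the eval that sets the index) is layered onto this skeleton.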

11. In the upper-right corner click the Preview button or press CTRL+Enter (CMD+Enter on Mac). From the Previewing $pipeline dropdown select $metrics_destination. Confirm you are seeing a preview of the metrics that will be sent to Splunk Observability Cloud.

Preview Pipeline

12. In the upper-right corner click the Save pipeline button. Enter Kubernetes Audit Logs2Metrics USER_ID for your pipeline name and click Save.

Save Pipeline Dialog

13. After clicking Save, you will be asked if you would like to apply the newly created pipeline. Click Yes, apply.

Apply Pipeline Dialog

Note

The Ingest Pipeline should now be sending metrics to Splunk Observability Cloud. Keep this tab open as it will be used again in the next section.

In the next step you’ll confirm the pipeline is working by viewing the metrics you just created in Splunk Observability Cloud.


Confirm Metrics in Observability Cloud

Now that an Ingest Pipeline has been configured to convert Kubernetes Audit Logs into metrics and send them to Splunk Observability Cloud, the metrics should be available there. To confirm the metrics are being collected, complete the following steps:

Exercise: Confirm Metrics in Splunk Observability Cloud

1. Log in to the Splunk Observability Cloud organization you were invited to for the workshop. In the upper-right corner, click the + Icon → Chart to create a new chart.

Create New Chart

2. In the Plot Editor of the newly created chart, enter the metric name you used while configuring the Ingest Pipeline.

Review Metric

Info

You should see the metric you created in the Ingest Pipeline. Keep this tab open as it will be used again in the next section.

In the next step you will update the ingest pipeline to add dimensions to the metric, so you have additional context for alerting and troubleshooting.