# Splunk Integration

## Overview
The Splunk OpenTelemetry Collector uses Prometheus receivers to scrape metrics from all Isovalent components. Each component exposes metrics on different ports, and because Cilium and Hubble share the same pods (just different ports), we configure separate receivers for each one rather than relying on pod annotations.
| Component | Port | What it provides |
|---|---|---|
| Cilium Agent | 9962 | eBPF datapath, policy enforcement, IPAM, BPF map stats |
| Cilium Envoy | 9964 | L7 proxy metrics (HTTP, gRPC) |
| Cilium Operator | 9963 | Cluster-wide identity and endpoint management |
| Hubble | 9965 | Network flows, DNS, HTTP L7, TCP flags, policy verdicts |
| Tetragon | 2112 | Runtime security, socket stats, network flow events |
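Before wiring up the collector, you can spot-check any of these endpoints by port-forwarding to a pod and curling its metrics path. The commands below are a sketch; the DaemonSet name and namespace assume a default Cilium install in `kube-system`.

```shell
# Port-forward to a Cilium agent pod (any pod in the DaemonSet will do)
kubectl -n kube-system port-forward ds/cilium 9962:9962 &

# The Prometheus endpoint should return cilium_* series
curl -s http://localhost:9962/metrics | head
```

The same check works for the other components by swapping in the port from the table above.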
## Step 1: Create Configuration File
Create a file named `splunk-otel-collector-values.yaml`. Replace the credential placeholders with your actual values.
**Important:** Replace:

- `<YOUR-SPLUNK-ACCESS-TOKEN>` with your Splunk Observability Cloud access token
- `<YOUR-SPLUNK-REALM>` with your realm (e.g., `us1`, `us2`, `eu0`)
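As a reference, the receiver section of the values file can follow the pattern below: one Prometheus receiver per component, each keeping only the container port for that component. This is a sketch, not the full file — receiver keys, job names, and relabeling are illustrative and should be adapted to your cluster.

```yaml
# Sketch: separate Prometheus receivers for components that share pods.
# Receiver names and job names here are illustrative.
agent:
  config:
    receivers:
      prometheus/cilium-agent:
        config:
          scrape_configs:
            - job_name: cilium-agent
              kubernetes_sd_configs:
                - role: pod
              relabel_configs:
                # Keep only the Cilium agent metrics port (9962)
                - source_labels: [__meta_kubernetes_pod_container_port_number]
                  regex: "9962"
                  action: keep
      prometheus/hubble:
        config:
          scrape_configs:
            - job_name: hubble
              kubernetes_sd_configs:
                - role: pod
              relabel_configs:
                # Same pods as the agent, but the Hubble port (9965)
                - source_labels: [__meta_kubernetes_pod_container_port_number]
                  regex: "9965"
                  action: keep
```

Because Cilium and Hubble live in the same pods, a single annotation-based scrape cannot distinguish them; keying each receiver on its port is what makes the separation work.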
### Why we use a strict metric allowlist
Cilium can emit thousands of unique metric series once you factor in all the label combinations for workloads, namespaces, and protocol details. Without the `filter/include` metrics allowlist, a busy cluster can easily generate 50,000+ active series and overwhelm Splunk's ingestion. The allowlist in the configuration is curated to include exactly the metrics that power the Cilium and Hubble dashboards, plus the Tetragon socket stats needed for Network Explorer. If you add new dashboards later, add the metrics they need to this list.
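Such an allowlist can be expressed with the OpenTelemetry filter processor. The snippet below is a sketch; the metric names are illustrative stand-ins — use the exact names your dashboards query.

```yaml
processors:
  # Drop everything except an explicit list of metric names
  filter/include:
    metrics:
      include:
        match_type: strict
        metric_names:
          - cilium_drop_count_total          # illustrative
          - hubble_flows_processed_total     # illustrative
          - tetragon_socket_stats_srtt_sum   # illustrative
          - tetragon_socket_stats_srtt_count # illustrative
```

With `match_type: strict`, only exact name matches pass through, which keeps the active-series count predictable as workloads churn.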
### What Tetragon socket stats enable
The `tetragon_socket_stats_*` metrics are what make per-connection latency and throughput analysis possible in Splunk's Network Explorer. `srtt_count`/`srtt_sum` give you average TCP round-trip time per workload. `retransmitsegs_total` surfaces packet loss and congestion. `txbytes`/`rxbytes` show bandwidth per connection. None of this is visible through APM or standard infrastructure metrics.
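For example, average round-trip time is the ratio of the two SRTT counters. A PromQL-style expression (the exact metric names follow the pattern above and may differ by Tetragon version) looks like:

```
# Average TCP SRTT per workload over the last 5 minutes
rate(tetragon_socket_stats_srtt_sum[5m])
  / rate(tetragon_socket_stats_srtt_count[5m])
```

Both series are counters, so dividing their rates over the same window yields the mean SRTT for connections observed in that window.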
## Step 2: Install Splunk OpenTelemetry Collector
Install the collector:
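With the standard Splunk OpenTelemetry Collector Helm chart, the install typically looks like the following; the release name and namespace are illustrative.

```shell
# Add the Splunk OTel Collector chart repository
helm repo add splunk-otel-collector-chart \
  https://signalfx.github.io/splunk-otel-collector-chart
helm repo update

# Install using the values file from Step 1
helm install splunk-otel-collector \
  splunk-otel-collector-chart/splunk-otel-collector \
  --namespace splunk-monitoring --create-namespace \
  -f splunk-otel-collector-values.yaml
```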
Wait for rollout to complete:
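Assuming the release name above, the agent DaemonSet rollout can be watched like this (resource names depend on your release name):

```shell
kubectl -n splunk-monitoring rollout status \
  daemonset/splunk-otel-collector-agent --timeout=120s
```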
## Step 3: Verify Metrics Collection
Check that the collector is scraping metrics:
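One way to do this is to tail the collector logs and filter for scrape activity; again, the namespace and DaemonSet name below assume the illustrative release name from Step 2.

```shell
kubectl -n splunk-monitoring logs daemonset/splunk-otel-collector-agent \
  | grep -i scrape | head
```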
You should see log entries indicating successful scraping of each component.
## Next Steps
Metrics are now flowing to Splunk Observability Cloud! Proceed to verification to check the dashboards.