90 minutes
Author
Alec Chamberlain
This workshop demonstrates integrating the Isovalent Enterprise Platform with Splunk Observability Cloud to provide comprehensive visibility into Kubernetes networking, security, and runtime behavior using eBPF technology.
What You’ll Learn
By the end of this workshop, you will:
- Deploy Amazon EKS with Cilium as the CNI in ENI mode
- Configure Hubble for network observability with L7 visibility
- Install Tetragon for runtime security monitoring
- Integrate eBPF-based metrics with Splunk Observability Cloud using OpenTelemetry
- Monitor network flows, security events, and infrastructure metrics in unified dashboards
- Understand eBPF-powered observability and kube-proxy replacement
Tip
This integration leverages eBPF (Extended Berkeley Packet Filter) for high-performance, low-overhead observability directly in the Linux kernel.
Prerequisites
- AWS CLI configured with appropriate credentials
- kubectl, eksctl, and Helm 3.x installed
- An AWS account with permissions to create EKS clusters, VPCs, and EC2 instances
- A Splunk Observability Cloud account with access token
- Approximately 90 minutes for complete setup
Benefits of Integration
By connecting Isovalent Enterprise Platform to Splunk Observability Cloud, you gain:
- Deep visibility: Network flows, L7 protocols (HTTP, DNS, gRPC), and runtime security events
- High performance: eBPF-based observability with minimal overhead
- Security insights: Process monitoring, system call tracing, and network policy enforcement
- Unified dashboards: Cilium, Hubble, and Tetragon metrics alongside infrastructure and APM data
- Efficient networking: Kube-proxy replacement and native VPC networking with ENI mode
Subsections of Isovalent Splunk Observability Integration
Overview
The Isovalent Enterprise Platform consists of three core components built on eBPF (Extended Berkeley Packet Filter) technology:
Cilium
Cloud Native CNI and Network Security
- eBPF-based networking and security for Kubernetes
- Replaces kube-proxy with high-performance eBPF datapath
- Native support for AWS ENI mode (pods get VPC IP addresses)
- Network policy enforcement at L3-L7
- Transparent encryption and load balancing
Hubble
Network Observability
- Built on top of Cilium’s eBPF visibility
- Real-time network flow monitoring
- L7 protocol visibility (HTTP, DNS, gRPC, Kafka)
- Flow export and historical data storage (Timescape)
- Metrics exposed on port 9965
Tetragon
Runtime Security and Observability
- eBPF-based runtime security
- Process execution monitoring
- System call tracing
- File access tracking
- Security event metrics on port 2112
Architecture
graph TB
subgraph AWS["Amazon Web Services"]
subgraph EKS["EKS Cluster"]
subgraph Node["Worker Node"]
CA["Cilium Agent<br/>:9962"]
CE["Cilium Envoy<br/>:9964"]
HA["Hubble<br/>:9965"]
TE["Tetragon<br/>:2112"]
OC["OTel Collector"]
end
CO["Cilium Operator<br/>:9963"]
HR["Hubble Relay"]
end
end
subgraph Splunk["Splunk Observability Cloud"]
IM["Infrastructure Monitoring"]
DB["Dashboards"]
end
CA -.->|"Scrape"| OC
CE -.->|"Scrape"| OC
HA -.->|"Scrape"| OC
TE -.->|"Scrape"| OC
CO -.->|"Scrape"| OC
OC ==>|"OTLP/HTTP"| IM
IM --> DB
Key Components
| Component | Service Name | Port | Purpose |
|---|---|---|---|
| Cilium Agent | cilium-agent | 9962 | CNI, network policies, eBPF programs |
| Cilium Envoy | cilium-envoy | 9964 | L7 proxy for HTTP, gRPC |
| Cilium Operator | cilium-operator | 9963 | Cluster-wide operations |
| Hubble | hubble-metrics | 9965 | Network flow metrics |
| Tetragon | tetragon | 2112 | Runtime security metrics |
Benefits of eBPF
- High Performance: Runs in the Linux kernel with minimal overhead
- Safety: Verifier ensures programs are safe to run
- Flexibility: Dynamic instrumentation without kernel modules
- Visibility: Deep insights into network and system behavior
Note
This integration provides visibility into Kubernetes networking at a level not possible with traditional CNI plugins.
Prerequisites
Before starting this workshop, ensure you have the following tools installed:
AWS CLI
# Check installation
aws --version
# Should output: aws-cli/2.x.x or higher
kubectl
# Check installation
kubectl version --client
# Should output: Client Version: v1.28.0 or higher
eksctl
# Check installation
eksctl version
# Should output: 0.150.0 or higher
Helm
# Check installation
helm version
# Should output: version.BuildInfo{Version:"v3.x.x"}
AWS Requirements
- AWS account with permissions to create:
- EKS clusters
- VPCs and subnets
- EC2 instances
- IAM roles and policies
- Elastic Network Interfaces
- AWS CLI configured with credentials (aws configure)
Splunk Observability Cloud
You’ll need:
- A Splunk Observability Cloud account
- An Access Token with ingest permissions
- Your Realm identifier (e.g., us1, us2, eu0)
Getting Splunk Credentials
In Splunk Observability Cloud:
- Navigate to Settings → Access Tokens
- Create a new token with Ingest permissions
- Note your realm from the URL:
https://app.<realm>.signalfx.com
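Optionally, you can sanity-check the token and realm from your workstation before installing anything on the cluster. The snippet below is a minimal sketch that posts a single test datapoint (the arbitrary metric name test.workshop.ping) to the ingest API; an HTTP 200 response means the credentials work:
# Replace the placeholders, then send one throwaway gauge datapoint
SPLUNK_REALM=<YOUR-SPLUNK-REALM>
SPLUNK_ACCESS_TOKEN=<YOUR-SPLUNK-ACCESS-TOKEN>
curl -s -o /dev/null -w "%{http_code}\n" \
  -X POST "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/datapoint" \
  -H "Content-Type: application/json" \
  -H "X-SF-Token: ${SPLUNK_ACCESS_TOKEN}" \
  -d '{"gauge": [{"metric": "test.workshop.ping", "value": 1}]}'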
Cost Considerations
AWS Costs (Approximate)
- EKS Control Plane: ~$73/month
- EC2 Nodes (2x m5.xlarge): ~$280/month
- Data Transfer: Variable
- EBS Volumes: ~$20/month
Estimated Total: ~$380-400/month for lab environment
Splunk Costs
- Based on metrics volume (DPM - Data Points per Minute)
- Free trial available for testing
Warning
Remember to clean up resources after completing the workshop to avoid ongoing charges.
Time Estimate
- EKS Cluster Creation: 15-20 minutes
- Cilium Installation: 10-15 minutes
- Integration Setup: 10 minutes
- Total: Approximately 90 minutes end to end, including node group creation, Tetragon, and verification
EKS Setup
Step 1: Add Helm Repositories
Add the required Helm repositories:
# Add Isovalent Helm repository
helm repo add isovalent https://helm.isovalent.com
# Add Splunk OpenTelemetry Collector Helm repository
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
# Update Helm repositories
helm repo update
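To confirm both repositories were added correctly, you can search them for the charts used later in this workshop (an optional check, not required for the install):
# Both searches should return at least one chart version
helm search repo isovalent/cilium
helm search repo splunk-otel-collector-chart/splunk-otel-collector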
Step 2: Create EKS Cluster Configuration
Create a file named cluster.yaml:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
name: isovalent-demo
region: us-east-1
version: "1.30"
iam:
withOIDC: true
addonsConfig:
disableDefaultAddons: true
addons:
- name: coredns
Key Configuration Details:
- disableDefaultAddons: true - Disables the AWS VPC CNI and kube-proxy (Cilium will replace both)
- withOIDC: true - Enables IAM roles for service accounts (required for Cilium to manage ENIs)
- The coredns addon is retained because it is needed for DNS resolution
Why Disable Default Addons?
Cilium provides its own CNI implementation using eBPF, which is more performant than the default AWS VPC CNI. By disabling the defaults, we avoid conflicts and let Cilium handle all networking.
Step 3: Create the EKS Cluster
Create the cluster (this takes approximately 15-20 minutes):
eksctl create cluster -f cluster.yaml
Verify the cluster is created:
# Update kubeconfig
aws eks update-kubeconfig --name isovalent-demo --region us-east-1
# Check pods
kubectl get pods -n kube-system
Expected Output:
- CoreDNS pods will be in Pending state (this is normal - they're waiting for the CNI)
- No worker nodes yet
Note
Without a CNI plugin, pods cannot get IP addresses or network connectivity. CoreDNS will remain pending until Cilium is installed.
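If you want to double-check that the default networking addons really were skipped, the following commands should both return a NotFound error on this cluster (a quick optional verification):
# Neither DaemonSet should exist, since Cilium will replace both components
kubectl get daemonset aws-node -n kube-system
kubectl get daemonset kube-proxy -n kube-system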
Step 4: Get Kubernetes API Server Endpoint
You’ll need this for the Cilium configuration:
aws eks describe-cluster --name isovalent-demo --region us-east-1 \
--query 'cluster.endpoint' --output text
Save this endpoint - you’ll use it in the Cilium installation step.
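Because the Cilium values file expects the endpoint without the https:// prefix, one convenient option is to capture it in a shell variable up front (the variable name API_SERVER_HOST is just an example):
# Capture the endpoint and strip the https:// prefix for later use
API_SERVER_HOST=$(aws eks describe-cluster --name isovalent-demo --region us-east-1 \
  --query 'cluster.endpoint' --output text | sed 's|^https://||')
echo "${API_SERVER_HOST}"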
Step 5: Install Prometheus CRDs
Cilium uses Prometheus ServiceMonitor CRDs for metrics:
kubectl apply -f https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.68.0/stripped-down-crds.yaml
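To confirm the CRDs landed before moving on, checking for the ServiceMonitor definition is enough, since that is what the serviceMonitor options in the Cilium values rely on:
# Should print the ServiceMonitor CRD and its creation timestamp
kubectl get crd servicemonitors.monitoring.coreos.com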
Next Steps
With the EKS cluster created, you’re ready to install Cilium, Hubble, and Tetragon.
Cilium Installation
Create a file named cilium-enterprise-values.yaml. Replace <YOUR-EKS-API-SERVER-ENDPOINT> with the endpoint from the previous step (without https:// prefix):
# Configure unique cluster name & ID
cluster:
name: isovalent-demo
id: 0
# Configure ENI specifics
eni:
enabled: true
updateEC2AdapterLimitViaAPI: true
awsEnablePrefixDelegation: true
enableIPv4Masquerade: false
loadBalancer:
serviceTopology: true
ipam:
mode: eni
routingMode: native
# BPF / KubeProxyReplacement
kubeProxyReplacement: "true"
k8sServiceHost: <YOUR-EKS-API-SERVER-ENDPOINT>
k8sServicePort: 443
# Configure TLS configuration
tls:
ca:
certValidityDuration: 3650 # 10 years
# Enable Cilium Hubble for visibility
hubble:
enabled: true
metrics:
enableOpenMetrics: true
enabled:
- dns:labelsContext=source_namespace,destination_namespace
- drop:labelsContext=source_namespace,destination_namespace
- tcp:labelsContext=source_namespace,destination_namespace
- port-distribution:labelsContext=source_namespace,destination_namespace
- icmp:labelsContext=source_namespace,destination_namespace
- flow:sourceContext=workload-name|reserved-identity
- "httpV2:exemplars=true;labelsContext=source_namespace,destination_namespace"
- "policy:labelsContext=source_namespace,destination_namespace"
serviceMonitor:
enabled: true
tls:
enabled: true
auto:
enabled: true
method: cronJob
certValidityDuration: 1095 # 3 years
relay:
enabled: true
tls:
server:
enabled: true
prometheus:
enabled: true
serviceMonitor:
enabled: true
timescape:
enabled: true
# Enable Cilium Operator metrics
operator:
prometheus:
enabled: true
serviceMonitor:
enabled: true
# Enable Cilium Agent metrics
prometheus:
enabled: true
serviceMonitor:
enabled: true
# Configure Cilium Envoy
envoy:
prometheus:
enabled: true
serviceMonitor:
enabled: true
# Enable DNS Proxy HA support
extraConfig:
external-dns-proxy: "true"
enterprise:
featureGate:
approved:
- DNSProxyHA
- HubbleTimescape
Configuration Highlights
- ENI Mode: Pods get native VPC IP addresses
- Kube-Proxy Replacement: eBPF-based service load balancing
- Hubble: Network observability with L7 visibility
- Timescape: Historical network flow storage
Step 2: Install Cilium Enterprise
Install Cilium using Helm:
helm install cilium isovalent/cilium --version 1.18.4 \
--namespace kube-system -f cilium-enterprise-values.yaml
Note
The installation may initially show pending jobs. This is expected - proceed to create nodes.
Step 3: Create Node Group
Create a file named nodegroup.yaml:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
name: isovalent-demo
region: us-east-1
managedNodeGroups:
- name: standard
instanceType: m5.xlarge
desiredCapacity: 2
privateNetworking: true
Create the node group (this takes 5-10 minutes):
eksctl create nodegroup -f nodegroup.yaml
Step 4: Verify Cilium Installation
Once nodes are ready, verify all components:
# Check nodes
kubectl get nodes
# Check Cilium pods
kubectl get pods -n kube-system -l k8s-app=cilium
# Check all Cilium components
kubectl get pods -n kube-system | grep -E "(cilium|hubble)"
Expected Output:
- 2 nodes in Ready state
- Cilium pods running (1 per node)
- Hubble relay and timescape running
- Cilium operator running
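You can also confirm that kube-proxy replacement is active by asking an agent for its status. This is a minimal check; recent Cilium releases ship the in-pod CLI as cilium-dbg, while older versions call it cilium:
# The KubeProxyReplacement line should report True now that the eBPF
# datapath handles service load balancing
kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg status | grep KubeProxyReplacement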
Step 5: Install Tetragon
Install Tetragon for runtime security:
helm install tetragon isovalent/tetragon --version 1.18.0 \
--namespace tetragon --create-namespace
Verify installation:
kubectl get pods -n tetragon
What you’ll see: Tetragon runs as a DaemonSet (one pod per node) plus an operator.
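To see Tetragon in action before any metrics reach Splunk, you can stream live process events from one of the agents. This follows the pattern from the upstream Tetragon docs, adjusted for the tetragon namespace used here:
# Stream process exec/exit events from one Tetragon agent (Ctrl+C to stop)
kubectl exec -n tetragon ds/tetragon -c tetragon -- tetra getevents -o compact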
Step 6: Install Cilium DNS Proxy HA
Create a file named cilium-dns-proxy-ha-values.yaml:
enableCriticalPriorityClass: true
metrics:
serviceMonitor:
enabled: true
Install DNS Proxy HA:
helm upgrade -i cilium-dnsproxy isovalent/cilium-dnsproxy --version 1.16.7 \
-n kube-system -f cilium-dns-proxy-ha-values.yaml
Verify:
kubectl rollout status -n kube-system ds/cilium-dnsproxy --watch
Success
You now have a fully functional EKS cluster with Cilium CNI, Hubble observability, and Tetragon security!
Splunk Integration
Overview
The Splunk OpenTelemetry Collector uses Prometheus receivers to scrape metrics from all Isovalent components. Each component exposes metrics on different ports:
| Component | Port | Metrics |
|---|---|---|
| Cilium Agent | 9962 | CNI, networking, policy |
| Cilium Envoy | 9964 | L7 proxy metrics |
| Cilium Operator | 9963 | Cluster operations |
| Hubble | 9965 | Network flows, DNS, HTTP |
| Tetragon | 2112 | Runtime security events |
Step 1: Create Configuration File
Create a file named splunk-otel-isovalent.yaml with your Splunk credentials:
agent:
config:
receivers:
prometheus/isovalent_cilium:
config:
scrape_configs:
- job_name: 'cilium_metrics_9962'
metrics_path: /metrics
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_label_k8s_app]
action: keep
regex: cilium
- source_labels: [__meta_kubernetes_pod_ip]
target_label: __address__
replacement: ${__meta_kubernetes_pod_ip}:9962
- job_name: 'hubble_metrics_9965'
metrics_path: /metrics
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_label_k8s_app]
action: keep
regex: cilium
- source_labels: [__meta_kubernetes_pod_ip]
target_label: __address__
replacement: ${__meta_kubernetes_pod_ip}:9965
prometheus/isovalent_envoy:
config:
scrape_configs:
- job_name: 'envoy_metrics_9964'
metrics_path: /metrics
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_label_k8s_app]
action: keep
regex: cilium-envoy
- source_labels: [__meta_kubernetes_pod_ip]
target_label: __address__
replacement: ${__meta_kubernetes_pod_ip}:9964
prometheus/isovalent_operator:
config:
scrape_configs:
- job_name: 'cilium_operator_metrics_9963'
metrics_path: /metrics
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_label_io_cilium_app]
action: keep
regex: operator
prometheus/isovalent_tetragon:
config:
scrape_configs:
- job_name: 'tetragon_metrics_2112'
metrics_path: /metrics
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
action: keep
regex: tetragon
- source_labels: [__meta_kubernetes_pod_ip]
target_label: __address__
replacement: ${__meta_kubernetes_pod_ip}:2112
processors:
filter/includemetrics:
metrics:
include:
match_type: strict
metric_names:
# Cilium metrics
- cilium_endpoint_state
- cilium_bpf_map_ops_total
- cilium_policy_l7_total
# Hubble metrics
- hubble_flows_processed_total
- hubble_drop_total
- hubble_dns_queries_total
- hubble_http_requests_total
# Tetragon metrics
- tetragon_dns_total
- tetragon_http_response_total
service:
pipelines:
metrics:
receivers:
- prometheus/isovalent_cilium
- prometheus/isovalent_envoy
- prometheus/isovalent_operator
- prometheus/isovalent_tetragon
- hostmetrics
- kubeletstats
processors:
- filter/includemetrics
- resourcedetection
clusterName: isovalent-demo
splunkObservability:
accessToken: <YOUR-SPLUNK-ACCESS-TOKEN>
realm: <YOUR-SPLUNK-REALM>
cloudProvider: aws
distribution: eks
Important: Replace:
- <YOUR-SPLUNK-ACCESS-TOKEN> with your Splunk Observability Cloud access token
- <YOUR-SPLUNK-REALM> with your realm (e.g., us1, us2, eu0)
Metric Filtering
The configuration uses a filter/includemetrics processor so that only the metric names explicitly listed there are forwarded through the pipeline. This keeps high-volume eBPF metrics from inflating your Splunk ingest (DPM) usage.
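If you later want to extend the allow-list, a quick way to see what is available is to read a component's endpoint directly and strip the label sets. The sketch below lists the distinct Hubble metric names exposed on port 9965 (the same approach works for the other ports):
# List distinct Hubble metric names that could be added to filter/includemetrics
kubectl exec -n kube-system ds/cilium -- curl -s localhost:9965/metrics \
  | grep -v '^#' | cut -d'{' -f1 | awk '{print $1}' | sort -u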
Step 2: Install Splunk OpenTelemetry Collector
Install the collector:
helm upgrade --install splunk-otel-collector \
splunk-otel-collector-chart/splunk-otel-collector \
-n otel-splunk --create-namespace \
-f splunk-otel-isovalent.yaml
Wait for rollout to complete:
kubectl rollout status daemonset/splunk-otel-collector-agent -n otel-splunk --timeout=60s
Step 3: Verify Metrics Collection
Check that the collector is scraping metrics:
kubectl logs -n otel-splunk -l app=splunk-otel-collector --tail=100 | grep -i "cilium\|hubble\|tetragon"
You should see log entries indicating successful scraping of each component.
Next Steps
Metrics are now flowing to Splunk Observability Cloud! Proceed to verification to check the dashboards.
Verification
Verify All Components
Run this comprehensive check to ensure everything is running:
echo "=== Cluster Nodes ==="
kubectl get nodes
echo -e "\n=== Cilium Components ==="
kubectl get pods -n kube-system -l k8s-app=cilium
echo -e "\n=== Hubble Components ==="
kubectl get pods -n kube-system | grep hubble
echo -e "\n=== Tetragon ==="
kubectl get pods -n tetragon
echo -e "\n=== Splunk OTel Collector ==="
kubectl get pods -n otel-splunk
Expected Output:
- 2 nodes in Ready state
- Cilium pods: 2 running (one per node)
- Hubble relay and timescape: running
- Tetragon pods: 2 running + operator
- Splunk collector pods: running
Verify Metrics Endpoints
Test that metrics are accessible from each component:
# Test Cilium metrics
kubectl exec -n kube-system ds/cilium -- curl -s localhost:9962/metrics | head -20
# Test Hubble metrics
kubectl exec -n kube-system ds/cilium -- curl -s localhost:9965/metrics | head -20
# Test Tetragon metrics
kubectl exec -n tetragon ds/tetragon -- curl -s localhost:2112/metrics | head -20
Each command should return Prometheus-formatted metrics.
Verify in Splunk Observability Cloud
Check Infrastructure Navigator
- Log in to your Splunk Observability Cloud account
- Navigate to Infrastructure → Kubernetes
- Find your cluster: isovalent-demo
- Verify the cluster is reporting metrics
Search for Isovalent Metrics
Navigate to Metrics and search for:
- cilium_* - Cilium networking metrics
- hubble_* - Network flow metrics
- tetragon_* - Runtime security metrics
Tip
It may take 2-3 minutes after installation for metrics to start appearing in Splunk Observability Cloud.
View Dashboards
Create Custom Dashboard
- Navigate to Dashboards → Create
- Add charts for key metrics:
Cilium Endpoint State:
cilium_endpoint_state{cluster="isovalent-demo"}
Hubble Flow Processing:
hubble_flows_processed_total{cluster="isovalent-demo"}
Tetragon Events:
tetragon_dns_total{cluster="isovalent-demo"}
Example Queries
DNS Query Rate:
rate(hubble_dns_queries_total{cluster="isovalent-demo"}[1m])
Dropped Packets:
sum by (reason) (hubble_drop_total{cluster="isovalent-demo"})
Network Policy Enforcements:
rate(cilium_policy_l7_total{cluster="isovalent-demo"}[5m])
Troubleshooting
No Metrics in Splunk
If you don’t see metrics:
Check collector logs:
kubectl logs -n otel-splunk -l app=splunk-otel-collector --tail=200
Verify scrape targets:
kubectl describe configmap -n otel-splunk splunk-otel-collector-otel-agent
Check network connectivity:
kubectl exec -n otel-splunk -it deployment/splunk-otel-collector -- \
curl -v https://ingest.<YOUR-REALM>.signalfx.com
Pods Not Running
If Cilium or Tetragon pods are not running:
Check pod status:
kubectl describe pod -n kube-system <cilium-pod-name>
View logs:
kubectl logs -n kube-system <cilium-pod-name>
Verify node readiness:
kubectl get nodes -o wide
Cleanup
To remove all resources and avoid AWS charges:
# Delete the OpenTelemetry Collector
helm uninstall splunk-otel-collector -n otel-splunk
# Delete the EKS cluster (this removes everything)
eksctl delete cluster --name isovalent-demo --region us-east-1
Warning
The cleanup process takes 10-15 minutes. Ensure all resources are deleted to avoid charges.
Next Steps
Now that your integration is working:
- Deploy sample applications to generate network traffic
- Create network policies and monitor enforcement (a starter sketch covering this and the previous item follows below)
- Set up alerts in Splunk for dropped packets or security events
- Explore Hubble’s L7 visibility for HTTP/gRPC traffic
- Use Tetragon to monitor process execution and file access
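As a starting point for the first two items, the sketch below deploys a throwaway nginx pod, applies a minimal CiliumNetworkPolicy that only admits ingress from pods labeled app=allowed-client, and then issues one allowed and one denied request so that flows and drops show up in hubble_flows_processed_total and hubble_drop_total. The names (policy-demo, web, allowed-client) are illustrative, not part of the workshop:
# Create a demo namespace and a throwaway web workload
kubectl create namespace policy-demo
kubectl -n policy-demo run web --image=nginx --labels=app=web --port=80
kubectl -n policy-demo expose pod web --port=80
kubectl -n policy-demo wait --for=condition=Ready pod/web --timeout=60s
# Allow ingress to app=web only from pods labeled app=allowed-client
kubectl apply -f - <<EOF
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: web-allow-client
  namespace: policy-demo
spec:
  endpointSelector:
    matchLabels:
      app: web
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: allowed-client
EOF
# The first request (matching label) succeeds; the second is dropped by the
# policy and times out after 5 seconds, generating drop events in Hubble
kubectl -n policy-demo run allowed-client --labels=app=allowed-client \
  --image=curlimages/curl --restart=Never --rm -i --command -- curl -s -m 5 http://web
kubectl -n policy-demo run blocked-client \
  --image=curlimages/curl --restart=Never --rm -i --command -- curl -s -m 5 http://web
When you're done experimenting, kubectl delete namespace policy-demo removes everything the sketch created.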
Success!
Congratulations! You’ve successfully integrated Isovalent Enterprise Platform with Splunk Observability Cloud.