Isovalent Enterprise Platform Integration with Splunk Observability Cloud
90 minutes | Author: Alec Chamberlain

This workshop demonstrates how to integrate the Isovalent Enterprise Platform with Splunk Observability Cloud, providing comprehensive visibility into Kubernetes networking, security, and runtime behavior using eBPF technology.
What You’ll Learn
By the end of this workshop, you will:
- Deploy Amazon EKS with Cilium as the CNI in ENI mode
- Configure Hubble for network observability with L7 visibility
- Install Tetragon for runtime security monitoring
- Integrate eBPF-based metrics with Splunk Observability Cloud using OpenTelemetry
- Monitor network flows, security events, and infrastructure metrics in unified dashboards
- Understand eBPF-powered observability and kube-proxy replacement
Sections
- Overview - Understand Cilium architecture and eBPF fundamentals
- Prerequisites - Required tools and access
- EKS Setup - Create EKS cluster for Cilium
- Cilium Installation - Deploy Cilium, Hubble, and Tetragon
- Splunk Integration - Connect metrics to Splunk Observability Cloud
- Verification - Validate the integration
Tip
This integration leverages eBPF (Extended Berkeley Packet Filter) for high-performance, low-overhead observability directly in the Linux kernel.
Prerequisites
- AWS CLI configured with appropriate credentials
- kubectl, eksctl, and Helm 3.x installed
- An AWS account with permissions to create EKS clusters, VPCs, and EC2 instances
- A Splunk Observability Cloud account with access token
- Approximately 90 minutes for complete setup
Benefits of Integration
By connecting Isovalent Enterprise Platform to Splunk Observability Cloud, you gain:
- 🔍 Deep visibility: Network flows, L7 protocols (HTTP, DNS, gRPC), and runtime security events
- 🚀 High performance: eBPF-based observability with minimal overhead
- 🔐 Security insights: Process monitoring, system call tracing, and network policy enforcement
- 📊 Unified dashboards: Cilium, Hubble, and Tetragon metrics alongside infrastructure and APM data
- ⚡ Efficient networking: Kube-proxy replacement and native VPC networking with ENI mode
What is Isovalent Enterprise Platform?

The Isovalent Enterprise Platform consists of three core components built on eBPF (Extended Berkeley Packet Filter) technology:

Cilium - Cloud Native CNI and Network Security
- eBPF-based networking and security for Kubernetes
- Replaces kube-proxy with a high-performance eBPF datapath
- Native support for AWS ENI mode (pods get VPC IP addresses)
- Network policy enforcement at L3-L7
- Transparent encryption and load balancing

Hubble - Network Observability
- Built on top of Cilium's eBPF visibility
- Real-time network flow monitoring
- L7 protocol visibility (HTTP, DNS, gRPC, Kafka)
- Flow export and historical data storage (Timescape)
- Metrics exposed on port 9965

Tetragon - Runtime Security and Observability
- Process monitoring and system call tracing
- Metrics exposed on port 2112
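To make the L3-L7 policy enforcement above concrete, here is a hypothetical CiliumNetworkPolicy that allows only HTTP GET requests on `/api/` paths between two apps. The `frontend`/`backend` labels are illustrative placeholders, not workloads deployed in this workshop:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-api-only
spec:
  # Apply the policy to pods labeled app=backend
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    # Only accept traffic from pods labeled app=frontend
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          # L7 rule: permit only GET requests to /api/ paths
          rules:
            http:
              - method: GET
                path: "/api/.*"
```

Because the L7 rules are enforced in Cilium's Envoy proxy, any other method or path is rejected at the HTTP layer rather than just dropped at the connection level.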
Required Tools

Before starting this workshop, ensure you have the following tools installed:

AWS CLI

```shell
# Check installation
aws --version
# Should output: aws-cli/2.x.x or higher
```

kubectl

```shell
# Check installation
kubectl version --client
# Should output: Client Version: v1.28.0 or higher
```

eksctl

```shell
# Check installation
eksctl version
# Should output: 0.150.0 or higher
```

Helm

```shell
# Check installation
helm version
# Should output: version.BuildInfo{Version:"v3.x.x"}
```

AWS Requirements

An AWS account with permissions to create:
- EKS clusters
- VPCs and subnets
- EC2 instances
- IAM roles and policies
- Elastic Network Interfaces

AWS CLI configured with credentials (aws configure)

Splunk Observability Cloud

You'll need:
- An access token with ingest permissions
- Your realm (for example: us0, us1, eu0)
Step 1: Add Helm Repositories

Add the required Helm repositories:

```shell
# Add Isovalent Helm repository
helm repo add isovalent https://helm.isovalent.com

# Add Splunk OpenTelemetry Collector Helm repository
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart

# Update Helm repositories
helm repo update
```

Step 2: Create EKS Cluster Configuration

Create a file named cluster.yaml:
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: isovalent-demo
  region: us-east-1
  version: "1.30"
iam:
  withOIDC: true
addonsConfig:
  disableDefaultAddons: true
addons:
  - name: coredns
```

Key Configuration Details:
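With cluster.yaml in place, the cluster can be created with eksctl. Provisioning an EKS control plane and node group typically takes 15-20 minutes:

```shell
# Create the EKS cluster from the configuration file above
eksctl create cluster -f cluster.yaml
```

Because disableDefaultAddons is set, the cluster comes up without the aws-node (VPC CNI) and kube-proxy DaemonSets, leaving networking to Cilium in the next section.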
Step 1: Configure Cilium Enterprise

Create a file named cilium-enterprise-values.yaml. Replace <YOUR-EKS-API-SERVER-ENDPOINT> with the endpoint from the previous step (without the https:// prefix):
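The endpoint can be retrieved with the AWS CLI; a sketch, assuming the cluster name and region from cluster.yaml:

```shell
# Fetch the EKS API server endpoint and strip the https:// prefix,
# since Cilium's k8sServiceHost expects a bare hostname
API_SERVER=$(aws eks describe-cluster \
  --name isovalent-demo \
  --region us-east-1 \
  --query "cluster.endpoint" \
  --output text | sed 's|https://||')
echo "$API_SERVER"
```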
```yaml
# Configure unique cluster name & ID
cluster:
  name: isovalent-demo
  id: 0

# Configure ENI specifics
eni:
  enabled: true
  updateEC2AdapterLimitViaAPI: true
  awsEnablePrefixDelegation: true
enableIPv4Masquerade: false
loadBalancer:
  serviceTopology: true
ipam:
  mode: eni
routingMode: native

# BPF / KubeProxyReplacement
kubeProxyReplacement: "true"
k8sServiceHost: <YOUR-EKS-API-SERVER-ENDPOINT>
k8sServicePort: 443

# Configure TLS configuration
tls:
  ca:
    certValidityDuration: 3650 # 10 years

# Enable Cilium Hubble for visibility
hubble:
  enabled: true
  metrics:
    enableOpenMetrics: true
    enabled:
      - dns:labelsContext=source_namespace,destination_namespace
      - drop:labelsContext=source_namespace,destination_namespace
      - tcp:labelsContext=source_namespace,destination_namespace
      - port-distribution:labelsContext=source_namespace,destination_namespace
      - icmp:labelsContext=source_namespace,destination_namespace
      - flow:sourceContext=workload-name|reserved-identity
      - "httpV2:exemplars=true;labelsContext=source_namespace,destination_namespace"
      - "policy:labelsContext=source_namespace,destination_namespace"
    serviceMonitor:
      enabled: true
  tls:
    enabled: true
    auto:
      enabled: true
      method: cronJob
      certValidityDuration: 1095 # 3 years
  relay:
    enabled: true
    tls:
      server:
        enabled: true
    prometheus:
      enabled: true
      serviceMonitor:
        enabled: true
  timescape:
    enabled: true

# Enable Cilium Operator metrics
operator:
  prometheus:
    enabled: true
    serviceMonitor:
      enabled: true

# Enable Cilium Agent metrics
prometheus:
  enabled: true
  serviceMonitor:
    enabled: true

# Configure Cilium Envoy
envoy:
  prometheus:
    enabled: true
    serviceMonitor:
      enabled: true

# Enable DNS Proxy HA support
extraConfig:
  external-dns-proxy: "true"
enterprise:
  featureGate:
    approved:
      - DNSProxyHA
      - HubbleTimescape
```

Configuration Highlights
- ENI Mode: Pods get native VPC IP addresses
- Kube-Proxy Replacement: eBPF-based service load balancing
- Hubble: Network observability with L7 visibility
- Timescape: Historical network flow storage

Step 2: Install Cilium Enterprise

Install Cilium using Helm:
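A sketch of the install command. The chart name isovalent/cilium and the release name are assumptions; Isovalent Enterprise charts may require entitlement-specific credentials, so confirm against your Isovalent documentation:

```shell
# Install Cilium Enterprise into kube-system with the values file above
# (chart name is an assumption -- verify with: helm search repo isovalent)
helm install cilium isovalent/cilium \
  --namespace kube-system \
  --values cilium-enterprise-values.yaml
```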
Overview

The Splunk OpenTelemetry Collector uses Prometheus receivers to scrape metrics from all Isovalent components. Each component exposes metrics on a different port:

| Component | Port | Metrics |
| --- | --- | --- |
| Cilium Agent | 9962 | CNI, networking, policy |
| Cilium Envoy | 9964 | L7 proxy metrics |
| Cilium Operator | 9963 | Cluster operations |
| Hubble | 9965 | Network flows, DNS, HTTP |
| Tetragon | 2112 | Runtime security events |

Step 1: Create Configuration File

Create a file named splunk-otel-isovalent.yaml with your Splunk credentials:

```yaml
agent:
  config:
    receivers:
      prometheus/isovalent_cilium:
        config:
          scrape_configs:
            - job_name: 'cilium_metrics_9962'
              metrics_path: /metrics
              kubernetes_sd_configs:
                - role: pod
              relabel_configs:
                - source_labels: [__meta_kubernetes_pod_label_k8s_app]
                  action: keep
                  regex: cilium
                - source_labels: [__meta_kubernetes_pod_ip]
                  target_label: __address__
                  replacement: ${__meta_kubernetes_pod_ip}:9962
            - job_name: 'hubble_metrics_9965'
              metrics_path: /metrics
              kubernetes_sd_configs:
                - role: pod
              relabel_configs:
                - source_labels: [__meta_kubernetes_pod_label_k8s_app]
                  action: keep
                  regex: cilium
                - source_labels: [__meta_kubernetes_pod_ip]
                  target_label: __address__
                  replacement: ${__meta_kubernetes_pod_ip}:9965
      prometheus/isovalent_envoy:
        config:
          scrape_configs:
            - job_name: 'envoy_metrics_9964'
              metrics_path: /metrics
              kubernetes_sd_configs:
                - role: pod
              relabel_configs:
                - source_labels: [__meta_kubernetes_pod_label_k8s_app]
                  action: keep
                  regex: cilium-envoy
                - source_labels: [__meta_kubernetes_pod_ip]
                  target_label: __address__
                  replacement: ${__meta_kubernetes_pod_ip}:9964
      prometheus/isovalent_operator:
        config:
          scrape_configs:
            - job_name: 'cilium_operator_metrics_9963'
              metrics_path: /metrics
              kubernetes_sd_configs:
                - role: pod
              relabel_configs:
                - source_labels: [__meta_kubernetes_pod_label_io_cilium_app]
                  action: keep
                  regex: operator
      prometheus/isovalent_tetragon:
        config:
          scrape_configs:
            - job_name: 'tetragon_metrics_2112'
              metrics_path: /metrics
              kubernetes_sd_configs:
                - role: pod
              relabel_configs:
                - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
                  action: keep
                  regex: tetragon
                - source_labels: [__meta_kubernetes_pod_ip]
                  target_label: __address__
                  replacement: ${__meta_kubernetes_pod_ip}:2112
    processors:
      filter/includemetrics:
        metrics:
          include:
            match_type: strict
            metric_names:
              # Cilium metrics
              - cilium_endpoint_state
              - cilium_bpf_map_ops_total
              - cilium_policy_l7_total
              # Hubble metrics
              - hubble_flows_processed_total
              - hubble_drop_total
              - hubble_dns_queries_total
              - hubble_http_requests_total
              # Tetragon metrics
              - tetragon_dns_total
              - tetragon_http_response_total
    service:
      pipelines:
        metrics:
          receivers:
            - prometheus/isovalent_cilium
            - prometheus/isovalent_envoy
            - prometheus/isovalent_operator
            - prometheus/isovalent_tetragon
            - hostmetrics
            - kubeletstats
          processors:
            - filter/includemetrics
            - resourcedetection

clusterName: isovalent-demo
splunkObservability:
  accessToken: <YOUR-SPLUNK-ACCESS-TOKEN>
  realm: <YOUR-SPLUNK-REALM>
cloudProvider: aws
distribution: eks
```

Important: Replace <YOUR-SPLUNK-ACCESS-TOKEN> with your Splunk Observability Cloud access token and <YOUR-SPLUNK-REALM> with your realm (for example: us0, us1, eu0).
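With the values file saved, the collector can be deployed from the Helm repository added earlier. The otel-splunk namespace matches the verification step later in this workshop; the release name is an assumption:

```shell
# Deploy the Splunk OpenTelemetry Collector with the Isovalent scrape config
helm install splunk-otel-collector \
  splunk-otel-collector-chart/splunk-otel-collector \
  --namespace otel-splunk --create-namespace \
  --values splunk-otel-isovalent.yaml
```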
Verify All Components

Run this comprehensive check to ensure everything is running:

```shell
echo "=== Cluster Nodes ==="
kubectl get nodes

echo -e "\n=== Cilium Components ==="
kubectl get pods -n kube-system -l k8s-app=cilium

echo -e "\n=== Hubble Components ==="
kubectl get pods -n kube-system | grep hubble

echo -e "\n=== Tetragon ==="
kubectl get pods -n tetragon

echo -e "\n=== Splunk OTel Collector ==="
kubectl get pods -n otel-splunk
```

Expected Output:
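Once all pods report Running, you can also spot-check Hubble's flow visibility directly; a sketch, assuming the cilium and hubble CLIs are installed on your workstation:

```shell
# Forward the Hubble Relay API to localhost
cilium hubble port-forward &

# Stream live flows, filtered to L7 HTTP events
hubble observe --protocol http --follow
```

Seeing HTTP flows here confirms the same data source that feeds the hubble_http_requests_total metrics scraped by the collector.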