Troubleshooting
This section covers common issues you may encounter when deploying and using the ThousandEyes Enterprise Agent in Kubernetes.
Test Failing with DNS Resolution Error
If your tests are failing with DNS resolution errors, verify DNS from within the ThousandEyes pod:
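A quick way to check is to run a lookup from inside the agent pod. The deployment, namespace, and target service names below are illustrative; adjust them for your setup, and note that not every agent image ships `nslookup`:

```shell
# Run a DNS lookup from inside the agent pod (names are illustrative:
# "te-agent" deployment in the "te-demo" namespace, resolving a target service)
kubectl exec -n te-demo deploy/te-agent -- \
  nslookup my-service.my-namespace.svc.cluster.local
```

If `nslookup` is not available in the image, `getent hosts <name>` is a common fallback.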
Common causes:
- Service doesn’t exist in the specified namespace
- Typo in the service name or namespace
- CoreDNS is not functioning properly
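The first two causes can be ruled out by listing the service directly, and CoreDNS health can be checked via its standard `k8s-app=kube-dns` label. Service and namespace names here are placeholders:

```shell
# Confirm the Service exists in the expected namespace (illustrative names)
kubectl get svc my-service -n my-namespace

# Check that CoreDNS pods are healthy and look for errors in their logs
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=50
```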
Connection Refused Errors
If you’re seeing connection refused errors, check the following:
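These checks cover the usual suspects; empty endpoint output means no Ready pods back the service. Service and namespace names are placeholders:

```shell
# Check whether any endpoints back the service (empty output = no Ready pods)
kubectl get endpoints my-service -n my-namespace

# Compare the service's selector against the labels on the pods
kubectl get svc my-service -n my-namespace -o jsonpath='{.spec.selector}'
kubectl get pods -n my-namespace --show-labels
```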
Common causes:
- No pods backing the service (endpoints are empty)
- Pods are not in Ready state
- Wrong port specified in the test URL
- Service selector doesn’t match pod labels
Network Policy Blocking Traffic
If network policies are blocking traffic from the ThousandEyes agent:
Solution:
Create a network policy to allow traffic from the te-demo namespace to your services:
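A minimal sketch of such a policy is below. It assumes the target namespace is `my-namespace` (adjust as needed) and relies on the `kubernetes.io/metadata.name` label that Kubernetes sets automatically on namespaces (v1.21+):

```shell
# Allow ingress from the te-demo namespace into my-namespace (illustrative
# target namespace; selects te-demo via the automatic namespace name label)
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-te-demo
  namespace: my-namespace
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: te-demo
EOF
```

The empty `podSelector: {}` applies the policy to all pods in the namespace; narrow it if you only need to expose specific workloads.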
Agent Pod Not Starting
If the ThousandEyes agent pod is not starting, check the pod status and events:
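For example (substitute your pod name):

```shell
# Inspect pod status and the events that explain scheduling/startup failures
kubectl get pods -n te-demo
kubectl describe pod -n te-demo <pod-name>

# If the container crashed, --previous shows logs from the prior run
kubectl logs -n te-demo <pod-name> --previous
```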
Common causes:
- Insufficient resources (memory/CPU)
- Invalid or missing TEAGENT_ACCOUNT_TOKEN secret
- Security context capabilities not allowed by Pod Security Policy
- Image pull errors
Solutions:
- Increase memory limits if OOMKilled
- Verify the secret is created correctly: `kubectl get secret te-creds -n te-demo -o yaml`
- Check that the Pod Security Policy allows the NET_ADMIN and SYS_ADMIN capabilities
- Verify the image pull: `kubectl describe pod -n te-demo <pod-name>`
Agent Not Appearing in ThousandEyes Dashboard
If the agent is running but not appearing in the ThousandEyes dashboard:
Common causes:
- Invalid or incorrect TEAGENT_ACCOUNT_TOKEN
- Network egress blocked (firewall or network policy)
- Agent cannot reach ThousandEyes Cloud servers
Solutions:
- Verify the token is correct and properly base64-encoded
- Check that egress to `*.thousandeyes.com` is allowed
- Verify the agent can reach the internet:
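One way to test reachability from inside the pod, assuming `curl` is present in the agent image (it may not be) and using the ThousandEyes API host as a representative endpoint:

```shell
# From inside the agent pod, test outbound HTTPS reachability
# (deployment name and target host are illustrative)
kubectl exec -n te-demo deploy/te-agent -- \
  curl -sS -o /dev/null -w '%{http_code}\n' https://api.thousandeyes.com
```

Any HTTP status code back means TCP/TLS egress works; a timeout or connection error points at a firewall or egress network policy.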
Data Not Appearing in Splunk Observability Cloud
If ThousandEyes data is not appearing in Splunk:
Verify integration configuration:
- Check the OpenTelemetry integration is configured correctly in ThousandEyes
- Verify the Splunk ingest endpoint URL is correct for your realm
- Confirm the `X-SF-Token` header contains a valid Splunk access token
- Ensure tests are assigned to the integration
Check test assignment:
Common causes:
- Wrong Splunk realm in endpoint URL
- Invalid or expired Splunk access token
- Tests not assigned to the OpenTelemetry integration
- Integration not enabled or saved properly
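To separate a realm/token problem from an assignment problem, you can exercise the ingest endpoint directly from any machine with egress. This is a sketch: replace `<realm>` and the token, and treat the endpoint form (`ingest.<realm>.signalfx.com`) as the one documented by Splunk for your realm:

```shell
# Send a throwaway datapoint to the Splunk ingest endpoint to validate
# the realm and access token (metric name here is arbitrary)
curl -sS -o /dev/null -w '%{http_code}\n' \
  -H "X-SF-Token: <your-access-token>" \
  -H "Content-Type: application/json" \
  -d '{"gauge":[{"metric":"te.connectivity.check","value":1}]}' \
  "https://ingest.<realm>.signalfx.com/v2/datapoint"
```

A 200 response means the realm and token are good, which narrows the problem to the ThousandEyes side of the integration.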
High Memory Usage
If the ThousandEyes agent pod is consuming excessive memory:
Solutions:
- Increase memory limits in the deployment:
- Reduce the number of concurrent tests assigned to the agent
- Check if the agent is running unnecessary services
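To raise the memory limit in place, a JSON patch like the following can be used. Deployment and value are illustrative (and `replace` assumes a memory limit is already set; use `add` otherwise):

```shell
# Raise the memory limit on the first container of the agent deployment
# (illustrative deployment name and value; not a sizing recommendation)
kubectl -n te-demo patch deployment te-agent --type='json' -p='[
  {"op": "replace",
   "path": "/spec/template/spec/containers/0/resources/limits/memory",
   "value": "4Gi"}
]'
```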
Permission Denied Errors
If you see permission denied errors in the agent logs:
Verify security context:
Solution: Ensure the pod has the required capabilities:
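You can confirm what the running pod actually requests, and compare it against the NET_ADMIN and SYS_ADMIN capabilities mentioned above (pod name is a placeholder):

```shell
# Show the capabilities requested by the agent container's securityContext
kubectl -n te-demo get pod <pod-name> \
  -o jsonpath='{.spec.containers[0].securityContext.capabilities}'
```

If the output is missing the required capabilities, add them under `securityContext.capabilities.add` in the container spec of the deployment.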
Note
Some Kubernetes clusters with strict Pod Security Policies may not allow these capabilities. You may need to work with your cluster administrators to create an appropriate policy exception.
Getting Help
If you encounter issues not covered in this guide:
- ThousandEyes Support: Contact ThousandEyes support at support.thousandeyes.com
- Splunk Support: For Splunk Observability Cloud issues, visit Splunk Support
- Community Forums:
Tip
When asking for help, always include relevant logs, pod descriptions, and error messages to help troubleshoot more effectively.