splunk-operator

Getting Started with the Splunk Operator for Kubernetes

The Splunk Operator for Kubernetes enables you to quickly and easily deploy Splunk Enterprise on your choice of private or public cloud provider. The Operator simplifies scaling and management of Splunk Enterprise by automating administrative workflows using Kubernetes best practices.

The Splunk Operator runs as a container, and uses the Kubernetes operator pattern and custom resource objects to create and manage a scalable and sustainable Splunk Enterprise environment.

This guide is intended to help new users get up and running with the Splunk Operator for Kubernetes. It covers support resources, prerequisites, installation, and creating your first Splunk Enterprise deployment.

Support Resources

SPLUNK SUPPORTED: The Splunk Operator for Kubernetes is a supported method for deploying distributed Splunk Enterprise environments using containers. The Splunk Operator is categorized as an Extension and subject to the support terms found here. Splunk Enterprise deployed using the Splunk Operator is subject to the applicable support level offered here.

COMMUNITY DEVELOPED: The Splunk Operator for Kubernetes is an open source product developed by Splunkers with contributions from a community of partners and customers. Splunk is taking this approach to push product development closer to those who use and depend upon it. This direct connection helps us all be more successful and move at a rapid pace.

If you’re interested in contributing to the Splunk Operator for Kubernetes (SOK) open source project, review the Contributing to the Project page.

Community Support & Discussions on Slack channel #splunk-operator-for-kubernetes

File Issues or Enhancements in GitHub splunk/splunk-operator

Known Issues for the Splunk Operator

Review the Change Log page for a history of changes in each release.

Prerequisites for the Splunk Operator

Please check the release notes for the supportability matrix.

Platform recommendations

The Splunk Operator should work with any CNCF-certified distribution of Kubernetes. We do not make specific platform recommendations, but the table below lists platforms that our developers, customers, and partners have used successfully with the Splunk Operator.

Splunk Development & Testing Platforms: Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE)
Customer Reported Platforms: Microsoft Azure Kubernetes Service (AKS), Red Hat OpenShift
Partner Tested Platforms: HPE Ezmeral
Other Platforms: Any CNCF-certified distribution

Splunk Enterprise Compatibility

Each Splunk Operator release has specific Splunk Enterprise compatibility requirements, and a single Splunk Operator release can support more than one Splunk Enterprise release. Before installing or upgrading the Splunk Operator, review the release notes to verify version compatibility with Splunk Enterprise releases.

Each release of the Splunk Operator is preset to the latest Splunk Enterprise release mentioned in the release notes. To use a different version specified in the release notes, change the RELATED_IMAGE_SPLUNK_ENTERPRISE environment variable in the splunk-operator deployment manifest file.
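For example, a minimal sketch of changing this variable on a running installation, assuming the operator Deployment is named splunk-operator-controller-manager in the splunk-operator namespace and that splunk/splunk:9.1.3 is one of the compatible tags listed in the release notes (both names are illustrative; check your manifest and the release notes):

kubectl set env deployment/splunk-operator-controller-manager -n splunk-operator RELATED_IMAGE_SPLUNK_ENTERPRISE=splunk/splunk:9.1.3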

Splunk Apps Installation

Apps and add-ons can be installed using the Splunk Operator by following the instructions at Installing Splunk Apps. For the installation of premium apps, refer to the Premium Apps Installation Guide.

Docker requirements

The Splunk Operator requires these Docker images to be present or available to your Kubernetes cluster:

splunk/splunk-operator: The Splunk Operator image
splunk/splunk: The Splunk Enterprise image

All of the Splunk Enterprise images are publicly available on Docker Hub. If your cluster does not have access to pull from Docker Hub, see the Required Images Documentation page.

Review the Change Log page for a history of changes and Splunk Enterprise compatibility for each release.

Hardware Resources Requirements

The resource guidelines for running production Splunk Enterprise instances in pods through the Splunk Operator are the same as for running Splunk Enterprise natively on a supported operating system and file system. Refer to the Splunk Enterprise Reference Hardware documentation for additional details. We also recommend following the same Splunk Enterprise guidance on disabling Transparent Huge Pages (THP) for the nodes in your Kubernetes cluster. Please be aware that this may impact the performance of other, non-Splunk workloads.
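As an illustration, a common way to disable THP on a node at runtime is shown below. This is a sketch that assumes a Linux node with the standard sysfs paths and root access; the change does not persist across reboots, so use your distribution's init or tuning mechanism for a permanent setting.

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag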

Minimum Reference Hardware

Based on the Splunk Enterprise Reference Hardware documentation, a summary of the minimum reference hardware requirements is given below.

Standalone: Each Standalone pod requires 12 physical CPU cores, or 24 vCPU at 2GHz or greater per core, and 12GB RAM.
Search Head / Search Head Cluster: Each Search Head pod requires 16 physical CPU cores, or 32 vCPU at 2GHz or greater per core, and 12GB RAM.
Indexer Cluster: Each Indexer pod requires 12 physical CPU cores, or 24 vCPU at 2GHz or greater per core, and 12GB RAM.

Using Kubernetes Quality of Service Classes

In addition to the guidelines provided in the reference hardware, Kubernetes Quality of Service (QoS) classes can be used to configure CPU and memory resource allocations that map to your service level objectives. For further information on utilizing QoS classes, see the table below:

Guaranteed (CPU/Mem requests = CPU/Mem limits): When the CPU and memory requests and limits values are equal, the pod is given a QoS class of Guaranteed. This level of service is recommended for Splunk Enterprise production environments.
Burstable (CPU/Mem requests < CPU/Mem limits): When the CPU and memory requests values are set lower than the limits, the pod is given a QoS class of Burstable. This level of service is useful in a user acceptance testing (UAT) environment, where the pods run with minimum resources and Kubernetes allocates additional resources depending on usage.
BestEffort (no CPU/Mem requests or limits set): When the requests and limits values are not set, the pod is given a QoS class of BestEffort. This level of service is sufficient for testing, or a small development task.

Examples showing how to implement these QoS classes are given in the Examples of Guaranteed and Burstable QoS section; a brief sketch also follows below.
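For instance, a minimal sketch of a Guaranteed QoS configuration on a Standalone custom resource, using the spec.resources field supported by the Splunk Operator custom resources (the CPU and memory values are illustrative):

apiVersion: enterprise.splunk.com/v4
kind: Standalone
metadata:
  name: s1
spec:
  resources:
    # requests equal to limits yields the Guaranteed QoS class
    requests:
      memory: "12Gi"
      cpu: "24"
    limits:
      memory: "12Gi"
      cpu: "24"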

Storage guidelines

The Splunk Operator uses Kubernetes Persistent Volume Claims to store all of your Splunk Enterprise configuration (“$SPLUNK_HOME/etc” path) and event (“$SPLUNK_HOME/var” path) data. If one of the underlying machines fails, Kubernetes will automatically try to recover by restarting the Splunk Enterprise pods on another machine that is able to reuse the same data volumes. This minimizes the maintenance burden on your operations team by reducing the impact of common hardware failures to the equivalent of a service restart. The use of Persistent Volume Claims requires that your cluster is configured to support one or more Kubernetes persistent Storage Classes. See the Setting Up a Persistent Storage for Splunk page for more information.
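As an illustration, a sketch of overriding the storage class and size for the etc and var volumes on a Standalone custom resource, using the etcVolumeStorageConfig and varVolumeStorageConfig fields described on that page (the storage class name and capacities are illustrative and must exist in your cluster):

apiVersion: enterprise.splunk.com/v4
kind: Standalone
metadata:
  name: s1
spec:
  # storage class and sizes below are examples; use values valid for your cluster
  etcVolumeStorageConfig:
    storageClassName: gp2
    storageCapacity: 10Gi
  varVolumeStorageConfig:
    storageClassName: gp2
    storageCapacity: 100Gi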

What Storage Type To Use?

The Kubernetes infrastructure must have access to storage that meets or exceeds the Splunk Enterprise storage type recommendations in the Reference Hardware documentation (see the guidance on what storage type to use for a given role). In summary, indexers with SmartStore need NVMe or SSD storage to provide the IOPS necessary for a successful Splunk Enterprise environment.

Splunk SmartStore Required

For production environments, we require the use of Splunk SmartStore. As a Splunk Enterprise deployment’s data volume increases, demand for storage typically outpaces demand for compute resources. Splunk’s SmartStore feature allows you to manage your indexer storage and compute resources in a cost-effective manner by scaling those resources separately. SmartStore uses a fast storage cache on each indexer node to keep recent data locally available for search, and keeps other data in a remote object store. See the SmartStore Resource Guide for configuring and using SmartStore through the operator; a brief sketch also follows below.
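For orientation, a hedged sketch of a SmartStore configuration on a Standalone custom resource. The field layout follows the SmartStore Resource Guide; the index name, bucket path, endpoint, and secret name are all illustrative, and the referenced secret holding the S3 credentials must already exist (see that guide for the authoritative details):

apiVersion: enterprise.splunk.com/v4
kind: Standalone
metadata:
  name: s1
spec:
  smartstore:
    defaults:
      volumeName: smartstore-vol
    indexes:
      - name: main
        remotePath: $_index_name
        volumeName: smartstore-vol
    volumes:
      - name: smartstore-vol
        path: my-smartstore-bucket/indexes/
        endpoint: https://s3-us-west-2.amazonaws.com
        secretRef: s3-secret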

Installing the Splunk Operator

A Kubernetes cluster administrator can install and start the Splunk Operator for a specific namespace by running:

kubectl apply -f https://github.com/splunk/splunk-operator/releases/download/2.6.1/splunk-operator-namespace.yaml --server-side --force-conflicts

A Kubernetes cluster administrator can install and start the Splunk Operator cluster-wide by running:

kubectl apply -f https://github.com/splunk/splunk-operator/releases/download/2.6.1/splunk-operator-cluster.yaml --server-side --force-conflicts

The Advanced Installation Instructions page offers guidance for advanced configurations, including the use of private image registries, installation at cluster scope, and installing the Splunk Operator as a user who is not a Kubernetes administrator. Users of Red Hat OpenShift should review the Red Hat OpenShift page.

Note: We recommend that the Splunk Enterprise Docker image is copied to a private registry, or directly onto your Kubernetes workers, before creating large Splunk Enterprise deployments. See the Required Images Documentation page and the Advanced Installation Instructions page for guidance on working with copies of the Docker images.
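As a sketch, mirroring the image to a private registry typically looks like the following; the registry hostname and image tag are illustrative, and the tag should match a version listed as compatible in the release notes:

docker pull splunk/splunk:9.1.3
docker tag splunk/splunk:9.1.3 registry.example.com/splunk/splunk:9.1.3
docker push registry.example.com/splunk/splunk:9.1.3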

After the Splunk Operator starts, you’ll see a single pod running within your current namespace:

$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
splunk-operator-75f5d4d85b-8pshn   1/1     Running   0          5s

Installation using Helm charts

Installing the Splunk Operator using Helm allows you to quickly deploy the operator and Splunk Enterprise in a Kubernetes cluster. The operator and custom resources are easily configurable, allowing for advanced installations, including support for Splunk Validated Architectures. Helm also provides a number of features to manage the operator and custom resource lifecycle. The Installation using Helm page will walk you through installing and configuring Splunk Enterprise deployments using Helm charts; a minimal sketch also follows below.
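For example, a minimal sketch of installing the operator with Helm, assuming the chart repository URL and chart name published in the project documentation (verify both on the Installation using Helm page):

helm repo add splunk https://splunk.github.io/splunk-operator/
helm repo update
helm install splunk-operator splunk/splunk-operator --namespace splunk-operator --create-namespace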

Upgrading the Splunk Operator

For information on upgrading the Splunk Operator, see the How to upgrade Splunk Operator and Splunk Enterprise Deployments page.

Creating a Splunk Enterprise deployment

The Standalone custom resource is used to create a single-instance deployment of Splunk Enterprise. For example:

  1. Run the command to create a deployment named “s1”:
cat <<EOF | kubectl apply -n splunk-operator -f -
apiVersion: enterprise.splunk.com/v4
kind: Standalone
metadata:
  name: s1
  finalizers:
  - enterprise.splunk.com/delete-pvc
EOF

The enterprise.splunk.com/delete-pvc finalizer is optional, and tells the Splunk Operator to remove any Kubernetes Persistent Volumes associated with the instance if you delete the custom resource (CR).

Within a few minutes, you’ll see new pods running in your namespace:

$ kubectl get pods
NAME                                   READY   STATUS    RESTARTS   AGE
splunk-operator-7c5599546c-wt4xl       1/1     Running   0          11h
splunk-s1-standalone-0                 1/1     Running   0          45s


  2. You can use a simple network port forward to open port 8000 for Splunk Web access:
kubectl port-forward splunk-s1-standalone-0 8000
  3. Get your passwords for the namespace. The Splunk Enterprise passwords used in the namespace are generated automatically. To learn how to find and read the passwords, see the Reading global kubernetes secret object page; a sketch follows below.
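For example, assuming the default global secret naming convention described on that page (splunk-<namespace>-secret, so splunk-splunk-operator-secret for the splunk-operator namespace used above), the admin password could be read with a command like:

kubectl get secret splunk-splunk-operator-secret -n splunk-operator -o jsonpath='{.data.password}' | base64 --decode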

  4. Log into Splunk Enterprise at http://localhost:8000 using the admin account and the password you retrieved.

  5. To delete your standalone deployment, run:

kubectl delete standalone s1

The Standalone custom resource is just one of the resources the Splunk Operator provides. You can find more custom resources and the parameters they support on the Custom Resource Guide page.

For additional deployment examples, including Splunk Enterprise clusters, see the Configuring Splunk Enterprise Deployments page.

For additional guidance on making Splunk Enterprise ports accessible outside of Kubernetes, see the Configuring Ingress page.

Contacting Support

If you are a Splunk Enterprise customer with a valid support entitlement contract and have a Splunk-related question, you can open a support case through the support portal at https://www.splunk.com/.