Overview
About the Splunk Add-on for Linux¶
| | |
|---|---|
| Version | 2.1.1 |
| Vendor Products | Linux as supported by CollectD and AuditD. See also Unix operating systems. |
| Add-on has web UI | No. This add-on does not contain any views. |
The Splunk Add-on for Linux allows Splunk to collect Linux-related performance metrics and data generated by the open source CollectD project (see https://collectd.org/) using HTTP Event Collector (HEC) or Transmission Control Protocol (TCP). The Splunk Add-on for Linux collects the following types of data:
- CPU metrics
- Memory metrics
- Swap metrics
- Mount point and file system usage
- Network interface traffic
- Disk utilization
- System load
- Process information
- Network protocol information
- IRQ metrics
- TCP connections information
- Thermal information
- System uptime statistics
Release notes¶
For a summary of new features, fixed issues, and known issues, and for more information on release history, see Release notes for the Splunk Add-on for Linux.
Compatibility¶
This add-on provides the inputs and Common Information Model (CIM)-compatible knowledge to use with other Splunk apps, such as Splunk Enterprise Security and the Splunk App for PCI Compliance.
For detailed information about compatibility with other software, CIM versions, and platforms, see Release notes for the Splunk Add-on for Linux.
Source types and lookups¶
For more information about the source types for Splunk Add-on for Linux, see Source types.
Download the add-on¶
Download the Splunk Add-on for Linux from Splunkbase.
Install and configure the add-on¶
For information about installing and configuring the Splunk Add-on for Linux, see Installation and configuration overview for Splunk Add-on for Linux.
Hardware and software requirements¶
For more information, see Hardware and software requirements.
Additional resources¶
Search the Splunk Community page for questions related to the Splunk Add-on for Linux.
Release notes for the Splunk Add-on for Linux¶
Version 2.1.1 of the Splunk Add-on for Linux was released on March 30, 2026.
Compatibility¶
Version 2.1.1 of the Splunk Add-on for Linux is compatible with the following software, CIM versions, and platforms:
| Component | Description |
|---|---|
| Splunk platform version | 8.1.x, 8.2.x, 9.0.x |
| CIM | 5.0.1 |
| Supported OS for data collection | Linux |
| Vendor products | Red Hat 7.8, Red Hat 8.5, Red Hat 8.6, CentOS 7, CentOS Stream 8.2015, Ubuntu 16.04, Ubuntu 18.04, Ubuntu 20.04, Ubuntu 22.04, SUSE 15 SP3, Debian 9, Debian 10.9, Debian 11. |
Note
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
Upgrade¶
Follow the installation instructions to upgrade an existing installation of the Splunk Add-on for Linux. See Install the Splunk Add-on for Linux.
No data migration is required to upgrade the Splunk Add-on for Linux from version 2.0.0 to version 2.1.1.
New features¶
Version 2.1.1 of the Splunk Add-on for Linux has the following new features.
- Increased MAX_TIMESTAMP_LOOKAHEAD for linux audit
- Support for CIM v 5.0.1
- Support for Ubuntu v22.04 and Red Hat v8.6 OS
- Added the `linux_audit_endpoint_services` eventtype for SERVICE_START and SERVICE_STOP audit logs.
- Removed the extraction for `user=unset` for `type=USER_LOGIN` in the audit logs.
Fixed issues¶
Version 2.1.1 of the Splunk Add-on for Linux has the following, if any, fixed issues. If no issues appear in this section, no issues have yet been reported:
Known issues¶
Version 2.1.1 of the Splunk Add-on for Linux contains the following known issues. If no issues appear in this section, no issues have yet been reported:
Release history for the Splunk Add-on for Linux¶
Latest release¶
The latest version of the Splunk Add-on for Linux is version 2.1.1. See Release notes for the Splunk Add-on for Linux for the release notes of this latest version.
Version 2.1.0¶
Version 2.1.0 of the Splunk Add-on for Linux was released on July 25, 2022.
Compatibility¶
Version 2.1.0 of the Splunk Add-on for Linux is compatible with the following software, CIM versions, and platforms:
| Component | Description |
|---|---|
| Splunk platform version | 8.1.x, 8.2.x, 9.0.x |
| CIM | 5.0.1 |
| Supported OS for data collection | Linux |
| Vendor products | Red Hat 7.8, Red Hat 8.5, Red Hat 8.6, CentOS 7, CentOS Stream 8.2015, Ubuntu 16.04, Ubuntu 18.04, Ubuntu 20.04, Ubuntu 22.04, SUSE 15 SP3, Debian 9, Debian 10.9, Debian 11. |
Note
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
Upgrade¶
Follow the installation instructions to upgrade an existing installation of the Splunk Add-on for Linux. See Install the Splunk Add-on for Linux.
No data migration is required to upgrade the Splunk Add-on for Linux from version 2.0.0 to version 2.1.0.
New features¶
Version 2.1.0 of the Splunk Add-on for Linux has the following new features.
- Support for CIM v 5.0.1
- Support for Ubuntu v22.04 and Red Hat v8.6 OS
- Added the `linux_audit_endpoint_services` eventtype for SERVICE_START and SERVICE_STOP audit logs.
- Removed the extraction for `user=unset` for `type=USER_LOGIN` in the audit logs.
Fixed issues¶
Version 2.1.0 of the Splunk Add-on for Linux has the following, if any, fixed issues. If no issues appear in this section, no issues have yet been reported:
Known issues¶
Version 2.1.0 of the Splunk Add-on for Linux contains the following known issues. If no issues appear in this section, no issues have yet been reported:
Version 2.0.0¶
Version 2.0.0 of the Splunk Add-on for Linux was released on February 8, 2022.
Compatibility¶
Version 2.0.0 of the Splunk Add-on for Linux is compatible with the following software, CIM versions, and platforms:
| Component | Description |
|---|---|
| Splunk platform version | 8.1.x, 8.2.x |
| CIM | 4.20 |
| Supported OS for data collection | Linux |
| Vendor products | Red Hat 7.8, Red Hat 8.5, CentOS 7, CentOS Stream 8.2015, Ubuntu 16.04, Ubuntu 18.04, Ubuntu 20.04, SUSE 15 SP3, Debian 9, Debian 10.9, Debian 11. |
Note
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, see the Splunk Enterprise Release Notes.
Upgrade¶
Follow the installation instructions to upgrade an existing installation of the Splunk Add-on for Linux. See Install the Splunk Add-on for Linux.
No data migration is required to upgrade the Splunk Add-on for Linux from version 1.1.1 to version 2.0.0.
New features¶
Version 2.0.0 of the Splunk Add-on for Linux has the following new features.
- New CIM Mapping
Fixed issues¶
Version 2.0.0 of the Splunk Add-on for Linux has the following, if any, fixed issues. If no issues appear in this section, no issues have yet been reported:
Known issues¶
Version 2.0.0 of the Splunk Add-on for Linux contains the following known issues. If no issues appear in this section, no issues have yet been reported:
Version 1.1.1¶
Version 1.1.1 of the Splunk Add-on for Linux was released on May 4, 2021.
Compatibility¶
Version 1.1.1 of the Splunk Add-on for Linux is compatible with the following software, CIM versions, and platforms:
| Component | Description |
|---|---|
| Splunk platform version | 7.3.x, 8.0.x, 8.1.x |
| CIM | 4.11 |
| Supported OS for data collection | Linux |
| Vendor products | Linux as supported by CollectD. See also Unix operating systems. |
Note
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
Upgrade¶
Follow the installation instructions to upgrade an existing installation of the Splunk Add-on for Linux. See Install the Splunk Add-on for Linux.
No data migration is required to upgrade the Splunk Add-on for Linux from version 1.0.1 to version 1.1.1.
New features¶
Version 1.1.1 of the Splunk Add-on for Linux has the following new features:
- Removed messages for “restart required” on the SH/SHC during new installations or upgrades.
Fixed issues¶
Version 1.1.1 of the Splunk Add-on for Linux has the following, if any, fixed issues. If no issues appear in this section, no issues have yet been reported:
Known issues¶
Version 1.1.1 of the Splunk Add-on for Linux contains the following known issues. If no issues appear in this section, no issues have yet been reported:
Version 1.1.0¶
Version 1.1.0 of the Splunk Add-on for Linux was released on April 20, 2018.
Compatibility¶
Version 1.1.0 of the Splunk Add-on for Linux is compatible with the following software, CIM versions, and platforms:
| Component | Description |
|---|---|
| Splunk platform version | 7.0.x, 7.1.x, 7.2.x, 7.3.x, 8.0 |
| CIM | 4.11 |
| Supported OS for data collection | Linux |
| Vendor products | Linux as supported by CollectD. See also Unix operating systems. |
Note
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
Upgrade¶
Follow the installation instructions to upgrade an existing installation of the Splunk Add-on for Linux. See Install the Splunk Add-on for Linux.
No data migration is required to upgrade the Splunk Add-on for Linux from version 1.0.0 to version 1.1.0.
New features¶
Version 1.1.0 of the Splunk Add-on for Linux has the following new features.
- CollectD metrics data
- AuditD support
Fixed issues¶
Version 1.1.0 of the Splunk Add-on for Linux fixes the following issues:
Known issues¶
Version 1.1.0 of the Splunk Add-on for Linux contains the following known issues:
Version 1.0.0¶
Version 1.0.0 of the Splunk Add-on for Linux is compatible with the following software, CIM versions, and platforms.
| Component | Description |
|---|---|
| Splunk platform versions | 6.4 and 6.5 |
| ITSI | ITSI Model for OS 2.4.0 |
| Platforms | Linux |
| Vendor Products | RHEL/CentOS 6.x, 7.2+, Ubuntu/Debian 12.x, 16.04+, SUSE 12 |
New features¶
Version 1.0.0 of the Splunk Add-on for Linux provides ITSI normalization for Linux metric data gathered from CollectD.
Known issues¶
Version 1.0.0 of the Splunk Add-on for Linux contains the following known issues.
Workaround: To work around ADDON-11995, ADDON-12258, and ADDON-12259, Splunk recommends that you collect data in JSON format or follow the instructions on how to configure the write_graphite CollectD plugin in Configure CollectD to send data to Splunk.
When to use the Splunk Add-on for Linux¶
There are two Splunk supported add-ons applicable for Linux: the Splunk Add-on for Linux and the Splunk Add-on for Unix and Linux. Depending on your use case, you may want to use either or both:
| User scenario | Use this add-on |
|---|---|
| Collect performance metrics from Linux using CollectD | Splunk Add-on for Linux |
| Collect events from Linux using AuditD | Splunk Add-on for Linux |
| Collect performance metrics from Unix hosts or Linux hosts without CollectD | Splunk Add-on for Unix and Linux |
| Collect security events from Unix or Linux hosts | Splunk Add-on for Unix and Linux |
See what data the Splunk Add-on for Unix and Linux collects and what data the Splunk Add-on for Linux collects for more details about the data these two add-ons collect.
Note
The Splunk Add-on for Unix and Linux and the Splunk Add-on for Linux are unrelated add-ons. There is no upgrade or migration from one to the other. If you want to replace the Splunk Add-on for Unix and Linux with the Splunk Add-on for Linux, you must disable any metrics inputs configured in the Splunk Add-on for Unix and Linux to prevent duplicate data collection.
Hardware and software requirements for the Splunk Add-on for Linux¶
Third-party requirements¶
If you want to collect Linux-related data through HEC or TCP, you must first install a CollectD server. CollectD is a daemon that periodically collects system and application performance metrics.
- If you use HEC to receive CollectD data in Splunk, the CollectD server must be version 5.6 or later.
- If you use TCP to receive CollectD data in Splunk, there is no specific version requirement for CollectD.
Note
As a best practice, send your data using HEC.
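The 5.6 version requirement for HEC can be checked with a short sketch. This helper is illustrative only (not part of the add-on) and assumes CollectD reports its version as a dotted string such as "5.12.0":

```python
# Hedged sketch (not part of the add-on): check a CollectD version string
# against the 5.6 minimum required for sending data to Splunk over HEC.

def supports_hec(version: str, minimum=(5, 6)) -> bool:
    """Return True if the major.minor version meets the HEC minimum."""
    major, minor = (int(p) for p in version.split(".")[:2])
    return (major, minor) >= minimum

print(supports_hec("5.12.0"))  # True: 5.12 is newer than 5.6
print(supports_hec("5.5.2"))   # False: too old for the write_http JSON path
```

Comparing the version as an integer tuple avoids the classic string-comparison trap where "5.12" would sort before "5.6".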
Splunk platform requirements¶
Because this add-on runs on the Splunk platform, all of the system requirements apply for the Splunk software that you use to run this add-on.
- For Splunk Enterprise system requirements, see System Requirements in the Splunk Enterprise Installation Manual.
- If you plan to run this add-on entirely in Splunk Cloud, there are no additional Splunk platform requirements.
- If you are managing on-premises forwarders to get data into Splunk Cloud, see System Requirements in the Splunk Enterprise Installation Manual, which includes information about forwarders.
Installation and configuration overview for the Splunk Add-on for Linux¶
Complete the following steps to install and configure this add-on:
- Install the Splunk Add-on for Linux.
- Configure CollectD to send data to the Splunk Add-on for Linux.
  a. If you want to collect data in JSON format through HEC, configure HEC inputs for the Splunk Add-on for Linux.
  b. If you want to collect data in Graphite format through TCP, configure TCP inputs for the Splunk Add-on for Linux.
- Configure AuditD to send data to the Splunk Add-on for Linux.
Ended: Overview
Installation
Install the Splunk Add-on for Linux¶
- Get the Splunk Add-on for Linux by downloading it from Splunkbase or browsing to it using the app browser within Splunk Web.
- Determine where and how to install this add-on in your deployment, using the tables on this page.
- Perform any prerequisite steps before installing, if required and specified in the tables on this page.
- Complete your installation.
If you need step-by-step instructions on how to install an add-on in your specific deployment environment, see the installation walkthroughs section at the bottom of this page for links to installation instructions specific to a single-instance deployment, distributed deployment, Splunk Cloud, or Splunk Light.
Distributed deployments¶
Use the tables on this page to determine where and how to install this add-on in a distributed deployment of Splunk Enterprise or any deployment for which you are using forwarders to get your data in. Depending on your environment, your preferences, and the requirements of the add-on, you may need to install the add-on in multiple places.
Where to install this add-on¶
All supported add-ons can be safely installed to all tiers of a distributed Splunk platform deployment. See Where to install Splunk add-ons in Splunk Add-ons for more information.
This table provides a reference for installing this specific add-on to a distributed deployment of Splunk Enterprise:
| Splunk platform instance type | Supported | Required | Actions required / Comments |
|---|---|---|---|
| Search heads | Yes | Yes | Install this add-on to all search heads where Linux knowledge management is required. |
| Indexers | Yes | Conditional | Not required if you use heavy forwarders to collect data because the parsing operations occur on the heavy forwarders. Required if you use universal or light forwarders to collect data. |
| Heavy forwarders | Yes | Conditional | This add-on supports forwarders of any type for data collection. Not required if you use universal/light forwarders to collect data. Required if you use heavy forwarders to collect data. |
| Universal forwarders | Yes | Conditional | This add-on supports forwarders of any type for data collection. Not required if you use universal/light forwarders to collect data. Required if you use heavy forwarders to collect data. |
| Light forwarders | Yes | Conditional | This add-on supports forwarders of any type for data collection. Not required if you use universal/light forwarders to collect data. Required if you use heavy forwarders to collect data. |
Distributed deployment feature compatibility¶
This table describes the compatibility of this add-on with Splunk distributed deployment features:
| Distributed deployment feature | Supported | Actions required |
|---|---|---|
| Search head clusters | Yes | Disable add-on visibility on search heads. You can install this add-on on a search head cluster for all search-time functionality, but configure inputs on forwarders to avoid duplicate data collection. Before installing this add-on to a cluster, make the following changes to the add-on package: 1. Remove the eventgen.conf file and all files in the samples folder. |
| Indexer Clusters | Yes | Before installing this add-on to a cluster, make the following changes to the add-on package: - Remove the eventgen.conf file and all files in the samples folder. |
| Deployment Server | No | Supported for deploying unconfigured add-ons only. Using a deployment server to deploy the configured add-on to multiple forwarders acting as data collectors causes data duplication. The add-on uses the credential vault to secure your credentials, and this credential management solution is incompatible with the deployment server. |
Installation walkthroughs¶
The Splunk Add-Ons manual includes an Installing add-ons guide that helps you successfully install any Splunk-supported add-on to your Splunk platform.
For a walkthrough of the installation procedure, follow the link that matches your deployment scenario:
Ended: Installation
Configuration
Configure CollectD to send data to the Splunk Add-on for Linux¶
The Splunk Add-on for Linux depends on data sent from CollectD to the Splunk HTTP Event Collector (HEC) or a TCP input. CollectD is a daemon that includes a rich set of plugins for gathering system and application performance metrics. CollectD gathers data from the Linux host (as the CollectD client) and sends the data to Splunk (as the CollectD server).
You can customize your CollectD deployment based on your needs and environment. You can configure the CollectD client and CollectD server on the same Linux host, or you can configure several CollectD clients to send data to a single CollectD server.
Download and install CollectD¶
Prerequisite
Review the hardware and software requirements for the Splunk Add-on for Linux. See Hardware and software requirements.
- Go to https://collectd.org/download.html and download CollectD.
- Follow the instructions from https://collectd.org/wiki/index.php/First_steps to install CollectD.
Configure CollectD for Linux¶
You must configure CollectD to collect data and send the data to Splunk. The default location for collectd.conf is /etc/collectd.conf or /etc/collectd/collectd.conf.
See the CollectD manpage to learn more about collectd.conf.
Configure CollectD client to collect data from Linux¶
| Data collected | Plugin in CollectD | Configuration suggestion |
|---|---|---|
| CPU metrics | Plugin cpu. Enable the plugin by removing the # in front of the plugin line. Example: change #LoadPlugin cpu to LoadPlugin cpu. | <Plugin cpu><br># ReportByCpu true<br># ReportByState true<br>ValuesPercentage true<br></Plugin> |
| Memory metrics | Plugin memory | <Plugin memory><br>ValuesAbsolute true<br>ValuesPercentage true<br></Plugin> |
| Swap metrics | Plugin swap | <Plugin swap><br>ReportByDevice true<br># ReportBytes true<br># ValuesAbsolute true<br>ValuesPercentage true<br></Plugin> |
| VMEM metrics | Plugin vmem | <Plugin vmem><br>Verbose false<br></Plugin> |
| Mountpoint usage / FS usage | Plugin df | <Plugin df><br># Device "/dev/hda1"<br># Device "192.168.0.2:/mnt/nfs"<br># MountPoint "/home"<br># FSType "ext3"<br>ReportByDevice true<br># ReportInodes false<br># ValuesAbsolute true<br>ValuesPercentage true<br></Plugin> |
| Network interface traffic | Plugin interface | None. Use the default configuration. |
| Disk utilization | Plugin disk | None. Use the default configuration. |
| System load | Plugin load | <Plugin load><br>ReportRelative true<br></Plugin> |
| Process information | Plugin processes | <Plugin processes><br>ProcessMatch "all" "(.*)"<br></Plugin> |
| Network protocols information | Plugin protocols | None. Use the default configuration. |
| IRQ metrics | Plugin irq | None. Use the default configuration. |
| TCP connections information | Plugin tcpconns | None. Use the default configuration. |
| Thermal information | Plugin thermal | None. Use the default configuration. |
| System uptime statistics | Plugin uptime | None. Use the default configuration. |
Configure the CollectD client to send data to the CollectD server¶
Note
If you configure the CollectD client and the CollectD server on the same machine, you can skip this step.
See Plugin network in the CollectD manpage at https://collectd.org/documentation/manpages/collectd.conf.html for information on how to configure the Plugin network. See Networking introduction on the CollectD Wiki at https://collectd.org/documentation/ for a detailed walkthrough.
Configure the CollectD server to send data to Splunk¶
Plugin write_http and Plugin write_graphite submit values to Splunk. Plugin write_http sends data via HTTP, encoding the metrics as JSON, and Plugin write_graphite sends data in Graphite format via TCP.
Configure plugin write_http¶
If you want to send Linux performance metrics data to Splunk in JSON format via HTTP, configure URL, Header and Format as follows:
| Field name | Description | Syntax | Example |
|---|---|---|---|
| URL | The URL to which the values are submitted. The values for IP, Port, and Token Value must be the same as the values you define for the HEC inputs. See Configure HEC inputs for the Splunk Add-on for Linux. | URL "https://Splunk Server IP:Port Number/services/collector/raw?channel=Token Value" | URL "https://10.66.104.127:8088/services/collector/raw?channel=693E90D4-91A5-49A3-99B1-CFE8828A0711" |
| Header | An HTTP header to add to the request. | Header "Authorization: Splunk Token Value" | Header "Authorization: Splunk 693E90D4-91A5-49A3-99B1-CFE8828A0711" |
| Format | The data format. | Format "JSON" | Format "JSON" |
Example
LoadPlugin write_http
<Plugin write_http>
<Node "node-http-1">
URL "https://10.66.104.127:8088/services/collector/raw?channel=693E90D4-91A5-49A3-99B1-CFE8828A0711"
Header "Authorization: Splunk 693E90D4-91A5-49A3-99B1-CFE8828A0711"
Format "JSON"
Metrics true
StoreRates true
</Node>
</Plugin>
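The way the URL, the Header, and the HEC token fit together can be sketched in a few lines of Python. This is an illustration only; the host, port, and token are the example values used on this page, not real endpoints:

```python
# Illustration only: build the write_http URL and Header from the same
# host, port, and HEC token, mirroring the example configuration above.

def hec_raw_settings(host: str, port: int, token: str) -> dict:
    """Return the write_http URL and Authorization header for a raw HEC channel."""
    return {
        "URL": f"https://{host}:{port}/services/collector/raw?channel={token}",
        "Header": f"Authorization: Splunk {token}",
    }

settings = hec_raw_settings(
    "10.66.104.127", 8088, "693E90D4-91A5-49A3-99B1-CFE8828A0711"
)
print(settings["URL"])
```

Note that the same token value appears twice: once as the raw endpoint's channel identifier and once in the Authorization header.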
Configure plugin write_graphite¶
If you want to send Linux performance metrics data to Splunk in Graphite format, configure plugin write_graphite as follows:
- Set `AlwaysAppendDS` to `true`.
- Set `SeparateInstances` to `false`.
- Make sure the values for `Host` and `Port` are the same as the values you define for the TCP inputs. See Configure TCP inputs for the Splunk Add-on for Linux.
Note
If dots (.) are used in the metric name (including prefix, EscapeCharacter, hostname, and postfix), Splunk cannot recognize the key-value pair in the data.
Example
LoadPlugin write_graphite
<Plugin write_graphite>
<Node "node-graphite-1">
Host "10.66.108.127"
Port "2104"
Protocol "tcp"
EscapeCharacter "_"
AlwaysAppendDS true
SeparateInstances false
</Node>
</Plugin>
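The note about dots can be illustrated with a short sketch. This is a simplified model of how a Graphite metric path is assembled, not CollectD's actual implementation, and the dotted plugin instance name is hypothetical:

```python
# Simplified sketch of Graphite metric-path assembly. If a dot survives
# inside any single component (for example a VLAN interface "eth0.100"),
# Splunk cannot split the path into key-value pairs, which is why
# write_graphite replaces dots with EscapeCharacter.

ESCAPE_CHARACTER = "_"  # matches the example configuration above

def graphite_path(host, plugin, plugin_instance, type_, ds):
    parts = [host, f"{plugin}-{plugin_instance}", type_, ds]
    return ".".join(p.replace(".", ESCAPE_CHARACTER) for p in parts)

path = graphite_path("localhost-234", "interface", "eth0.100", "if_octets", "tx")
print(path)  # localhost-234.interface-eth0_100.if_octets.tx
```

After escaping, the only dots left in the path are the separators between components, so each segment maps cleanly to a dimension.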
Configure HEC inputs for the Splunk Add-on for Linux¶
HTTP Event Collector (HEC) is an endpoint that lets you send application events to your Splunk deployment using the HTTP or Secure HTTP (HTTPS) protocols. CollectD sends data to the Splunk Add-on for Linux in JSON format.
Note
Paid Splunk Cloud customers must open a ticket with Splunk Support to enable HEC.
Configure HEC inputs for Linux using Splunk Web¶
- Click Settings > Data Inputs > HTTP Event Collector.
- Define a new data input and set the source type to `linux:collectd:http:json`. The mapping and dashboard panels for Splunk IT Service Intelligence (ITSI) are dependent on this source type.
For more information on how to configure data inputs, see Configure your inputs.
For more detailed guidelines on how to configure HEC inputs, see Set up and use HTTP Event Collection.
If you need to validate your data input configuration, see Validate data collection.
Configure HEC inputs to use metrics data¶
If you want to collect metrics data, you must configure Splunk to index metrics and configure the HEC inputs to use the metrics source type. You can do this by either editing the props.conf file directly, or by setting the source type in Splunk Web.
For more information on how to configure data inputs, see Configure your inputs.
If you need to validate your data input configuration, see Validate data collection.
Edit the props.conf file to set the metrics source type¶
- Configure Splunk to create the metrics indexes. See Create metrics indexes.
- Add the following stanza to `$SPLUNK_HOME/etc/apps/Splunk_TA_Linux/local/props.conf`:

  [linux:collectd:http:metrics]
  METRICS_PROTOCOL = COLLECTD_HTTP

- Restart Splunk.
- Go to Splunk Web.
- Click Settings > Data Inputs > HTTP Event Collector.
- Define a new data input and set the source type to `linux:collectd:http:metrics`. The mapping and dashboard panels for Splunk IT Service Intelligence (ITSI) are dependent on this source type.
Use Splunk Web to set the metrics source type¶
Configure TCP inputs for the Splunk Add-on for Linux¶
You can collect data from any TCP port. You can use this method to capture data from various network services such as syslog or netcat.
You cannot collect metrics over TCP. To collect metrics, you must use HEC. See Configure HEC inputs to use metrics data.
Configure the TCP inputs for Linux using Splunk Web¶
- Click Settings > Data Inputs > TCP.
- Define a new data input and set the source type to `linux:collectd:graphite`. The mapping and dashboard panels are dependent on this source type.
For more information on how to configure data inputs, see Configure your inputs.
For more detailed guidelines on how to get data inputs from TCP ports, see Get data from TCP and UDP ports.
If you need to validate your data input configuration, see Validate data collection.
Configure AuditD to send data to the Splunk Add-on for Linux¶
AuditD is the default Linux daemon for audit data generation. The AuditD daemon must be running to generate AuditD logs.
You can collect data by monitoring the audit logs, or by collecting data through TCP.
Configure AuditD to collect data¶
You must configure AuditD to collect data and send the data to Splunk. The default location for auditd.conf is /etc/audit/auditd.conf.
Configure the log_format property with either the RAW or the ENRICHED option. If set to RAW, audit records are stored exactly as the kernel sends them. The ENRICHED option resolves all uid, gid, syscall, architecture, and socket address information before writing the event to disk.
Note
Set log_format=ENRICHED to allow proper CIM mapping of auditd event data.
See the AuditD manpage to learn more about auditd.conf.
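For a sense of the key-value structure the add-on's search-time extractions work with, here is a minimal sketch that splits one auditd record into fields. The sample line and the parsing logic are illustrative only, not the add-on's actual extractions:

```python
import re

# Illustrative sample auditd record; not captured from a real system.
sample = ('type=USER_LOGIN msg=audit(1481692948.296:42): pid=2288 uid=0 '
          'acct="root" addr=10.0.0.1 res=failed')

def parse_audit(line: str) -> dict:
    # auditd writes both bare (res=failed) and quoted (acct="root") values.
    return {k: v.strip('"') for k, v in re.findall(r'(\w+)=("[^"]*"|\S+)', line)}

fields = parse_audit(sample)
print(fields["type"], fields["acct"], fields["res"])  # USER_LOGIN root failed
```

With log_format=ENRICHED, numeric fields such as uid are additionally resolved to names, which is what enables the CIM mapping mentioned in the note above.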
Collect data from the audit logs¶
- Click Settings > Data Inputs > Files & directories.
- Define a new data input and set the source type to `linux:audit`.
For more information on how to configure data inputs, see Configure your inputs.
If you need to validate your data input configuration, see Validate data collection.
Collect data from a TCP port¶
- Click Settings > Data Inputs > TCP.
- Define a new data input and set the source type to `linux:audit`.
For more information on how to configure data inputs, see Configure your inputs.
If you need to validate your data input configuration, see Validate data collection.
Ended: Configuration
Troubleshooting
Troubleshoot the Splunk Add-on for Linux¶
General troubleshooting¶
For troubleshooting tips that you can apply to all add-ons, see Troubleshoot add-ons in Splunk Add-ons. For additional resources, see Support and resource links for add-ons in Splunk Add-ons.
Cannot launch add-on¶
This add-on does not have views and is not intended to be visible in Splunk Web. If you are trying to launch or load views for this add-on and you are experiencing results you do not expect, turn off visibility for the add-on.
For more details about add-on visibility and instructions for turning visibility off, see Check if the add-on is intended to be visible or not in the Splunk Add-ons Troubleshooting topic.
Validate data collection¶
Validate the data inputs to make sure that you are ingesting the data you expect.
- HEC: search `sourcetype=linux:collectd:http:json index=<collectd-source-index>`
- HEC with metrics data: search `| mstats count(_value) where metric_name=* AND index=<metrics index name> by metric_name`
- TCP: search `sourcetype=linux:collectd:graphite index=<collectd-source-index>`
- AuditD: search `sourcetype=linux:audit index=<auditd-source-index>`
The default search uses index="main".
Audit data not collected¶
Create a new TCP data input configuration and make sure the source type is set to `linux:audit`.
If you are collecting audit data in a syslog source type using TCP, then you must assign the correct source type.
- Add the following stanza to `$SPLUNK_HOME/etc/apps/Splunk_TA_Linux/local/props.conf`:

  [syslog]
  TRANSFORMS-linux_syslog = linux_syslog_audit

- Add the following stanza to `$SPLUNK_HOME/etc/apps/Splunk_TA_Linux/local/transforms.conf` (the DEST_KEY, REGEX, and FORMAT settings belong in transforms.conf):

  [linux_syslog_audit]
  DEST_KEY = MetaData:Sourcetype
  REGEX = type=\S+\s+msg=audit
  FORMAT = sourcetype::linux:audit

- Restart Splunk.
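You can sanity-check the transform's regular expression outside Splunk. The following sketch applies the same REGEX to two illustrative lines, one audit record and one ordinary syslog message:

```python
import re

# The same REGEX as the transform above: audit records arriving on the
# syslog input match and get re-typed; ordinary syslog messages do not.
AUDIT_REGEX = re.compile(r"type=\S+\s+msg=audit")

audit_line = "type=USER_LOGIN msg=audit(1481692948.296:42): pid=2288 res=failed"
plain_syslog = "Dec 14 05:22:28 host sshd[2288]: Failed password for root"

print(bool(AUDIT_REGEX.search(audit_line)))    # True: re-typed to linux:audit
print(bool(AUDIT_REGEX.search(plain_syslog)))  # False: stays on syslog
```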
Ended: Troubleshooting
Reference
Source types for the Splunk Add-on for Linux¶
The Splunk Add-on for Linux provides the index-time and search-time knowledge for CollectD and AuditD.
- `linux:collectd:http:json` is for performance metrics sent to the Splunk platform through HEC in JSON format.
- `linux:collectd:graphite` is for performance metrics sent to the Splunk platform through TCP in Graphite format.
- `linux:collectd:http:metrics` is for performance metrics sent to the Splunk platform through HEC.
CollectD data works with ITSI data models.
Note
The two source types linux:collectd:http:json and linux:collectd:graphite collect the same data from CollectD. However, the collection method and the data format are different for these two source types.
Send data in JSON format through HEC. The data collected in JSON format contains more information than Graphite provides. Using JSON through HEC improves knowledge mapping to the Splunk IT Service Intelligence (ITSI) data model for Linux KPIs. For example, a network interface measurement in Graphite format is presented as two strings:
localhost-234.interface-eno16777984.if_octets.tx 573.300503 1481692948
localhost-234.interface-eno16777984.if_octets.rx 783.017354 1481692948
The same measurement in JSON format is presented as a single event:
{"values":[783.017354110699,573.300503324745],"dstypes":["derive","derive"],"dsnames":["rx","tx"],"time":1481692948.296,"interval":60.000,"host":"localhost-234","plugin":"interface","plugin_instance":"eno16777984","type":"if_octets","type_instance":"","meta":{"network:received":true}}
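A short sketch (illustrative only) shows how a consumer can rebuild metric-name/value pairs from the single JSON event, which a single Graphite line cannot provide on its own:

```python
import json

# The example interface event from above: one event carries both the
# rx and tx datasources with explicit names.
event = json.loads(
    '{"values":[783.017354110699,573.300503324745],'
    '"dstypes":["derive","derive"],"dsnames":["rx","tx"],'
    '"time":1481692948.296,"interval":60.000,'
    '"host":"localhost-234","plugin":"interface",'
    '"plugin_instance":"eno16777984","type":"if_octets",'
    '"type_instance":"","meta":{"network:received":true}}'
)

# Pair each datasource name with its value to recover per-metric readings.
metrics = {
    f'{event["plugin"]}.{event["type"]}.{name}': value
    for name, value in zip(event["dsnames"], event["values"])
}
print(metrics["interface.if_octets.rx"])
```

Because the event also carries interval, dstypes, and meta, downstream mapping (for example to ITSI KPIs) needs no reverse-engineering of a flat metric path.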
AuditD data works with CIM data models.
| Source type | Event type | CIM data models |
|---|---|---|
| linux:audit | linux_audit_account_change | Change |
| linux:audit | linux_audit_authentication | Authentication |
| linux:audit | linux_audit_endpoint | Endpoint |
| linux:audit | linux_audit_endpoint_services | Endpoint |
Version comparisons¶
See the following sections for information on the differences between version 1.1.0 and version 2.1.0 of the Splunk Add-on for Linux.
2.0.0 - 2.1.0¶
Field added/removed¶
| Sourcetype | type, op | Added Fields | Removed Fields |
|---|---|---|---|
| linux:audit | ADD_USER adding user, add-user | src_user | |
| linux:audit | ADD_USER adding user to group | src_user | |
| linux:audit | DEL_USER deleting user entries, deleting user from group | src_user | |
| linux:audit | USER_ACCT changing /etc/passwd; group group_2/222222, new gid: 276, changing /etc/passwd; group group_2/222222, new gid: 10, changing /etc/passwd; group group_2/222222, new gid: 191, changing /etc/passwd; group group_2/222222, new gid: 177, changing /etc/passwd; group group_2/222222, new gid: 6 | src_user, src_user_name | |
| linux:audit | USER_ACCT changing /etc/passwd; group group_2/222222, new gid: 136, changing /etc/passwd; group group_2/222222, new gid: 90, changing /etc/passwd; group group_2/222222, new gid: 76, changing /etc/passwd; group group_2/222222, new gid: 167 | src_user, src_user_name | |
| linux:audit | USER_ACCT changing /etc/passwd; group group_2/222222, new gid: 18, changing /etc/passwd; group group_2/222222, new gid: 266, changing /etc/passwd; group group_2/222222, new gid: 250, changing /etc/passwd; group group_2/222222, new gid: 203, changing /etc/passwd; group group_2/222222, new gid: 89, changing /etc/passwd; group group_2/222222, new gid: 62, changing /etc/passwd; group group_2/222222, new gid: 28 | src_user, src_user_name | |
| linux:audit | USER_CHAUTHTOK changing uid | object, src_user | |
| linux:audit | USER_START PAM:session_open | user_id | |
| Sourcetype | type, unit | Added Fields | Removed Fields |
|---|---|---|---|
| linux:audit | SERVICE_START auditd | tag::eventtype, service, status, user, eventtype, service_name, process_id, tag | |
| linux:audit | SERVICE_START collectd | tag::eventtype, service, status, user, eventtype, service_name, process_id, tag | |
| linux:audit | SERVICE_START systemd-timedated | tag::eventtype, service, user, eventtype, service_name, process_id, tag | |
| linux:audit | SERVICE_START systemd-tmpfiles-clean | tag::eventtype, service, status, user, eventtype, service_name, process_id, tag | |
| linux:audit | SERVICE_START update-notifier-download | tag::eventtype, service, user, eventtype, service_name, process_id, tag | |
| linux:audit | SERVICE_STOP auditd | tag::eventtype, service, status, user, eventtype, service_name, process_id, tag | |
| linux:audit | SERVICE_STOP collectd | tag::eventtype, service, status, user, eventtype, service_name, process_id, tag | |
| linux:audit | SERVICE_STOP systemd-timedated | tag::eventtype, service, status, user, eventtype, service_name, process_id, tag | |
| linux:audit | SERVICE_STOP systemd-tmpfiles-clean | tag::eventtype, service, user, eventtype, service_name, process_id, tag | |
| linux:audit | SERVICE_STOP update-notifier-download | tag::eventtype, service, status, user, eventtype, service_name, process_id, tag | |
CIM Data Model changes¶
| Sourcetype | type | Previous CIM model | New CIM model |
|---|---|---|---|
| linux:audit | SERVICE_START, SERVICE_STOP | | Endpoint.Services |
Fields modified¶
| Sourcetype | type, op | Field | v2.0.0 | v2.1.0 | v2.1.1 |
|---|---|---|---|---|---|
| linux:audit | USER_LOGIN, login | user | unset | MAX_TIMESTAMP_LOOKAHEAD | |
1.1.0 - 2.0.0¶
Field mapping comparison¶
SourceType linux:collectd:graphite
| Fields | 1.1.0 extractions | 2.0.0 extractions |
|---|---|---|
| src | centos-7-202112200858 | - |
| dest | - | centos-7-202112200858 |
| tag | oshost performance inventory storage memory network cpu os process | oshost performance storage memory network cpu os process uptime |
| tag::eventtype | oshost performance inventory storage memory network cpu os process | oshost performance storage memory network cpu os process uptime |
SourceType linux:collectd:http
| Fields | 1.1.0 extractions | 2.0.0 extractions |
|---|---|---|
| src | ubuntu-16-202112200858 | - |
| dest | - | ubuntu-16-202112200858 |
| mount | xvda2 devtmpfs tmpfs | xvda1 xvda2 devtmpfs tmpfs |
| tag | oshost performance inventory storage network memory cpu os process | oshost performance storage network memory cpu os process uptime |
| tag::eventtype | oshost performance inventory storage network memory cpu os process | oshost performance storage network memory cpu os process uptime |
SourceType linux:audit
| Fields | 1.1.0 extractions | 2.0.0 extractions |
|---|---|---|
| src_user | - | admin centos ec2-user ubuntu |
| object | - | user_2 group_1 user1 group_2 group_3 unknown(111111) |
| process_path | - | /bin/sh |
| user_name | - | user_2 admin user1 1000 centos ec2-user ubuntu |
| object_category | - | user group |
| vendor_product | - | Linux Audit |
| user_id | - | 1000 4294967295 0 |
| process_id | - | 10023 21366 21849 7021 7038 7286 8744 887 908 9102 9749 |
| src_user_name | - | admin centos ec2-user ubuntu |
| signature_id | - | 10557 10617 10653 10687 10783 10878 12989 13129 13264 13405 1774 1788 1833 1930 1952 1976 2108 2129 2130 2150 2153 2156 2174 2176 2198 2297 2321 2408 2500 2586 2620 2732 2817 2907 2911 2944 2999 3032 3498 3586 7346 7383 7390 7395 7432 7433 7466 7517 7523 7597 7648 7739 9651 9679 9760 |
| result | - | success |
| process_name | - | sh |
| process | - | /bin/sh -c echo BECOME-SUCCESS-dhtumxxtkcrxahvfesdyntewfpidbinb ; /usr/libexec/platform-python /home/ec2-user/.ansible/tmp/ansible-tmp-1640006084.935285-85400-258159253623075/AnsiballZ_group.py /bin/sh -c echo BECOME-SUCCESS-qcssdifrutuskiaslrjulauiloilshzb ; /usr/libexec/platform-python /home/centos/.ansible/tmp/ansible-tmp-1640006080.967474-85356-163360540330224/AnsiballZ_group.py /bin/sh -c echo BECOME-SUCCESS-rxaezcaecyjzdwtcgzrnvrczebzdteux ; /usr/bin/python /home/admin/.ansible/tmp/ansible-tmp-1640006080.997857-85362-116590561680052/AnsiballZ_group.py /bin/sh -c echo BECOME-SUCCESS-tgrgbwfdpnprhqxomgoraaqxyjojcwpx ; /usr/bin/python /home/admin/.ansible/tmp/ansible-tmp-1640006080.972403-85358-162199036065355/AnsiballZ_group.py /bin/sh -c echo BECOME-SUCCESS-txxfnpggclncjhsqgaknpbzspzithxec ; /usr/bin/python /home/centos/.ansible/tmp/ansible-tmp-1640006080.94751-85355-75901484427407/AnsiballZ_group.py /bin/sh -c echo BECOME-SUCCESS-vqovioujwxvzhqadkoplmolqagqhioeg ; /usr/bin/python3 /home/ubuntu/.ansible/tmp/ansible-tmp-1640006088.9018528-85444-80063304780089/AnsiballZ_group.py /bin/sh -c echo BECOME-SUCCESS-wfevtrhxsnwdxtnmjdydxdhxmbnufgat ; /usr/bin/python3 /home/ubuntu/.ansible/tmp/ansible-tmp-1640006088.885395-85443-191111467693802/AnsiballZ_group.py /bin/sh -c echo BECOME-SUCCESS-xzisnyvrwjlborofkkiquvyglfrzpors ; /usr/bin/python3 /home/admin/.ansible/tmp/ansible-tmp-1640006080.991161-85360-123082352417003/AnsiballZ_group.py /bin/sh -c echo BECOME-SUCCESS-zcnurlegkdamxdgcdjbhhfufazcqmiud ; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1640006084.904531-85399-214089969652421/AnsiballZ_group.py /bin/sh -c echo BECOME-SUCCESS-zrutaialratlpiralyjuydqmarrwajeq ; /usr/bin/python3.6 /home/ec2-user/.ansible/tmp/ansible-tmp-1640006084.962355-85403-198294693487419/AnsiballZ_group.py /bin/sh -c echo BECOME-SUCCESS-zzaswnvpdtgsqqpwnkaandhuaxwdxfik ; /usr/bin/python3 /home/ubuntu/.ansible/tmp/ansible-tmp-1640006085.35588-85411-67206222024209/AnsiballZ_group.py |
| tag::action | - | success failure |
| change_type | - | AAA |
| process_exec | - | /bin/sh |
| signature | - | USER_LOGIN CRED_ACQ LOGIN USER_START |
| object_id | - | 111111 22223 11111 333333 |
| src_user_id | - | 1000 |
| dest | - | splunk |
| process_current_directory | - | /home/admin /home/ec2-user /home/ubuntu /home/centos |
| linux_ev_ch_mgmt_user | - | admin centos ec2-user ubuntu |
| src_ip | - | 0.0.0.0 127.0.0.1 |
| reason | - | invalid user |
| action | success 1 failed | modified success created deleted allowed failure |
| tag | account change authentication privileged | account change authentication success error failure process report |
| tag::eventtype | account change authentication privileged | account change authentication error process report |
| app | /usr/sbin/sshd /usr/sbin/useradd /usr/sbin/userdel /usr/sbin/groupdel /usr/bin/sudo /usr/sbin/groupadd /usr/sbin/usermod /usr/sbin/groupmod /usr/sbin/cron | /usr/sbin/sshd /usr/bin/sudo /usr/sbin/cron |
| status | success 1 failed | success |
| eventtype | linux_audit_account_change linux_audit_authentication linux_audit_privileged | linux_audit_account_change linux_audit_authentication linux_audit_endpoint |
| command | /bin/sh -c echo BECOME-SUCCESS-dhtumxxtkcrxahvfesdyntewfpidbinb ; /usr/libexec/platform-python /home/ec2-user/.ansible/tmp/ansible-tmp-1640006084.935285-85400-258159253623075/AnsiballZ_group.py /bin/sh -c echo BECOME-SUCCESS-qcssdifrutuskiaslrjulauiloilshzb ; /usr/libexec/platform-python /home/centos/.ansible/tmp/ansible-tmp-1640006080.967474-85356-163360540330224/AnsiballZ_group.py /bin/sh -c echo BECOME-SUCCESS-rxaezcaecyjzdwtcgzrnvrczebzdteux ; /usr/bin/python /home/admin/.ansible/tmp/ansible-tmp-1640006080.997857-85362-116590561680052/AnsiballZ_group.py /bin/sh -c echo BECOME-SUCCESS-tgrgbwfdpnprhqxomgoraaqxyjojcwpx ; /usr/bin/python /home/admin/.ansible/tmp/ansible-tmp-1640006080.972403-85358-162199036065355/AnsiballZ_group.py /bin/sh -c echo BECOME-SUCCESS-txxfnpggclncjhsqgaknpbzspzithxec ; /usr/bin/python /home/centos/.ansible/tmp/ansible-tmp-1640006080.94751-85355-75901484427407/AnsiballZ_group.py /bin/sh -c echo BECOME-SUCCESS-vqovioujwxvzhqadkoplmolqagqhioeg ; /usr/bin/python3 /home/ubuntu/.ansible/tmp/ansible-tmp-1640006088.9018528-85444-80063304780089/AnsiballZ_group.py /bin/sh -c echo BECOME-SUCCESS-wfevtrhxsnwdxtnmjdydxdhxmbnufgat ; /usr/bin/python3 /home/ubuntu/.ansible/tmp/ansible-tmp-1640006088.885395-85443-191111467693802/AnsiballZ_group.py /bin/sh -c echo BECOME-SUCCESS-xzisnyvrwjlborofkkiquvyglfrzpors ; /usr/bin/python3 /home/admin/.ansible/tmp/ansible-tmp-1640006080.991161-85360-123082352417003/AnsiballZ_group.py /bin/sh -c echo BECOME-SUCCESS-zcnurlegkdamxdgcdjbhhfufazcqmiud ; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1640006084.904531-85399-214089969652421/AnsiballZ_group.py /bin/sh -c echo BECOME-SUCCESS-zrutaialratlpiralyjuydqmarrwajeq ; /usr/bin/python3.6 /home/ec2-user/.ansible/tmp/ansible-tmp-1640006084.962355-85403-198294693487419/AnsiballZ_group.py /bin/sh -c echo BECOME-SUCCESS-zzaswnvpdtgsqqpwnkaandhuaxwdxfik ; /usr/bin/python3 /home/ubuntu/.ansible/tmp/ansible-tmp-1640006085.35588-85411-67206222024209/AnsiballZ_group.py | /usr/sbin/useradd /usr/sbin/userdel /usr/sbin/groupdel /usr/sbin/groupadd /usr/sbin/usermod /usr/sbin/groupmod |
| user | root user_2 group_2 28696E76616C6964207573657229 (unknown) ec2-user centos user1 admin | admin user_2 centos ec2-user ubuntu user1 1000 unset root |
| src | splunk | 0.0.0.0 hostname 127.0.0.1 |
Event Type comparison¶
| SourceType | EventType | 1.1.0 search term | 2.0.0 search term |
|---|---|---|---|
| linux:audit | linux_audit_authentication | linux:audit (type=USER_LOGIN OR type=USER_CMD OR type=GRP_AUTH OR type=USER_AUTH) | linux:audit type IN ("LOGIN", "USER_LOGIN", "USER_START", "CRED_ACQ") |
| linux:audit | linux_audit_privileged | eventtype=linux_audit_authentication type=USER_CMD OR acct=root | - |
| linux:audit | linux_audit_account_change | sourcetype=linux:audit (type=ADD_* OR type=CHGRP_ID OR type=CHUSER_ID OR type=GRP_MGMT OR type=USER_MGMT OR type=DEL_*) | linux:audit type IN ("ADD_GROUP", "DEL_GROUP", "GRP_MGMT", "USER_ACCT", "ADD_USER", "DEL_USER", "USER_MGMT", "USER_CHAUTHTOK") |
DM comparison¶
| SourceType | EventType | 1.1.0 DM | 2.0.0 DM |
|---|---|---|---|
| linux:audit | linux_audit_anomalies | Intrusion Detection, Alerts | |
| linux:audit | linux_audit_account_change | Change Analysis | Change |
| linux:audit | linux_audit_privileged | Authentication | |