Overview
Introduction to the Splunk Add-on for Amazon Web Services¶
Version | 7.8.0 |
Supported vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, EventBridge (CloudWatch API), CloudTrail Lake, Inspector, Kinesis, S3, VPC Flow Log, Transit Gateway Flow Logs, Billing Cost and Usage Report, Amazon Security Lake, SQS, SNS, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Security Hub findings events |
CIM-compliant vendor products | AWS CloudTrail, AWS CloudWatch, AWS Config and AWS Config Rules, Amazon Inspector, Amazon Virtual Private Cloud, AWS Transit Gateway, AWS Security Hub findings events |
Add-on has a web UI | Yes. This add-on contains views for configuration. |
Use the Splunk Add-on for Amazon Web Services (AWS) to collect performance, billing, raw or JSON data, and IT and security data from Amazon Web Services products using either a push-based (Amazon Kinesis Firehose) or pull-based (API) collection method.
This add-on provides modular inputs and CIM-compatible knowledge to use with other Splunk apps, such as the Splunk App for AWS, Splunk Enterprise Security, and Splunk IT Service Intelligence.
See Use cases for the Splunk Add-on for AWS for more information.
Download the Splunk Add-on for Amazon Web Services from Splunkbase. See Deploy the Splunk Add-on for AWS for information about installing and configuring this add-on.
See Release notes for the Splunk Add-on for AWS for a summary of new features, fixed issues, and known issues.
See Questions related to Splunk Add-on for Amazon Web Services on the Splunk Community page.
Use cases for the Splunk Add-on for AWS¶
Use the Splunk Add-on for AWS to collect data on Amazon Web Services. The Splunk Add-on for AWS offers pretested add-on inputs for four main use cases, but you can create an input manually for a miscellaneous Amazon Web Service. See Configure miscellaneous inputs for the Splunk Add-on for AWS.
See the following table for use cases and corresponding add-on collection methods:
Use case | Add-on inputs |
---|---|
Use the Splunk Add-on for AWS to calculate the cost of your Amazon Web Service usage over different lengths of time. | |
Use the Splunk Add-on for AWS to push CloudTrail log data to the Splunk platform. CloudTrail allows you to audit your AWS account. | |
Use the Splunk Add-on for AWS to push IT and performance data on your Amazon Web Service into the Splunk platform. | |
Use the Splunk Add-on for AWS to push security data on your Amazon Web Service into the Splunk platform. | |
Consider push-based versus pull-based data collection for the Splunk Add-on for AWS¶
The Splunk Add-on for Amazon Web Services supports both push-based and pull-based data collection for the following vendor products: Amazon Kinesis Firehose data, CloudWatch, VPC Flow Logs, Transit Gateway Flow Logs, AWS CloudTrail, GuardDuty, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Security Hub findings events.
See the following table to understand the data collection differences:
Push Data | Pull Data |
---|---|
For high volume, streaming data. | For low volume, rarely changing data. |
If high availability and scale are required for your deployment. | For normal availability and scale. |
Sends data directly to indexers so you do not need to manage forwarders. | Unless your deployment is in Splunk Cloud, you must manage the forwarders. |
Source types for the Splunk Add-on for AWS¶
The Splunk Add-on for Amazon Web Services (AWS) provides index-time and search-time knowledge for alerts, events, and performance metrics. Source types and event types map the Amazon Web Services data to the Splunk Common Information Model (CIM).
See Troubleshoot the Splunk Add-on for AWS to find source types for internal logs.
See the following table for source types and event types for AWS data mapping:
Pull-based API data collection sourcetypes¶
Data type | Source type | Description | Supported input types | Data models |
---|---|---|---|---|
Billing | | | Billing (Cost and Usage Report), Billing (Legacy) | |
CloudFront Access Logs | | Represents CloudFront Access Logs. | SQS-based S3, Generic S3, Incremental S3 | |
CloudTrail | | Represents AWS API call history from the AWS CloudTrail service. | SQS-based S3, CloudTrail, Generic S3, Incremental S3 | |
CloudWatch | | Represents performance and billing metrics from the AWS CloudWatch service. | CloudWatch | |
CloudWatch Logs | | | Kinesis, CloudWatch Logs | |
Config | | | SQS-based S3, AWS Config | |
Config Rules | | Represents compliance details, compliance summary, and evaluation status of your AWS Config Rules. | Config Rules | |
Delimited Files | | Represents delimited files (CSV, PSV, TSV file extensions, single-space-separated files). Provides index-time timestamps for events. | SQS-based S3, Generic S3 | |
ELB Access Logs | | Represents ELB Access Logs. | SQS-based S3, Generic S3, Incremental S3 | |
Inspector | | | Inspector, Inspector (v2) | |
Metadata | | Descriptions of your AWS EC2 instances, reserved instances, and EBS snapshots. | Metadata | |
S3 | | Represents generic log data from your S3 buckets. | Generic S3, Incremental S3, SQS-based S3 | |
S3 Access Logs | | Represents S3 Access Logs. | SQS-based S3, Generic S3, Incremental S3 | |
Amazon Security Lake | aws:asl | | SQS-based S3 | |
SQS | | Represents generic data from SQS. | SQS | |
VPC Flow Logs | | Represents VPC Flow Logs. | SQS-based S3, Kinesis | |
CloudTrail Lake | | Represents JSON data from the CloudTrail Lake event data store. | CloudTrail Lake | |
GuardDuty Events | | Represents GuardDuty events. | CloudWatch Logs | |
Transit Gateway Flow Logs | | Represents Transit Gateway Flow Logs. | SQS-based S3 | |
Push-based Amazon Kinesis Firehose data collection sourcetypes¶
The Splunk Add-on for Amazon Web Services provides knowledge management for the following Amazon Kinesis Firehose source types:
Data source | Source type | CIM compliance | Description |
---|---|---|---|
CloudTrail events | | | AWS API call history from the AWS CloudTrail service, delivered as CloudWatch events. For CloudTrail events embedded within CloudWatch events, override the source name optional field. |
CloudWatch events | | None | Data from CloudWatch. You can extract CloudTrail events embedded within CloudWatch events with this sourcetype as well. |
GuardDuty events | | | GuardDuty events from CloudWatch. For GuardDuty events embedded within CloudWatch events, override the source name optional field. |
Amazon Identity and Access Management (IAM) Access Analyzer events | | None | Events ingested using the EventBridge event bus. Set the source accordingly. |
Amazon Kinesis Firehose JSON data | | None | Any JSON-formatted Firehose data. |
Amazon Kinesis Firehose text data | | None | Firehose raw text format. |
AWS Security Hub | | | Events collected from AWS Security Hub. For AWS Security Hub events embedded within AWS CloudWatch events, override the source name optional field. |
VPC Flow Logs | | | VPC Flow Logs from CloudWatch. When ingesting CloudWatch logs, set the Lambda buffering size to 1 MB. See the data transformation flow in the Amazon Kinesis Firehose documentation for more information. |
Transit Gateway Flow Logs | | | Collect Transit Gateway Flow Logs through HEC. |
Hardware and software requirements for the Splunk Add-on for AWS¶
To install and configure the Splunk Add-on for Amazon Web Services (AWS), you must have admin or sc_admin role permissions.
Version 7.0.0 of the Splunk Add-on for AWS added support for ingesting data from the Amazon Security Lake service. If you use the Splunk Add-on for Amazon Security Lake to ingest Amazon Security Lake data, you must remove it from your Splunk platform deployment before installing version 7.0.0 or higher of this add-on, because objects in the Splunk Add-on for Amazon Security Lake conflict with the Splunk Add-on for AWS.
Splunk platform requirements¶
There are no Splunk platform requirements specific to the Splunk Add-on for AWS.
For Splunk Enterprise system requirements, see System requirements for use of Splunk Enterprise on-premises in the Splunk Enterprise Installation Manual.
For information about installation locations and environments, see Install the Splunk Add-on for AWS.
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
AWS account prerequisites¶
To set up your AWS configuration to work with your Splunk platform instance, make sure you have the following AWS account privileges:
- A valid AWS account with permissions to configure the AWS services that provide your data.
- Permission to create Identity and Access Management (IAM) roles and users. This lets you set up AWS account IAM roles or Amazon Elastic Compute Cloud (EC2) IAM roles to collect data from your AWS services.
When configuring your AWS account to send data to your Splunk platform deployment, the best practice is to not allow "*" (all resources) statements as part of action elements. This level of access could grant unwanted and unregulated access to anyone given this policy document setting. Write a refined policy that describes the specific actions allowed for specific users or accounts, or required by the specific policy holder.
For more information, see the Basic examples of Amazon SQS policies topic in the Amazon Simple Queue Service Developer Guide.
AWS region limitations¶
The Splunk Add-on for AWS supports all services offered by AWS in each region. To learn which worldwide geographic regions support which AWS services, see the Region Table in the AWS global infrastructure documentation.
In the AWS China region, the add-on supports only the services that AWS supports in that region. For an up-to-date list of what products and services are supported in this region, see https://www.amazonaws.cn/en/products/.
For an up-to-date list of what services and endpoints are supported in AWS GovCloud region, see https://docs.aws.amazon.com/govcloud-us/latest/UserGuide/using-services.html.
Network configuration requirements¶
The Splunk Add-on for AWS makes REST API calls using HTTPS on port 443. Data inputs for this add-on use large amounts of memory. See Sizing, performance, and cost considerations for the Splunk Add-on for AWS for more information.
AWS encryption requirements¶
Amazon Web Services supports the following server-side encryption types:
- Server-side encryption with Amazon S3-managed encryption keys (SSE-S3). Amazon S3 encrypts each object with a unique key.
- Server-side encryption with AWS Key Management Service (SSE-KMS). AWS KMS manages the encryption keys, and AWS manages the master key.
- Server-side encryption with customer-provided encryption keys (SSE-C). You provide and manage the encryption keys, and Amazon S3 manages the encryption and decryption.
The Splunk Add-on for AWS supports all of these server-side encryption types, which AWS handles transparently. Client-side encryption is not supported because the AWS SDK for Python does not support it.
Requirements for Amazon Kinesis Firehose¶
The Splunk Add-on for Amazon Web Services requires specific configurations for Amazon Kinesis Firehose push-based data collection. See What is Amazon Kinesis Firehose? in the AWS documentation.
SSL requirements¶
Amazon Kinesis Firehose requires the HTTP Event Collector (HEC) endpoint to be terminated with a valid CA-signed certificate matching the DNS hostname used to connect to your HEC endpoint.
You must use a trusted CA-signed certificate. Self-signed certificates are not supported.
If you are sending data directly to Splunk Enterprise indexers in your own internal network or AWS VPC, a CA-signed certificate must be installed on each of the indexers. If you are using an Elastic Load Balancer (ELB) to send data, you must install a CA-signed certificate on the load balancer.
Paid Splunk Cloud users are provided an ELB with a proper CA-signed certificate and a hostname for each stack. For ELB users on distributed Splunk Enterprise deployments, see Configure an Elastic Load Balancer for the Splunk Add-on for Amazon Web Services topic in this manual for information on how to configure an ELB with proper SSL certifications.
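To check that an HEC endpoint presents a CA-signed certificate matching its DNS hostname, you can open any TLS connection with verification enabled. The following minimal Python sketch uses placeholder hostname and port values, and it fails if the certificate is self-signed or does not match the hostname:

```python
import socket
import ssl

# Placeholder values; replace with the DNS name and port of your HEC endpoint.
HEC_HOST = "http-inputs-firehose.example.splunkcloud.com"
HEC_PORT = 443

# The default context verifies the certificate chain against the system CA
# bundle and checks that the certificate matches the hostname, which is
# similar to the validation Kinesis Firehose performs.
context = ssl.create_default_context()

with socket.create_connection((HEC_HOST, HEC_PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HEC_HOST) as tls:
        cert = tls.getpeercert()
        print("Certificate subject:", cert.get("subject"))
        print("Certificate issuer:", cert.get("issuer"))
```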
Event formatting requirements¶
The Splunk Add-on for Amazon Web Services supports data collection using either of the two HTTP Event Collector endpoint types: raw and event. If you collect data using the raw endpoint, no special formatting is required for most source types. However, the aws:cloudwatchlogs:vpcflow source type contains a nested events JSON array that cannot be parsed by the HTTP Event Collector. Prepare this data for the Splunk platform using an AWS Lambda function that extracts the nested JSON events into a newline-delimited set of events. All other source types can be sent directly to the raw endpoint without any preprocessing.
See this example Kinesis Firehose Lambda function, which removes the JSON wrapper around VPC Flow Logs before the data reaches the Splunk platform: https://github.com/ranjitkk/ranjit_aws_repo_public/blob/main/Splunk_FlowLogs_Firehose_processor.py.
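Along the same lines, here is a minimal transformation sketch. It assumes the standard CloudWatch Logs subscription payload that Firehose delivers (base64-encoded, gzipped JSON with a nested logEvents array). It is not the add-on's own code, and a production function would also need error handling and attention to Firehose size limits.

```python
import base64
import gzip
import json

def lambda_handler(event, context):
    """Firehose transformation sketch: unwrap CloudWatch Logs payloads
    (for example, VPC Flow Logs) into newline-delimited raw events."""
    output = []
    for record in event["records"]:
        # Firehose delivers CloudWatch Logs data as base64-encoded, gzipped JSON.
        payload = json.loads(gzip.decompress(base64.b64decode(record["data"])))

        if payload.get("messageType") == "DATA_MESSAGE":
            # Flatten the nested logEvents array into newline-delimited events
            # that the HEC raw endpoint can parse.
            flattened = "\n".join(e["message"] for e in payload["logEvents"]) + "\n"
            data = base64.b64encode(flattened.encode("utf-8")).decode("utf-8")
            output.append({"recordId": record["recordId"], "result": "Ok", "data": data})
        else:
            # Control messages carry no log data and can be dropped.
            output.append({"recordId": record["recordId"], "result": "Dropped", "data": record["data"]})

    return {"records": output}
```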
If you collect data using the event endpoint, format your events into the JSON format expected by HTTP Event Collector before sending them from Amazon Kinesis Firehose to the Splunk platform. You can apply an AWS Lambda blueprint to preprocess your events into the JSON structure and set event-specific fields, which allows you greater control over how your events are handled by the Splunk platform. For example, you can create and apply a Lambda blueprint that sends data from the same Firehose stream to different indexes depending on event type.
For information about the required JSON structure, see Format events for HTTP Event Collector.
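As an illustration only (the host, source, sourcetype, and index values below are hypothetical examples), a transformation step that targets the event endpoint might wrap each raw record in the HEC event envelope like this:

```python
import base64
import json
import time

def to_hec_event(raw_event: str) -> str:
    """Wrap one raw event in the JSON envelope that the HEC event endpoint expects.
    The host, source, sourcetype, and index values are placeholders."""
    envelope = {
        "time": time.time(),          # event timestamp in epoch seconds
        "host": "firehose",           # placeholder host
        "source": "aws_cloudtrail",   # placeholder source
        "sourcetype": "aws:cloudtrail",
        "index": "aws",               # placeholder target index
        "event": json.loads(raw_event),
    }
    return json.dumps(envelope)

# Firehose expects transformed data to be returned base64 encoded.
encoded = base64.b64encode(to_hec_event('{"eventName": "DescribeInstances"}').encode("utf-8")).decode("utf-8")
print(encoded)
```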
Sizing, performance, and cost considerations for the Splunk Add-on for AWS¶
Before you configure the Splunk Add-on for Amazon Web Services (AWS), review these sizing, performance, and cost considerations.
General¶
See the following table for the recommended maximum daily indexing volume on a clustered indexer for different AWS source types. This information is based on a generic Splunk hardware configuration. Adjust the number of indexers in your cluster based on your actual system performance. Add indexers to a cluster to improve indexing and search-retrieval performance. Remove indexers from a cluster to reduce within-cluster data replication traffic.
Source type | Daily indexing volume per indexer (GB) |
---|---|
aws:cloudwatchlogs:vpcflow | 25-30 |
aws:s3:accesslogs | 80-120 |
aws:cloudtrail | 150-200 |
aws:billing | 50-100 |
These sizing recommendations are based on the Splunk platform hardware configurations in the following table. You can also use the System requirements for use of Splunk Enterprise on-premises in the Splunk Enterprise Installation Manual as a reference.
Splunk platform type | CPU cores | RAM | EC2 instance type |
---|---|---|---|
Search head | 8 | 16 GB | c4.xlarge |
Indexer | 16 | 64 GB | m4.4xlarge |
Input configuration screens require data transfer from AWS to populate the services, queues, and buckets available to your accounts. If your network to AWS is slow, data transfers might be slow to load. If you encounter timeout issues, you can manually type in resource names.
Performance for the Splunk Add-on for AWS data inputs¶
The rate of data ingestion for this add-on depends on several factors: deployment topology, number of keys in a bucket, file size, file compression format, number of events in a file, event size, and hardware and networking conditions.
See the following tables for measured throughput data achieved under certain operating conditions. Use this information to optimize the Splunk Add-on for AWS in your own production environment. Because performance varies based on user characteristics, application usage, server configurations, and other factors, specific performance results cannot be guaranteed. Contact Splunk Support for accurate performance tuning and sizing.
The Kinesis input for the Splunk Add-on for AWS has its own performance data. See Configure Kinesis inputs for the Splunk Add-on for AWS.
Reference hardware and software environment¶
Throughput data and conclusions are based on performance testing using Splunk platform instances (dedicated heavy forwarders and indexers) running on the following environment:
Instance type | M4 Double Extra Large (m4.4xlarge) |
Memory | 64 GB |
Compute Units (ECU) | 53.5 |
vCPU | 16 |
Storage (GB) | 0 (EBS only) |
Arch | 64-bit |
EBS optimized (max bandwidth) | 2000 Mbps |
Network performance | High |
The following settings are configured in the outputs.conf file on the heavy forwarder:
useACK = true
maxQueueSize = 15MB
Measured performance data¶
The throughput data is the maximum performance for each single input achieved in performance testing under specific operating conditions and is subject to change when any of the hardware and software variables changes. Use this data as a rough reference only.
Single-input max throughput¶
Data input | Source type | Max throughput (KB/s) | Max EPS (events) | Max throughput (GB/day) |
---|---|---|---|---|
Generic S3 | aws:elb:accesslogs | 17,000 | 86,000 | 1,470 |
Generic S3 | aws:cloudtrail | 11,000 | 35,000 | 950 |
Incremental S3 | aws:elb:accesslogs | 11,000 | 43,000 | 950 |
Incremental S3 | aws:cloudtrail | 7,000 | 10,000 | 600 |
SQS-based S3 | aws:elb:accesslogs | 12,000 | 50,000 | 1,000 |
SQS-based S3 | aws:elb:accesslogs | 24,000 | 100,000 | 2,000 |
SQS-based S3 | aws:cloudtrail | 13,000 | 19,000 | 1,100 |
CloudWatch Logs [1] | aws:cloudwatchlogs:vpcflow | 1,000 | 6,700 | 100 |
CloudWatch | aws:cloudwatch | 240 (metrics) | NA | NA |
CloudTrail | aws:cloudtrail | 5,000 | 7,000 | 400 |
Kinesis | aws:cloudwatchlogs:vpcflow | 15,000 | 125,000 | 1,200 |
SQS | aws:sqs | N/A | 160 | N/A |
[1] An API throttling error occurs if the number of input streams is greater than 1,000.
Multi-inputs max throughput¶
The following throughput data was measured with multiple inputs configured on a heavy forwarder in an indexer cluster distributed environment.
Consolidate AWS accounts during add-on configuration to reduce CPU usage and increase throughput performance.
Data input | Source type | Max throughput (KB/s) | Max EPS (events) | Max throughput (GB/day) |
---|---|---|---|---|
Generic S3 | aws:elb:accesslogs | 23,000 | 108,000 | 1,980 |
Generic S3 | aws:cloudtrail | 45,000 | 130,000 | 3,880 |
Incremental S3 | aws:elb:accesslogs | 34,000 | 140,000 | 2,930 |
Incremental S3 | aws:cloudtrail | 45,000 | 65,000 | 3,880 |
SQS-based S3 [1] | aws:elb:accesslogs | 35,000 | 144,000 | 3,000 |
SQS-based S3 [1] | aws:elb:accesslogs | 42,000 | 190,000 | 3,600 |
SQS-based S3 [1] | aws:cloudtrail | 45,000 | 68,000 | 3,900 |
CloudWatch Logs | aws:cloudwatchlogs:vpcflow | 1,000 | 6,700 | 100 |
CloudWatch (ListMetric) | aws:cloudwatch | 240 (metrics/s) | NA | NA |
CloudTrail | aws:cloudtrail | 20,000 | 15,000 | 1,700 |
Kinesis | aws:cloudwatchlogs:vpcflow | 18,000 | 154,000 | 1,500 |
SQS | aws:sqs | N/A | 670 | N/A |
[1] Performance testing of the SQS-based S3 input indicates that optimal performance throughput is reached when running four inputs on a single heavy forwarder instance. To achieve higher throughput performance beyond this bottleneck, you can further scale out data collection by creating multiple heavy forwarder instances each configured with up to four SQS-based S3 inputs to concurrently ingest data by consuming messages from the same SQS queue.
Max inputs benchmark per heavy forwarder¶
The following input number ceiling was measured with multiple inputs configured on a heavy forwarder in an indexer cluster distributed environment. CPU and memory resources were utilized to their fullest.
It is possible to configure more inputs than the maximum number indicated in the table if you have a smaller event size, fewer keys per bucket, or more available CPU and memory resources in your environment.
Data input | Sourcetype | Format | Number of keys/bucket | Event size | Max inputs |
---|---|---|---|---|---|
S3 | aws:s3 | zip, syslog | 100,000 | 100 B | 300 |
S3 | aws:cloudtrail | gz, json | 1,300,000 | 1 KB | 30 |
Incremental S3 | aws:cloudtrail | gz, json | 1,300,000 | 1 KB | 20 |
SQS-based S3 | aws:cloudtrail, aws:config | gz, json | 1,000,000 | 1 KB | 50 |
Memory usage benchmark for generic S3 inputs¶
Event size | Number of events per key | Total number of keys | Archive type | Number of inputs | Memory used |
---|---|---|---|---|---|
1,000 | 1,000 | 10,000 | zip | 20 | 20 GB |
1,000 | 1,000 | 1,000 | zip | 20 | 12 GB |
1,000 | 1,000 | 10,000 | zip | 10 | 18 GB |
100 B | 1,000 | 10,000 | zip | 10 | 15 GB |
If you do not achieve the expected AWS data ingestion throughput, see Troubleshoot the Splunk Add-on for AWS.
Billing¶
The following table provides general guidance on sizing, performance, and cost considerations for the Billing data input:
Consideration | Notes |
---|---|
Sizing and performance | Detailed billing reports can be very large, depending on your environment. If you configure the add-on to collect detailed reports, it collects all historical reports available in the bucket by default. In addition, for each newly finalized monthly and detailed report, the add-on collects new copies of the same report once per interval until the eTag is unchanged. |
AWS cost | Billing reports themselves do not incur charges, but standard S3 charges apply. |
Billing Cost and Usage Report¶
The following table provides general guidance on sizing, performance, and cost considerations for the Billing Cost and Usage Report data input. Testing was conducted using version 7.1.0 of the Splunk Add-on for AWS.
Splunk Platform Environment | Architecture setup | Number of inputs | Event Count | Data Collection time | Max CPU % | Max RAM % |
---|---|---|---|---|---|---|
Customer Managed Platform (CMP) | m5.4xlarge (vcpu 16 / 64 GiB ram) | 1 | 2,828,618 | ~8mins | 12.13% | 0.20% |
Classic Inputs Data Manager (IDM) | m5.4xlarge (vcpu 16 / 64 GiB ram) | 1 | 2,828,618 | ~11mins | 12.09% | 0.21% |
Victoria Search Head Cluster (SHC) | m5.4xlarge (vcpu 16 / 64 GiB ram) | 1 | 2,828,618 | ~6mins | 12.01% | 0.20% |
CloudTrail¶
The following table provides general guidance on sizing, performance, and cost considerations for the CloudTrail data input:
Consideration | Notes |
---|---|
Sizing and performance | None. |
AWS cost | Using CloudTrail itself does not incur charges, but standard S3, SNS, and SQS charges apply. |
Config¶
The following table provides general guidance on sizing, performance, and cost considerations for the Config data input:
Consideration | Notes |
---|---|
Sizing and performance | None. |
AWS cost | Using Config incurs charges from AWS. See http://aws.amazon.com/config/pricing/. |
Config Rules¶
The following table provides general guidance on sizing, performance, and cost considerations for the Config Rules data input:
Consideration | Notes |
---|---|
Sizing and performance | None. |
AWS cost | None. |
CloudWatch¶
The following table provides general guidance on sizing, performance, and cost considerations for the CloudWatch data input:
Consideration | Notes |
---|---|
Sizing and performance | The smaller the granularity you configure, the more events you collect. |
AWS cost | Using CloudWatch and making requests against the CloudWatch API incurs charges from AWS. |
CloudWatch Logs (VPC Flow Logs)¶
The following table provides general guidance on sizing, performance, and cost considerations for the CloudWatch Logs (VPC Flow Logs) data input:
Consideration | Notes |
---|---|
Sizing and performance | AWS limits each account to 10 requests per second, each of which returns no more than 1 MB of data. In other words, the data ingestion and indexing rate is no more than 10 MB/s. The add-on modular input can process up to 4,000 events per second in a single log stream. |
AWS cost | Using CloudWatch Logs incurs charges from AWS. See https://aws.amazon.com/cloudwatch/pricing/. |
CloudWatch Metrics¶
The following table provides general guidance on sizing, performance, and cost considerations for the CloudWatch Metrics data input. Testing was conducted using version 7.1.0 of the Splunk Add-on for AWS.
The number of API calls is m*n, where m is the number of unique metric dimensions and n is the number of unique metric names. For example, 100 unique dimensions and 20 unique metric names result in 2,000 API calls.
Splunk Platform Environment | Architecture setup | Number of inputs | Number of API calls | Event Count | Data Collection time | Max CPU % | Max RAM % |
---|---|---|---|---|---|---|---|
Customer Managed Platform (CMP) | m5.4xlarge (vcpu 16 / 64 GiB ram) | 1 | 200000 | 400000 | ~28mins | 16.03% | 1.67% |
Classic Inputs Data Manager (IDM) | m5.4xlarge (vcpu 16 / 64 GiB ram) | 1 | 200000 | 400000 | ~35mins | 14.89% | 1.83% |
Victoria Search Head Cluster (SHC) | m5.4xlarge (vcpu 16 / 64 GiB ram) | 1 | 200000 | 400000 | ~32mins | 15.09% | 1.70% |
Incremental S3¶
The following table provides general guidance on sizing, performance, and cost considerations for the Incremental S3 data input. Testing was conducted using version 7.1.0 of the Splunk Add-on for AWS.
Splunk Platform Environment | Architecture setup | Number of inputs | Event count | Data Collection time | Max CPU % | Max RAM % |
---|---|---|---|---|---|---|
Customer Managed Platform (CMP) | m5.4xlarge (vcpu 16 / 64 GiB RAM) | 1 | 10491968 | ~104mins | 0.05% | 0.10% |
Classic Inputs Data Manager (IDM) | m5.4xlarge (vcpu 16 / 64 GiB RAM) | 1 | 10491968 | ~105mins | 0.21% | 0.11% |
Victoria Search Head Cluster (SHC) | m5.4xlarge (vcpu 16 / 64 GiB RAM) | 1 | 10491968 | ~104mins | 0.02% | 0.11% |
Inspector¶
The following table provides general guidance on sizing, performance, and cost considerations for the Inspector data input:
Consideration | Notes |
---|---|
Sizing and performance | None. |
AWS cost | Using Amazon Inspector incurs charges from AWS. See https://aws.amazon.com/inspector/pricing/. |
Kinesis¶
The following table provides general guidance on sizing, performance, and cost considerations for the Kinesis data input:
Consideration | Notes |
---|---|
Sizing and performance | See Performance reference for the Kinesis input in the Splunk Add-on for AWS. |
AWS cost | Using Amazon Kinesis incurs charges from AWS. See https://aws.amazon.com/kinesis/streams/pricing/. |
S3¶
The following table provides general guidance on sizing, performance, and cost considerations for the S3 data input:
Consideration | Notes |
---|---|
Sizing and performance | AWS throttles S3 data collection at the bucket level, so expect some delay before all data arrives in your Splunk platform deployment. |
AWS cost | Using S3 incurs charges from AWS. See https://aws.amazon.com/s3/pricing/. |
Security Lake¶
The following tables provide general guidance on sizing, performance, and cost considerations for the Amazon Security Lake data input. Files ranging in size from 20 KB to 200 MB were used to collect the performance statistics.
Splunk Platform Environment | Architecture setup | Number of indexers | Number of inputs | Batch size | Heavy forwarder/IDM CPU % | Heavy forwarder/IDM RAM % | Expected Average Throughput Indexed |
---|---|---|---|---|---|---|---|
Customer Managed Platform (CMP) | | N/A | 1 | 5 | 11.77% | 3.37% | 3.33 GB/h |
Splunk Cloud Classic | | 3 | 1 | 5 | 99.90% | 22.06% | 2.82 GB/h |
Splunk Cloud Victoria | | 3 | 1 | 5 | 54.28% | 22.78% | 2.58 GB/h |
Customer Managed Platform (CMP) | | N/A | 2 | 5 | 9.93% | 3.20% | 7.72 GB/h |
Splunk Cloud Classic | | 6 | 2 | 5 | 99.95% | 22.71% | 5.60 GB/h |
Splunk Cloud Victoria | | 6 | 2 | 5 | 55.68% | 24.13% | 5.28 GB/h |
Customer Managed Platform (CMP) | | N/A | 5 | 5 | 85.42% | 13.65% | 277 GB/h |
Splunk Cloud Classic | | 9 | 5 | 5 | 99.95% | 27.29% | 96.93 GB/h |
Splunk Cloud Victoria | | 9 | 5 | 5 | 66.45% | 21.18% | 214 GB/h |
Customer Managed Platform (CMP) | | N/A | 1 | 10 | 10.03% | 3.07% | 5.07 GB/h |
Splunk Cloud Classic | | 3 | 1 | 10 | 99.95% | 23.20% | 5.32 GB/h |
Splunk Cloud Victoria | | 3 | 1 | 10 | 54.69% | 21.14% | 5.22 GB/h |
Customer Managed Platform (CMP) | | N/A | 2 | 10 | 15.02% | 3.31% | 8.99 GB/h |
Splunk Cloud Classic | | 6 | 2 | 10 | 99.95% | 25.89% | 10.78 GB/h |
Splunk Cloud Victoria | | 6 | 2 | 10 | 57.58% | 20.83% | 9.58 GB/h |
Customer Managed Platform (CMP) | | N/A | 5 | 10 | 82.09% | 16% | 278 GB/h |
Splunk Cloud Classic | | 9 | 5 | 10 | 99.93% | 22.96% | 100 GB/h |
Splunk Cloud Victoria | | 9 | 5 | 10 | 61.59% | 19.63% | 325 GB/h |
Performance reference notes:
- The Amazon Security Lake data input is stateless, so multiple inputs can be configured against the same SQS queue.
- The following configuration settings can be used to scale data collection:
  - Batch size: the number of threads spawned by a single input. For example, a batch size of 10 processes 10 messages in parallel.
  - Number of Amazon Security Lake inputs.
- If you have horizontally scaled the SQS-based S3 input by configuring multiple inputs against the same SQS queue, and the file sizes in your S3 bucket are not consistent, then the best practice is to decrease the batch size (to a minimum of 1), because batches are processed sequentially.
Transit Gateway Flow Logs¶
The following tables provide general guidance on sizing, performance, and cost considerations for the Transit Gateway Flow Logs data input. Files of 1 MB in size were used to collect the performance statistics, and the batch size for all inputs was 10.
Common architecture setup | Number of indexers | Number of inputs | Heavy forwarder/IDM CPU % | Heavy forwarder/IDM RAM % | Expected Average Throughput Indexed |
---|---|---|---|---|---|
Customer Managed Platform (CMP) | N/A | 1 | 38.40% | 7.26% | 26578 KB/m |
Customer Managed Platform (CMP) | N/A | 5 | 50.75% | 6.98% | 40116 KB/m |
Splunk platform environment - Victoria Search Head Cluster | 3 | 1 | 24.34% | 9.20% | 40483 KB/m |
Splunk platform environment - Victoria Search Head Cluster | 9 | 5 | 41.12% | 18.05% | 61498 KB/m |
Splunk platform environment - Classic Cluster (1 IDM) | 3 | 1 | 22.37% | 7.52% | 45048 KB/m |
Splunk platform environment - Classic Cluster (1 IDM) | 9 | 5 | 29.05% | 20.40% | 53792 KB/m |
SQS¶
The following table provides general guidance on sizing, performance, and cost considerations for the SQS data input:
Consideration | Notes |
---|---|
Sizing and performance | None. |
AWS cost | Using SQS incurs charges from AWS. See https://aws.amazon.com/sqs/pricing/. |
SNS¶
The following table provides general guidance on sizing, performance, and cost considerations for the SNS data input:
Consideration | Notes |
---|---|
Sizing and performance | None. |
AWS cost | Using SNS incurs charges from AWS. See https://aws.amazon.com/sns/pricing/. |
Deploy the Splunk Add-on for AWS¶
Complete the following steps to configure the Splunk Add-on for AWS:
- Install the Splunk Add-on for AWS.
- Manage accounts for the Splunk Add-on for AWS.
- Find your particular input(s) from the Input Configuration Details
section:
- Configure Billing inputs for the Splunk Add-on for AWS
- Configure Cost and Usage Report inputs for the Splunk Add-on for AWS
- Configure CloudTrail inputs for the Splunk Add-on for AWS
- Configure CloudWatch inputs for the Splunk Add-on for AWS
- Configure CloudWatch Log inputs for the Splunk Add-on for AWS
- Configure Config inputs for the Splunk Add-on for AWS
- Configure Config Rules inputs for the Splunk Add-on for AWS
- Configure Metadata inputs for the Splunk Add-on for AWS
- Configure Generic S3 inputs for the Splunk Add-on for AWS
- Configure Incremental S3 inputs for the Splunk Add-on for AWS
- Configure Inspector Classic inputs for the Splunk Add-on for AWS
- Configure Inspector v2 inputs for the Splunk Add-on for AWS
- Configure Kinesis inputs for the Splunk Add-on for AWS
- Configure Security Lake inputs for the Splunk Add-on for AWS
- Configure SQS inputs for the Splunk Add-on for AWS
- Configure SQS-based S3 inputs for the Splunk Add-on for AWS
- Configure miscellaneous inputs for the Splunk Add-on for AWS
- Configure Amazon Kinesis Firehose on a paid Splunk Cloud deployment
- Configure CloudTrail Lake inputs for the Splunk Add-on for AWS
Deployment
Installation overview for the Splunk Add-on for AWS¶
- Download the Splunk Add-on for AWS from Splunkbase or Splunk Web.
- Use the tables in this topic to determine where to install this add-on.
- Perform any prerequisite steps specified in the tables before installing.
- Use the links in the Installation walkthrough section to perform the installation.
Distributed deployments¶
Use the following tables to install the Splunk Add-on for AWS in a deployment that uses forwarders to get data in, such as a distributed deployment. You might need to install the add-on in multiple places.
Where to install this add-on¶
Unless otherwise noted, you can safely install all supported add-ons to all tiers of a distributed Splunk platform deployment. See Where to install Splunk add-ons in Splunk Add-ons for more information.
This table provides a reference for installing this specific add-on to a distributed deployment of the Splunk platform:
Splunk platform component | Supported | Required | Comments |
---|---|---|---|
Search heads | Yes | Yes | Data inputs for this add-on require large amounts of memory. See Hardware and software requirements for the Splunk Add-on for AWS. |
Indexers | Yes | Conditional | Not required when the parsing operations occur on the heavy forwarders. When using an HTTP Event Collector (HEC) token, installation is required on indexers. |
Heavy forwarders | Yes | Yes | This add-on requires heavy forwarders to perform data collection through modular inputs and to perform the setup and authentication with AWS in Splunk Web. |
Universal forwarders | No | No | This add-on requires heavy forwarders. |
Distributed deployment compatibility¶
This table provides a quick reference for the compatibility of this add-on with Splunk distributed deployment features:
Distributed deployment feature | Supported | Comments |
---|---|---|
Search head clusters | Yes | You can install this add-on on a search head cluster for all search-time functionality, but configure inputs on forwarders to avoid duplicate data collection. |
Indexer clusters | Yes | Before installing this add-on to a cluster, make the changes to the add-on package described in the Indexer clusters section of the installation instructions. |
Deployment server | No | Deployment servers support deploying unconfigured add-ons only. |
Installation walkthroughs¶
See the following links, or About installing Splunk add-ons in the Splunk Add-Ons manual, for an installation walkthrough specific to your deployment scenario:
- Install the Splunk Add-on for AWS in a Splunk Cloud deployment
- Install the Splunk Add-on for AWS in a single-instance Splunk Enterprise deployment
- Install the Splunk Add-on for AWS in a distributed Splunk Enterprise deployment
Configure Add-on Configurations & Accounts with Command Line Utility¶
The Splunk Add-on for AWS ships with a command line utility that lets you configure accounts, IAM roles, and inputs in bulk.
For step-by-step instructions on how to use the utility, see the README.md file located at $SPLUNK_HOME/etc/apps/Splunk_TA_aws/bin/tools/configure/README.md.
Install the Splunk Add-on for AWS in a Splunk Cloud Deployment¶
Install the Splunk Add-on for Amazon Web Services (AWS) to your free trial instance of Splunk Cloud using the app browser in Splunk Cloud:
- From the Splunk Web home screen, click on the gear icon next to Apps in the navigation bar.
- Click Browse more apps.
- Find the Splunk Add-on for AWS, then click Install.
- Follow the on-screen prompts to complete your installation.
- To install the Splunk Add-on for AWS on an Inputs Data Manager, request that Splunk Cloud Support install the add-on on your Splunk Cloud instance.
Install the Splunk Add-on for AWS in a single-instance Splunk Enterprise deployment¶
Follow these steps to install the Splunk add-on for Amazon Web Services (AWS) in a single-instance deployment:
- From the Splunk Web home screen, click the gear icon next to Apps in the navigation bar.
- Click Install app from file.
- Locate the downloaded file and click Upload.
- If Splunk Enterprise prompts you to restart, do so.
- Verify that the add-on appears in the list of apps and add-ons. You can also find it on the server at $SPLUNK_HOME/etc/apps/Splunk_TA_aws.
Install the Splunk Add-on for AWS in a distributed Splunk Enterprise deployment¶
If you are using a distributed Splunk Enterprise deployment, follow the instructions in each of the following sections to deploy the Splunk Add-on for Amazon Web Services (AWS) to your search heads, indexers, and forwarders. You must install the Splunk Add-on for AWS on a heavy forwarder. You cannot use this add-on with a universal forwarder. You can install this add-on onto search heads and indexers.
Heavy forwarders¶
To install the Splunk Add-on for AWS to a heavy forwarder, follow these steps:
- Download the Splunk Add-on for AWS from Splunkbase, if you have not already done so.
- From the Splunk Web home screen on your heavy forwarder, click the gear icon next to Apps.
- Click Install app from file.
- Locate the downloaded file and click Upload.
- If the forwarder prompts you to restart, do so.
- Verify that the add-on appears in the list of apps and add-ons. You can also find it on the server at $SPLUNK_HOME/etc/apps/Splunk_TA_aws.
Search heads¶
To install the Splunk Add-on for AWS to a search head, follow these steps:
- Download the Splunk Add-on for AWS from Splunkbase, if you have not already done so.
- From the Splunk Web home screen, click the gear icon next to Apps.
- Click Install app from file.
- Locate the downloaded file and click Upload.
- If Splunk Enterprise prompts you to restart, do so.
- Verify that the add-on appears in the list of apps and add-ons.
Make sure the add-on is not visible. If the Visible column for the add-on is set to Yes, edit the properties and change the visibility to No. Disable visibility of add-ons on search heads to prevent inputs from being created on search heads. Data collection on search heads might conflict with users' search activity.
You can also find the add-on on the server at $SPLUNK_HOME/etc/apps/Splunk_TA_aws.
Search head clusters¶
Before deploying the Splunk Add-on for AWS to a search head cluster, make the following changes to the add-on package:
- Remove the inputs.conf and inputs.conf.spec files. If you are collecting data locally from the machines running your search head nodes, keep these files.
- Use the deployer to deploy an add-on to the search head cluster members.
See Use the deployer to distribute apps and configuration updates in the Splunk Enterprise Distributed Search manual.
Indexers¶
To install the Splunk Add-on for AWS to an indexer, follow these steps:
- Download the Splunk Add-on for AWS from Splunkbase, if you have not already done so.
- Unpack the .tgz package.
- Place the resulting Splunk_TA_aws folder in the $SPLUNK_HOME/etc/apps directory on your indexer.
- Restart the indexer.
Indexer clusters¶
- Remove the inputs.conf and inputs.conf.spec files. If you are collecting data locally from the machines running your indexer nodes, keep these files.
- Deploy add-ons to peer nodes on indexer clusters using a master node.
For more information about using a master node to deploy to peer nodes of an indexer cluster, see Manage app deployment across all peers in Managing Indexers and Clusters of Indexers.
Upgrade the Splunk Add-on for AWS¶
Upgrade to the latest version of the Splunk Add-on for Amazon Web Services (AWS). Upgrades to version 5.2.0 and later are possible only from version 5.0.3 or later. For upgrading the Splunk Add-on for AWS on Splunk Cloud deployments, contact your Splunk Cloud administrator.
Upgrade prerequisites¶
The following table displays the version where the prerequisite was introduced, and a description for each prerequisite.
Minimum version | Prerequisite description |
---|---|
7.3.0 | Starting in version 7.3.0 of the Splunk Add-on for AWS, the checkpoint mechanism was migrated to the Splunk KV store for the Inspector, Inspector v2, Config Rules, CloudWatch Logs, and Kinesis inputs. Disable all Inspector, Inspector v2, Config Rules, and CloudWatch Logs inputs before you upgrade the add-on to version 7.3.0. This is not applicable to the Kinesis input. |
7.1.0 | Starting in version 7.1.0 of the Splunk Add-on for AWS, the checkpoint mechanism was migrated to the Splunk KV store for the Billing Cost and Usage Report, CloudWatch Metrics, and Incremental S3 inputs. Disable all Billing Cost and Usage Report, CloudWatch Metrics, and Incremental S3 inputs before you upgrade the add-on to version 7.1.0. Otherwise, you might see errors in the log files, resulting in data loss or duplication for your already configured inputs. |
7.0.0 | If you are using SQS-based S3 inputs and your add-on version is 7.0.0 or higher, then make sure the … Version 7.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Security Lake. Configure the Splunk Add-on for AWS to ingest across all AWS data sources for ingesting AWS data into your Splunk platform deployment. If you use both the Splunk Add-on for Amazon Security Lake and the Splunk Add-on for AWS on the same Splunk instance, you must uninstall the Splunk Add-on for Amazon Security Lake before upgrading the Splunk Add-on for AWS to version 7.0.0 or later in order to avoid data duplication and discrepancy issues. |
6.3.0 | Starting in version 6.3.0 of the Splunk Add-on for AWS, the VPC Flow Log extraction format has been updated to include v3-v5 fields. Before upgrading to version 6.3.0 or higher of the Splunk Add-on for AWS, Splunk platform deployments ingesting AWS VPC Flow Logs must update the log format in AWS VPC to include the v3-v5 fields in order to ensure successful field extractions. |
6.2.0 | Starting in version 6.2.0 of the Splunk Add-on for AWS, the Description input is deprecated. The best practice is to use the Metadata input instead. |
6.0.0 | Version 6.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Kinesis Firehose. This means you can configure the Splunk Add-on for AWS to ingest across all AWS data sources for ingesting AWS data into Splunk. |
Upgrade steps¶
- Verify that you are running version 8.0.0 or later of the Splunk platform.
- (Optional) Plan your Splunk Enterprise upgrade to work with the Python 3 migration.
- Disable all running inputs.
- Disable or delete the running inputs for Description Input, if configured.
- Delete the pycache directory found in $SPLUNK_HOME/etc/apps/Splunk_TA_aws/pycache.
- (Optional) If you use both the Splunk Add-on for Amazon Kinesis Firehose and the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Kinesis Firehose, including removing the existing Splunk_TA_aws-kinesis-firehose folder from all applicable $SPLUNK_HOME app directories, after upgrading the Splunk Add-on for AWS to version 6.0.0 or later. This avoids data duplication and discrepancy issues. Data that you previously onboarded through the Splunk Add-on for Amazon Kinesis Firehose will still be searchable, and your existing searches will be compatible with version 6.0.0 or later of the Splunk Add-on for AWS.
- (Optional) Upgrade to version 5.0.3 of the Splunk Add-on for AWS, if you have not done so already.
- Download the latest version of the Splunk Add-on for AWS from Splunkbase.
- Install the latest version of the Splunk Add-on for AWS.
- If any Description input was created using an earlier version of the add-on, create a new Metadata input as a replacement for it.
- If your inputs were configured using a version of this add-on earlier than 5.1.0, reformat the queue URL for all SQS-based S3 inputs to use regional endpoints:
  - Navigate to $SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/ and open the inputs.conf file in a text editor.
  - In each [aws_sqs_based_s3://<input_name>] stanza, reformat the queue URL using the new URL format. Old URL format: https://<aws_region>.queue.amazonaws.com/<account_id>/<queue_name>. New URL format: https://sqs.<aws_region>.amazonaws.com/<account_id>/<queue_name>.
  - Save your changes.
- Restart your Splunk platform deployment.
- Enable all inputs.
Manage accounts for the Splunk Add-on for AWS¶
The Splunk Add-on for Amazon Web Services (AWS) can only access the data in your AWS account if your account has an AWS Identity and Access Management (IAM) role. Read the following sections to learn how to:
- Create an IAM role and assign it to your AWS account.
- Find an IAM role within your Splunk platform instance.
Before you can configure Splunk Cloud or Splunk Enterprise to work with your AWS data, you must set up accounts in Amazon Web Services.
Create an IAM role and assign it to your AWS account¶
To configure AWS accounts and permissions, you must have administrator rights in the AWS Management Console. If you do not have administrator access, work with your AWS admin to set up the accounts with the required permissions.
- To let the Splunk Add-on for Amazon Web Services access the data in your AWS account, you assign an IAM role to one or more AWS accounts. You then grant those roles the permissions that are required by the AWS account.
- If you run this add-on on a Splunk platform instance in your own managed Amazon Elastic Compute Cloud (EC2), then assign that EC2 to a role and give that role the IAM permissions listed here.
Manage IAM policies¶
There are three ways to manage policies for your IAM roles:
- Use the AWS Policy Generator tool to collect all permissions into one centrally managed policy. You can apply the policy to the IAM group that is used by the user accounts or the EC2s that the Splunk Add-on for AWS uses to connect to your AWS environment.
- Create multiple different users, groups, and roles with permissions specific to the services from which you plan to collect data.
- Copy and paste the sample policies provided on this page and apply them to an IAM Group as custom inline policies. To further specify the resources to which the policy grants access, replace the wildcards with the exact Amazon Resource Names (ARNs) of the resources in your environment.
For more information about working with inline policies, see Managing IAM Policies in the AWS documentation.
Create and configure roles to delegate permissions to IAM users¶
The Splunk Add-on for AWS supports the AWS Security Token Service (AWS STS) AssumeRole API action that lets you use IAM roles to delegate permissions to IAM users to access AWS resources.
The AssumeRole API returns a set of temporary security credentials consisting of an access key ID, a secret access key, and a security token that an AWS account can use to access AWS resources that it might not normally have access to.
To assume a role, your AWS account must be trusted by that role. The trust relationship is defined in the role’s trust policy when that role is created. That trust policy states which user accounts are allowed to delegate access to this account’s role.
The user who wants to access the role must also have permissions delegated from the role’s administrator. If the user is in a different account than the role, then the user’s administrator must attach a policy that allows the user to call AssumeRole on the ARN of the role in the other account. If the user is in the same account as the role, then you can either attach a policy to the user identical to the previous different account user, or you can add the user as a principal directly in the role’s trust policy.
To create an IAM role, see Creating a Role to Delegate Permissions to an IAM User in the AWS documentation.
After creating the role, use the AWS Management Console to modify the trust relationship to allow the IAM user to assume the newly created role. The following example shows a trust relationship that allows a role to be assumed by an IAM user named johndoe:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789012:user/johndoe"
},
"Action": "sts:AssumeRole"
}
]
}
Next, grant your IAM user permission to assume the role. The following example shows an AWS IAM policy that allows an IAM user to assume the s3admin role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::123456789012:role/s3admin"
}
]
}
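For reference, the delegation described above can be exercised with the AWS SDK for Python. This is a hedged sketch that reuses the example role ARN from the policy above; it illustrates the AssumeRole call and the temporary credentials it returns, not the add-on's internal implementation.

```python
import boto3

# The role ARN matches the example policy above; the session name is arbitrary.
sts = boto3.client("sts")
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/s3admin",
    RoleSessionName="splunk-add-on-for-aws-example",
)

# AssumeRole returns temporary credentials: an access key ID, a secret access
# key, and a session token.
creds = response["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```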
Find an IAM role within your Splunk platform instance¶
Collecting data using an auto-discovered EC2 IAM role is not supported in the AWS China region.
- Follow IAM roles for Amazon EC2 in the AWS documentation to set up an IAM role for your EC2.
- Ensure that this role has adequate permissions. If you do not give this role all of the permissions required for all inputs, configure AWS accounts specific to inputs not covered by the permissions for this role.
- On the Splunk Web home page, click Splunk Add-on for AWS in the left navigation bar.
- Click Configuration in the app navigation bar. By default, the add-on displays the Account tab.
- Look for the EC2 IAM role in the Autodiscovered IAM Role column. If you are in your own managed AWS environment and have an EC2 IAM role configured, it appears in this account list automatically.
You can also configure AWS accounts if you want to use both EC2 IAM roles and user accounts to ingest your AWS data.
You cannot edit or delete EC2 IAM roles from the add-on.
Add and manage AWS accounts¶
Perform the following steps to add an AWS account:
- In the Splunk Web home page, click Splunk Add-on for AWS in the left navigation bar.
- Click Configuration in the app navigation bar. The add-on displays the Account tab.
- Click Add.
- Name the AWS account. You cannot change this name once you configure the account.
- Enter the Key ID and Secret Key credentials for the AWS account that the Splunk platform uses to access your AWS data. The accounts that you configure must have the necessary permissions to access the AWS data that you want to collect.
- Select the Region Category for the account. The most common category is Global.
- Click Add.
Edit existing accounts by clicking Edit in the Actions column.
Delete an existing account by clicking Delete in the Actions column. You cannot delete accounts that are associated with any inputs, even if those inputs are disabled. To delete an account, delete the inputs or edit them to use a different account and then delete the account.
To use custom commands and alert actions, you must set up at least one AWS account on your Splunk platform deployment search head or search head cluster.
Add and manage private AWS accounts¶
Private account configurations are for users who want to use regional/private endpoints for account validation.
Perform the following steps to add a private AWS account:
- In the Splunk Web home page, click Splunk Add-on for AWS in the left navigation bar.
- Click Configuration in the app navigation bar. The add-on displays the Account tab.
- Click the Private Account tab.
- Click Add.
- Name the AWS private account. You cannot change this name once you configure the account.
- Enter the Key ID and Secret Key credentials for the AWS account that the Splunk platform uses to access your AWS data. The accounts that you configure must have the necessary permissions to access the AWS data that you want to collect.
- Select the Region Category for the private account. The most common category is Global.
- Select the Region which will be used for regional endpoints to authenticate account credentials.
- (Optional) To use private endpoints for account validation, click the Use Private Endpoints checkbox and enter the private endpoint URL of your AWS Security Token Service (STS). This step is only required if you have specific requirements for your private endpoints. See the sketch after these steps for an illustration of validating credentials against a regional or private STS endpoint.
- Click Add.
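The following sketch illustrates the kind of validation that a regional or private STS endpoint makes possible. The region, endpoint URL, and credential values are hypothetical placeholders, and this is not the add-on's own validation code.

```python
import boto3

# Hypothetical values; substitute your own region and interface endpoint URL.
REGION = "us-east-1"
PRIVATE_STS_ENDPOINT = "https://vpce-0123456789abcdef0.sts.us-east-1.vpce.amazonaws.com"

sts = boto3.client(
    "sts",
    region_name=REGION,
    endpoint_url=PRIVATE_STS_ENDPOINT,
    aws_access_key_id="<key_id>",
    aws_secret_access_key="<secret_key>",
)

# GetCallerIdentity succeeds only if the credentials are valid and the
# endpoint is reachable, which makes it a simple validation call.
print(sts.get_caller_identity())
```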
Edit existing private accounts by clicking Edit in the Actions column of the Private Account tab.
Delete an existing private account by clicking Delete in the Actions column. You cannot delete private accounts that are associated with any inputs, even if those inputs are disabled. To delete a private account, delete the inputs or edit them to use a different account or private account and then delete the private account.
Add and manage IAM roles¶
Use the Configuration menu in the Splunk Add-on for AWS to manage AWS IAM roles that can be assumed by IAM users. Adding IAM roles lets the Splunk Add-on for AWS access the following AWS resources:
- Generic S3
- Incremental S3
- SQS-based S3
- Billing
- Metadata
- CloudWatch
- Kinesis
Add an IAM role¶
Use the following steps to add an IAM role:
- On the Splunk Web home page, click Splunk Add-on for AWS in the left navigation bar.
- Click Configuration in the app navigation bar, and then click the IAM Role tab.
- Click Add.
- In the Name field, name the role to be assumed by authorized AWS accounts managed on the Splunk platform. You cannot change the name once you configure the role.
- In the ARN field, enter the role's Amazon Resource Name in the valid format: arn:aws:iam::<aws_resource_id>:role/<role_name>.
- Click Add.
Click Edit in the Actions column to edit existing IAM roles.
Click Delete in the Actions column to delete an existing role. You cannot delete roles associated with any inputs, even if those inputs are disabled. To delete an account, delete the inputs or edit them to use a different assumed role and then delete the role.
Configure a proxy connection¶
- On the Splunk Web home page, click Splunk Add-on for AWS in the left navigation bar.
- Click Configuration in the app navigation bar.
- Click the Proxy tab.
- Select the Enable box to enable the proxy connection and fill in the fields required for your proxy.
- Click Save.
To disable your proxy but save your configuration, uncheck the Enable box. The add-on stores your proxy configuration so that you can enable it later.
To delete your proxy configuration, delete the values in the fields.
Pull-based (API) input configurations
Configure inputs for the Splunk Add-on for AWS¶
Configure inputs for the Splunk Add-on for AWS.
Input configuration overview¶
You can use the Splunk Add-on for AWS to collect data from AWS. For each supported data type, one or more input types are provided for data collection.
Follow these steps to plan and perform your AWS input configuration:
Users adding new inputs must have the admin_all_objects capability enabled.
- Click an input type to go to the input configuration details.
- Follow the steps described in the input configuration details to complete the configuration.
Supported data types and corresponding AWS input types¶
The following matrix lists all the data types that can be collected using the Splunk Add-on for AWS and the corresponding input types that you can configure to collect this data.
For some data types, the Splunk Add-on for AWS provides you with the flexibility to choose from multiple input types based on specific requirements, for example, collecting historical logs as opposed to only newly created logs. SQS-based S3 is the best practice input type to use for all of its collectible data types.
Data Type | Source type | Supported Input Types | Best practice Input Type |
---|---|---|---|
Billing | aws:billing | Billing | Billing |
CloudWatch | aws:cloudwatch | CloudWatch | CloudWatch |
CloudFront Access Logs | aws:cloudfront:accesslogs | Generic S3, Incremental S3, SQS-based S3 | SQS-based S3 |
Config | aws:config, aws:config:notification | SQS-based S3, AWS Config | SQS-based S3 |
Config Rules | aws:config:rule | Config Rules | Config Rules |
Description | aws:description | Description | Description |
ELB Access Logs | aws:elb:accesslogs | SQS-based S3, Generic S3, Incremental S3 | SQS-based S3 |
Inspector | aws:inspector | Inspector | Inspector |
CloudTrail | aws:cloudtrail | SQS-based S3, Generic S3, Incremental S3 | SQS-based S3 |
S3 Access Logs | aws:s3:accesslogs | SQS-based S3, Generic S3, Incremental S3 | SQS-based S3 |
VPC Flow Logs | aws:cloudwatchlogs:vpcflow, aws:cloudwatchlogs:vpcflow:metric | SQS-based S3, CloudWatch Logs, Kinesis | SQS-based S3 |
SQS | aws:sqs | SQS | SQS |
Others | Custom sourcetypes | SQS-based S3, Generic S3, CloudWatch Logs, Kinesis, SQS | SQS-based S3 |
AWS input types¶
The Splunk Add-on for AWS provides two categories of input types to gather useful data from your AWS environment:
- Dedicated, or single-purpose input types. Designed to ingest one specific data type
- Multi-purpose input types to collect multiple data types from the S3 bucket
Some data types can be ingested using either a dedicated input type or a multi-purpose input type. For example, CloudTrail logs can be collected using any of the following input types: CloudTrail, S3, or SQS-based S3. The SQS-based S3 input type is the recommended option because it is more scalable and provides higher ingestion performance.
Dedicated input types¶
To ingest a specific type of log, configure the corresponding dedicated input designed to collect the log type. Click the input type name in the following table for instructions on how to configure it.
Input | Description |
---|---|
AWS Config | Configuration snapshots, historical configuration data, and change notifications from the AWS Config service. |
Config Rules | Compliance details, compliance summary, and evaluation status of your AWS Config Rules. |
Inspector | Assessment Runs and Findings data from the Amazon Inspector service. |
CloudTrail | AWS API call history from the AWS CloudTrail service. |
CloudWatch Logs | Logs from the CloudWatch Logs service, including VPC Flow Logs. VPC Flow Logs allow you to capture IP traffic flow data for the network interfaces in your resources. |
CloudWatch | Performance and billing metrics from the AWS CloudWatch service. |
Description | Metadata about your AWS environment. |
Billing | Billing data from the billing reports that you collect in the Billing & Cost Management console. |
Kinesis | Data from your Kinesis streams. Note: It is a best practice to collect VPC Flow Logs and CloudWatch Logs through Kinesis streams, but be aware that the AWS Kinesis input has limitations. |
SQS | Data from your AWS SQS. |
Multi-purpose input types¶
Configure multi-purpose inputs to ingest supported log types.
Use the SQS-based input type to collect its supported log types. If you are already collecting logs using generic S3 inputs, you can still create SQS-based inputs and migrate your existing generic S3 inputs to the new inputs. For detailed migration steps, see Migrate from the S3 input to the SQS-based input in this manual.
If the log types you want to collect are not supported by the SQS-based input type, use the generic S3 input type instead.
Read the multi-purpose input types comparison table to view the differences between the multi-purpose S3 collection input types.
Click the input type name in the table below for instructions on how to configure it.
Input | Description |
---|---|
SQS-based S3 (best practice) | A more scalable and higher-performing alternative to the generic and incremental S3 inputs, the SQS-based S3 input polls messages from SQS that subscribes to SNS notification events from AWS services and collects the corresponding log files - generic log data, CloudTrail API call history, Config logs, and access logs - from your S3 buckets in real time. Unlike the other S3 input types, the SQS-based S3 input type takes advantage of the SQS visibility timeout setting and enables you to configure multiple inputs to scale out data collection from the same folder in an S3 bucket without ingesting duplicate data. Also, the SQS-based S3 input automatically switches to multipart, in-parallel transfers when a file is over a specific size threshold, thus preventing timeout errors caused by large file size. |
Generic S3 | General-purpose input type that can collect any log type from S3 buckets: CloudTrail API call history, access logs, and even custom non-AWS logs. The generic S3 input lists all the objects in the bucket and examines the modified date of each file every time it runs to pull uncollected data from an S3 bucket. When the number of objects in a bucket is large, this can be a very time-consuming process with low throughput. |
Incremental S3 | The incremental S3 input type collects four AWS service log types: CloudTrail logs, S3 access logs, CloudFront access logs, and ELB access logs. |
Multi-purpose input types comparison table¶
Generic S3 | Incremental S3 | SQS-based S3 (best practice) | |
---|---|---|---|
Supported log types | Any log type, including non-AWS custom logs. | 4 AWS services log types: CloudTrail logs, S3 access logs, CloudFront access logs, ELB access logs. | 5 AWS services log types (Config logs, CloudTrail logs, S3 access logs, CloudFront access logs, ELB access logs), as well as non-AWS custom logs. |
Data collection method | Lists all objects in the bucket and compares modified date against the checkpoint. | Directly retrieves AWS log files whose filenames are distinguished by datetime. | Decodes SQS messages and ingests corresponding logs from the S3 bucket. |
Ingestion performance | Low | High | High |
Can ingest historical logs (logs generated in the past)? | Yes | Yes | No |
Scalable? | No | No | Yes. You can scale out data collection by configuring multiple inputs to ingest logs from the same S3 bucket without creating duplicate data. |
Fault-tolerant? | No. Each generic S3 input is a single point of failure. | No. Each incremental S3 input is a single point of failure. | Yes. Takes advantage of the SQS visibility timeout setting. Any SQS message not successfully processed in time by the SQS-based S3 input reappears in the queue and is retrieved and processed again. In addition, data collection can be horizontally scaled out so that if one SQS-based S3 input fails, other inputs can still continue to pick up messages from the SQS queue and ingest corresponding data from the S3 bucket. |
Configure Billing inputs for the Splunk Add-on for AWS¶
The Billing (Legacy) input has been deprecated in version 7.6.0 of the Splunk Add-on for AWS. To collect the billing data, configure the Billing (Cost and Usage Report) input.
For more information, see the Cost and Usage Report inputs manual.
Configure Cost and Usage Report inputs for the Splunk Add-on for AWS¶
Complete the steps to configure Cost and Usage Report inputs for the Splunk Add-on for Amazon Web Services (AWS):
- You must manage accounts for the add-on as a prerequisite. See Manage accounts for the Splunk Add-on for AWS.
- Configure AWS services for the Cost and Usage Report input.
- Configure AWS permissions for the Cost and Usage Report input.
- (Optional) Configure VPC Interface Endpoints for STS and S3 services from your AWS Console if you want to use private endpoints for data collection and authentication. For more information, see the Interface VPC endpoints (AWS PrivateLink) topic in the Amazon Virtual Private Cloud documentation.
- Configure Cost and Usage Report inputs either through Splunk Web or configuration files.
Enable prefixes so that AWS delivers the reports into a folder with the name of the prefix. Timestamps and report names can be used to filter results if you do not want to ingest all the reports.
After you configure your Cost and Usage Report inputs, see Access billing data for the Splunk Add-on for AWS for more information about data collection behavior and how to access the preconfigured reports included in the add-on.
See the Cost and Usage Report section of the AWS documentation for more information on AWS-side configuration steps.
Configure AWS permissions for the Cost and Usage Report input¶
You need these required permissions for the S3 bucket to collect your Cost and Usage Reports:
Get*
List*
In the Resource section of the policy, specify the Amazon Resource Names (ARNs) of the S3 buckets that contain billing reports for your accounts. The ListAllMyBuckets permission requires an asterisk ( * ) as its resource, so keep it in a separate statement as shown in the sample policy.
See the following sample inline policy to configure Billing input permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:List*",
],
"Resource": ""arn:aws:s3:::<your bucket name>"
},
{
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets"
],
"Resource": "*"
}
]
}
For more information and sample policies, see http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-permissions-ref.html.
Configure a Cost and Usage Report input using Splunk Web¶
To configure inputs using Splunk Web:
- Click Splunk Add-on for AWS in the navigation bar on Splunk Web home.
- Click Create New Input > Billing > Billing (Cost and Usage Report).
- Fill out the fields as described in the following table:

Argument in configuration file | Field in Splunk Web | Description |
---|---|---|
AWS input configuration | | |
aws_account | AWS Account | The AWS account or EC2 IAM role the Splunk platform uses to access your Billing data. In Splunk Web, select an account from the drop-down list. In inputs.conf, enter the friendly name of one of the AWS accounts that you configured on the Configuration page or the name of the automatically discovered EC2 IAM role. |
aws_iam_role | Assume Role | The IAM role to assume. Verify that your IAM assume role has enough permission to access your S3 buckets. For more information, see Add and manage IAM roles in the Manage accounts for the Splunk Add-on for AWS topic. |
aws_s3_region | AWS Region (Optional) | The AWS region that contains your bucket. In inputs.conf, enter the region ID. Provide an AWS Region only if you want to use specific regional endpoints instead of public endpoints for data collection. See the AWS service endpoints topic in the AWS General Reference manual for more information. |
s3_private_endpoint_url | Private Endpoint (S3) | Private Endpoint (Interface VPC Endpoint) of your S3 service, which can be configured from your AWS console. Supported formats: <http/https>://bucket.vpce-<endpoint_id>-<unique_id>.s3.<region_id>.vpce.amazonaws.com and <http/https>://bucket.vpce-<endpoint_id>-<unique_id>-<availability_zone>.s3.<region_id>.vpce.amazonaws.com |
sts_private_endpoint_url | Private Endpoint (STS) | Private Endpoint (Interface VPC Endpoint) of your STS service, which can be configured from your AWS console. Supported formats: <http/https>://vpce-<endpoint_id>-<unique_id>.sts.<region_id>.vpce.amazonaws.com and <http/https>://vpce-<endpoint_id>-<unique_id>-<availability_zone>.sts.<region_id>.vpce.amazonaws.com |
bucket_name | S3 Bucket | The S3 bucket that is configured to hold Billing Reports. |
private_endpoint_enabled | Use Private Endpoints | Check the checkbox to use private endpoints of AWS Security Token Service (STS) and AWS Simple Cloud Storage (S3) services for authentication and data collection. In inputs.conf, enter 0 or 1 to respectively disable or enable use of private endpoints. |
report_prefix | Report Prefix | Prefixes used to allow AWS to deliver the reports into a specified folder. |
report_names | Report Name Pattern | A regular expression used to filter reports by name. |
Splunk-related configuration | | |
start_date | Start Date | This add-on starts to collect data later than this time. If you leave this field empty, the default value is 90 days before the input is configured. Once the input is created, you cannot change its value. |
sourcetype | Source type | A source type for the events. Specify a value if you want to override the default of aws:billing. Event extraction relies on the default value of source type. If you change the default value, you must update props.conf as well. |
index | Index | The index name where the Splunk platform puts the billing data. The default is main. |
Advanced settings | | |
interval | Interval | Enter the number of seconds to wait before the Splunk platform runs the command again, or enter a valid cron schedule. The default is 86,400 seconds (one day). This interval applies differently for Monthly report types and Detailed report types. For Monthly report types, the interval indicates how often to run the data collection for the current month's monthly report and how often to check the previous month's monthly report's etag to determine if changes were made. If the etag does not match an already-downloaded version of the monthly report, it downloads that report to get the latest data. For Detailed report types, the interval indicates how often to check the previous month's detailed report etag to determine if changes were made. If the etag does not match a report already downloaded, it downloads that report to get the latest data. The present month is never collected until the month has ended. Because AWS Billing Reports are usually not finalized until several days after the last day of the month, you can use the cron expression 0 0 8-31 * * to skip data collection for the first seven days of every month to avoid collecting multiple copies of not-yet-finalized reports for the just-finished month. |
temp_folder | Temp folder | Full path to a non-default folder with sufficient space for temporarily storing downloaded detailed billing report .zip files. Take into account the estimated size of uncompressed detailed billing report files, which can be much larger than that of zipped files. If you do not specify a temp folder, the add-on uses the system temp folder by default. |
Configure a Cost and Usage Report input using configuration files¶
To configure inputs in inputs.conf, create a stanza using the following
template and add it to
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/inputs.conf
. If the file or
path does not exist, create it.
[aws_billing_cur://<name>]
start_by_shell = true
aws_account = <value>
aws_iam_role = <value>
aws_s3_region = <value>
bucket_name = <value>
bucket_region = <value>
private_endpoint_enabled = <value>
report_names = <value>
report_prefix = <value>
s3_private_endpoint_url = <value>
start_date = <value>
sts_private_endpoint_url = <value>
temp_folder = <value>
host_name = s3.amazonaws.com
Some of these settings have default values that can be found in
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/default/inputs.conf
:
[aws_billing_cur]
start_by_shell = false
aws_account = <value>
aws_iam_role = <value>
bucket_name = <value>
bucket_region = <value>
report_names = <value>
report_prefix = <value>
start_date = <value>
temp_folder = <value>
The previous values correspond to the default values in Splunk Web. If you choose to copy this stanza to /local and use it as a starting point to configure your inputs.conf manually, change the stanza title from aws_billing_cur to aws_billing_cur://<name>.
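For reference, here is a minimal sketch of a filled-in stanza. The input name, account name, bucket, and report prefix are placeholder values chosen for this example, not defaults from the add-on:
[aws_billing_cur://billing_cur_prod]
# Friendly name of an AWS account configured on the Configuration page
aws_account = prod-billing-account
aws_s3_region = us-east-1
# Bucket that AWS delivers Cost and Usage Reports into
bucket_name = example-cur-bucket
bucket_region = us-east-1
report_prefix = cur-reports
report_names = .*
interval = 86400
index = aws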
Configure Config inputs for the Splunk Add-on for AWS¶
Complete the steps to configure Config inputs for the Splunk Add-on for Amazon Web Services (AWS):
- You must manage accounts for the add-on as a prerequisite. See Manage accounts for the Splunk Add-on for AWS.
- Configure AWS services for the Config input.
- Configure AWS permissions for the Config input.
- Configure Config inputs either through Splunk Web or configuration files.
Best practices for configuring inputs¶
- Configure Simple Queue Service (SQS)-based S3 inputs to collect AWS data.
- Configure an AWS Config input for the Splunk Add-on for Amazon Web Services on your data collection node through Splunk Web. This data source is available only in a subset of AWS regions, which does not include China. See the AWS service endpoints for a full list of supported regions.
- Configure a single enabled Config modular input for each unique SQS. Multiple enabled modular inputs can cause conflicts when trying to delete SQS messages or S3 records that another modular input is attempting to access and parse.
- Disable or delete testing configurations before releasing your configuration in production.
Supported Config input message types¶
The following message types are supported by the Config inputs for the Splunk Add-on for AWS:
ConfigurationHistoryDeliveryCompleted
ConfigurationSnapshotDeliveryCompleted
ConfigurationItemChangeNotification
OversizedConfigurationItemChangeNotification
Configure AWS permissions for the Config input¶
Set the following permissions in your AWS configuration:
- For the S3 bucket that collects your Config logs:
GetObject
GetBucketLocation
ListBucket
ListAllMyBuckets
- For the SQS subscribed to the SNS Topic that collects Config
notifications:
GetQueueAttributes
ListQueues
ReceiveMessage
GetQueueUrl
SendMessage
DeleteMessage
- For the Config snapshots:
DeliverConfigSnapshot
- For the IAM user to get the Config snapshots:
GetUser
See the following sample inline policy to configure Config input permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObject",
"s3:GetBucketLocation",
"s3:ListAllMyBuckets"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"sqs:ListQueues",
"sqs:ReceiveMessage",
"sqs:GetQueueAttributes",
"sqs:SendMessage",
"sqs:GetQueueUrl",
"sqs:DeleteMessage"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"config:DeliverConfigSnapshot"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"iam:GetUser"
],
"Resource": [
"*"
]
}
]
}
For more information and sample policies, see the following AWS documentation:
- For SQS, see http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/UsingIAM.html.
- For S3, see http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html.
Configure a Config input using Splunk Web¶
To configure inputs using Splunk Web:
- Click Splunk Add-on for AWS in the navigation bar on Splunk Web home.
- Click Create New Input > Config > Config.
- Fill out the fields as described in the table:
Field in Splunk Web | Description |
---|---|
AWS Account | The AWS account or EC2 IAM role the Splunk platform uses to access your Config data. In Splunk Web, select an account from the drop-down list. |
AWS Region | The AWS region that contains the log notification SQS queue. Enter the region ID. See http://docs.aws.amazon.com/general/latest/gr/rande.html#d0e371. |
SQS queue name | The name of the queue to which AWS sends new Config notifications. Select a queue from the drop-down list, or enter the queue name manually. The queue name is the final segment of the full queue URL. For example, if your SQS queue URL is http://sqs.us-east-1.amazonaws.com/123456789012/testQueue, then your SQS queue name is testQueue. |
Source type | A source type for the events. Enter a value only if you want to override the default of aws:config. |
Index | The index name where the Splunk platform puts the Config data. The default is main. |
Interval | The number of seconds to wait before the Splunk platform runs the command again. The default is 30 seconds. |
Configure a Config input using configuration files¶
To configure inputs manually in inputs.conf, create a stanza using the
following template and add it to
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/inputs.conf
. If the file or
path does not exist, create it.
[aws_config://<name>]
aws_account = <value>
aws_region = <value>
sqs_queue = <value>
interval = <value>
sourcetype = <value>
index = <value>
Some of these settings have default values that can be found in
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/default/inputs.conf
:
[aws_config]
aws_account =
sourcetype = aws:config
queueSize = 128KB
persistentQueueSize = 24MB
interval = 30
The previous values correspond to the default values in Splunk Web as
well as some internal values that are not exposed in Splunk Web for
configuration. If you choose to copy this stanza to /local
and use it
as a starting point to configure your inputs.conf manually, change the
stanza title from aws_config
to aws_config://<name>
and add the
additional parameters that you need.
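For reference, a minimal sketch of a filled-in stanza follows. The input name, account name, and queue name are placeholder values chosen for this example:
[aws_config://config_us_east_1]
# Friendly name of an AWS account configured on the Configuration page
aws_account = prod-account
# Region that contains the SQS queue receiving Config notifications
aws_region = us-east-1
# Final segment of the SQS queue URL
sqs_queue = example-config-queue
interval = 30
sourcetype = aws:config
index = aws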
Configure Config Rules inputs for the Splunk Add-on for AWS¶
Complete the steps to configure Config Rules inputs for the Splunk Add-on for Amazon Web Services (AWS):
- You must manage accounts for the add-on as a prerequisite. See Manage accounts for the Splunk Add-on for AWS.
- Configure AWS services for the Config Rules input.
- Configure AWS permissions for the Config Rules input.
- Configure Config Rules inputs either through Splunk Web or configuration files.
Configure AWS services for the Config Rules input¶
- Enable AWS Config for all regions for which you want to collect data in the add-on. Follow the steps in the AWS documentation. See http://docs.aws.amazon.com/config/latest/developerguide/setting-up.html.
- Set up AWS Config Rules by following the instructions in the AWS Config documentation. See http://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_set-up.html.
- Grant the necessary permissions to the AWS account used for this input.
Configure AWS permissions for the Config Rules input¶
You need these required permissions for Config:
DescribeConfigRules
DescribeConfigRuleEvaluationStatus
GetComplianceDetailsByConfigRule
GetComplianceSummaryByConfigRule
See the following sample inline policy to configure Config Rules input permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"config:DescribeConfigRules",
"config:DescribeConfigRuleEvaluationStatus",
"config:GetComplianceDetailsByConfigRule",
"config:GetComplianceSummaryByConfigRule"
],
"Resource": "*"
}
]
}
For more information and sample policies, see http://docs.aws.amazon.com/config/latest/developerguide/example-policies.html
When you configure a Config Rules input, do not select more than 25 config rules per region. Selecting more than 25 config rules during input configuration results in no data collection.
Configure a Config Rules input using Splunk Web¶
To configure inputs using Splunk Web:
- Click Splunk Add-on for AWS in the left navigation bar on Splunk Web home.
- Click Create New Input > Config Rules.
- Fill out the fields as described in the table:
Argument in configuration file | Field in Splunk Web | Description |
---|---|---|
aws_account |
AWS Account | The AWS account or EC2 IAM role the Splunk platform uses to access your Config Rules data. In Splunk Web, select an account from the drop-down list. |
region |
Region | The AWS region that contains the Config Rules. See the AWS documentation for more information. |
rule_names |
Config Rules | Config Rules names in a comma-separated list. Leave blank to collect all rules. |
sourcetype |
Source Type | A source type for the events. Enter a value only if you want to override the default of aws:config:rule . Event extraction relies on the default value of source type. If you change the default value, you must update props.conf as well. |
index |
Index | The index name where the Splunk platform puts the Config Rules data. The default is main. |
polling_interval |
Polling Interval | The data collection interval, in seconds. The default is 300 seconds. |
Configure a Config Rules input using configuration files¶
To configure the input using configuration files, create
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/aws_config_rule_tasks.conf
using the following template:
[<name>]
account = <value>
region = <value>
rule_names = <value>
sourcetype = <value>
polling_interval = <value>
index = <value>
Here is an example stanza that collects Config Rules data for just two rules:
[splunkapp2:us-east-1]
account = splunkapp2
region = us-east-1
rule_names=required-tags,restricted-common-ports
sourcetype = aws:config:rule
polling_interval = 300
index = aws
Configure CloudTrail inputs for the Splunk Add-on for AWS¶
Complete the steps to configure CloudTrail inputs for the Splunk Add-on for Amazon Web Services (AWS):
- You must manage accounts for the add-on as a prerequisite. See Manage accounts for the Splunk Add-on for AWS.
- Configure AWS services for the CloudTrail input.
- Configure AWS permissions for the CloudTrail input.
- (Optional) Configure VPC Interface Endpoints for SQS, STS and S3 services from your AWS Console if you want to use private endpoints for data collection and authentication. For more information, see the Interface VPC endpoints (AWS PrivateLink) topic in the Amazon Virtual Private Cloud documentation.
- Configure CloudTrail inputs either through Splunk Web or configuration files.
The CloudTrail input type supports the collection of CloudTrail data (source type: aws:cloudtrail). However, you might want to configure SQS-based S3 inputs to collect this type of data. See Configure SQS-based S3 inputs for the Splunk Add-on for AWS.
Before you begin configuring your CloudTrail inputs, be aware of the following behaviors:
- Create a single enabled CloudTrail modular input for each unique Simple Queue Service (SQS) > Simple Notification Service (SNS) > S3 bucket path. Multiple enabled modular inputs can cause conflicts when trying to delete SQS messages or S3 records that another modular input is attempting to access and parse. Be sure to disable or delete testing configurations before going to production.
- If you have multiple AWS regions from which you want to gather CloudTrail data, the Amazon Web Services best practice is that you configure a trail that applies to all regions in the AWS partition in which you are working. You can then set up one CloudTrail input to collect data from the centralized S3 bucket where log files from all the regions are stored.
Configure AWS services for the CloudTrail input¶
The Splunk Add-on for AWS collects events from an SQS queue that subscribes to the SNS notification events from CloudTrail. Configure CloudTrail to produce these notifications, then create an SQS queue in each region for the add-on to access them. To follow the best practice of creating one CloudTrail configuration in one region that collects SQS messages of CloudTrail data from all regions, perform one of the following tasks:
- Configure one CloudTrail S3 bucket, separate SNS and SQS paths for each region, and configure S3 Event Notification to send to SNS.
- Configure a global CloudTrail, skip steps 3 through 6 below, and configure a Generic S3 input on the add-on to collect data directly from your AWS deployment’s S3 bucket.
Configure AWS services¶
- Enable CloudTrail. Follow the instructions in the AWS documentation. See http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail.html.
- Create an S3 bucket in which to store the CloudTrail events. Follow the AWS documentation to ensure the permissions for this bucket are correct. See http://docs.aws.amazon.com/awscloudtrail/latest/userguide/create-s3-bucket-policy-for-cloudtrail.html.
- Enable SNS notifications. See: http://docs.aws.amazon.com/awscloudtrail/latest/userguide/getting_notifications_top_level.html.
- Create a new SQS.
- If you are in the China region, explicitly grant DeleteMessage and SendMessage permissions to the SQS queue that you just created. This step is not necessary in commercial regions.
- Subscribe the SQS queue to the SNS notifications that you enabled in step 3.
- Grant IAM permissions to access the AWS account that the add-on uses to connect to your AWS environment. See Manage accounts for the Splunk Add-on for AWS for details.
Configure AWS permissions for the CloudTrail input¶
Required permissions for the S3 bucket that collects your CloudTrail logs:
Get*
List*
Delete*
Granting the delete permission is required to support the option to remove log files after the add-on finishes collecting them. If you set the remove_files_when_done parameter to false, you do not need to grant delete permissions.
Required permissions for the SQS subscribed to the S3 bucket that collects CloudTrail logs:
GetQueueAttributes
ListQueues
ReceiveMessage
GetQueueUrl
DeleteMessage
In the Resource section of the policy, specify the ARNs of the S3 buckets and SQS queues from which you want to collect data.
See the following sample inline policy to configure CloudTrail input permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sqs:GetQueueAttributes",
"sqs:ListQueues",
"sqs:ReceiveMessage",
"sqs:GetQueueUrl",
"sqs:DeleteMessage",
"s3:Get*",
"s3:List*",
"s3:Delete*"
],
"Resource": [
"*"
]
}
]
}
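If you do not want to grant these actions on all resources, the following sketch shows how the same policy might be scoped to a specific bucket and queue. The bucket name, queue name, account ID, and region are placeholders, and sqs:ListQueues is kept in a separate statement because it only supports a wildcard resource:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:List*",
"s3:Delete*"
],
"Resource": [
"arn:aws:s3:::example-cloudtrail-bucket",
"arn:aws:s3:::example-cloudtrail-bucket/*"
]
},
{
"Effect": "Allow",
"Action": [
"sqs:GetQueueAttributes",
"sqs:ReceiveMessage",
"sqs:GetQueueUrl",
"sqs:DeleteMessage"
],
"Resource": "arn:aws:sqs:us-east-1:123456789012:example-cloudtrail-queue"
},
{
"Effect": "Allow",
"Action": "sqs:ListQueues",
"Resource": "*"
}
]
}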
For more information and sample policies, see these resources in the AWS documentation:
- For SQS, see http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/UsingIAM.html.
- For S3, see http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html.
Configure a CloudTrail input using Splunk Web¶
To configure inputs in Splunk Web:
- Click on Splunk Add-on for AWS in the navigation bar on Splunk Web home.
- Click Create New Input > CloudTrail.
- Use the following table to complete the fields for the new input in the .conf file or in Splunk Web:
Argument in configuration file | Field in Splunk Web | Description |
---|---|---|
aws_account |
AWS Account | The AWS account or EC2 IAM role the Splunk platform uses to access your CloudTrail data. In Splunk Web, select an account from the drop-down list. In inputs.conf, enter the friendly name of one of the AWS accounts that you configured on the Configuration page or the name of the automatically discovered EC2 IAM role. |
aws_region |
AWS Region | The AWS region that contains the log notification SQS queue. In inputs.conf, enter the region ID. See the AWS service endpoints. |
private_endpoint_enabled |
Use Private Endpoints | Check the checkbox to use private endpoints of AWS Security Token Service (STS) and AWS Simple Cloud Storage (S3) services for authentication and data collection. In inputs.conf, enter 0 or 1 to respectively disable or enable use of private endpoints. |
s3_private_endpoint_url |
Private Endpoint (S3) | Private Endpoint (Interface VPC Endpoint) of your S3 service, which can be configured from your AWS console. Supported Formats: <http/https>://bucket.vpce-<endpoint_id>-<unique_id>.s3.<region_id>.vpce.amazonaws.com <http/https>://bucket.vpce-<endpoint_id>-<unique_id>-<availability_zone>.s3.<region_id>.vpce.amazonaws.com |
sts_private_endpoint_url |
Private Endpoint (STS) | Private Endpoint (Interface VPC Endpoint) of your STS service, which can be configured from your AWS console. Supported Formats: <http/https>://vpce-<endpoint_id>-<unique_id>.sts.<region_id>.vpce.amazonaws.com <http/https>://vpce-<endpoint_id>-<unique_id>-<availability_zone>.sts.<region_id>.vpce.amazonaws.com |
sqs_queue |
SQS queue name | The name of the queue to which AWS sends new CloudTrail log notifications. In Splunk Web, you can select a queue from the drop-down list, if your account permissions allow you to list queues, or enter the queue name manually. The queue name is the final segment of the full queue URL. For example, if your SQS queue URL is http://sqs.us-east-1.amazonaws.com/123456789012/testQueue , then your SQS queue name is testQueue. |
sqs_private_endpoint_url |
Private Endpoint (SQS) | Private Endpoint (Interface VPC Endpoint) of your SQS service, which can be configured from your AWS console. Supported Formats: <http/https>://vpce-<endpoint_id>-<unique_id>.sqs.<region_id>.vpce.amazonaws.com <http/https>://vpce-<endpoint_id>-<unique_id>-<availability_zone>.sqs.<region_id>.vpce.amazonaws.com |
remove_files_when_done |
Remove logs when done | A Boolean value indicating whether the Splunk platform should delete log files from the S3 bucket after indexing is complete. The default is false. |
exclude_describe_events |
Exclude events | A Boolean value indicating whether or not to exclude certain events, such as read-only events that can produce a high volume of data. The default is true. |
blacklist |
Deny list for exclusion | A PCRE regular expression that specifies event names to exclude if exclude_describe_events is set to true. Leave blank to use the default regex ^(?:Describe|List|Get) . |
excluded_events_index |
Excluded events index | The name of the index in which the Splunk platform puts excluded events. The default is empty, which discards the events. |
interval |
Interval | The number of seconds to wait before the Splunk platform runs the command again. The default is 30 seconds. |
log_partitions |
n/a | Configure partitions of a log file to be ingested. This add-on searches the log files for <Region ID> and <Account ID> . For example, log_partitions = AWSLogs/<Account ID>/CloudTrail/<Region> . |
sourcetype |
Source type | A source type for the events. Enter a value only if you want to override the default of aws:cloudtrail . Event extraction relies on the default value of source type. If you change the default value, you must update props.conf as well. |
index |
Index | The index name where the Splunk platform puts the CloudTrail data. The default is main. |
Configure a CloudTrail input using configuration files¶
To configure inputs manually in inputs.conf, create a stanza using the
following template and add it to
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/inputs.conf
. If the file or
path does not exist, create it.
[aws_cloudtrail://<name>]
aws_account = <value>
aws_region = <value>
private_endpoint_enabled = <value>
sqs_queue = <value>
sqs_private_endpoint_url = <value>
s3_private_endpoint_url = <value>
sts_private_endpoint_url = <value>
exclude_describe_events = <value>
remove_files_when_done = <value>
blacklist = <value>
excluded_events_index = <value>
interval = <value>
sourcetype = <value>
index = <value>
Some of these settings have default values that can be found in
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/default/inputs.conf
:
[aws_cloudtrail]
aws_account =
sourcetype = aws:cloudtrail
exclude_describe_events = true
remove_files_when_done = false
queueSize = 128KB
persistentQueueSize = 24MB
interval = 30
The values in default/inputs.conf correspond to the default values in
Splunk Web as well as some internal values that are not exposed in
Splunk Web for configuration. If you choose to copy this stanza to
/local
and use it as a starting point to configure your inputs.conf
manually, change the stanza title from aws_cloudtrail
to
aws_cloudtrail://<name>
.
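For reference, a minimal sketch of a filled-in stanza might look like the following. The input name, account name, and queue name are placeholder values chosen for this example:
[aws_cloudtrail://cloudtrail_us_east_1]
# Friendly name of an AWS account configured on the Configuration page
aws_account = prod-account
# Region that contains the CloudTrail log notification SQS queue
aws_region = us-east-1
sqs_queue = example-cloudtrail-queue
exclude_describe_events = true
remove_files_when_done = false
interval = 30
sourcetype = aws:cloudtrail
index = aws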
Switch from a CloudTrail input to an SQS-based S3 input¶
The SQS-based S3 input is a more fault-tolerant and higher-performing alternative to the CloudTrail input for collecting CloudTrail data. If you are already collecting CloudTrail data using a CloudTrail input, you can configure an SQS-based S3 input and seamlessly switch to the new input for CloudTrail data collection with little disruption.
- Disable the CloudTrail input you are using to collect CloudTrail data.
- Set up a Dead-Letter Queue (DLQ) and the SQS visibility timeout setting for the SQS queue from which you are collecting CloudTrail data. See Configure SQS-based S3 inputs for the Splunk Add-on for AWS.
- Create an SQS-based S3 input, pointing to the SQS queue you configured in the last step. See Configure SQS-based S3 inputs for the Splunk Add-on for AWS for the detailed configuration steps.
Once configured, the new SQS-based S3 input replaces the old CloudTrail input to collect CloudTrail data from the same SQS queue.
Configure CloudWatch inputs for the Splunk Add-on for AWS¶
Starting in version 7.1.0 of the Splunk Add-on for AWS, the file-based checkpoint mechanism was migrated to the Splunk KV Store for CloudWatch metrics inputs. The CloudWatch metrics inputs must be disabled whenever you restart the Splunk software. Otherwise, data duplication occurs against your already configured inputs.
Complete the steps to configure CloudWatch and EventBridge inputs for the Splunk Add-on for Amazon Web Services (AWS):
- You must manage accounts for the add-on as a prerequisite. See Manage accounts for the Splunk Add-on for AWS.
- Configure AWS services for the CloudWatch input.
- Configure AWS permissions for the CloudWatch input.
- (Optional) Configure VPC Interface Endpoints for STS, monitoring, ELB, EC2, Autoscaling, Lambda and S3 services from your AWS Console if you want to use private endpoints for data collection and authentication. Configuration of all service endpoints is not required. Configure only those endpoints which are required for each specific metric. For more information, see the Interface VPC endpoints (AWS PrivateLink) topic in the Amazon Virtual Private Cloud documentation.
- Configure CloudWatch inputs either through Splunk Web or configuration files.
Configure separate CloudWatch inputs for each metric or set of metrics that have different minimum granularities, based on the sampling period that AWS allows for that metric. For example, CPU Utilization has a sampling period of 5 minutes, whereas Billing Estimated Charge has a sampling period of 4 hours. If you configure a granularity that is smaller than the minimum sampling period available in AWS, the input wastes API calls.
For more information, see Sizing, performance, and cost considerations for the Splunk Add-on for AWS.
To improve the data collection performance of CloudWatch inputs, configure the CloudWatch Max Threads parameter in the Add-on Global Settings. For more information, see Add-on Global Settings.
Configure AWS services for the CloudWatch input¶
To enable AWS to produce billing metrics in CloudWatch, turn on Receive Billing Alerts in the Preferences section of the Billing and Cost Management console.
The CloudWatch service is automatically enabled to collect free metrics for your AWS services and requires no additional configuration for the Splunk Add-on for AWS.
Configure CloudWatch permissions¶
Required permissions for CloudWatch: Describe*, Get*, List*
Required permissions for Autoscaling: Describe*
Required permissions for EC2: Describe*
Required permissions for S3: List*
Required permissions for SQS: List*
Required permissions for SNS: List*
Required permissions for Lambda: List*
Required permissions for ELB: Describe*
See the following sample inline policy to configure CloudWatch input permissions:
{
"Statement": [{
"Action": [
"cloudwatch:List*",
"cloudwatch:Get*",
"autoscaling:Describe*",
"ec2:Describe*",
"s3:List*",
"sqs:List*",
"sns:List*",
"lambda:List*",
"elasticloadbalancing:Describe*"
],
"Effect": "Allow",
"Resource": "*"
}],
"Version": "2012-10-17"
}
For more information and sample policies, see: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/UsingIAM.html
Configure a CloudWatch input using Splunk Web¶
To configure inputs in Splunk Web:
- Click Splunk Add-on for AWS in the navigation bar on Splunk Web home.
- Click Create New Input > CloudWatch.
- Click Advanced to edit Metrics Configuration.
- Use the following table to complete the fields for the new input in the .conf file or in Splunk Web:
Argument in configuration file | Field in Splunk Web | Description |
---|---|---|
aws_account |
AWS Account | The AWS account or EC2 IAM role the Splunk platform uses to access your CloudWatch data. In Splunk Web, select an account from the drop-down list. In inputs.conf, enter the friendly name of one of the AWS accounts that you configured on the Configuration page or the name of the automatically discovered EC2 IAM role. |
aws_iam_role |
Assume Role | The IAM role to assume, see Manage accounts for the Splunk Add-on for AWS. |
aws_region |
AWS Regions | The AWS region name or names. In Splunk Web, select one or more regions from the drop-down list. In inputs.conf, enter one or more valid AWS region IDs, separated by commas. See the AWS service endpoints for more information. |
interval |
Interval | The number of seconds to wait before the Splunk platform runs the command again. Set the polling interval using the interval parameter in the inputs.conf file according to your requirements. The default value is 300 seconds (5 minutes). |
Metrics Configuration arguments | ||
metric_dimensions |
Dimensions | CloudWatch metric dimensions display as a JSON array, with strings as keys and regular expressions as values. Splunk Web automatically populates correctly formatted JSON objects to collect all metric dimensions in the namespace you have selected. If you want, you can customize the JSON object to limit the collection to just the dimensions you want to collect. For example, for the SQS namespace, you can collect only the metrics for queue names that start with “splunk” and end with “_current” by entering [{"QueueName": ["\"splunk.*_current\\\\s\""]}] .You can set multiple dimensions in one data input. If you use a JSON array, the dimension matched by the JSON object in the array is matched. A JSON object has strings as keys and values that are either a regex or an array of regexes. The Splunk Add-on for AWS supports one JSON object per JSON array. For example [{"key1": "regex1"}, {"key2": "regex2"}] is not supported. A dimension is matched to the object if it meets these two conditions:
For example, [{"key":["val.*", ".*lue"]}] matches {"key":"value"} and {"key":["value"]} , but not {"key":"value", "key2":"value2"} .The BucketName dimension does not support wildcards or arrays with length greater than one. When you collect metrics from the AWS S3 namespace, configure separate CloudWatch inputs for each S3 bucket. For example, {"StorageType": ["StandardStorage"], "BucketName": ["my_favorite_bucket"]} . |
metric_names |
Metrics | CloudWatch metric names in JSON array. For example, ["CPUUtilization","DiskReadOps","StatusCheckFailed_System"] . Splunk Web automatically populates correctly formatted JSON objects for all metric names in the namespace you have selected. Edit the JSON object to remove any metrics you do not want to collect. Collecting metrics you do not need results in unnecessary API calls. |
metric_namespace |
Namespace | The metric namespace. For example, AWS/EBS . In Splunk Web, click + Add Namespace and select a namespace from the drop-down list or manually enter it. If you manually enter a custom namespace, you need to type in all your JSON objects manually for the remaining fields. In inputs.conf, enter a valid namespace for the region you specified. You can only specify one metric namespace per input. |
metric_expiration |
Metric Expiration | Duration of time the discovered metrics are cached for, measured in seconds. |
index |
Index | The index name where the Splunk platform puts the CloudWatch data. The default is main. |
period |
Period | The granularity, in seconds, of the returned data points. For metrics with regular resolution, a period can be as short as 60 seconds (1 minute) and must be a multiple of 60. Different AWS metrics can support different minimum granularity based on the sampling period that AWS allows for that metric. For example, CPUUtilization has a sampling period of 5 minutes, whereas Billing Estimated Charge has a sampling period of 4 hours. Do not configure a granularity that is less than the allowed sampling period for the selected metric, or else the reported granularity reflects the sampling granularity but is labeled with your configured granularity, resulting in inconsistent data. The smaller your granularity, the more precise your metrics data becomes. Configuring a small granularity is useful when you want to do precise analysis of metrics and you are not concerned about limiting your data volume. Configure a larger granularity when a broader view is acceptable or you want to limit the amount of data you collect from AWS. |
private_endpoint_enabled |
Use Private Endpoints | Check the checkbox to use private endpoints of AWS Security Token Service (STS) and AWS Simple Cloud Storage (S3) services for authentication and data collection. In inputs.conf, enter 0 or 1 to respectively disable or enable use of private endpoints. |
sts_private_endpoint_url |
Private Endpoint (STS) | Private Endpoint (Interface VPC Endpoint) of your STS service, which can be configured from your AWS console. Supported Formats: <http/https>://vpce-<endpoint_id>-<unique_id>.sts.<region_id>.vpce.amazonaws.com <http/https>://vpce-<endpoint_id>-<unique_id>-<availability_zone>.sts.<region_id>.vpce.amazonaws.com |
monitoring_private_endpoint_url |
Private Endpoint (Monitoring) | Private Endpoint (Interface VPC Endpoint) of your monitoring service, which can be configured from your AWS console. Supported Formats: <http/https>://vpce-<endpoint_id>-<unique_id>.monitoring.<region_id>.vpce.amazonaws.com <http/https>://vpce-<endpoint_id>-<unique_id>-<availability_zone>.monitoring.<region_id>.vpce.amazonaws.com |
elb_private_endpoint_url |
Private Endpoint (ELB) | Private Endpoint (Interface VPC Endpoint) of your Elastic Load Balancer (ELB) service, which can be configured from your AWS console. Supported Formats: <http/https>://vpce-<endpoint_id>-<unique_id>.elasticloadbalancing.<region_id>.vpce.amazonaws.com <http/https>://vpce-<endpoint_id>-<unique_id>-<availability_zone>.elasticloadbalancing.<region_id>.vpce.amazonaws.com |
ec2_private_endpoint_url |
Private Endpoint (EC2) | Private Endpoint (Interface VPC Endpoint) of your Elastic Compute Cloud (EC2) service, which can be configured from your AWS console. Supported Formats: <http/https>://vpce-<endpoint_id>-<unique_id>.ec2.<region_id>.vpce.amazonaws.com <http/https>://vpce-<endpoint_id>-<unique_id>-<availability_zone>.ec2.<region_id>.vpce.amazonaws.com |
autoscaling_private_endpoint_url |
Private Endpoint (Autoscaling) | Private Endpoint (Interface VPC Endpoint) of your Autoscaling service, which can be configured from your AWS console. Supported Formats: <http/https>://vpce-<endpoint_id>-<unique_id>.autoscaling.<region_id>.vpce.amazonaws.com <http/https>://vpce-<endpoint_id>-<unique_id>-<availability_zone>.autoscaling.<region_id>.vpce.amazonaws.com |
lambda_private_endpoint_url |
Private Endpoint (Lambda) | Private Endpoint (Interface VPC Endpoint) of your Lambda service, which can be configured from your AWS console. Supported Formats: <http/https>://vpce-<endpoint_id>-<unique_id>.lambda.<region_id>.vpce.amazonaws.com <http/https>://vpce-<endpoint_id>-<unique_id>-<availability_zone>.lambda.<region_id>.vpce.amazonaws.com |
s3_private_endpoint_url |
Private Endpoint (S3) | Private Endpoint (Interface VPC Endpoint) of your S3 service, which can be configured from your AWS console. Supported Formats: <http/https>://bucket.vpce-<endpoint_id>-<unique_id>.s3.<region_id>.vpce.amazonaws.com <http/https>://bucket.vpce-<endpoint_id>-<unique_id>-<availability_zone>.s3.<region_id>.vpce.amazonaws.com |
query_window_size |
Query Window Size | Window of time used to determine how far back in time to go in order to retrieve data points, measured in number of data points. |
statistics |
Metric statistics | The metric statistics you want to request. Choose from Average, Sum, SampleCount, Maximum, Minimum . In inputs.conf, this list must be JSON encoded. For example: ["Average","Sum","SampleCount","Maximum","Minimum"] . |
sourcetype |
Source type | A source type for the events. Enter a value if you want to override the default of aws:cloudwatch. Event extraction relies on the default value of source type. If you change the default value, you must update props.conf as well. |
Configure a CloudWatch input using configuration files¶
To configure inputs manually in inputs.conf, create a stanza using the
following template and add it to
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/inputs.conf
. If the file or
path does not exist, create it.
[aws_cloudwatch://<name>]
aws_account = <value>
aws_iam_role=<value>
aws_region = <value>
metric_namespace = <value>
metric_names = <value>
metric_dimensions = <value>
private_endpoint_enabled = <value>
sts_private_endpoint_url = <value>
s3_private_endpoint_url = <value>
autoscaling_private_endpoint_url = <value>
ec2_private_endpoint_url = <value>
elb_private_endpoint_url = <value>
monitoring_private_endpoint_url = <value>
lambda_private_endpoint_url = <value>
statistics = <value>
period = <value>
sourcetype = <value>
index = <value>
metric_expiration = <value>
query_window_size = <value>
Some of these settings have default values that can be found in
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/default/inputs.conf
:
[aws_cloudwatch]
start_by_shell = false
sourcetype = aws:cloudwatch
use_metric_format = false
metric_expiration = 3600
query_window_size = 24
interval = 300
python.version = python3
The previous values correspond to the default values in Splunk Web as
well as some internal values that are not exposed in Splunk Web for
configuration. If you choose to copy this stanza to /local
and use it
as a starting point to configure your inputs.conf manually, change the
stanza title from aws_cloudwatch
to aws_cloudwatch://<name>
.
If you want to change the interval, copy the [aws_cloudwatch] stanza to the local/inputs.conf file, then set the interval value that you want. This overrides the default value set in default/inputs.conf.
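To illustrate how the JSON-encoded fields fit together, here is a sketch of a stanza that collects EC2 CPU and disk metrics. The input name, account name, and dimension values are placeholders chosen for this example:
[aws_cloudwatch://ec2_metrics_us_west_2]
aws_account = prod-account
aws_region = us-west-2
metric_namespace = AWS/EC2
# Metric names, dimensions, and statistics are JSON encoded in inputs.conf
metric_names = ["CPUUtilization","DiskReadOps"]
metric_dimensions = [{"InstanceId": [".*"]}]
statistics = ["Average","Maximum","Minimum"]
period = 300
sourcetype = aws:cloudwatch
index = aws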
Send CloudWatch events to a metrics index¶
Configure the Splunk Add-on for AWS to collect CloudWatch events and send them to a metrics index.
Prerequisites¶
- Splunk Enterprise version 7.2 and higher.
- An existing metrics index. See Get started with metrics in the Splunk Enterprise Metrics manual to learn more about creating a metrics index.
- In Splunk Web, click Splunk Add-on for AWS in the left navigation bar on Splunk Web home.
- Click Create New Input > CloudWatch.
- In the AWS Input Configuration section, populate the Name, AWS Account, Assume Role, and AWS Regions fields, using the previous table as a reference.
- Navigate to the Splunk-related Configuration section.
- In the Source Type field, type
aws:cloudwatch:metric
. - Click on the Index dropdown menu, and type the name of your metrics index.
- Click Save.
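If you prefer to manage this input in inputs.conf rather than Splunk Web, a minimal sketch of an equivalent stanza follows. The input name, account name, and the aws_metrics index name are assumptions for illustration:
[aws_cloudwatch://ec2_metrics_to_metrics_index]
aws_account = prod-account
aws_region = us-east-1
metric_namespace = AWS/EC2
metric_names = ["CPUUtilization"]
statistics = ["Average"]
period = 300
# Route the collected data to an existing metrics index
sourcetype = aws:cloudwatch:metric
index = aws_metrics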
Configure CloudWatch Log inputs for the Splunk Add-on for AWS¶
Complete the steps to configure CloudWatch Log inputs for the Splunk Add-on for Amazon Web Services (AWS):
- You must manage accounts for the add-on as a prerequisite. See Manage accounts for the Splunk Add-on for AWS.
- Configure AWS services for the CloudWatch Log input.
- Configure AWS permissions for the CloudWatch Log input.
- (Optional) Configure VPC Interface Endpoints for STS and logs services from your AWS Console if you want to use private endpoints for data collection and authentication. For more information, see the Interface VPC endpoints (AWS PrivateLink) topic in the Amazon Virtual Private Cloud documentation.
- Configure CloudWatch Log inputs either through Splunk Web or configuration files.
Due to rate limitations, don’t use pull-based (API) input configurations
to collect CloudWatch Log data which has the source type
aws:cloudwatchlogs:*
. Instead, use push-based (Amazon Kinesis
Firehose) input configurations to collect CloudWatch Log and VPC Flow
Logs. The push-based (Amazon Kinesis Firehose) input configurations for
the Splunk Add-on for AWS include index-time logic to perform the
correct knowledge extraction for these events through the Kinesis input
as well.
Configure AWS permissions for the CloudWatch Log input¶
Required permissions for Logs:
DescribeLogGroups
DescribeLogStreams
GetLogEvents
s3:GetBucketLocation
See the following sample inline policy to configure CloudWatch Log input permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:GetLogEvents",
"s3:GetBucketLocation"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
You must also ensure that your role has a trust relationship that allows the flow logs service to assume the role. While viewing the IAM role, choose Edit Trust Relationship and replace that policy with this one:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "vpc-flow-logs.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
Configure a CloudWatch Logs input using Splunk Web¶
To configure inputs using Splunk Web, click Splunk Add-on for AWS in the navigation bar on Splunk Web home, then choose one of the following menu paths depending on the data type you want to collect:
- Create New Input > VPC Flow Logs > CloudWatch Logs
- Create New Input > Custom Data Type > CloudWatch Logs
Fill out the fields as described in the table:
Argument in configuration file | Field in Splunk Web | Description |
---|---|---|
account | AWS Account | The AWS account or EC2 IAM role the Splunk platform uses to access your CloudWatch Logs data. In Splunk Web, select an account from the drop-down list. |
region | AWS Region | The AWS region that contains the data. In the configuration file, enter the region ID. |
private_endpoint_enabled | Use Private Endpoints | Check the checkbox to use private endpoints of AWS Security Token Service (STS) and AWS Simple Cloud Storage (S3) services for authentication and data collection. In the configuration file, enter 0 or 1 to respectively disable or enable use of private endpoints. |
logs_private_endpoint_url | Private Endpoint (Logs) | Private Endpoint (Interface VPC Endpoint) of your logs service, which can be configured from your AWS console. |
sts_private_endpoint_url | Private Endpoint (STS) | Private Endpoint (Interface VPC Endpoint) of your STS service, which can be configured from your AWS console. |
groups | Log group | A comma-separated list of log group names. |
only_after | Only After | GMT time string in '%Y-%m-%dT%H:%M:%S' format. If set, only events after this time are queried and indexed. Defaults to 1970-01-01T00:00:00. |
stream_matcher | Stream Matching Regex | REGEX to strictly match stream names. |
interval | Interval | The number of seconds to wait before the Splunk platform runs the command again. The default is 600 seconds. |
metric_index_flag | Use Metric Index? | Whether to use a metric index or an event index. The default value is No (use event index). This field is only visible when creating VPC Flow Logs > CloudWatch Logs inputs. |
sourcetype | Source type | A source type for the events. Use aws:cloudwatchlogs:vpcflow if you are collecting VPC Flow Logs, or aws:cloudwatchlogs if you are collecting any other types of CloudWatch Logs data. |
index | Index | The index name where the Splunk platform puts the CloudWatch Logs data. The default is main. |
query_window_size | Query Window Size (minutes) | Specify the interval of data to be collected in each request, in minutes. Default=10, Min=1, Max=43200 (30 days). For example, if the calculated start date is 2024-01-01T00:00:00 (midnight on January 1, 2024) and the query window size is 60 minutes, then the end date for the request will be 2024-01-01T01:00:00 (one hour after midnight). The time window continues sliding by 60 minutes until no more recent logs are available. |
Configure a CloudWatch Logs input using configuration files¶
To configure the input using configuration files, create
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/aws_cloudwatch_logs_tasks.conf
using the following template:
[<name>]
account = <value>
groups = <value>
index = <value>
interval = <value>
only_after = <value>
region = <value>
private_endpoint_enabled = <value>
logs_private_endpoint_url = <value>
sts_private_endpoint_url = <value>
sourcetype = <value>
stream_matcher = <value>
metric_index_flag = <value>
query_window_size = <value>
Here is an example stanza that collects VPC Flow Log data from two log groups:
[splunkapp2:us-west-2]
account = splunkapp2
groups = SomeName/DefaultLogGroup, SomeOtherName/SomeOtherLogGroup
index = default
interval = 600
only_after = 1970-01-01T00:00:00
region = us-west-2
sourcetype = aws:cloudwatchlogs:vpcflow
stream_matcher = eni.*
metric_index_flag = 0
query_window_size = 10
Configure Description inputs for the Splunk Add-on for AWS¶
The Description input has been deprecated starting in version 6.2.0 of the Splunk Add-on for AWS. The Metadata input has been added as a replacement. To continue data collection for Description, move your workloads to the Metadata input.
For more information, see the Configure the Metadata input for collecting Description data topic in this manual.
Configure Incremental S3 inputs for the Splunk Add-on for AWS¶
Complete the steps to configure Incremental S3 inputs for the Splunk Add-on for Amazon Web Services (AWS):
- You must manage accounts for the add-on as a prerequisite. See Manage accounts for the Splunk Add-on for AWS.
- Configure AWS services for the Incremental S3 input.
- Configure AWS permissions for the Incremental S3 input.
- (Optional) Configure VPC Interface Endpoints for STS and S3 services from your AWS Console if you want to use private endpoints for data collection and authentication. For more information, see the Interface VPC endpoints (AWS PrivateLink) topic in the Amazon Virtual Private Cloud documentation.
- Configure Incremental S3 inputs either through Splunk Web or configuration files.
From version 4.3.0 and higher, the Splunk Add-on for AWS provides the Simple Queue Service (SQS)-based S3 input, which is a more scalable and higher-performing alternative to the generic S3 and incremental S3 input types for collecting various types of log files from S3 buckets. For new inputs for collecting a variety of predefined and custom data types, consider using the SQS-based S3 input instead.
The incremental S3 input only lists and retrieves objects that have not yet been ingested from a bucket, by comparing the datetime information included in filenames against the checkpoint record. This significantly improves ingestion performance.
Configure AWS services for the Incremental S3 input¶
To collect access logs, configure logging in the AWS console to collect the logs in a dedicated S3 bucket. See the AWS documentation for more information on how to configure access logs:
- For S3 access logs, see http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html.
- For ELB access logs, see http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-access-logs.html.
- For CloudFront access logs, see http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html.
See http://docs.aws.amazon.com/gettingstarted/latest/swh/getting-started-create-bucket.html for more information about how to configure S3 buckets and objects.
Configure S3 permissions¶
Required permissions for S3 buckets and objects:
ListBucket
GetObject
ListAllMyBuckets
GetBucketLocation
Required permissions for KMS:
Decrypt
In the Resource section of the policy, specify the Amazon Resource Names (ARNs) of the S3 buckets from which you want to collect S3 Access Logs, CloudFront Access Logs, ELB Access Logs, or generic S3 log data.
See the following sample inline policy to configure S3 input permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObject",
"s3:ListAllMyBuckets",
"s3:GetBucketLocation",
"kms:Decrypt"
],
"Resource": "*"
}
]
}
For more information and sample policies, see http://docs.aws.amazon.com/AmazonS3/latest/dev/using-iam-policies.html.
Configure an Incremental S3 input using Splunk Web¶
To configure inputs in Splunk Web, click Splunk Add-on for AWS in the navigation bar on Splunk Web home, then choose one of the following menu paths depending on which data type you want to collect:
- Create New Input > CloudTrail > Incremental S3
- Create New Input > CloudFront Access Log > Incremental S3
- Create New Input > ELB Access Logs > Incremental S3
- Create New Input > S3 Access Logs > Incremental S3
Make sure you choose the right menu path corresponding to the data type you want to collect. The system automatically sets the appropriate source type and may display slightly different field settings in the subsequent configuration page based on the menu path.
Use the following table to complete the fields for the new input in the .conf file or in Splunk Web:
Argument in configuration file |
Field in Splunk Web |
Description |
---|---|---|
|
AWS Account |
The AWS account or EC2 IAM role the Splunk platform uses to
access the keys in your S3 buckets. In Splunk Web, select an account
from the drop-down list. In inputs.conf, enter the friendly name of one
of the AWS accounts that you configured on the Configuration page or the
name of the automatically discovered EC2 IAM role. |
|
Assume Role |
The IAM role to assume, see Manage accounts for the Splunk Add-on for AWS. |
|
AWS Region (Optional) |
The AWS region that contains your bucket. In inputs.conf, enter
the region ID. |
|
S3 Bucket |
The AWS bucket name. |
|
Log File Prefix |
Configure the prefix of the log file, which along with other path
elements, forms the URL under which the Splunk Add-on for AWS searches
the log files.
Under one AWS account, to ingest logs in different prefixed locations in the bucket, you need to configure multiple AWS data inputs, one for each prefix name. Alternatively, you can configure one data input but use different AWS accounts to ingest logs in different prefixed locations in the bucket. |
|
Log Type |
The type of logs to ingest. Available log types are
|
|
Log Start Date |
The start date of the log. |
|
Distribution ID |
CloudFront distribution ID. This field is displayed only when you access the input configuration page through the Create New Input > CloudFront Access Log > Incremental S3 menu path. |
|
Log Path Format |
CloudTrail Log Path Format. This field is displayed when you access the input configuration page by navigating to Create New Input, then CloudTrail, then Incremental S3. |
|
Source Type |
Source type for the events. This value is automatically set for the type of logs you want to collect based on the menu path you chose to access this configuration page. |
|
Index |
The index name where the Splunk platform puts the S3 data. The default is main. |
|
Interval |
The number of seconds to wait before splunkd checks the health of the modular input so that it can trigger a restart if the input crashes. The default is 30 seconds. |
|
Use Private Endpoints |
Check the checkbox to use private endpoints of AWS Security Token Service (STS) and Amazon Simple Storage Service (S3) for authentication and data collection. In inputs.conf, enter |
|
Private Endpoint (S3) |
Private Endpoint (Interface VPC Endpoint) of your S3 service,
which can be configured from your AWS console. |
|
Private Endpoint (STS) |
Private Endpoint (Interface VPC Endpoint) of your STS service,
which can be configured from your AWS console. |
Configure an Incremental S3 input using a configuration file¶
When you configure inputs manually in inputs.conf, create a stanza using
the following template and add it to
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/inputs.conf
. If the file or
path does not exist, create it.
[splunk_ta_aws_logs://<name>]
log_type =
aws_account =
aws_s3_region = <value>
host_name =
bucket_name =
bucket_region =
log_file_prefix =
log_start_date =
log_name_format =
log_path_format =
aws_iam_role = AWS IAM role to be assumed.
max_retries = @integer:[-1, 1000]. default is -1. -1 means retry until success.
max_fails = @integer: [0, 10000]. default is 10000. Stop discovering new keys if the number of failed files exceeded the max_fails.
max_number_of_process = @integer:[1, 64]. default is 2.
max_number_of_thread = @integer:[1, 64]. default is 4.
private_endpoint_enabled = <value>
s3_private_endpoint_url = <value>
sts_private_endpoint_url = <value>
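For reference, here is a hypothetical example stanza for an incremental S3 input that collects CloudTrail logs. The stanza name, account name, bucket name, and prefix are placeholders, and the log_type value shown is illustrative; use the values that match your environment and the data type you selected:
[splunk_ta_aws_logs://cloudtrail_incremental_example]
aws_account = my_aws_account
log_type = cloudtrail
bucket_name = example-cloudtrail-bucket
bucket_region = us-east-1
log_file_prefix = AWSLogs/
max_retries = -1
max_fails = 10000
max_number_of_process = 2
max_number_of_thread = 4
private_endpoint_enabled = 0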
Configure Inspector inputs for the Splunk Add-on for AWS¶
Complete the steps to configure Inspector inputs for the Splunk Add-on for Amazon Web Services (AWS):
- You must manage accounts for the add-on as a prerequisite. See Manage accounts for the Splunk Add-on for AWS.
- Configure AWS services for the Inspector input.
- Configure AWS permissions for the Inspector input.
- Configure Inspector inputs either through Splunk Web or configuration files.
Configure Amazon Inspector permissions¶
You need these required permissions for Inspector:
Describe*
List*
See the following sample inline policy to configure Inspector input permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"inspector:Describe*",
"inspector:List*"
],
"Resource": "*"
}
]
}
For more information, see http://docs.aws.amazon.com/IAM/latest/UserGuide/list_inspector.html.
Configure an Inspector input using Splunk Web¶
To configure inputs using Splunk Web:
- Click Splunk Add-on for AWS in the navigation bar on Splunk Web home.
- Click Create New Input > Inspector.
- Use the following table to complete the fields for the new input in Splunk Web or in the .conf file:
Argument in configuration file | Field in Splunk Web | Description |
---|---|---|
account |
AWS Account | The AWS account or EC2 IAM role the Splunk platform uses to access your Inspector findings. In Splunk Web, select an account from the drop-down list. In aws_inspector_tasks.conf, enter the friendly name of one of the AWS accounts that you configured on the Configuration page or the name of the automatically discovered EC2 IAM role. |
regions |
AWS Region | The AWS region that contains the data. In aws_inspector_tasks.conf, enter region IDs in a comma-separated list. |
sourcetype |
Source type | A source type for the events. Enter a value only if you want to override the default of aws:inspector . Event extraction relies on the default value of source type. If you change the default value, you must update props.conf as well. |
index |
Index | The index name where the Splunk platform puts the Inspector findings. The default is main. |
polling_interval |
Polling interval | The number of seconds to wait before the Splunk platform runs the command again. The default is 300 seconds. |
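If you prefer to configure the Inspector input in a configuration file, the following is a hypothetical example stanza for $SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/aws_inspector_tasks.conf that uses the arguments from the table above. The stanza name and account name are placeholders:
[inspector_us_east_example]
account = my_aws_account
regions = us-east-1,us-west-2
sourcetype = aws:inspector
index = main
polling_interval = 300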
Configure Kinesis inputs for the Splunk Add-on for AWS¶
Complete the steps to configure Kinesis inputs for the Splunk Add-on for Amazon Web Services (AWS):
- You must manage accounts for the add-on as a prerequisite. See Manage accounts for the Splunk Add-on for AWS.
- Configure AWS services for the Kinesis input.
- Configure AWS permissions for the Kinesis input.
- (Optional) Configure VPC Interface Endpoints for STS and Kinesis services from your AWS Console if you want to use private endpoints for data collection and authentication. For more information, see the Interface VPC endpoints (AWS PrivateLink) topic in the Amazon Virtual Private Cloud documentation.
- Configure Kinesis inputs either through Splunk Web or configuration files.
Kinesis is the recommended input type for collecting VPC Flow Logs. This input type also supports the collection of custom data types through Kinesis streams.
This data source is available only in a subset of AWS regions. For a full list of supported regions, see https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/.
The Kinesis data input only supports gzip compression or plaintext data. It cannot ingest data with other encodings, nor can it ingest data with a mix of gzip and plaintext in the same input. Create separate Kinesis inputs for gzip data and plaintext data.
See the Performance for the Kinesis input in the Splunk Add-on for AWS section of this page for reference data to enhance the performance of your own Kinesis data collection task.
Configure AWS permissions for the Kinesis input¶
Required permissions for Amazon Kinesis:
Get*
DescribeStream
ListStreams
See the following sample inline policy to configure Kinesis input permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"kinesis:Get*",
"kinesis:DescribeStream",
"kinesis:ListStreams"
],
"Resource": "*"
}
]
}
Configure a Kinesis input using Splunk Web¶
To configure inputs in Splunk Web:
- Click Splunk Add-on for AWS in the navigation bar on Splunk Web home.
- Choose one of the following menu paths depending on which data type you want to collect:
- Create New Input > VPC Flow Logs > Kinesis
- Create New Input > Others > Kinesis
Use the following table to complete the fields for the new input in the .conf file or in Splunk Web:
Argument in configuration file |
Field in Splunk Web |
Description |
---|---|---|
|
AWS Account |
The AWS account or EC2 IAM role the Splunk platform uses to access your Kinesis data. In Splunk Web, select an account from the drop-down list. In aws_kinesis_tasks.conf, enter the friendly name of one of the AWS accounts that you configured on the Configuration page or the name of the automatically discovered EC2 IAM role. |
|
Assume Role |
The IAM role to assume, see Manage accounts for the Splunk Add-on for AWS. |
|
AWS Region |
The AWS region that contains the Kinesis streams. In aws_kinesis_tasks.conf, enter the region ID. See https://docs.aws.amazon.com/general/latest/gr/rande.html#d0e371. |
|
Use Private Endpoints |
Check the checkbox to use private endpoints of AWS Security Token Service (STS) and Amazon Kinesis services for authentication and data collection. In inputs.conf, enter |
|
Private Endpoint (Kinesis) |
Private Endpoint (Interface VPC Endpoint) of your Kinesis
service, which can be configured from your AWS console. |
|
Private Endpoint (STS) |
Private Endpoint (Interface VPC Endpoint) of your STS service,
which can be configured from your AWS console. |
|
Stream Names |
The Kinesis stream name |
|
Encoding |
The encoding of the stream data. Set to |
|
Initial Stream Position |
LATEST or TRIM_HORIZON. LATEST starts data collection from the point the input is enabled. TRIM_HORIZON starts collecting with the oldest data record. |
|
Record Format |
CloudWatchLogs or none. If you choose CloudWatchLogs, this add-on parses the data in CloudWatchLogs format. |
|
Use Metric Index? |
Whether to use metric index or event index. The default value is No (use event index). This field is only visible when creating VPC Flow Logs -> Kinesis inputs. |
|
Source type |
A source type for the events.
aws:kinesis if you are
collecting any other Kinesis data. |
|
Index |
The index name where the Splunk platform puts the Kinesis data. The default is main. |
Configure a Kinesis input using configuration files¶
To configure the input using configuration files, create
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/aws_kinesis_tasks.conf
using
the following template:
[<name>]
account = <value>
aws_iam_role=<value>
region = <value>
private_endpoint_enabled = <value>
kinesis_private_endpoint_url = <value>
sts_private_endpoint_url = <value>
stream_names = <value>
encoding = <value>
init_stream_position = <value>
format = <value>
sourcetype = <value>
index = <value>
metric_index_flag = <value>
Here is an example stanza that collects Kinesis data for all streams available in the region:
[splunkapp2:us-east-1]
account = splunkapp2
region = us-east-1
encoding =
init_stream_position = LATEST
index = aws
format = CloudWatchLogs
sourcetype = aws:kinesis
metric_index_flag = 0
Performance for the Kinesis input in the Splunk Add-on for AWS¶
This page provides the reference information about the performance testing of the Kinesis input in Splunk Add-on for AWS. The testing was performed on version 4.0.0, when the Kinesis input was first introduced. You can use this information to enhance the performance of your own Kinesis data collection tasks.
Many factors impact performance results, including file size, file compression, event size, deployment architecture, and hardware. These results represent reference information and do not represent performance in all environments.
Summary¶
While results in different environments will vary, the performance testing of the Kinesis input showed the following:
- Each Kinesis input can handle up to 6 MB/s of data, with a daily ingestion volume of 500 GB.
- More shards can slightly improve the performance. Three shards are recommended for large streams.
Testing architecture¶
Splunk tested the performance of the Kinesis input using a single-instance Splunk Enterprise 6.4.0 on an m4.4xlarge AWS EC2 instance to ensure CPU, memory, storage, and network did not introduce any bottlenecks. See the following instance specs:
Instance type | M4 Quadruple Extra Large (m4.4xlarge) |
Memory | 64 GB |
ECU | 53.5 |
Cores | 16 |
Storage | 0 GB (EBS only) |
Architecture | 64-bit |
Network performance | High |
EBS Optimized: Max Bandwidth | 250 MB/s |
Test scenario¶
Splunk tested the following parameters to target the use case of high-volume VPC flow logs ingested through a Kinesis stream:
- Shard numbers: 3, 5, and 10 shards
- Event size: 120 bytes per event
- Number of events: 20,000,000
- Compression: gzip
- Initial stream position: TRIM_HORIZON
AWS reports that each shard is limited to 5 read transactions per second, up to a maximum read rate of 2 MB per second. Thus, with 10 shards, the theoretical upper limit is 20 MB per second.
Test results¶
Splunk observed a data ingestion rate of 6 million events per minute at peak, which is 100,000 events per second. Because each event is 120 bytes, this indicates a maximum throughput of approximately 12 MB/s.
Splunk observed an average throughput of 6 MB/s for a single Kinesis modular input, or a daily ingestion throughput of approximately 500 GB.
After reducing the shard number from 10 shards to 3 shards, Splunk observed a throughput decrease of approximately 10%.
During testing, Splunk observed the following resource usage on the instance:
- Normalized CPU usage of approximately 30%
- Python memory usage of approximately 700 MB
The indexer is the largest consumer of CPU, and the modular input is the largest consumer of memory.
AWS throws a ProvisionedThroughputExceededException if a call returns 10 MB of data and subsequent calls are made within the next 5 seconds. While testing with three shards, Splunk observed this error only once every 1 to 5 minutes.
Configure Generic S3 inputs for the Splunk Add-on for AWS¶
Versions 6.2.0 and higher of the Splunk Add-on for AWS include a UI warning message when you configure a new Generic S3 input or edit or clone an existing input. A warning message is also logged while the data input is enabled.
Complete the steps to configure Generic S3 inputs for the Splunk Add-on for Amazon Web Services (AWS):
- You must manage accounts for the add-on as a prerequisite. See Manage accounts for the Splunk Add-on for AWS.
- Configure AWS services for the Generic S3 input.
- Configure AWS permissions for the Generic S3 input.
- (Optional) Configure VPC Interface Endpoints for STS and S3 services from your AWS Console if you want to use private endpoints for data collection and authentication. For more information, see the Interface VPC endpoints (AWS PrivateLink) topic in the Amazon Virtual Private Cloud documentation.
- Configure Generic S3 inputs either through Splunk Web or configuration files.
Configuration prerequisites¶
The Generic S3 input lists all the objects in the bucket and examines each file’s modified date every time it runs to pull uncollected data from an S3 bucket. When the number of objects in a bucket is large, this can be a very time-consuming process with low throughput.
Before you begin configuring your Generic S3 inputs, be aware of the following expected behaviors:
- You cannot edit the initial scan time parameter of an S3 input after you create it. If you need to adjust the start time of an S3 input, delete it and recreate it.
- The S3 data input is not intended to read frequently modified files. If a file is modified after it has been indexed, the Splunk platform indexes the file again, resulting in duplicated data. Use key, blocklist, and allowlist options to instruct the add-on to index only those files that you know will not be modified later.
- The S3 data input processes compressed files according to their
suffixes. Use these suffixes only if the file is in the
corresponding format, or data processing errors occur. The data
input supports the following compression types:
- Single or multiple files, with or without folders, in ZIP, GZIP, TAR, or TAR.GZ formats.
Expanding compressed files requires significant operating system resources.
- The Generic S3 custom data types input processes delimited files (.csv, .psv, .tsv, single space-separated) according to the status of the fields parse_csv_with_header and parse_csv_with_delimiter. The data input supports the following compression types:
- Single or multiple files with or without folders in ZIP, GZIP, TAR or TAR.GZ formats.
- CSV parsing within a TAR might fail if binary files (._) exist within the TAR. A tar file created on Mac OS can include binary (._) files packaged with your .csv files. These files are not processed and throw an error.
Delimited Files parsing prerequisites if parse_csv_with_header is enabled:
- When parse_csv_with_header is enabled, all files ingested by the input, whether delimited or not, will be processed as if they were delimited files, with the value of parse_csv_with_delimiter used to split the fields. The first line of each file will be considered the header.
- When parse_csv_with_header is disabled, events will be indexed line by line without any CSV processing.
- The field parse_csv_with_delimiter is a comma by default, but can be edited to any single-character delimiter that is not alphanumeric, a single quote, or a double quote.
- Ensure that each delimited file contains a header. The CSV parsing functionality takes the first non-empty line of the file as the header before parsing.
- Ensure that all files have a carriage return at the end of each file. Otherwise, the last line of the CSV file will not be indexed.
- Ensure there are no duplicate values in the header of the CSV file(s) to avoid missing data.
- Set the polling interval to the default of 1800 seconds or higher to avoid data duplication or incorrect parsing of CSV file data.
- Some illegal sequences of string characters will throw a UnicodeDecodeError. For example: VI,Visa,Cabela�s
Processing outcomes
- End result after CSV parsing will be a JSON object with the header values mapped to the subsequent row values.
- The Splunk platform auto-detects the character set used in your files among these options:
- UTF-8 with or without BOM
- UTF-16LE/BE with BOM
- UTF-32BE/LE with BOM.
If your S3 key uses a different character set, you can specify it in
inputs.conf using the
character_set
parameter, and separate out this collection job into its own input. Mixing non-autodetected character sets in a single input causes errors.
- If your S3 bucket contains a very large number of files, you can configure multiple S3 inputs for a single S3 bucket to improve performance. The Splunk platform dedicates one process for each data input, so provided that your system has sufficient processing power, performance improves with multiple inputs. See “Performance for the Splunk Add-on for AWS data inputs” in Sizing, Performance, and Cost Considerations for the Splunk Add-on for AWS for details.
To prevent indexing duplicate data, verify that multiple inputs do not collect the same S3 folder and file data.
- As a best practice, archive your S3 bucket contents when you no longer need to actively collect them. AWS charges for list key API calls that the input uses to scan your buckets for new and changed files, so you can reduce costs and improve performance by archiving older S3 keys to another bucket or storage type.
- After configuring an S3 input, you may need to wait for a few minutes before new events are ingested and can be searched. The wait time depends on the number of files in the S3 buckets from which you are collecting data. The larger the quantity of files, the longer the delay. Also, more verbose logging levels cause longer data digestion time.
Configure AWS services for the Generic S3 input¶
To collect access logs, configure logging in the AWS console to collect the logs in a dedicated S3 bucket. See the AWS documentation for more information on how to configure access logs:
- For S3 access logs, see http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html.
- For ELB access logs, see http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-access-logs.html.
- For CloudFront access logs, see http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html.
See http://docs.aws.amazon.com/gettingstarted/latest/swh/getting-started-create-bucket.html for more information about how to configure S3 buckets and objects.
Configure S3 permissions¶
Required permissions for S3 buckets and objects:
ListBucket
GetObject
ListAllMyBuckets
GetBucketLocation
Required permissions for KMS:
Decrypt
In the Resource section of the policy, specify the Amazon Resource Names (ARNs) of the S3 buckets from which you want to collect S3 Access Logs, CloudFront Access Logs, ELB Access Logs, or generic S3 log data.
See the following sample inline policy to configure S3 input permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObject",
"s3:ListAllMyBuckets",
"s3:GetBucketLocation",
"kms:Decrypt"
],
"Resource": "*"
}
]
}
For more information and sample policies, see http://docs.aws.amazon.com/AmazonS3/latest/dev/using-iam-policies.html.
Configure a Generic S3 input using Splunk Web¶
To configure inputs in Splunk Web, click Splunk Add-on for AWS in the navigation bar on Splunk Web home, then choose one of the following menu paths depending on which data type you want to collect:
- Create New Input > CloudTrail > Generic S3
- Create New Input > CloudFront Access Log > Generic S3
- Create New Input > ELB Access Logs > Generic S3
- Create New Input > S3 Access Logs > Generic S3
- Create New Input > Custom Data Type > Generic S3
- Create New Input > Custom Data Type > Generic S3 >
aws:s3:csv
sourcetype
Make sure you choose the right menu path corresponding to the data type you want to collect. The system automatically sets the appropriate source type and may display slightly different field settings in the subsequent configuration page based on the menu path.
Use the following table to complete the fields for the new input in the .conf file or in Splunk Web:
Argument in configuration file |
Field in Splunk Web |
Description |
---|---|---|
|
AWS Account |
The AWS account or EC2 IAM role the Splunk platform uses to
access the keys in your S3 buckets. In Splunk Web, select an account
from the drop-down list. In inputs.conf, enter the friendly name of one
of the AWS accounts that you configured on the Configuration page or the
name of the automatically discovered EC2 IAM role. |
|
Assume Role |
The IAM role to assume, see Manage accounts for the Splunk Add-on for AWS. |
|
AWS Region (Optional) |
The AWS region that contains your bucket. In inputs.conf, enter
the region ID. |
|
Use Private Endpoints |
Check the checkbox to use private endpoints of AWS Security Token Service (STS) and Amazon Simple Storage Service (S3) for authentication and data collection. In inputs.conf, enter |
|
Private Endpoint (S3) |
Private Endpoint (Interface VPC Endpoint) of your S3 service,
which can be configured from your AWS console. |
|
Private Endpoint (STS) |
Private Endpoint (Interface VPC Endpoint) of your STS service,
which can be configured from your AWS console. |
|
S3 Bucket |
The AWS bucket name. |
|
Log File Prefix/S3 Key Prefix |
Configure the prefix of the log file. This add-on searches the log files under this prefix. This argument is titled Log File Prefix in incremental S3 field inputs, and is titled S3 Key Prefix in generic S3 field inputs. |
|
Start Date/Time |
The start date of the log. |
|
End Date/Time |
The end date of the log. |
|
Source Type |
A source type for the events. Specify only if you want to
override the default of |
|
Index |
The index name where the Splunk platform puts the S3 data. The default is main. |
|
CloudTrail Event Blacklist |
Only valid if the source type is set to
|
|
Blacklist |
A regular expression to indicate the S3 paths that the Splunk
platform should exclude from scanning. Regex should match the full path.
For example, a regex to exclude |
|
Polling Interval |
The number of seconds to wait before the Splunk platform runs the command again. The default is 1,800 seconds. |
|
Parse all files as CSV |
If selected, all files will be parsed as a delimited file with the first line of each file considered the header. Set this checkbox to disabled for delimited files without a header. For new Generic S3 inputs, this feature is disabled, by default. Supported Formats:
|
|
CSV field delimiter |
Delimiter must be one character. The character cannot be alphanumeric, single quote, or double quote. By default, the delimiter is a comma.
|
Configure a Generic S3 input using configuration files¶
When you configure inputs manually in inputs.conf
, create a stanza
using the following template and add it to
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/inputs.conf
. If the file or
path does not exist, create it.
[aws_s3://<name>]
is_secure = <whether use secure connection to AWS>
host_name = <the host name of the S3 service>
aws_account = <AWS account used to connect to AWS>
aws_s3_region = <value>
private_endpoint_enabled = <value>
s3_private_endpoint_url = <value>
sts_private_endpoint_url = <value>
bucket_name = <S3 bucket name>
polling_interval = <Polling interval for statistics>
key_name = <S3 key prefix>. For example, key_name = cloudtrail. This value does not accept regex.
recursion_depth = <For folder keys, -1 == unconstrained>
initial_scan_datetime = <Splunk relative time>
terminal_scan_datetime = <Only S3 keys which have been modified before this datetime will be considered. Using datetime format: %Y-%m-%dT%H:%M:%S%z (for example, 2011-07-06T21:54:23-0700).>
log_partitions = AWSLogs/<Account ID>/CloudTrail/<Region>
max_items = <Max trackable items.>
max_retries = <Max number of retry attempts to stream incomplete items.>
whitelist = <Override regex for allow list when using a folder key.>. A regular expression to indicate the S3 paths that the Splunk platform should include in scanning. Regex should match the path, starting from folder name. For example, for including contents of a folder named Test, provide regex as Test/.*
blacklist = <Keys to ignore when using a folder key.>. Regex should match the full path. A regular expression to indicate the S3 paths that the Splunk platform should exclude from scanning. Regex should match the path, starting from folder name. For example, for excluding contents of a folder named Test, provide regex as Test/.*
ct_blacklist = <The regular expression used to exclude CloudTrail events. Only valid when the sourcetype is manually set to aws:cloudtrail.>
ct_excluded_events_index = <name of index to put excluded events into. default is empty, which discards the events>
aws_iam_role = <AWS IAM role to be assumed>
Under one AWS account, to ingest logs in different prefixed locations in the bucket, you need to configure multiple AWS data inputs, one for each prefix name. Alternatively, you can configure one data input but use different AWS accounts to ingest logs in different prefixed locations in the bucket.
Some of these settings have default values that can be found in
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/default/inputs.conf
:
[aws_s3]
aws_account =
sourcetype = aws:s3
initial_scan_datetime = default
log_partitions = AWSLogs/<Account ID>/CloudTrail/<Region>
max_items = 100000
max_retries = 3
polling_interval=
interval = 30
recursion_depth = -1
character_set = auto
is_secure = True
host_name = s3.amazonaws.com
ct_blacklist = ^(?:Describe|List|Get)
ct_excluded_events_index =
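For reference, here is a hypothetical example stanza that collects generic S3 data from a single bucket. The stanza name, account name, bucket name, key prefix, blocklist regex, and initial scan time are placeholders; adjust them to match your environment:
[aws_s3://generic_s3_example]
aws_account = my_aws_account
bucket_name = example-log-bucket
key_name = logs/
initial_scan_datetime = -7d@d
recursion_depth = -1
character_set = auto
sourcetype = aws:s3
polling_interval = 1800
blacklist = Test/.*
is_secure = True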
Configure SQS inputs for the Splunk Add-on for AWS¶
Complete the steps to configure SQS inputs for the Splunk Add-on for Amazon Web Services (AWS):
- You must manage accounts for the add-on as a prerequisite. See Manage accounts for the Splunk Add-on for AWS.
- Configure AWS services for the SQS input.
- Configure AWS permissions for the SQS input.
- Configure SQS inputs either through Splunk Web or configuration files.
Configure AWS services for the SQS input¶
If you plan to use the SQS input, you must perform the following steps:
- Set up a dead-letter queue for the SQS queue to be used for the input for storing invalid messages. For information about SQS dead-letter queues and how to configure them, see http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html.
- Configure the SQS visibility timeout to prevent multiple inputs from receiving and processing messages in a queue more than once. Set your SQS visibility timeout to 5 minutes or longer. If the visibility timeout for a message is reached before the message is fully processed by the SQS input, the message reappears in the queue and is retrieved and processed again, resulting in duplicate data. For information about SQS visibility timeout and how to configure it, see http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html.
The SQS input supports only four queues collecting data in parallel. If more than four queues are configured, only four queues start their data collection; the remaining queues must wait until one of the running queues finishes. As a result, if any queue takes a long time to drain all of its messages, data collection for the other queues does not start until that long-running queue finishes, which can delay data collection for the SQS input.
Configure AWS permissions for the SQS input¶
Required permissions for Amazon SQS:
GetQueueAttributes
ListQueues
ReceiveMessage
GetQueueUrl
SendMessage
DeleteMessage
See the following sample inline policy to configure SQS input permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sqs:GetQueueAttributes",
"sqs:ListQueues",
"sqs:ReceiveMessage",
"sqs:GetQueueUrl",
"sqs:SendMessage",
"sqs:DeleteMessage"
],
"Resource": [
"*"
]
}
]
}
Configure an SQS input using Splunk Web¶
To configure inputs using Splunk Web:
- Click Splunk Add-on for AWS in the navigation bar on Splunk Web home.
- Click Create New Input > Custom Data Type > SQS.
- Use the following table to complete the fields for the new input in the .conf file or in Splunk Web:
Argument in configuration file | Field in Splunk Web | Description |
---|---|---|
aws_account |
AWS Account | The AWS account or EC2 IAM role the Splunk platform uses to access your SQS data. In Splunk Web, select an account from the drop-down list. In aws_sqs_tasks.conf, enter the friendly name of one of the AWS accounts that you configured on the Configuration page or the name of the automatically discovered EC2 IAM role. |
aws_region |
AWS Region | The AWS region that contains the log notification SQS queue. In aws_sqs_tasks.conf, enter the region code. For example, the region code for the US East region is us-east-2. See AWS service endpoints. |
sqs_queues |
SQS queues | The name of the queue to which AWS sends new SQS log notifications. In Splunk Web, you can select a queue from the drop-down list, if your account permissions allow you to list queues, or enter the queue name manually. The queue name is the final segment of the full queue URL. For example, if your SQS queue URL is http://sqs.us-east-1.amazonaws.com/123456789012/testQueue , then your SQS queue name is testQueue .You can add multiple queues separated by commas. |
sourcetype |
Source type | A source type for the events. Enter a value only if you want to override the default of aws:sqs . Event extraction relies on the default value of source type. If you change the default value, you must update props.conf as well. |
index |
Index | The index name where the Splunk platform puts the SQS data. The default is main. |
interval |
Interval | The number of seconds to wait before the Splunk platform runs the command again. The default is 30 seconds. |
Configure an SQS input using configuration files¶
To configure the input using configuration files, create
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/aws_sqs_tasks.conf
using the
following template:
[<name>]
aws_account = <value>
aws_region = <value>
sqs_queues = <value>
index = <value>
sourcetype = <value>
interval = <value>
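For reference, here is a hypothetical example stanza. The stanza name, account name, and queue name are placeholders:
[sqs_example]
aws_account = my_aws_account
aws_region = us-east-2
sqs_queues = testQueue
index = main
sourcetype = aws:sqs
interval = 30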
Configure SQS-based S3 inputs for the Splunk Add-on for AWS¶
Complete the steps to configure SQS-based S3 inputs for the Splunk Add-on for Amazon Web Services (AWS):
- You must manage accounts for the add-on as a prerequisite. See Manage accounts for the Splunk Add-on for AWS.
- Configure AWS services for the SQS-based S3 input.
- Configure AWS permissions for the SQS-based S3 input.
- (Optional) Configure VPC Interface Endpoints for STS, SQS, and S3 services from your AWS Console if you want to use private endpoints for data collection and authentication. For more information, see the Interface VPC endpoints (AWS PrivateLink) topic in the Amazon Virtual Private Cloud documentation.
- Configure SQS-based S3 inputs either through Splunk Web or configuration files.
Configuration prerequisites¶
Delimited Files parsing prerequisites if parse_csv_with_header is enabled:
- The SQS-based S3 custom data types input processes Delimited Files (.csv, .psv, .tsv) according to the status of the fields parse_csv_with_header and parse_csv_with_delimiter.
- When parse_csv_with_header is enabled, all files ingested by the input, whether delimited or not, will be processed as if they were delimited files, with the value of parse_csv_with_delimiter used to split the fields. The first line of each file will be considered the header.
- When parse_csv_with_header is disabled, events will be indexed line by line without any CSV processing.
- The field parse_csv_with_delimiter is a comma by default, but can be edited to a different delimiter. This delimiter can be any single character except alphanumeric characters, single quotes, or double quotes.
- This data input supports the following compression types: single delimited files, or delimited files in ZIP, GZIP, TAR, or TAR.GZ formats.
- Some illegal sequences of string characters will throw a UnicodeDecodeError. For example: VI,Visa,Cabela�s
Processing outcomes
- End result after CSV parsing will be a JSON object with the header values mapped to the subsequent row values.
Configure AWS services for the SQS-based S3 input¶
Configure SQS-based S3 inputs to collect events¶
Configure SQS-based S3 inputs to collect the following events:
- CloudFront Access Logs
- Config
- ELB Access logs
- CloudTrail
- S3 Access Logs
- VPC Flow Logs
- Transit Gateway Flow Logs
- Custom data types
AWS service configuration prerequisites¶
Before you configure SQS-based S3 inputs, perform the following tasks:
- Create an SQS Queue to receive notifications and a second SQS Queue to serve as a dead letter queue.
- Create an SNS Topic.
- Configure S3 to send notifications for All object create events to an SNS Topic. This lets S3 notify the add-on that new events were written to the S3 bucket.
- Subscribe the main SQS Queue to the corresponding SNS Topic.
Best practices¶
Keep the following in mind as you configure your inputs:
- The SQS-based S3 input only collects AWS service logs that meet the following criteria:
- Near-real time
- Newly created
- Stored into S3 buckets
- Have event notifications sent to SQS
Events that occurred in the past, or events with no notifications sent through SNS to SQS end up in the Dead Letter Queue (DLQ), and no corresponding event is created by the Splunk Add-on for AWS. To collect historical logs stored into S3 buckets, use the generic S3 input instead. The S3 input lets you set the initial scan time parameter to collect data generated after a specified time in the past.
- To collect the same types of logs from multiple S3 buckets, even across regions, set up one input to collect data from all the buckets. To do this, configure these buckets to send notifications to the same SQS queue from which the SQS-based S3 input polls messages.
- To achieve high throughput data ingestion from an S3 bucket, configure multiple SQS-based S3 inputs for the S3 bucket to scale out data collection.
- After configuring an SQS-based S3 input, you might need to wait for a few minutes before new events are ingested and can be searched. Also, a more verbose logging level causes longer data digestion time. Debug mode is extremely verbose and is not recommended on production systems.
- The SQS-based input allows you to ingest data from S3 buckets by optimizing the API calls made by the add-on and relying on SQS/SNS to collect events upon receipt of notification.
- The SQS-based S3 input is stateless, which means that when multiple inputs are collecting data from the same bucket, if one input goes down, the other inputs continue to collect data and take over the load from the failed input. This lets you enhance fault tolerance by configuring multiple inputs to collect data from the same bucket.
- The SQS-based S3 input supports signature validation. If S3 notifications are set up to send through SNS, AWS will create a signature for every message. The SQS-based S3 input will validate each message with the associated certificate, provided by AWS. For more information, see the Verifying the signatures of Amazon SNS messages topic in the AWS documentation.
- If any messages with a signature are received, all following messages will require valid SNS signatures, no matter your input’s SNS signature setting.
- Set up a Dead Letter Queue for the SQS queue to be used for the input for storing invalid messages. For information about SQS Dead Letter Queues and how to configure them, see the Amazon SQS dead-letter queues topic in the AWS documentation.
- Configure the SQS visibility timeout to prevent multiple inputs from receiving and processing messages in a queue more than once. Set your SQS visibility timeout to 5 minutes or longer. If the visibility timeout for a message is reached before the message is fully processed by the SQS-based S3 input, the message reappears in the queue and is retrieved and processed again, resulting in duplicate data.
For information about SQS visibility timeout and how to configure it, see the Amazon SQS visibility timeout topic in the AWS documentation.
Supported message types for the SQS-based S3 input¶
The following message types are supported by the SQS-based S3 input:
ConfigurationHistoryDeliveryCompleted
ConfigurationSnapshotDeliveryCompleted
Configure AWS permissions for the SQS-based S3 input¶
Configure AWS permissions for SQS access¶
The following permissions are required for SQS access:
GetQueueUrl
ReceiveMessage
SendMessage
DeleteMessage
ChangeMessageVisibility
GetQueueAttributes
ListQueues
Required permissions for S3 buckets and objects:
GetObject (if Bucket Versioning is disabled).
GetObjectVersion (if Bucket Versioning is enabled).
Required permissions for KMS:
Decrypt
See the following sample inline policy to configure input permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sqs:GetQueueUrl",
"sqs:ReceiveMessage",
"sqs:SendMessage",
"sqs:DeleteMessage",
"sqs:ChangeMessageVisibility",
"sqs:GetQueueAttributes",
"sqs:ListQueues",
"s3:GetObject",
"s3:GetObjectVersion",
"kms:Decrypt"
],
"Resource": "*"
}
]
}
For more information and sample policies, see http://docs.aws.amazon.com/AmazonS3/latest/dev/using-iam-policies.html.
Configure SNS policy to receive notifications from S3 buckets¶
See the following sample inline SNS policy to allow your S3 bucket to send notifications to an SNS topic.
{
"Version": "2008-10-17",
"Id": "example-ID",
"Statement": [
{
"Sid": "example-statement-ID",
"Effect": "Allow",
"Principal": {"AWS":"*" },
"Action": ["SNS:Publish"],
"Resource": "<SNS-topic-ARN>",
"Condition": {"ArnLike": { "aws:SourceArn": "arn:aws:s3:*:*:<bucket-name>" }}
}
]
}
For more information and sample policies, see http://docs.aws.amazon.com/AmazonS3/latest/dev/using-iam-policies.html.
Configure AWS services for SNS alerts¶
If you plan to use the SQS-based S3 input, you must enable Amazon S3 bucket events to send notification messages to an SQS queue whenever the events occur. This queue cannot be first-in-first-out. For instructions on setting up S3 bucket event notifications, see the AWS documentation: https://docs.aws.amazon.com/AmazonS3/latest/UG/SettingBucketNotifications.html and http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
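As a rough illustration of the kind of bucket notification this requires, the following JSON is a minimal sketch of an S3 notification configuration (in the format accepted by the S3 PutBucketNotificationConfiguration API) that sends all object-created events to an SQS queue. The configuration ID and queue ARN are placeholders; the AWS documentation linked above remains the authoritative reference:
{
    "QueueConfigurations": [
        {
            "Id": "splunk-addon-object-created-events",
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:testQueue",
            "Events": ["s3:ObjectCreated:*"]
        }
    ]
}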
Configure an SQS-based S3 input using Splunk Web¶
To configure inputs in Splunk Web, click Splunk Add-on for AWS in the navigation bar on Splunk Web home, then choose one of the following menu paths depending on which data type you want to collect:
- Create New Input > CloudTrail > SQS-based S3
- Create New Input > CloudFront Access Log > SQS-based S3
- Create New Input > Config > SQS-based S3
- Create New Input > ELB Access Logs > SQS-based S3
- Create New Input > S3 Access Logs > SQS-based S3
- Create New Input > VPC Flow Logs > SQS-based S3
- Create New Input > Transit Gateway Flow Logs > SQS-based S3
- Create New Input > Custom Data Type > SQS-based S3
- Create New Input > Custom Data Type > SQS-based S3 >
Delimited Files
S3 File Decoder
You must have the admin_all_objects role enabled in order to add new inputs.
Choose the menu path that corresponds to the data type you want to collect. The system automatically sets the source type and displays relevant field settings in the subsequent configuration page.
Use the following table to complete the fields for the new input in the .conf file or in Splunk Web:
Argument in configuration file |
Field in Splunk Web |
Description |
---|---|---|
|
AWS Account |
The AWS account or EC2 IAM role the Splunk platform uses to
access the keys in your S3 buckets. In Splunk Web, select an account
from the drop-down list. In inputs.conf, enter the friendly name of one
of the AWS accounts that you configured on the Configuration page or the
name of the automatically discovered EC2 IAM role. |
|
Assume Role |
The IAM role to assume, see Manage accounts for the Splunk Add-on for AWS. |
|
Force using DLQ (Recommended) |
Check the checkbox to remove the checking of DLQ (Dead Letter
Queue) for ingestion of specific data. In inputs.conf, enter
|
|
AWS Region |
AWS region that the SQS queue is in. |
|
Use Private Endpoints |
Check the checkbox to use private endpoints of AWS Security Token Service (STS), Amazon Simple Queue Service (SQS), and Amazon Simple Storage Service (S3) for authentication and data collection. In inputs.conf, enter |
|
Private Endpoint (SQS) |
Private Endpoint (Interface VPC Endpoint) of your SQS service,
which can be configured from your AWS console. |
|
SNS Signature Validation |
SNS validation of your SQS messages, which can be configured from
your AWS console. If selected, all messages will be validated. If
unselected, then messages will not be validated until receiving a signed
message. Thereafter, all messages will be validated for an SNS
signature. For new SQS-based S3 inputs, this feature is enabled, by
default. |
|
Parse Firehose Error Data |
Parse raw data (all events) or failed Kinesis Firehose stream error data to the Splunk HTTP Event Collector (HEC). Decoding of error data will be done for failed Kinesis Firehose streams. For new SQS-based S3 inputs, this feature is disabled by default. Versions 7.4.0 and higher of this add-on support the collection of data in the default uncompressed text format.
|
|
Private Endpoint (S3) |
Private Endpoint (Interface VPC Endpoint) of your S3 service,
which can be configured from your AWS console. |
|
Private Endpoint (STS) |
Private Endpoint (Interface VPC Endpoint) of your STS service,
which can be configured from your AWS console. |
|
SQS Queue Name |
The SQS queue URL. |
|
SQS Batch Size |
The maximum number of messages to pull from the SQS queue in one batch. Enter an integer between 1 and 10 inclusive. Set a larger value for small files, and a smaller value for large files. The default SQS batch size is 10. If you are dealing with large files and your system memory is limited, set this to a smaller value. |
|
S3 File Decoder |
The decoder to use to parse the corresponding log files. The
decoder is set according to the Data Type you select.
If you select a Custom Data Type, choose one from
|
|
Use Metric Index? |
Whether to use metric index or event index. The default value is No (use event index). This field is only visible when creating VPC Flow Logs -> SQS based S3 inputs. |
|
Source Type |
The source type for the events to collect, automatically filled
in based on the decoder chosen for the input.
This add-on does not support custom sourcetypes for
|
|
Interval |
The length of time in seconds between two data collection runs. The default is 300 seconds. |
|
Index |
The index name where the Splunk platform puts the SQS-based S3 data. The default is main. |
|
Polling Interval |
The number of seconds to wait before the Splunk platform runs the command again. The default is 1,800 seconds. |
|
Parse all files as CSV |
If selected, all files will be parsed as a delimited file with the first line of each file considered the header. Set this checkbox to disabled for delimited files without a header. For new SQS-based S3 inputs, this feature is disabled, by default. Supported Formats:
|
|
CSV field delimiter |
Delimiter must be one character. The character cannot be alphanumeric, single quote, or double quote. By default, the delimiter is a comma.
|
Configure an SQS-based S3 input using configuration files¶
When you configure inputs manually in inputs.conf, create a stanza using
the following template and add it to
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/inputs.conf
. If the file or
path does not exist, create it.
[aws_sqs_based_s3://<stanza_name>]
aws_account = <value>
using_dlq = <value>
private_endpoint_enabled = <value>
sqs_private_endpoint_url = <value>
s3_private_endpoint_url = <value>
sts_private_endpoint_url = <value>
parse_firehose_error_data = <value>
interval = <value>
s3_file_decoder = <value>
sourcetype = <value>
sqs_batch_size = <value>
sqs_queue_region = <value>
sqs_queue_url = <value>
metric_index_flag = <value>
Some of these settings have default values that can be found in
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/default/inputs.conf
:
[aws_sqs_based_s3]
using_dlq = 1
The previous values correspond to the default values in Splunk Web, as
well as some internal values that are not exposed in Splunk Web for
configuration. If you copy this stanza to your
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/local
and use it as a starting
point to configure your inputs.conf manually, change the
[aws_sqs_based_s3]
stanza title from aws_sqs_based_s3
to
aws_sqs_based_s3://<name>
and add the additional parameters that you
need for your deployment.
Valid values for s3_file_decoder are CustomLogs, CloudTrail, ELBAccessLogs, CloudFrontAccessLogs, S3AccessLogs, Config, DelimitedFilesDecoder, and TransitGatewayFlowLogs.
If you want to ingest custom logs other than the natively supported AWS
log types, you must set s3_file_decoder = CustomLogs
. This setting
lets you ingest custom logs into the Splunk platform instance, but it
does not parse the data. To process custom logs into meaningful events,
you need to perform additional configurations in props.conf and
transforms.conf to parse the collected data to meet your specific
requirements.
For more information on these settings, see /README/inputs.conf.spec
under your add-on directory.
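For reference, here is a hypothetical example stanza for an SQS-based S3 input that collects CloudTrail logs. The stanza name, account name, and queue URL are placeholders:
[aws_sqs_based_s3://cloudtrail_sqs_example]
aws_account = my_aws_account
using_dlq = 1
sqs_queue_region = us-east-1
sqs_queue_url = https://sqs.us-east-1.amazonaws.com/123456789012/testQueue
sqs_batch_size = 10
s3_file_decoder = CloudTrail
sourcetype = aws:cloudtrail
interval = 300
private_endpoint_enabled = 0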
Configure an SQS based S3 input for CrowdStrike Falcon Data Replicator (FDR) events using Splunk Web¶
To configure an SQS based S3 input for CrowdStrike Falcon Data Replicator (FDR) events, perform the following steps:
- On the Inputs page, select “Create New Input” > “Custom Data Type” > “SQS-Based S3”.
- Select your AWS account from the dropdown list.
- Uncheck the check box Force Using DLQ (Recommended).
- Select the region in which the SQS Queue is present from the AWS Region dropdown.
- In the SQS Queue Name box, enter the full SQS queue URL. This will create an option for the SQS queue URL in the dropdown menu.
- Select the newly created SQS queue URL option from the SQS Queue Name dropdown menu.
- Use the table in the Configure an SQS-based S3 input using Splunk Web section of this topic to add any additional configuration file arguments.
- Save your changes.
Migrate from the Generic S3 input to the SQS-based S3 input¶
SQS-based S3 is the recommended input type for real-time data collection from S3 buckets because it is scalable and provides better ingestion performance than the other S3 input types.
If you are already using a generic S3 input to collect data, use the following steps to switch to the SQS-based S3 input:
- Perform prerequisite configurations of AWS services:
- Set up an SQS queue with a Dead Letter Queue and proper visibility timeout configured. See Documentation:AddOns:AWS:SQS-basedS3.
- Set up the S3 bucket with the S3 key prefix, if specified, from which you are collecting data to send notifications to the SQS queue. See Configure alerts for the Splunk Add-on for AWS.
- Add an SQS-based S3 input using the SQS queue you just configured. After the setup, make sure the new input is enabled and starts collecting data from the bucket.
Automatically scale data collection with SQS-based S3 inputs¶
With the SQS-based S3 input type, you can take full advantage of the auto-scaling capability of the AWS infrastructure to scale out data collection by configuring multiple inputs to ingest logs from the same S3 bucket without creating duplicate events. This is particularly useful if you are ingesting logs from a very large S3 bucket and hit a bottleneck in your data collection inputs.
- Create an AWS auto scaling group for your heavy forwarder instances where the SQS-based S3 inputs are running. To create an auto-scaling group, you can either specify a launch configuration or create an AMI to provision new EC2 instances that host heavy forwarders, and use a bootstrap script to install the Splunk Add-on for AWS and configure SQS-based S3 inputs. For detailed information about the auto-scaling group and how to create it, see http://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroup.html.
- Set CloudWatch alarms for one of the following Amazon SQS metrics:
- ApproximateNumberOfMessagesVisible: The number of messages available for retrieval from the queue.
- ApproximateAgeOfOldestMessage: The approximate age (in seconds) of the oldest non-deleted message in the queue.
For instructions on setting CloudWatch alarms for Amazon SQS metrics, see http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/SQS_AlarmMetrics.html.
- Use the CloudWatch alarm as a trigger to provision new heavy forwarder instances with SQS-based S3 inputs configured to consume messages from the same SQS queue to improve ingestion performance.
Configure miscellaneous inputs for the Splunk Add-on for AWS¶
You can configure miscellaneous Amazon Web Services (AWS) services if they integrate with an input that the Splunk Add-on for AWS provides. For example, GuardDuty integrates with CloudWatch, so you can index GuardDuty data through the Splunk Add-on for AWS CloudWatch input. Check whether your intended AWS service integrates with CloudWatch, CloudWatch Logs, or Kinesis.
Configure Metadata inputs for the Splunk Add-on for AWS¶
The Description input was deprecated in version 6.2.0 of the Splunk Add-on for AWS. The Metadata input has been added as a replacement. To continue data collection for the Description input, move your workloads to the Metadata input.
Complete the steps to configure Metadata inputs for the Splunk Add-on for Amazon Web Services (AWS):
- You must manage accounts for the add-on as a prerequisite. See Manage accounts for the Splunk Add-on for AWS.
- Configure AWS services for the Metadata input.
- Configure AWS permissions for the Metadata input.
- Configure Metadata inputs either through Splunk Web or configuration files.
Configure Metadata permissions¶
The following listed APIs are only supported in the US East (N. Virginia) (us-east-1) region:
- wafv2_list_available_managed_rule_group_versions_cloudfront
- wafv2_list_logging_configurations_cloudfront
- wafv2_list_ip_sets_cloudfront
Amazon CloudFront¶
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["cloudfront:ListDistributions"],
"Resource": [
"*"
]
}
]
}
Amazon Elastic Compute Cloud (Amazon EC2)¶
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeInstances",
"ec2:DescribeReservedInstances",
"ec2:DescribeSnapshots",
"ec2:DescribeRegions",
"ec2:DescribeKeyPairs",
"ec2:DescribeSecurityGroups",
"ec2:DescribeVolumes",
"ec2:DescribeImages",
"ec2:DescribeAddresses",
"rds:DescribeDBInstances",
"rds:DescribeReservedDBInstances"
],
"Resource": [
"*"
]
}
]
}
Amazon Elastic Kubernetes Service (Amazon EKS)¶
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"eks:ListClusters",
"eks:DescribeCluster",
"eks:ListNodegroups",
"eks:DescribeNodegroup",
"eks:ListAddons",
"eks:DescribeAddon",
"eks:ListFargateProfiles",
"eks:ListIdentityProviderConfigs",
"eks:DescribeIdentityProviderConfig",
"eks:DescribeAddonVersions",
"eks:ListUpdates",
"eks:DescribeUpdate",
"eks:ListTagsForResource",
"tag:GetResources"
],
"Resource": [
"*"
]
}
]
}
Amazon Elastic Load Balancer (ELB)¶
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:DescribeLoadBalancers",
"elasticloadbalancing:DescribeInstanceHealth",
"elasticloadbalancing:DescribeTags",
"elasticloadbalancing:DescribeTargetGroups",
"elasticloadbalancing:DescribeTargetHealth",
"elasticloadbalancing:DescribeListeners"
],
"Resource": [
"*"
]
}
]
}
Amazon EMR (previously called Amazon Elastic MapReduce)¶
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"elasticmapreduce:DescribeCluster",
"elasticmapreduce:DescribeReleaseLabel",
"elasticmapreduce:DescribeStep",
"elasticmapreduce:ListInstances",
"elasticmapreduce:ListInstanceFleets",
"elasticmapreduce:DescribeNotebookExecution",
"elasticmapreduce:DescribeStudio",
"elasticmapreduce:DescribeSecurityConfiguration",
"elasticmapreduce:ListClusters",
"elasticmapreduce:ListStudios",
"elasticmapreduce:ListSecurityConfigurations",
"elasticmapreduce:ListReleaseLabels",
"elasticmapreduce:ListNotebookExecutions",
"elasticmapreduce:ListSteps"
],
"Resource": [
"*"
]
}
]
}
Amazon ElastiCache¶
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"elasticache:DescribeCacheClusters",
"elasticache:DescribeCacheEngineVersions",
"elasticache:DescribeCacheParameterGroups",
"elasticache:DescribeCacheParameters",
"elasticache:DescribeCacheSubnetGroups",
"elasticache:DescribeEngineDefaultParameters",
"elasticache:DescribeEvents",
"elasticache:DescribeGlobalReplicationGroups",
"elasticache:DescribeReplicationGroups",
"elasticache:DescribeReservedCacheNodesOfferings",
"elasticache:DescribeServiceUpdates",
"elasticache:DescribeSnapshots",
"elasticache:DescribeUpdateActions",
"elasticache:DescribeUserGroups",
"elasticache:DescribeUsers",
"elasticache:DescribeReservedCacheNodes",
"elasticache:ListTagsForResource",
"tag:GetResources"
],
"Resource": [
"*"
]
}
]
}
Amazon API Gateway¶
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeVpnGateways",
"ec2:DescribeInternetGateways",
"ec2:DescribeCustomerGateways",
"ec2:DescribeNatGateways",
"ec2:DescribeLocalGateways",
"ec2:DescribeCarrierGateways",
"ec2:DescribeTransitGateways"
],
"Resource": [
"*"
]
}
]
}
Amazon GuardDuty¶
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"guardduty:ListDetectors",
"guardduty:DescribePublishingDestination",
"tag:GetResources",
"guardduty:ListPublishingDestinations"
],
"Resource": [
"*"
]
}
]
}
AWS Identity and Access Management (IAM)¶
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"iam:ListServerCertificates",
"iam:ListRolePolicies",
"iam:ListMFADevices",
"iam:ListSigningCertificates",
"iam:ListSSHPublicKeys",
"iam:GetUser",
"iam:ListUsers",
"iam:GetAccountPasswordPolicy",
"iam:ListAccessKeys",
"iam:GetAccessKeyLastUsed",
"iam:ListPolicies",
"iam:GetPolicyVersion",
"iam:ListUserPolicies",
"iam:ListAttachedUserPolicies",
"iam:ListRoles",
"iam:GetAccountAuthorizationDetails"
],
"Resource": [
"*"
]
}
]
}
Amazon Kinesis Data Firehose¶
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"kinesis:ListStreams",
"kinesis:ListShards",
"kinesis:ListStreams",
"kinesis:ListStreamConsumers",
"kinesis:DescribeStreamConsumer",
"kinesis:DescribeLimits",
"firehose:ListDeliveryStreams",
"firehose:DescribeDeliveryStream",
"kinesis:DescribeStreamSummary",
"tag:GetResources"
],
"Resource": [
"*"
]
}
]
}
AWS Lambda¶
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"lambda:ListFunctions"
],
"Resource": [
"*"
]
}
]
}
AWS Network Firewall¶
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"network-firewall:ListFirewalls",
"network-firewall:DescribeFirewall",
"network-firewall:DescribeLoggingConfiguration",
"network-firewall:ListFirewallPolicies",
"network-firewall:DescribeFirewallPolicy",
"network-firewall:ListRuleGroups",
"network-firewall:DescribeRuleGroup",
"network-firewall:ListTagsForResource",
"network-firewall:DescribeResourcePolicy",
"logs:ListLogDeliveries",
"logs:GetLogDelivery",
"tag:GetResources"
],
"Resource": [
"*"
]
}
]
}
Amazon Route 53¶
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"route53:ListHealthChecks",
"route53:ListHostedZones",
"route53:ListHostedZonesByVPC",
"route53:ListReusableDelegationSets",
"route53:ListQueryLoggingConfigs",
"route53:ListTrafficPolicies",
"route53:ListTrafficPolicyVersions",
"route53:ListTrafficPolicyInstances",
"route53:ListResourceRecordSets",
"route53:ListTagsForResource",
"tag:GetResources",
"ec2:DescribeRegions",
"ec2:DescribeVpcs"
],
"Resource": [
"*"
]
}
]
}
AWS WAF¶
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"waf:ListRules",
"waf:ListRuleGroups",
"waf:ListGeoMatchSets",
"waf:ListByteMatchSets",
"waf:ListActivatedRulesInRuleGroup",
"waf:ListRegexMatchSets",
"waf:ListRegexPatternSets",
"waf:ListIPSets",
"waf:ListRateBasedRules",
"waf:ListLoggingConfigurations",
"waf:ListWebACLs",
"waf:ListSizeConstraintSets",
"waf:ListXssMatchSets",
"waf:ListSqlInjectionMatchSets",
"waf:ListTagsForResource",
"tag:GetResources"
],
"Resource": [
"*"
]
}
]
}
AWS WAFv2¶
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"wafv2:ListAvailableManagedRuleGroupVersions",
"wafv2:ListLoggingConfigurations",
"wafv2:ListIPSets",
"wafv2:ListTagsForResource",
"tag:GetResources",
"wafv2:ListAvailableManagedRuleGroups",
],
"Resource": [
"*"
]
}
]
}
Amazon S3¶
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets",
"s3:GetAccelerateConfiguration",
"s3:GetBucketCORS",
"s3:GetLifecycleConfiguration",
"s3:GetBucketLocation",
"s3:GetBucketLogging",
"s3:GetBucketTagging"
],
"Resource": [
"*"
]
}
]
}
Amazon Virtual Private Cloud (Amazon VPC)¶
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeNetworkAcls",
"ec2:DescribeSubnets",
"ec2:DescribeVpcs"
],
"Resource": [
"*"
]
}
]
}
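If you prefer to maintain a single inline policy for the Metadata input instead of one policy per service, the per-service statements above can be merged into one statement. The following is a minimal sketch that combines only a few of the EC2 and ELB actions listed above; extend the Action list with the actions for each service you plan to collect:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:DescribeRegions",
                "elasticloadbalancing:DescribeLoadBalancers",
                "elasticloadbalancing:DescribeTargetGroups"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}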
Configure a Metadata input using Splunk Web¶
To configure inputs in Splunk Web:
- Click Splunk Add-on for AWS in the navigation bar on Splunk Web home.
- Click Create New Input > Metadata.
- Fill out the fields as described in the following table:
Argument in configuration file | Field in Splunk Web | Description |
---|---|---|
account |
AWS Account | The AWS account or EC2 IAM role the Splunk platform uses to access your Metadata data. In Splunk Web, select an account from the drop-down list. In aws_metadata_tasks.conf , enter the friendly name of one of the AWS accounts that you configured on the Configuration page or the name of the automatically discovered EC2 IAM role. |
regions |
AWS Regions | The AWS regions for which you are collecting Metadata data. In Splunk Web, select one or more regions from the drop-down list. In aws_metadata_tasks.conf, enter one or more valid AWS region IDs, separated by commas. See AWS service endpoints. |
apis |
APIs/Interval (seconds) | APIs you want to collect data from, and intervals for each API, in the format of <api name>/<api interval in seconds>,<api name>/<api interval in seconds> . The default value in Splunk Web is ec2_volumes/3600,ec2_instances/3600,ec2_reserved_instances/3600,ebs_snapshots/3600,elastic_load_balancers/3600,vpcs/3600,vpc_network_acls/3600,cloudfront_distributions/3600,vpc_subnets/3600,rds_instances/3600,ec2_key_pairs/3600,ec2_security_groups/3600 . This value collects from all of the APIs supported in this release. Set your intervals to 3,600 seconds (1 hour) or longer to avoid rate limiting errors. |
aws_iam_role |
Assume Role | The IAM role to assume. See Manage accounts for the Splunk Add-on for AWS. |
sourcetype |
Source type | A source type for the events. Enter aws:metadata . |
index |
Index | The index name where the Splunk platform puts the Metadata data. The default is main. |
retry_max_attempts |
Retry Max Attempts | Specify the maximum number of retry attempts, if there is an error in the response of a request. |
Configure a Metadata input using configuration files¶
To configure a Metadata input using configuration files, create
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/aws_metadata_tasks.conf
using the following template:
[<name>]
account = <value>
regions = <values split by commas>
apis = <value>
aws_iam_role = <value>
sourcetype = <value>
index = <value>
retry_max_attempts = <value>
Here is an example stanza that collects metadata from all supported APIs:
[desc:splunkapp2]
account = splunkapp2
regions = us-west-2
apis = ec2_volumes/3600, ec2_instances/3600, ec2_reserved_instances/3600, ebs_snapshots/3600, classic_load_balancers/3600, application_load_balancers/3600, vpcs/3600, vpc_network_acls/3600, cloudfront_distributions/3600, vpc_subnets/3600, rds_instances/3600, ec2_key_pairs/3600, ec2_security_groups/3600, ec2_images/3600, ec2_addresses/3600, lambda_functions/3600, s3_buckets/3600, iam_users/3600, iam_list_policies/3600
aws_iam_role = iam_users
sourcetype = aws:metadata
index = default
retry_max_attempts = 5
Configure Inspector v2 inputs for the Splunk Add-on for AWS¶
Complete the steps to configure Inspector v2 inputs for the Splunk Add-on for Amazon Web Services (AWS):
- You must manage accounts for the add-on as a prerequisite. See the Manage accounts for the Splunk Add-on for AWS topic in this manual.
- Configure AWS services for the Inspector v2 input.
- Configure AWS permissions for the Inspector v2 input.
- Configure Inspector v2 inputs either through Splunk Web or configuration files.
Configure Amazon Inspector v2 permissions¶
The following permissions are required for Inspector v2:
Describe*
List*
See the following sample inline policy to configure Inspector v2 input permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"inspector2:Describe*",
"inspector2:List*"
],
"Resource": "*"
}
]
}
For more information, see http://docs.aws.amazon.com/IAM/latest/UserGuide/list_inspector.html.
Configure an Inspector v2 input using Splunk Web¶
To configure inputs using Splunk Web:
- Click Splunk Add-on for AWS in the navigation bar on Splunk Web home.
- Click Create New Input > Inspector > Inspector (v2).
- Use the following table to complete the fields for the new input in Splunk Web or in the .conf file:
Argument in configuration file | Field in Splunk Web | Description |
---|---|---|
account |
AWS Account | The AWS account or EC2 IAM role the Splunk platform uses to access your Inspector findings. In Splunk Web, select an account from the drop-down list. In aws_inspector_v2_tasks.conf, enter the friendly name of one of the AWS accounts that you configured on the Configuration page or the name of the automatically discovered EC2 IAM role. |
regions |
AWS Region | The AWS region that contains the data. In aws_inspector_v2_tasks.conf, enter region IDs in a comma-separated list. |
sourcetype |
Source type | A source type for the events. Enter a value only if you want to override the default of aws:inspector:v2:findings . Event extraction relies on the default value of source type. If you change the default value, you must update props.conf as well. |
index |
Index | The index name where the Splunk platform puts the Inspector findings. The default is main. |
polling_interval |
Polling interval | The number of seconds to wait before the Splunk platform runs the command again. The default is 300 seconds. |
Configure an Inspector v2 input using configuration files¶
To configure the input using the configuration files, create
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/aws_inspector_v2_tasks.conf
using the following template:
[<name>]
account = <value>
region = <value>
index = <value>
polling_interval = <value>
sourcetype = <value>
The following is an example stanza that collects Inspector v2 findings:
[splunkapp2:us-west-2]
account = splunkapp2
region = us-west-2
index = default
polling_interval = 300
sourcetype = aws:inspector:v2:findings
Configure VPC Flow Logs inputs for the Splunk Add-on for AWS¶
Complete the steps to configure VPC Flow Log inputs for the Splunk Add-on for Amazon Web Services (AWS):
- You must manage accounts for the add-on as a prerequisite. See Manage accounts for the Splunk Add-on for AWS.
- See Configure Kinesis inputs for the Splunk Add-on for AWS if ingesting VPC flow logs via Kinesis Data Streams.
- See Configure CloudWatch Log inputs for the Splunk Add-on for AWS if ingesting VPC flow logs via CloudWatch Logs.
- See Configure SQS-based S3 inputs for the Splunk Add-on for AWS if ingesting VPC flow logs via SQS-based S3.
The Splunk Add-on for AWS supports VPC flow logs in the following log formats. Fields must be in one of the following orders to provide field extractions.
For more information on the list of v1-v5 fields to add in the given order when selecting Custom Format, or selecting Custom Format and Select All, see the Available fields section of the Logging IP traffic using VPC Flow Logs topic in the AWS documentation.
Logs will be indexed under the source type aws:cloudwatchlogs:vpcflow or aws:cloudwatchlogs:vpcflow:metric.
For more information, see Source types for the Splunk Add-on for AWS.
Log format | Ordered list of fields |
---|---|
Default | version , account-id , interface-id , srcaddr , dstaddr , srcport , dstport , protocol , packets , bytes , start , end , action , log-status |
Custom | version , account-id , interface-id , srcaddr , dstaddr , srcport , dstport , protocol , packets , bytes , start , end , action , log-status , vpc-id , subnet-id , instance-id , tcp-flags , type , pkt-srcaddr , pkt-dstaddr , region , az-id , sublocation-type , sublocation-id , pkt-src-aws-service , pkt-dst-aws-service , flow-direction , traffic-path |
Select All | account-id , action , az-id , bytes , dstaddr , dstport , end , flow-direction , instance-id , interface-id , log-status , packets , pkt-dst-aws-service , pkt-dstaddr , pkt-src-aws-service , pkt-srcaddr , protocol , region , srcaddr , srcport , start , sublocation-id , sublocation-type , subnet-id , tcp-flags , traffic-path , type , version , vpc-id |
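If you select Custom when you create the flow log, AWS represents the log record format as a space-separated list of ${field} tokens. The following is a sketch of the format string corresponding to the Custom field order in the previous table (the same string appears in the console's format preview and can be passed to the AWS CLI); verify the token names against the Available fields list in the AWS documentation before you use it:
${version} ${account-id} ${interface-id} ${srcaddr} ${dstaddr} ${srcport} ${dstport} ${protocol} ${packets} ${bytes} ${start} ${end} ${action} ${log-status} ${vpc-id} ${subnet-id} ${instance-id} ${tcp-flags} ${type} ${pkt-srcaddr} ${pkt-dstaddr} ${region} ${az-id} ${sublocation-type} ${sublocation-id} ${pkt-src-aws-service} ${pkt-dst-aws-service} ${flow-direction} ${traffic-path}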
Update log formatting in AWS VPC¶
To update the log format in your AWS VPC to ensure successful field extractions, perform the following steps:
- Navigate to the AWS VPC dashboard and select Virtual private cloud > Your VPCs.
- Add Name, choose Filter, Minimum aggregation interval, Destination and corresponding fields.
- For Log record format, select one of the following options:
- Select Default (not supported in version 6.3.0; supported in versions 6.3.1 and later).
- Select Custom, and add fields in the order provided in the field table previously listed in this topic.
- Select Select All.
- Delete the previous VPC flow log with the old log formatting.
For more information on updating the log format in AWS VPC, see the Create a flow log section of the Work with flow logs topic in the AWS documentation.
Configure Security Lake inputs for the Splunk Add-on for AWS¶
Complete the steps to configure Security Lake inputs for the Splunk Add-on for Amazon Web Services (AWS):
- You must manage accounts for the add-on as a prerequisite. See Manage accounts for the Splunk Add-on for AWS.
- Configure AWS services for the Amazon Security Lake input.
- Configure AWS permissions for the Security Lake input.
- Configure Security Lake inputs either through Splunk Web or configuration files.
The Safari web browser is not supported for configuring an Amazon Security Lake input using Splunk Web at this time. Use Google Chrome or Firefox for your configurations instead.
Configure AWS services for the Amazon Security Lake input¶
After completing all the required configuration prerequisites, configure a subscriber with data access in your Amazon Security Lake service. This creates the resources needed to make the Amazon Security Lake events available to be consumed into your Splunk platform deployment.
Ensure all AWS prerequisites for setting up Subscriber data access are met. For more information, see the Managing data access for Security Lake subscribers topic in the Amazon Security Lake documentation.
Set up a subscriber¶
Perform the following steps to set up a subscriber for the Splunk Add-on for AWS.
- Log into the AWS console.
- Navigate to the Security Lake service Summary page.
- In the navigation pane on the left side, choose Subscribers.
- Click Create subscriber.
- On the Create subscriber page, fill out the details that apply
to your deployment.
- Add a name for your subscriber.
- Add an optional description for your subscriber.
- For Log and event sources, select the event sources that you want to subscribe to for your data collection. Sources that are not selected will not be collected into your Splunk platform deployment.
- Select S3 as your data collection method.
- Enter your Account ID from where you want to collect events.
- Enter a placeholder value for External ID. External ID is not supported, but the field must be populated when creating a subscriber. For example, enter placeholder-value-splunk.
- For Notification details, select SQS queue.
- Click the Create button.
- On the subscribers Details page, confirm that the subscriber has been created with the appropriate parameters.
Verify information in SQS Queue¶
Perform the following steps in your Amazon deployment to verify the information in the SQS Queue that Security Lake creates.
- In your AWS console, navigate to the Amazon SQS service.
- In the Queues section, navigate to the SQS Queue that Security Lake created, and click on the name.
- On the information page for the SQS Queue that Security Lake
created, perform the following validation steps.
- Click on the Monitoring tab to verify that events are flowing into the SQS Queue.
- Click on the Dead-letter queue tab to verify that a dead-letter queue (DLQ) has been created. If a DLQ has not been created, see the Configuring a dead-letter queue (console) topic in the AWS documentation.
Verify events are flowing into S3 bucket¶
Perform the following steps in your Amazon deployment to verify that parquet events are flowing into your configured S3 buckets.
- In your AWS console, navigate to the Amazon S3 service.
- Navigate to the Buckets section, and click on the S3 bucket that Security Lake created for each applicable region.
- In each applicable bucket, navigate to the Objects tab, and click through the directories to verify that Security Lake has available events flowing into the S3 bucket. If Security Lake is enabled on more than one AWS account, check to see if each applicable account number is listed, and that parquet files exist inside each account.
- In each applicable S3 bucket, navigate to the Properties tab.
- Navigate to Event notifications, and verify that the Security Lake SQS Queue that was created has event notifications turned on, and the data destination is the Security Lake SQS queue.
Configure IAM policies¶
After you set up and configure a subscriber in the Amazon Security Lake service, perform the following modifications to your IAM policies to make the Amazon Security Lake service work:
- Update a user to assume a role. Then modify the assumed role so that it doesn’t reference an External ID.
- Update your boundary policy to work with the Splunk Add-on for AWS.
Update a user to assume a role¶
Modify your Security Lake subscriber role to associate an existing user with a role, and modify the assumed role so that it doesn’t reference an External ID. You must get access to the subscription role notification that was created as part of the Amazon Security Lake subscriber provisioning.
- In your AWS console, navigate to the Amazon IAM service.
- In your Amazon IAM service, navigate to the Roles page.
- On the Roles page, select the Role name of the subscription role notification that was created as part of the Security Lake subscriber provisioning process.
- On the Summary page, navigate to the Trust relationships tab.
- Modify the Trusted entity policy with the following updates:
- Remove any reference to the External ID that was created during the Security Lake subscriber provisioning process.
- On the stanza containing the ARN, attach the username from your desired user account to the end of the ARN. For example, "arn:aws:iam::772039352793:user/jdoe", where jdoe is the user name. For more information, see the following example trust entity:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::772039352793:user/jdoe"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
This step connects a user to the role that was created, and lets the user use their access key and secret key to configure the Security Lake service.
6. In your Amazon IAM service, navigate to the Users page.
7. On the Users page, select the User name of the user who has been connected to the role that was created.
8. On the Summary page, navigate to the Access keys section, and copy the user's Access key ID. If no access keys currently exist, first click the Create access key button.
Update your boundary policy to work with the Splunk Add-on for AWS¶
- In your Amazon IAM service, navigate to the Roles page.
- On the Roles page, select the Role name of the subscription role notification that was created as part of the Security Lake subscriber provisioning process.
- On the Summary page, navigate to the Permissions policies tab, and click on the Policy name for your Amazon Security Lake subscription role, in order to modify the role policy.
- On the Edit policy page, click on the JSON tab.
- Navigate to the Resource column of the role policy.
- Under the existing S3 resources stanzas, add a stanza containing the Amazon Resource Name (ARN) of the SQS Queue that was created during the Security Lake service subscriber provisioning process.
- Navigate to the Action column of the role policy.
- Review the contents of the Action column, and add the following stanzas, if they do not already exist: "sqs:GetQueueUrl", "sqs:ReceiveMessage", "sqs:SendMessage", "sqs:DeleteMessage", "sqs:GetQueueAttributes", "sqs:ListQueues", "sqs:ChangeMessageVisibility".
For more information, see the following example:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion",
                "sqs:GetQueueUrl",
                "sqs:ReceiveMessage",
                "sqs:SendMessage",
                "sqs:DeleteMessage",
                "sqs:GetQueueAttributes",
                "sqs:ListQueues",
                "s3:ListBucket",
                "s3:ListBucketVersions",
                "sqs:ChangeMessageVisibility",
                "kms:Decrypt"
            ],
            "Resource": [
                "arn:aws:s3:::aws-security-data-lake-us-east-2-o-w5jts1954e/aws/CLOUD_TRAIL/*",
                "arn:aws:s3:::aws-security-data-lake-us-east-2-o-w5jts1954e/aws/VPC_FLOW/*",
                "arn:aws:s3:::aws-security-data-lake-us-east-2-o-w5jts1954e/aws/ROUTE53/*",
                "arn:aws:s3:::aws-security-data-lake-us-east-2-o-w5jts1954e",
                "arn:aws:sqs:us-east-2:772039352793:moose-public-sqs"
            ]
        }
    ]
}
- Save your changes.
Configure Amazon Security Lake inputs for the Splunk Add-on for AWS¶
Complete the steps to configure Amazon Security Lake inputs for the Splunk Add-on for AWS:
- Configure AWS accounts for the Amazon Security Lake input.
- Configure Amazon Security Lake inputs either through Splunk Web or configuration files.
Configuration prerequisites¶
This data input supports the following file format:
- Apache Parquet
Configure AWS accounts for the Amazon Security Lake input¶
Add your AWS account to the Splunk Add-on for AWS:
- On the Splunk Web home page, click on Splunk Add-on for AWS in the navigation bar.
- Navigate to the Configuration page.
- On the Configuration page, navigate to the Account tab.
- Click the Add button.
- On the Add Account page, add a Name, the Key ID of the user who was given Security Lake configuration privileges, Secret Key, and Region Category.
- Click the Add button.
- Navigate to the IAM Role tab.
- Click the Add button.
- Add the ARN role that was created during the Security Lake service provisioning process.
- Click the Add button.
Configure an Amazon Security Lake input using Splunk Web¶
To configure inputs in Splunk Web, click Splunk Add-on for AWS in the navigation bar on Splunk Web home, then choose one of the following menu paths depending on which data type you want to collect:
- Create New Input > Security Lake > SQS-Based S3
You must have the admin_all_objects capability enabled in order to add new inputs.
The system automatically sets the source type and displays the relevant field settings on the subsequent configuration page.
Use the following table to complete the fields for the new input in the .conf file or in Splunk Web:
Argument in configuration file | Field in Splunk Web | Description |
---|---|---|
|
AWS Account | The AWS account or EC2 IAM role the Splunk platform uses to access the keys in your S3 buckets. In Splunk Web, select an account from the drop-down list. In inputs.conf, enter the friendly name of one of the AWS accounts that you configured on the Configuration page or the name of the automatically discovered EC2 IAM role. |
|
Assume Role | The IAM role to assume. |
|
Force using DLQ (Recommended) | Check the checkbox to remove the checking of DLQ (Dead Letter Queue) for ingestion of specific data. In inputs.conf, enter |
|
AWS Region | AWS region that the SQS queue is in. |
|
Use Private Endpoints | Check the checkbox to use private endpoints of AWS Security Token Service (STS) and AWS Simple Cloud Storage (S3) services for authentication and data collection. In inputs.conf, enter |
|
Private Endpoint (SQS) | Private Endpoint (Interface VPC Endpoint) of your SQS service, which can be configured from your AWS console. |
|
SNS Signature Validation | SNS validation of your SQS messages, which can be configured from your AWS console. If selected, all messages will be validated. If unselected, then messages will not be validated until receiving a signed message. Thereafter, all messages will be validated for an SNS signature. For new SQS-based S3 inputs, this feature is enabled, by default. |
|
Private Endpoint (S3) | Private Endpoint (Interface VPC Endpoint) of your S3 service, which can be configured from your AWS console. |
|
Private Endpoint (STS) | Private Endpoint (Interface VPC Endpoint) of your STS service, which can be configured from your AWS console. |
|
SQS Queue Name | The SQS queue URL. |
|
SQS Batch Size | The maximum number of messages to pull from the SQS queue in one batch. Enter an integer between 1 and 10 inclusive. Set a larger value for small files, and a smaller value for large files. The default SQS batch size is 10. If you are dealing with large files and your system memory is limited, set this to a smaller value. |
|
S3 File Decoder | The decoder to use to parse the corresponding log files. The decoder is set according to the Data Type you select. If you select a Custom Data Type, choose one from |
|
Source Type | The source type for the events to collect, automatically filled in based on the decoder chosen for the input. |
|
Interval | The length of time in seconds between two data collection runs. The default is 300 seconds. |
|
Index | The index name where the Splunk platform puts the Amazon Security Lake data. The default is main. |
Configure an Amazon Security Lake input using configuration files¶
When you configure inputs manually in inputs.conf, create a stanza using
the following template and add it to
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/inputs.conf
. If the file or
path does not exist, create it.
[aws_sqs_based_s3://test_input]
aws_account = test-account
interval = 300
private_endpoint_enabled = 0
s3_file_decoder = AmazonSecurityLake
sourcetype = aws:asl
sqs_batch_size = 10
sqs_queue_region = us-west-1
sqs_queue_url = https://sqs.us-west-1.amazonaws.com/<account-id>/parquet-test-queue
sqs_sns_validation = 0
using_dlq = 1
Some of these settings have default values that can be found in
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/default/inputs.conf
:
[aws_sqs_based_s3]
using_dlq = 1
The previous values correspond to the default values in Splunk Web, as
well as some internal values that are not exposed in Splunk Web for
configuration. If you copy this stanza to your
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/local
and use it as a starting
point to configure your inputs.conf manually, change the
[aws_sqs_based_s3]
stanza title from aws_sqs_based_s3
to
aws_sqs_based_s3://<name>
and add the additional parameters that you
need for your deployment.
Valid values for s3_file_decoder are CustomLogs, CloudTrail, ELBAccessLogs, CloudFrontAccessLogs, S3AccessLogs, and Config.
If you want to ingest custom logs other than the natively supported AWS
log types, you must set s3_file_decoder = CustomLogs
. This setting
lets you ingest custom logs into the Splunk platform instance, but it
does not parse the data. To process custom logs into meaningful events,
you need to perform additional configurations in props.conf and
transforms.conf to parse the collected data to meet your specific
requirements.
For more information on these settings, see /README/inputs.conf.spec
under your add-on directory.
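For example, the following is a minimal, hypothetical sketch of props.conf and transforms.conf entries for a custom source type. The source type name (my:custom:logs), the timestamp format, and the extraction pattern are placeholders that are not shipped with the add-on; adjust them to match the layout of your own custom logs, and set the same source type on the input that uses s3_file_decoder = CustomLogs.
props.conf:
[my:custom:logs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
REPORT-custom_fields = custom_field_extraction
transforms.conf:
[custom_field_extraction]
REGEX = ^(?<timestamp>\S+)\s+(?<src_ip>\S+)\s+(?<dest_ip>\S+)\s+(?<action>\S+)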
Automatically scale data collection with Amazon Security Lake inputs¶
With the Amazon Security Lake input type, you can take full advantage of the auto-scaling capability of the AWS infrastructure to scale out data collection by configuring multiple inputs to ingest logs from the same S3 bucket without creating duplicate events. This is particularly useful if you are ingesting logs from a very large S3 bucket and hit a bottleneck in your data collection inputs.
- Create an AWS auto scaling group for your heavy forwarder instances where the SQS-based S3 inputs are running. To create an auto-scaling group, you can either specify a launch configuration or create an AMI to provision new EC2 instances that host heavy forwarders, and use a bootstrap script to install the Splunk Add-on for AWS and configure SQS-based S3 Amazon Security Lake inputs. For detailed information about the auto-scaling group and how to create it, see http://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroup.html.
-
Set CloudWatch alarms for one of the following Amazon SQS metrics:
- ApproximateNumberOfMessagesVisible: The number of messages available for retrieval from the queue.
- ApproximateAgeOfOldestMessage: The approximate age (in seconds) of the oldest non-deleted message in the queue.
For instructions on setting CloudWatch alarms for Amazon SQS metrics, see http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/SQS_AlarmMetrics.html.
3. Use the CloudWatch alarm as a trigger to provision new heavy forwarder instances with SQS-based S3 inputs configured to consume messages from the same SQS queue to improve ingestion performance.
Configure CloudTrail Lake inputs for the Splunk Add-on for AWS¶
Complete the steps to configure CloudTrail Lake inputs for the Splunk Add-on for Amazon Web Services (AWS):
- You must manage accounts for the add-on as a prerequisite. See Manage accounts for the Splunk Add-on for AWS.
- Configure AWS services for the CloudTrail lake input.
- Configure AWS permissions for the CloudTrail lake input.
- (Optional) Configure VPC Interface Endpoints for STS and cloudtrail services from your AWS Console if you want to use private endpoints for data collection and authentication. For more information, see the Interface VPC endpoints (AWS PrivateLink) topic in the Amazon Virtual Private Cloud documentation.
- Configure CloudTrail lake inputs either through Splunk Web or configuration files.
Configure AWS services for the CloudTrail lake input¶
The Splunk Add-on for AWS collects JSON events from a CloudTrail Lake event data store using a SQL-based query.
To collect data using the CloudTrail Lake input, an event data store must be configured on AWS. There are various types of event data stores that can be created.
- To create an event data store, see the following topics in the AWS
documentation:
- Create an event data store for CloudTrail events topic in the AWS CloudTrail User Guide
- Create an event data store for CloudTrail Insights events topic in the AWS CloudTrail User Guide
- Create an event data store for events outside of AWS topic in the AWS CloudTrail User Guide
- To stop or start ingestion for event data stores see the Stop and start event ingestion topic in the AWS CloudTrail User Guide.
Configure AWS permissions for the CloudTrail lake input¶
The CloudTrail Lake modular input requires the following permissions to collect data from an event data store:
GetQueryResults
StartQuery
ListEventDataStores
DescribeQuery
See the following sample inline policy to collect data from a CloudTrail Lake event data store using the CloudTrail Lake modular input:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "cloudtrail:GetQueryResults",
"Resource": "arn:aws:cloudtrail:*:<account>:eventdatastore/*"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "cloudtrail:StartQuery",
"Resource": "arn:aws:cloudtrail:*:<account>:eventdatastore/*"
},
{
"Sid": "VisualEditor2",
"Effect": "Allow",
"Action": "cloudtrail:ListEventDataStores",
"Resource": "*"
},
{
"Sid": "VisualEditor3",
"Effect": "Allow",
"Action": "cloudtrail:DescribeQuery",
"Resource": "arn:aws:cloudtrail:*:<account>:eventdatastore/*"
}
]
}
Configure a CloudTrail Lake input using Splunk Web¶
To configure inputs in Splunk Web:
- Click on Splunk Add-on for AWS in the navigation bar on Splunk Web home.
- Click Create New Input > CloudTrail Lake.
- Use the following table to complete the fields for the new input in the .conf file or in Splunk Web:
Argument in configuration file | Field in Splunk Web | Description |
---|---|---|
AWS input configuration |
|
AWS Account | The AWS account or EC2 IAM role the Splunk platform uses to access data present in the CloudTrail Lake event data store. In Splunk Web, select an account from the drop-down list. In inputs.conf, enter the friendly name of one of the AWS accounts that you configured on the Configuration page or the name of the automatically discovered EC2 IAM role. |
|
Assume Role | The IAM role to assume. See "Add and manage IAM roles" in the Manage accounts for the Splunk Add-on for AWS topic. |
|
AWS Region | The region in which the CloudTrail Lake event data store is present. |
|
Use Private Endpoints | Check the checkbox to use private endpoints of AWS Security Token Service (STS) and AWS CloudTrail services for authentication and data collection. In inputs.conf, enter |
|
Private Endpoint (CloudTrail) | Private Endpoint (Interface VPC Endpoint) of your CloudTrail service, which can be configured from your AWS console. |
|
Private Endpoint (STS) | Private Endpoint (Interface VPC Endpoint) of your STS service, which can be configured from your AWS console. |
|
Input Mode | Two types of input modes: Index Once and Continuously Monitor. Index Once input mode only ingests the data once. Continuously Monitor input mode collects the data at every interval. |
|
Event Data Store | The CloudTrail Lake event data store from which the data will be collected. |
|
Start Date/Time | Select a Start date/time to specify how far back to go when initially collecting data. If no date/time is given, the input will start 7 days in the past. |
|
End date/time | This is only required in case of Index Once input mode. |
Splunk-related configuration |
|
Sourcetype | A source type for the events. Specify a value if you want to override the default of |
|
Index | The index name where the Splunk platform puts the event data store data. The default is main. |
Advanced settings |
|
Query Window Size (minutes) | This parameter controls the chunk size of each SQL query. For example, if the calculated start date is 2022-01-01T00:00:00 (midnight on January 1, 2022) and the query window size is 60 minutes, the end date for the SQL query will be 2022-01-01T01:00:00 (one hour after midnight). |
|
Delay Throttle (minutes) | CloudTrail typically delivers events within an average of about 5 minutes of an API call. This time is not guaranteed. This parameter specifies how close to "now" the end date for a query may be (where "now" is the time that the input runs). For Continuously Monitor input mode, at every interval invocation the input collects data from the checkpointed start_date_time until current UTC time - delay_throttle. |
|
Interval (in seconds) | Data collection interval. The value is only applicable for Continuously Monitor input mode. For Index Once input mode this value is always -1. |
Configure a CloudTrail Lake input using a configuration file¶
To configure inputs in inputs.conf, create a stanza using the following
template and add it to
$SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/inputs.conf
. If the file or
path does not exist, create it.
The following is an example input stanza for the Index Once input mode:
[aws_cloudtrail_lake://<name>]
aws_account = <value>
aws_region = <value>
end_date_time = 2023-12-20T10:07:25
event_data_store = <value>
index = <value>
input_mode = index_once
interval = -1
private_endpoint_enabled = 0
query_window_size = <value>
sourcetype = <value>
start_date_time = 2023-12-15T10:07:25
The following is an example input stanza for the Continuously Monitor input mode:
[aws_cloudtrail_lake://<name>]
aws_account = <value>
aws_region = <value>
delay_throttle = <value>
event_data_store = <value>
index = <value>
input_mode = continuously_monitor
interval = 3600
private_endpoint_enabled = 0
query_window_size = <value>
sourcetype = <value>
start_date_time = 2023-12-15T10:07:25
Configure Transit Gateway Flow Logs inputs for the Splunk Add-on for AWS¶
Complete the steps to configure Transit Gateway Flow Log inputs for the Splunk Add-on for Amazon Web Services (AWS):
- You must manage accounts for the add-on as a prerequisite. See Manage accounts for the Splunk Add-on for AWS.
- See Configure SQS-based S3 inputs for the Splunk Add-on for AWS for ingesting Transit Gateway flow logs via SQS-based S3.
The Splunk Add-on for AWS supports Transit Gateway flow logs in the following log format. Fields must be in the following order to provide field extractions.
Only the default log record format, in text file format, is supported for Transit Gateway Flow Logs. For more information regarding the fields, see the Available fields section of the Logging network traffic using Transit Gateway Flow Logs topic in the AWS documentation.
Logs will be indexed under the source type aws:transitgateway:flowlogs.
For more information, see Source types for the Splunk Add-on for AWS.
Log format | Ordered list of fields |
---|---|
Default | version , resource-type , account-id , tgw-id , tgw-attachment-id , tgw-src-vpc-account-id , tgw-dst-vpc-account-id , tgw-src-vpc-id , tgw-dst-vpc-id , tgw-src-subnet-id , tgw-dst-subnet-id , tgw-src-eni , tgw-dst-eni , tgw-src-az-id , tgw-dst-az-id , tgw-pair-attachment-id , srcaddr , dstaddr , srcport , dstport , protocol , packets , bytes , start , end , log-status , type , packets-lost-no-route , packets-lost-blackhole , packets-lost-mtu-exceeded , packets-lost-ttl-expired , tcp-flags , region , flow-direction , pkt-src-aws-service , pkt-dst-aws-service |
For more information regarding Transit Gateway Flow Logs, see the Create a flow log section of the Work with Transit Gateway Flow Logs topic in the AWS documentation.
Configure Global setting for CloudWatch inputs in the Splunk Add-on for AWS¶
Configure Global settings for CloudWatch inputs using Splunk Web¶
To configure global settings in Splunk Web:
- Click Splunk Add-on for AWS in the navigation bar on Splunk Web home.
- Navigate to the Configuration page.
- Click on Add-on Global Settings.
- Use the following table to complete the CloudWatch Max Threads configuration.
Argument in configuration file | Field in Splunk Web | Description |
---|---|---|
cloudwatch_dimensions_max_threads |
CloudWatch Max Threads | Specify the number of thread workers that will be working simultaneously to retrieve data for the CloudWatch input. The default value is set to 1. The value of CloudWatch Max Threads should be between 1 and 64. Tune the thread count according to the system specifications. Updating the thread count can lead to increased CPU and memory utilization. |
Configuring CloudWatch Max Threads to a higher value might affect system performance.
Configure Global settings for CloudWatch inputs using configuration files¶
To configure global settings manually, create a stanza using the following template and add it to $SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/aws_global_settings.conf
. If the file or path does not exist, create it.
[aws_inputs_settings]
cloudwatch_dimensions_max_threads = <value>
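For example, the following stanza raises the thread count to 8. The value shown is illustrative only; tune it to your own hardware, keeping in mind the CPU and memory impact noted above.
[aws_inputs_settings]
cloudwatch_dimensions_max_threads = 8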
Ended: Pull-based (API) input configurations
Push-based (Amazon Kinesis Firehose) input configurations ↵
Configuration overview for the Amazon Kinesis Firehose¶
The way that you install and configure your environment to use the Amazon Kinesis Firehose depends on your deployment of the Splunk platform. Follow the instructions that match your Splunk platform deployment.
Steps to configure the Amazon Kinesis Firehose on a single-instance Splunk Enterprise deployment¶
Follow these steps to use the Splunk Add-on for Amazon Web Services on single-instance Splunk Enterprise. If you are not on a single-instance Splunk Enterprise deployment, see Configure Amazon Kinesis Firehose on a paid Splunk Cloud deployment to find the instructions that match your Splunk platform deployment type.
Steps to configure the Amazon Kinesis Firehose on a paid Splunk Cloud deployment¶
Follow these steps to configure the Amazon Kinesis Firehose in your paid Splunk Cloud deployment.
If your paid Splunk Cloud deployment has a search head cluster, you will need additional assistance from Splunk Support to perform this configuration. See the Paid Splunk Cloud with a search head cluster section of this topic.
If your paid Splunk Cloud instance does not have a search head cluster, follow this procedure.
- Decide what index you want to use to collect your Amazon Kinesis Firehose data. Ensure that this index is enabled and active. Sending data to a disabled or deleted index results in dropped events. If you need to create a new index, see Manage Splunk Cloud Platform indexes.
- Install the add-on to your Splunk Cloud deployment. For Splunk Cloud Classic stacks, submit a case on the Splunk Support Portal. In the case, ask Splunk Support to enable HTTP Event Collector (HEC) and create or modify an elastic load balancer to use with this add-on. For Splunk Cloud Victoria stacks, a Firehose HEC elastic load balancer is automatically provisioned. For step-by-step instructions, see Install apps in your Splunk Cloud deployment.
- Wait for Splunk Support to perform the necessary setup and confirm
with you once the HTTP event collector is enabled and your elastic
load balancer is ready for use. Splunk Support will confirm the URL
that you should use for your HTTP event collector endpoint. It
should match this format:
https://http-inputs-firehose-<your unique cloud hostname here>.splunkcloud.com:443
. - Create an HTTP event collector token with indexer acknowledgments
enabled. For step-by-step instructions, see Configure HTTP Event
Collector on Splunk
Cloud.
During the configuration:
- Specify a Source type for your incoming data. See Source types supported for Amazon Kinesis Firehose by this add-on.
- Select the Index to which Amazon Kinesis Firehose will send data.
- Check the box next to Enable indexer acknowledgement.
- Save the token that Splunk Web provides. You need this token when you configure Amazon Kinesis Firehose.
- Repeat steps 4, 5, and 6 for each source type from which you want to collect data. Each source type requires a unique HTTP event collector token.
Next step Configure Amazon Kinesis Firehose to send data to the Splunk platform
Paid Splunk Cloud with a search head cluster¶
If your paid Splunk Cloud deployment has a search head cluster, follow this procedure.
- Decide what index you want to use to collect your Amazon Kinesis Firehose data. Ensure that this index is enabled and active. Sending data to a disabled or deleted index results in dropped events. If you need to create a new index, see Manage Splunk Cloud Platform indexes.
-
Submit a case on the Splunk Support Portal. In the case, ask Splunk Support to:
- Install the Splunk Add-on for AWS on your Splunk Cloud deployment.
- For Splunk Cloud Classic stacks, enable HTTP event collector and create or modify an elastic load balancer for use with this add-on.
- For Splunk Cloud Classic stacks, create an HTTP event collector
token with indexer acknowledgement enabled for each source type
from which you plan to collect data from Amazon Kinesis
Firehose. For each of the tokens you request, ask Splunk Support
to specify the following parameters:
- The Source type for your incoming data. See Source types supported for Amazon Kinesis Firehose by this add-on for the source types supported by this add-on.
- The Index to which Amazon Kinesis Firehose will send data.
- Enable indexer acknowledgement should be true.
-
Wait for Splunk Support to perform the necessary setup and provide you with the following information:
- The full URL that you should use for your HTTP event collector
endpoint. It should match this format:
https://http-inputs-firehose-<your unique cloud hostname here>.splunkcloud.com:443
. - A token for each source type that you requested.
- The full URL that you should use for your HTTP event collector
endpoint. It should match this format:
- Save the tokens and the URL that Splunk Support provides. You need this information when you configure Amazon Kinesis Firehose.
Next step Configure Amazon Kinesis Firehose to send data to the Splunk platform
Steps to configure the Amazon Kinesis Firehose on a distributed Splunk Enterprise deployment¶
Follow these steps to use the Amazon Kinesis Firehose on a distributed deployment of Splunk Enterprise. If you are not on a distributed Splunk Enterprise deployment, see Configuration for the Amazon Kinesis Firehose to find the instructions that match your Splunk platform deployment type.
- Select and prepare your distributed Splunk Enterprise deployment for the Splunk Add-on for Amazon Web Services.
- If your indexers are in an AWS virtual private cloud, Configure an Elastic Load Balancer for the Splunk Add-on for Amazon Web Services.
- Install the Splunk Add-on for Amazon Web Services on a distributed deployment of Splunk Enterprise.
- Configure HTTP Event Collector for the Splunk Add-on for Amazon Web Services on a distributed Splunk Enterprise deployment.
- Configure Amazon Kinesis Firehose to send data to the Splunk platform.
Select and prepare your distributed Splunk Enterprise deployment for the Splunk Add-on for Amazon Web Services¶
Before you install the Splunk Add-on for Amazon Web Services on a distributed Splunk Enterprise, review the supported deployment topologies below. The diagrams show where the Splunk Add-on for Amazon Web Services should be installed for data collection in the supported distributed deployment topologies. The add-on is also installed on search heads for search-time functionality, but that is not shown in the diagrams.
Choose the deployment topology that works best for your situation.
Indexers in AWS VPC¶
If your indexers are on AWS Virtual Private Cloud, use an elastic load balancer to send data to your indexers.
Next step Configure an Elastic Load Balancer for the Splunk Add-on for Amazon Web Services
Indexers not in an AWS VPC¶
If your indexers are not in an AWS VPC, but are accessible from AWS Firehose via public IPs, install a CA-signed SSL certificate on each indexer, then send data directly to your indexers.
Prepare your indexers before you proceed:
- Install a CA-signed SSL certificate on each indexer. For instructions, see Configure Splunk indexing and forwarding to use TLS certificates in Securing Splunk Enterprise.
- Create a DNS name that resolves to the set of indexers that you plan to use to collect data from Amazon Kinesis Firehose. You will need this DNS name in a later step.
Next step Install the Splunk Add-on for Amazon Web Services on a distributed Splunk Enterprise deployment
Configure an Elastic Load Balancer for the Splunk Add-on for Amazon Web Services¶
If your indexers are in an AWS Virtual Private Cloud, send your Amazon Kinesis Firehose data to an Elastic Load Balancer (ELB) with sticky sessions enabled and cookie expiration disabled. Follow the directions on this page to configure an ELB that can integrate with the Splunk HTTP event collector.
If your indexers are not in an AWS Virtual Private Cloud, this procedure does not apply to you. See Install the Splunk Add-on for Amazon Web Services on a distributed Splunk Enterprise deployment.
Create an elastic load balancer¶
Follow these steps to configure your ELB properly to receive data. For more detailed information about Elastic Load Balancers, see Elastic Load Balancing Documentation in the AWS documentation.
Prerequisites
Amazon Kinesis Firehose requires the HEC endpoint to be terminated with a valid CA-signed SSL certificate. Import your valid CA-signed SSL certificates to AWS Certificate Manager or AWS IAM before creating or modifying your elastic load balancer. See Configure Security Settings in the AWS documentation.
- Open the Amazon EC2 console.
- On the navigation pane, under Load balancing, select Load Balancers.
- Create a classic load balancer with the following parameters:
Field in Amazon Web Services ELB UI | Value |
---|---|
Select load balancer type | Classic load balancer |
Load balancer name | Name of your load balancer |
Load balancer protocol | HTTPS. Use the default or change the load balancer port. |
Assign or select a security group | The chosen security group needs to allow inbound traffic from load balancer to HTTP event collector port on indexers. |
Configure security settings | Select your CA-signed SSL certificate that you imported in the prerequisites step. |
Health Check settings | Ping protocol: HTTPS Ping port: 8088 Ping path: HTTPS:8088/services/collector/health/1.0 Timeout: 5 seconds Interval: 30 seconds Unhealthy threshold: 2 Healthy threshold: 10 |
Add EC2 instances | Add all indexers that you are using to index data with this add-on. |
- Click Review and create, and verify in the following review page that your load balancer details are correct. After creating your elastic load balancer, modify the port configuration and the attributes as described below.
Modify an existing load balancer with the proper settings¶
Prerequisites
An elastic load balancer that has been configured with the correct basic settings. This includes setting the load balancer protocol to HTTPS and uploading a valid CA-signed SSL certificate.
Steps
From the Load balancers page in the EC2 console, select your elastic load balancer with the basic settings already configured. Modify your load balancer with the following parameters:
Field in Amazon Web Services ELB UI | Value |
---|---|
Health Check settings | Ping protocol: HTTPS Ping port: 8088 Ping path: HTTPS:8088/services/collector/health/1.0 Timeout: 5 seconds Interval: 30 seconds Unhealthy threshold: 2 Healthy threshold: 10 |
Port configuration | Under Edit stickiness, select Enable load balancer generated cookie stickiness. Leave the expiration period blank. |
Attributes | Under Edit idle timeout, enter 60 seconds. |
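Before you point Amazon Kinesis Firehose at the load balancer, you can optionally confirm that an indexer behind it responds on the HEC health endpoint used by the health check above. The following is a sketch only; the hostname is a placeholder for one of your indexers or for the load balancer's DNS name:
curl -k https://your-indexer.example.com:8088/services/collector/health/1.0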
Configure HTTP event collector for the Splunk Add-on for Amazon Web Services on a distributed Splunk Enterprise deployment¶
Prerequisites
- Install the Splunk Add-on for Amazon Web Services on a distributed Splunk Enterprise deployment
- If your indexers are in an Amazon VPC, Configure an Elastic Load Balancer for the Splunk Add-on for Amazon Web Services.
- For optimal performance, set
ackIdleCleanup
to true ininputs.conf
located in$SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf
for *nix users and%SPLUNK_HOME%\etc\apps\splunk_httpinput\local\inputs.conf
for Windows users.
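For reference, a minimal sketch of that setting in the global [http] stanza of the splunk_httpinput inputs.conf file (assuming your deployment keeps its global HEC settings in the [http] stanza) looks like the following:
[http]
ackIdleCleanup = true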
Steps
- Decide what index you want to use to collect your Amazon Kinesis Firehose data. Ensure that this index is enabled and active. Sending data to a disabled or deleted index results in dropped events. If you need to create a new index, see Create custom indexes in Managing Indexers and Clusters of Indexers.
- Set up the HTTP Event Collector on your distributed deployment. For instructions on how to configure the HTTP Event Collector and create a server class using the deployment server, see Scale HTTP Event Collector with distributed deployments. When you define the server class, specify all indexers that you want to use to collect Amazon Kinesis Firehose data.
- Enable the deployment server and push the configuration to the clients.
- On the deployment server, confirm that the Enable SSL box is checked in your HTTP Event Collector global settings.
- Create a new HTTP event collector token with indexer acknowledgments
enabled. For a detailed walkthrough, see
Create an Event Collector token
in Getting Data In. During the token configuration:
- Specify a Source type for your incoming data. See Source types for the Splunk Add-on for AWS for the source types supported by this add-on.
- Select the Index to which Amazon Kinesis Firehose will send data.
- Check the box next to Enable indexer acknowledgement.
- Save the token that Splunk Web provides. You need this token when you configure Amazon Kinesis Firehose.
- Repeat steps 5 and 6 for each additional source type from which you want to collect data. Each source type requires a unique HTTP event collector token.
Next Step Configure Amazon Kinesis Firehose to send data to the Splunk platform
Configure HTTP event collector for the Amazon Kinesis Firehose on a single-instance Splunk Enterprise deployment¶
Prerequisite
Install the Splunk Add-on for Amazon Web Services on a single-instance Splunk Enterprise deployment.
For optimal performance, set ackIdleCleanup
to true in inputs.conf
located in
$SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf
for *nix
users and %SPLUNK_HOME%\etc\apps\splunk_httpinput\local\inputs.conf
for Windows users.
Steps
- Decide what index you want to use to collect your Amazon Kinesis Firehose data. Ensure that this index is enabled and active. Sending data to a disabled or deleted index results in dropped events. If you need to create a new index, see Create custom indexes in Managing Indexers and Clusters of Indexers.
- Go to Settings > Data inputs > HTTP Event Collector and click Global Settings.
- Check the box next to Enable SSL, then click Save.
- Create an HTTP event collector token with indexer acknowledgments
enabled. For a detailed walkthrough, see
Set up and use the HTTP Event Collector
in Getting Data In. During the configuration:
- Specify a Source type for your incoming data. See Source types for the Splunk Add-on for AWS for the source types supported by this add-on.
- Select an Index to which Firehose will send data.
- Check the box next to Enable indexer acknowledgement.
- Save the token that Splunk Web provides. You need this token when you configure Amazon Kinesis Firehose.
- Repeat steps 4 and 5 for each additional source type from which you want to collect data. Each source type requires a unique HTTP event collector token.
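If you manage configuration through files rather than Splunk Web, an HEC token can also be expressed as an inputs.conf stanza. The following is a minimal sketch; the stanza name, token value, source type, and index are placeholders for the values you chose in the steps above, and useACK = 1 corresponds to the Enable indexer acknowledgement checkbox:
[http://aws_firehose_token]
disabled = 0
token = <token-value-generated-by-splunk>
sourcetype = <source type supported by this add-on>
index = <your index>
useACK = 1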
Next Step Configure Amazon Kinesis Firehose to send data to the Splunk platform
Configure Amazon Kinesis Firehose to send data to the Splunk platform¶
Go to the AWS Management Console to configure Amazon Kinesis Firehose to send data to the Splunk platform. See Choose Splunk for Your Destination in the AWS documentation for step-by-step instructions. Repeat this process for each token that you configured in the HTTP event collector, or that Splunk Support configured for you.
When prompted during the configuration, enter the following information:
Field in Amazon Kinesis Firehose configuration page |
Value |
---|---|
Destination |
Select Splunk. |
Splunk cluster endpoint |
If you are using managed Splunk Cloud, enter your ELB URL in this
format: If you are on a distributed Splunk Enterprise deployment, enter the
URL and port of your data receiver node. For example, if you have an ELB
that proxies traffic to your indexers with DNS name
If you want to send data directly to multiple Splunk indexers acting
as your data collection nodes, you need a URL that resolves to multiple
IP addresses (one for each node) with the port enabled for HTTP event
collector on those nodes. For example, if the hostname that resolves to
your indexers is If you are on a single-instance Splunk Enterprise deployment, enter
the HEC endpoint URL and port. For example, if your HEC endpoint is |
Splunk endpoint type |
Select raw for most events using Kinesis Data Firehose. If your AWS Lambda function specifically makes your events into JSON format, then select event. For more information about preprocessing events, see Event formatting requirements. |
Authentication token |
Enter your HTTP event collector token that you configured or received from Splunk Support. |
S3 backup mode |
Best practice: Backup all events to S3 until you have validated that events are fully processed by the Splunk platform and available in Splunk searches. You can adjust this setting after you have verified data is searchable in the Splunk platform. |
After you configure Amazon Kinesis Firehose to send data to the Splunk platform, go to the Splunk search page and search for the source types of the data you are collecting. See Source types for the Splunk Add-on for AWS for a list of source types that this add-on applies to your Firehose data. Validate that the data is searchable in the Splunk platform before you adjust the S3 backup mode setting in the AWS Management Console.
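For example, a simple verification search, assuming you sent data to the main index and used the aws:cloudwatchlogs:vpcflow source type for your Firehose token, looks like the following. Swap in your own index and source type:
index=main sourcetype="aws:cloudwatchlogs:vpcflow" earliest=-15m | stats count by host, sourcetype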
If you are unable to see your data in the Splunk platform, see Troubleshoot the Splunk Add-on for Amazon Web Services.
Ended: Push-based (Amazon Kinesis Firehose) input configurations
Alert configuration ↵
Configure alerts for the Splunk Add-on for AWS¶
Complete the steps to configure and use the Simple Notification Service (SNS) alerts for the Splunk Add-on for Amazon Web Services (AWS):
- You must manage accounts for the add-on as a prerequisite. See Manage accounts for the Splunk Add-on for AWS.
- Configure AWS services for SNS alerts.
- Configure AWS permissions for SNS alerts.
- Create an SNS alert search.
- Use the alert action.
To use the search commands and alert actions included with the Splunk Add-on for AWS, you must either be an administrator or a user with the appropriate capability:
list_storage_passwords
if you are using Splunk Enterprise 6.5.0 or higher.admin_all_objects
if you are using a version of Splunk Enterprise lower than 6.5.0.
Configure AWS permissions for SNS alerts¶
Required permissions for Amazon SNS:
Publish
Get*
List*
See the following sample inline policy to configure SNS alert permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sns:Publish",
"sns:Get*",
"sns:List*"
],
"Resource": "*"
}
]
}
Use the awssnsalert search command¶
Use the awssnsalert search command to send alerts to AWS SNS.
The following example search demonstrates how to use this search command:
...| eval message="My Message" | eval entity="My Entity" | eval correlation_id="1234567890" | awssnsalert account=real region="ap-southeast-1" topic_name="ta-aws-sns-ingestion" publish_all=1
Use the following table to create an SNS alert search. All attributes are required:
Attribute | Description |
---|---|
account |
The AWS account name configured in the add-on. |
region |
The AWS region name. |
topic_name |
The alert message is sent to this AWS SNS topic name. |
message |
The message that the Splunk Add-on for AWS sends to AWS SNS. |
publish_all |
You can set publish_all to 0 or 1. If you set publish_all=1, the add-on sends all the records in the search. If you set publish_all=0, the add-on sends only the first result of the search. The default value of this field is 0. |
Use the alert action¶
The Splunk Add-on for AWS supports automatic incident and event creation and incident update from custom alert actions. Custom alert actions are available in Splunk Enterprise version 6.3.0 and higher.
To create a new incident or event from a custom alert action, follow these steps:
- In Splunk Web, navigate to the Search & Reporting app.
- Write a search string that you want to use to trigger incident or event creation in AWS SNS. Click Save As > Alert.
- Fill out the Alert form. Give your alert a unique name and indicate whether the alert is a real-time alert or a scheduled alert. See Getting started with alerts in the Alerting Manual for more information.
- Under Trigger Actions, click Add Actions.
- From the list, select AWS SNS Alert if you want the alert to create an event in AWS SNS.
- Enter values for all required fields, as shown in the following table:
Field | Description |
---|---|
Account | Required. The account name configured in Splunk Add-on for AWS. |
Region | Required. The region of AWS SNS the events are sent to. Make sure the region is consistent with AWS SNS. |
Topic Name | Required. The name of the topic the events are sent to. Make sure the topic name exists in AWS SNS. |
Correlation ID | Optional. The ID that correlates this alert with the other events. If you leave this field empty, it uses $result.correlation_id$ by default. |
Entity | Optional. The object related to the event or alert, such as host, database, or EC2 instance. If you leave this field empty, Splunk Enterprise uses $result.entity$ by default. |
Source | Optional. The source of the event or alert. If you leave this field empty, Splunk Enterprise uses $result.source$ by default. |
Timestamp | Optional. The time the event occurred. If you leave this field empty, Splunk Enterprise uses $result._time$ by default. |
Event | Optional. The details of the event. If you leave this field empty, Splunk Enterprise uses $result._raw$ by default. |
Message | Required. The message that the Splunk Add-on for AWS sends to AWS SNS. |
Ended: Alert configuration
Troubleshooting ↵
Troubleshoot the Splunk Add-on for AWS¶
Use the following information to troubleshoot the Splunk Add-on for Amazon Web Services (AWS). For helpful troubleshooting tips that you can apply to all add-ons see Troubleshoot add-ons, and Support and resource links for add-ons in the Splunk Add-ons manual.
Data collection errors and performance issues¶
You can choose dashboards from the Health Check menu to troubleshoot data collection errors and performance issues. See AWS Health Check Dashboards for more information.
Internal logs¶
You can directly access internal log data for help with troubleshooting. Data collected with these source types is used in the Health Check dashboards.
Data source | Source type |
---|---|
splunk_ta_aws_cloudtrail_cloudtrail_{input_name}.log. | aws:cloudtrail:log |
splunk_ta_aws_cloudwatch.log. | aws:cloudwatch:log |
splunk_ta_aws_cloudwatch_logs.log. | aws:cloudwatchlogs:log |
splunk_ta_aws_config_{input_name}.log. | aws:config:log |
splunk_ta_aws_config_rule.log. | aws:configrule:log |
splunk_ta_aws_inspector_main.log, splunk_ta_aws_inspector_app_env.log, splunk_ta_aws_inspector_proxy_conf.log, and splunk_ta_aws_inspector_util.log. | aws:inspector:log |
splunk_ta_aws_inspector_v2_main.log, splunk_ta_aws_inspector_v2_app_env.log, splunk_ta_aws_inspector_v2_proxy_conf.log, and splunk_ta_aws_inspector_v2_util.log. | aws:inspector:v2:log |
splunk_ta_aws_description.log. | aws:description:log |
splunk_ta_aws_metadata.log. | aws:metadata:log |
splunk_ta_aws_billing_{input_name}.log. | aws:billing:log |
splunk_ta_aws_generic_s3_{input_name}. | aws:s3:log |
splunk_ta_aws_logs_{input_name}.log, each incremental S3 input has one log file with the input name in the log file. | aws:logs:log |
splunk_ta_aws_kinesis.log. | aws:kinesis:log |
splunk_ta_aws_sqs_based_s3_{input_name}. | aws:sqsbaseds3:log |
splunk_ta_aws_sns_alert_modular.log and splunk_ta_aws_sns_alert_search.log. | aws:sns:alert:log |
splunk_ta_aws_rest.log, populated by REST API handlers called when setting up the add-on or data input. | aws:resthandler:log |
splunk_ta_aws_proxy_conf.log, the proxy handler used in all AWS data inputs. | aws:proxy-conf:log |
splunk_ta_aws_s3util.log, populated by the S3, CloudWatch, and SQS connectors. | aws:resthandler:log |
splunk_ta_aws_util.log, a shared utilities library. | aws:util:log |
Configure log levels¶
- Click Splunk Add-on for AWS in the navigation bar on Splunk Web.
- Click Configuration in the app navigation bar.
- Click the Logging tab.
- Adjust the log levels for each of the AWS services as needed by changing the default level of INFO to DEBUG or ERROR.
These log level configurations apply only to runtime logs. Some REST endpoint logs from configuration activity log at DEBUG, and some validation logs log at ERROR. These levels cannot be configured.
Troubleshoot custom sourcetypes for SQS Based S3 inputs¶
Troubleshoot custom sourcetypes created with an SQS-based S3 input.
- If a custom sourcetype is used (for example, custom_sourcetype), it can be replaced. See the following steps:
  - Navigate to the Inputs page of the Splunk Add-on for AWS.
  - Create a new SQS-Based S3 input, or edit an existing SQS-Based S3 input.
  - Navigate to the Source Type input box, and change the sourcetype name.
  - Save your changes.
- Adding a custom sourcetype will not split the events. To split events, perform the following steps:
  - Navigate to Splunk_TA_aws/local/.
  - Open props.conf with a text editor.
  - Add the following stanza:
    [custom_sourcetype]
    SHOULD_LINEMERGE = false
  - Save your changes.
Low throughput for the Splunk Add-on for AWS¶
If you do not achieve the expected AWS data ingestion throughput, follow these steps to troubleshoot the throughput performance:
- Identify the problem in your system.
- Adjust the factors affecting performance.
- Verify whether performance meets your requirements.
-
Identify the problem in your system that prevents it from achieving a higher level of throughput performance. The problem in AWS data ingestion might be caused by one of the following components:
- The amount of data the Splunk Add-on for AWS can pull in through API calls
- The heavy forwarder’s capacity to parse and forward data to the indexer tier, which involves the throughput of the parsing, merging, and typing pipelines
- The index pipeline throughput
To troubleshoot the indexing performance on the heavy forwarder and indexer, refer to Troubleshooting indexing performance in the Capacity Planning Manual.
-
Troubleshoot the performance of the problem component. If heavy forwarders or indexers are affecting performance, refer to the Summary of performance recommendations in the Splunk Enterprise Capacity Planning Manual. If the Splunk Add-on for AWS is affecting performance, adjust the following factors:
- Parallelization settings: To achieve optimal throughput performance, set the value of parallelIngestionPipelines to 2 in the server.conf file if your resource capacity permits. For information about parallelIngestionPipelines, see Parallelization settings in the Splunk Enterprise Capacity Planning Manual. A sample server.conf stanza follows these steps.
- AWS data inputs: If you have sufficient resources, you can increase the number of inputs to improve throughput, but be aware that this also consumes more memory and CPU. Increase the number of inputs to improve throughput until memory or CPU is running short. If you are using SQS-based S3 inputs, you can horizontally scale data collection by configuring more inputs on multiple heavy forwarders to consume messages from the same SQS queue.
- Number of keys in a bucket: For both the Generic S3 and Incremental S3 inputs, the number of keys or objects in a bucket can impact initial data collection performance. A large number of keys in a bucket requires more memory for S3 inputs in the initial data collection and limits the number of inputs you can configure in the add-on. If applicable, you can use log file prefix to subset keys in a bucket into smaller groups and configure different inputs to ingest them separately. For information about how to configure inputs to use log file prefix, see Configure Generic S3 inputs for the Splunk Add-on for AWS. For SQS-based S3 inputs, the number of keys in a bucket is not a primary factor since data collection can be horizontally scaled out based on messages consumed from the same SQS queue.
- File format: Compressed files consume much more memory than plain text files.
-
When you resolve the performance issue, see if the improved performance meets your requirements. If not, repeat the previous steps to identify the next bottleneck in the system and address it until you’re satisfied with the overall throughput performance.
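The following server.conf stanza is a minimal sketch of the parallelization setting discussed in the steps above. Apply it on the heavy forwarder or indexer whose pipelines are the bottleneck, only if CPU and memory capacity permit, and restart the Splunk instance for the change to take effect:
# $SPLUNK_HOME/etc/system/local/server.conf
[general]
parallelIngestionPipelines = 2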
Problem saving during account or input configuration¶
If you experience errors or trouble saving while configuring your AWS accounts on the setup page, go to $SPLUNK_HOME/etc/system/local/web.conf and change the following timeout setting:
[settings]
splunkdConnectionTimeout = 300
Problems deploying with a deployment server¶
If you use a deployment server to deploy the Splunk Add-on for Amazon Web Services to multiple heavy forwarders, you must configure the Amazon Web Services accounts using the Splunk Web setup page for each instance separately because the deployment server does not support sharing hashed password storage across instances.
S3 issues¶
Troubleshoot the S3 inputs for the Splunk Add-on for AWS.
S3 input performance issues¶
You can configure multiple S3 inputs for a single S3 bucket to improve performance. The Splunk platform dedicates one process for each data input, so provided that your system has sufficient processing power, you can improve performance with multiple inputs. See Hardware and software requirements for the Splunk Add-on for AWS.
To prevent indexing duplicate data, don’t overlap the S3 key names in multiple inputs against the same bucket.
S3 key name filtering issues¶
Troubleshoot regex to fix filtering issues.
The deny and allow list matches the full key name, not just the last segment. For example, the allow list .*abc/.* matches /a/b/abc/e.gz.
Your regex should match the full key name for the whitelist and blacklist. For example, if the directory of the example bucket is cloudtrail/cloudtrail2, the desired file is under the path cloudtrail/cloudtrail2/abc.txt, and you would like to ingest abc.txt, you need to specify both the key_name and the whitelist. See the following example, which ingests any files under the path cloudtrail/cloudtrail2:
key_name = cloudtrail
whitelist = ^.*\/cloudtrail2\/.*$
- Watch “All My Regex’s Live in Texas” on Splunk Blogs.
- Read “About Splunk regular expressions” in the Splunk Enterprise Knowledge Manager Manual.
S3 event line breaking issues¶
If your indexed S3 data has incorrect line breaking, configure a custom source type in props.conf to control how the lines break for your events.
If S3 events are too long and get truncated, set TRUNCATE = 0 in props.conf to prevent line truncation.
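The following props.conf sketch illustrates such a custom source type stanza. The stanza name is a hypothetical placeholder, and the LINE_BREAKER pattern shown is the common newline-based default; adjust both to match your own source type and event format:
# $SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/props.conf
[my_custom_s3_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 0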
For more information, see Configure event line breaking in the Getting Data In manual.
S3 event Access Denied issue¶
For your configured SQS-based S3 input in versions 6.0.0 and later of the Splunk Add-on for AWS, if you encounter the error botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied, verify the following:
- Check whether the S3 bucket has versioning enabled.
- If versioning is enabled for the S3 bucket, add the s3:GetObjectVersion permission to the account associated with the S3 bucket. (A sample policy statement follows this list.)
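The following inline policy statement is a minimal sketch of granting s3:GetObjectVersion alongside s3:GetObject. The bucket name is a placeholder; scope the resource to your own bucket:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::<your_bucket_name>/*"
        }
    ]
}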
CloudWatch configuration issues¶
Troubleshoot your CloudWatch configuration.
API throttling issues¶
If you have a high volume of CloudWatch data, search
index=_internal Throttling
to determine if you are experiencing an API
throttling issue. If you are, contact AWS support to increase your
CloudWatch API rate. You can also decrease the number of metrics you
collect or increase the granularity of your indexed data in order to
make fewer API calls.
Granularity¶
If the granularity of your indexed data does not match your expectations, check that your configured granularity falls within what AWS supports for the metric you have selected. Different AWS metrics support different minimum granularities, based on the allowed sampling period for that metric. For example, CPUUtilization has a sampling period of 5 minutes, whereas Billing Estimated Charge has a sampling period of 4 hours.
If you configured a granularity that is less than the sampling period for the selected metric, the reported granularity in your indexed data reflects the actual sampling granularity but is labeled with your configured granularity. To fix this, clear the problematic CloudWatch stanza from local/inputs.conf, adjust the granularity configuration to match the supported sampling granularity so that newly indexed data is correct, and reindex the data. A sample stanza follows.
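As an illustration only, a CloudWatch metrics stanza in local/inputs.conf might look like the following sketch. The stanza prefix, input name, and metric values are assumptions for this example; period is the granularity in seconds and must not be smaller than the supported sampling period of the selected metric:
# $SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/inputs.conf
[aws_cloudwatch://my_cloudwatch_input]
aws_account = my_aws_account
aws_region = us-east-1
metric_namespace = AWS/EC2
period = 300
polling_interval = 600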
CloudTrail data indexing problems¶
If you are not seeing CloudTrail data in the Splunk platform, follow this troubleshooting process.
- Review the internal logs with the following search: index=_internal source=*cloudtrail*
- Verify that the Splunk platform is connecting to SQS successfully by searching for the string Connected to SQS.
- Verify that the Splunk platform is processing messages successfully. Look for strings with the following pattern: X completed, Y failed while processing notification batch. (A combined search example follows these steps.)
- Review your Amazon Web Services configuration to verify that SQS messages are being placed into the queue. If messages are being removed and the logs do not show that the input is removing them, then there might be another script or input consuming messages from the queue. Review your data inputs to ensure there are no other inputs configured to consume the same queue.
- Go to the AWS console to view CloudWatch metrics with the detail set to 1 minute to view the trend. For more details, see https://aws.amazon.com/blogs/aws/amazon-cloudwatch-search-and-browse-metrics-in-the-console/. If you see messages consumed but no Splunk platform inputs are consuming them, check for remote services that might be accessing the same queue.
- If your AWS deployment contains large S3 buckets with a large number
of subdirectories for 60 or more AWS accounts, perform one of the
following tasks:
- Enable SQS notification for each S3 bucket and switch to a SQS S3 input. This lets you add multiple copies of the input for scaling purposes.
- Split your inputs into one bucket per account and use multiple incremental inputs.
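To check the connection and processing strings from the previous steps in a single search, you can combine them as follows (a simple sketch based on the strings mentioned above):
index=_internal source=*cloudtrail* ("Connected to SQS" OR "failed while processing notification batch")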
Billing Report issues¶
Troubleshoot the Splunk Add-on for AWS Billing inputs.
Problems accessing billing reports from AWS¶
If you have problems accessing billing reports from AWS, ensure that:
- There are Billing Reports available in the S3 bucket you select when you configure the billing input.
- The AWS account you specify has permission to read the files inside that bucket.
Problems understanding the billing report data¶
If you have problems understanding the billing report data, access the saved searches included with the add-on to analyze billing report data.
Problems configuring the billing data interval¶
The default billing data ingestion collection intervals for billing report data is designed to minimize license usage. Review the default behavior and make adjustments with caution.
Configure the interval by which the Splunk platform pulls Monthly and Detailed Billing Reports:
- In Splunk Web, go to the Splunk Add-on for AWS inputs screen.
- Create a new Billing input or click to edit your existing one.
- Click the Settings tab.
- Customize the value in the Interval field.
SNS alert issues¶
Because the modular input is inactive, it cannot check whether the AWS account and SNS topic are correctly configured or whether the topic exists in AWS SNS. If you cannot send a message to the AWS SNS account, perform the following procedures:
- Ensure the SNS topic name exists in AWS and the region ID is correctly configured.
- Ensure the AWS account is correctly configured in Splunk Add-on for AWS.
If you still have the issue, use the following search to check the log for AWS SNS:
Search
index=_internal sourcetype=aws:sns:alert:log
Proxy settings for VPC endpoints¶
You must add each S3 region endpoint to the no_proxy setting, and use the correct hostname for your region: s3.<your_aws_region>.amazonaws.com. The no_proxy setting does not allow for any spaces between the IP addresses.
When using a proxy with VPC endpoints, check the proxy setting defined
in the splunk-launch.conf file located at
$SPLUNK_HOME/etc/splunk-launch.conf
. For example:
no_proxy = 169.254.169.254,127.0.0.1,s3.amazonaws.com,s3.ap-southeast-2.amazonaws.com
Certificate verify failed (_ssl.c:741) error message¶
If you create a new input, you might receive the following error message:
certificate verify failed (_ssl.c:741)
Perform the following steps to resolve the error:
- Navigate to $SPLUNK_HOME/etc/auth/cacert.pem and open the cacert.pem file with a text editor.
- Copy the text from your deployment's proxy server certificate, and paste it into the cacert.pem file.
- Save your changes.
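As an alternative to pasting the certificate text manually, you can append it from the command line. This is a sketch only; the proxy certificate path is a placeholder for your own environment:
cat /path/to/proxy_certificate.pem >> $SPLUNK_HOME/etc/auth/cacert.pem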
Internet restrictions prevent add-on from collecting AWS data¶
If your deployment has a security policy that doesn’t allow connection to the public internet from AWS virtual private clouds (VPCs), this might prevent the Splunk Add-on for AWS from collecting data from Cloudwatch inputs, S3 inputs, and other inputs which depend on access to AWS services.
To identify this issue in your deployment:
- Check if you have a policy that restricts outbound access to the public Internet from your AWS VPC.
- Identify if you have error messages that show that your attempts to
connect to sts.amazonaws.com result in a timeout. For example:
ConnectTimeout: HTTPSConnectionPool(host='sts.amazonaws.com', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<botocore.awsrequest.AWSHTTPSConnection object at 0x7fdfd97bc350>, 'Connection to sts.amazonaws.com timed out. (connect timeout=60)')) host = si3-splunk1 index = 0014000000kbznqaa1 source = /opt/splunkcoreengine/ce_customers/0014000000KBzNQAA1/1425528/si3-splunk1-sh_ds_ ls_-20190708-190819/log/splunk_ta_aws_aws_cloudwatch.log sourcetype = splunk_ta_ aws_aws_cloudwatch
To fix this issue in your deployment:
- Your VPC endpoint interface needs to be set up in your AWS environment. See the AWS documentation for details regarding VPC endpoints.
- Update the Splunk instance that is being used for data collection to use your VPC endpoint as a gateway to allow connections to be established to your AWS services:
  - In your Splunk instance, navigate to ./etc/apps/Splunk_TA_aws/bin/3rdparty/botocore/data/endpoints.json, and open it with a text editor.
  - Update the hostname to use the hostname of your VPC endpoint interface. For example:
    Before:
    "sts": { "defaults": { "credentialScope": { "region": "us-east-1" }, "hostname": "sts.amazonaws.com"
    After:
    "sts" : { "defaults" : { "credentialScope" : { "region" : "us-east-1" }, "hostname" : "<Enter VPC endpoint Interface DNS name here>"
  - Save your changes.
- Restart your Splunk instance.
- Validate that the connection to your VPC has been established.
Failed to load input and configuration page when running the Splunk software on a custom management port¶
If the Splunk software fails to load the input and configuration page while running on a custom management port (for example, <IP>:<CUSTOM_PORT>), perform the following troubleshooting steps.
- Navigate to $SPLUNK_HOME/etc/.
- Open splunk-launch.conf using a text editor.
- Add the environment variable SPLUNK_MGMT_HOST_PORT=<IP>:<CUSTOM_PORT>
- Save your changes.
- Restart your Splunk instance.
Amazon Kinesis Firehose error exceptions¶
See Data Not Delivered to Splunk in the AWS documentation.
Amazon Kinesis Firehose data delivery errors¶
You can view the error logs related to Kinesis Firehose data delivery failure using the Kinesis Firehose console or CloudWatch console. See the Accessing CloudWatch Logs for Kinesis Firehose section in the Monitoring with Amazon CloudWatch Logs topic from the AWS documentation.
SSL-related data delivery errors¶
Amazon Kinesis Firehose requires the HTTP Event Collector (HEC) endpoint to be terminated with a valid CA-signed certificate that matches the DNS hostname used to connect to your HEC endpoint. If you see the error message "Could not connect to the HEC endpoint. Make sure that the HEC endpoint URL is valid and reachable from Kinesis Firehose," then your SSL certificate might not be valid.
Test whether your SSL certificate is valid by opening your HEC endpoint in a web browser. If you are using a self-signed certificate, your browser displays a certificate error (in Google Chrome, for example, a certificate warning page).
Amazon Kinesis Firehose Error: “Received event for unconfigured/disabled/deleted index” but indexer acknowledgement is returning positives¶
If you see this error in messages or logs, edit your HEC token configurations to send data to an index that is able to accept data.
If indexer acknowledgment for your Amazon Kinesis Firehose data is successful but your data is not successfully indexed, the data may have been dropped by the parsing queue as an unparseable event. This is expected behavior when data is processed successfully in the input phase but cannot be parsed due to a logical error. For example, if the HTTP event collector is routing data to an index that has been deleted or disabled, the Splunk platform will still accept the data and begin processing it, which triggers indexer acknowledgment to confirm receipt. However, the parsing queue cannot pass the data to the index queue because the specified index is not available, thus the data does not appear in your index. For more information about the expected behavior of the indexer acknowledgment feature, see About HTTP Event Collector Indexer Acknowledgment.
If you suspect events have been dropped, search your “last chance”
index, if you have one configured. If you are on Splunk Cloud, contact
Splunk Support if you do not know the name of your last chance index. If
you are on Splunk Enterprise, see the lastChanceIndex
setting in
indexes.conf
for more information about the behavior of the last chance index feature
and how to configure it.
Troubleshoot performance with the Splunk Monitoring Console¶
For Splunk Cloud Platform, see Introduction to the Cloud Monitoring Console. For Splunk Enterprise, see About the Monitoring Console.
Queue fill dashboard¶
If you are experiencing performance issues with your HEC server, you may need to increase the number of HEC-enabled indexers to which your events are sent.
Use the Monitoring Console to determine the queue fill pattern. Follow these steps to check whether your indexers are at capacity.
Steps:
- Navigate to either Monitoring Console > Indexing > Performance > Indexing Performance: Deployment or Monitoring Console > Indexing > Performance > Indexing Performance: Instance.
- From the Median Fill Ratio of Data Processing Queues dashboard, select Indexing queue from the Queue dropdown and 90th percentile from the Aggregation dropdown.
- (Optional) Set a Platform Alert to get a notification when one or more of your indexer queues reports a fill percentage of 90% or more. This alert can inform you of potential indexing latency.
  - From Paid Splunk Cloud, navigate to Settings > Searches, reports, and alerts and select Monitoring Console in the app filter. Find the SIM Alert - Abnormal State of Indexer Processor platform alert, and click Edit > Enable to enable the alert.
  - From the Splunk Enterprise Monitoring Console Overview page, click Triggered Alerts > Enable or Disable and then click the Enabled checkbox next to the SIM Alert - Abnormal State of Indexer Processor platform alert.
- See determine queue fill pattern for an example of a healthy and unhealthy queue.
HTTP Event Collector dashboards¶
The Monitoring Console also comes with pre-built dashboards for monitoring the HTTP Event Collector. To interpret the HTTP event collector dashboards information panels correctly, be aware of the following:
- The Data Received and Indexed panel shows data as "indexed" even when the data is sent to a deleted or disabled index. Thus, this graph shows the data that is acknowledged by the indexer acknowledgment feature, even if that data is not successfully indexed. See the Error: "Received event for unconfigured/disabled/deleted index" but indexer acknowledgment is returning positives section of this topic for more information about the expected behavior of the indexer acknowledgment feature when the index is not usable.
- The Errors panel is expected to show a steady stream of errors under normal operation. These errors occur because Amazon Kinesis Firehose sends empty test events to check that the authentication token is enabled, and the HTTP event collector cannot parse these empty events. Filter the Errors panel by Reason to help find significant errors.
For more information about the specific HTTP event collector dashboards, see HTTP Event Collector dashboards.
The HTTP event collector dashboards show all indexes, even if they are disabled or have been deleted.
Amazon Kinesis Firehose Kinesis timestamp issues¶
If your Kinesis events are ingested with the wrong timestamp, perform the following troubleshooting steps to disable the Splunk software’s timestamp extraction feature.
- Stop your Splunk instance.
- Navigate to $SPLUNK_HOME/etc/apps/Splunk_TA_aws/local.
- Open the props.conf file using a text editor.
- In the props.conf file, locate the stanza for the Kinesis sourcetype. If it doesn't exist, create one with the Kinesis sourcetype.
- Inside the Kinesis sourcetype stanza, add DATETIME_CONFIG = NONE (see the sketch after these steps).
- Save your changes.
- Restart your Splunk instance.
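For reference, the resulting stanza could look like the following sketch. The stanza name aws:kinesis is an assumption based on the add-on's default Kinesis sourcetype; substitute the sourcetype that your Kinesis data actually uses:
# $SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/props.conf
[aws:kinesis]
DATETIME_CONFIG = NONE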
Metadata WAFv2 API “The scope is not valid” error¶
If you encounter the following error log:
botocore.errorfactory.WAFInvalidParameterException: An error occurred (WAFInvalidParameterException) when calling the ListLoggingConfigurations operation: Error reason: The scope is not valid., field: SCOPE_VALUE, parameter: CLOUDFRONT
Review the following APIs, and select the region "us-east-1" (N. Virginia) for them, because "us-east-1" is the only region supported by these APIs:
- wafv2_list_available_managed_rule_group_versions_cloudfront
- wafv2_list_logging_configurations_cloudfront
- wafv2_list_ip_sets_cloudfront
Metadata Input - Data is not getting collected for “s3_buckets” and “iam_users” API¶
If data is not getting collected for the s3_buckets and iam_users APIs when using the Metadata input, select the region that was enabled on the AWS portal side when you originally created the input. The s3_buckets and iam_users APIs are GLOBAL APIs and use the first selected region for all their API calls.
Config Rules input - Data is not getting collected¶
If data is not getting collected for the Config Rules input, check whether error messages such as the following appear in the log file.
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the DescribeConfigRules operation: 1 validation error detected: Value '[....]' at 'configRuleNames' failed to satisfy constraint: Member must have length less than or equal to 25
CloudWatch Logs ModInput Ingestion Delay¶
The CloudWatch Logs modular input uses lastEventTimestamp from the response of the AWS boto3 SDK method describe_log_streams to determine whether new data is available for ingestion. As described in the AWS Boto3 describe_log_streams API response documentation, the lastEventTimestamp value is updated on an eventual consistency basis. It typically updates within an hour of ingestion into the AWS CloudWatch stream, but sometimes it takes more than one hour. For more details, see the describe_log_streams Boto3 API documentation. Based on this AWS behavior, delay is expected in the modular input. To avoid ingestion delay, use the push-based mechanism to ingest CloudWatch Logs data.
Facing issues with aws:firehose:json sourcetype extractions¶
If you are collecting data in the aws:firehose:json sourcetype by configuring the HEC on a search head instead of on an IDM/HF in a Classic Splunk Cloud Platform environment, then due to partitioned builds, not all the extractions will be present on your search head. In that case, add the required extractions to your desired sourcetype from your search head's UI (Settings > Source types). This pushes the changes to the indexers as well.
Getting warning message while collecting metric data through VPC Flow Logs input¶
If you have configured the VPC Flow Logs input by selecting a metric index and you encounter a warning in the Splunk message tray or splunkd logs stating The metric event is not properly structured, source=xyz, sourcetype=metric, host=xyz, index=xyz. Metric event data without a metric name and properly formatted numerical values are invalid and cannot be indexed. Ensure the input metric data is not malformed, have one or more keys of the form "metric_name:, then make sure that the data you are trying to ingest is VPC Flow Logs data; otherwise, it will not be ingested into the metric index.
Ended: Troubleshooting
Reference ↵
Access billing data for the Splunk Add-on for AWS¶
Use the billing input in the Splunk Add-on for Amazon Web Services (AWS) to collect your AWS billing reports, then extract useful information from them using pre-built reports included with this add-on. The pre-built reports are based on AWS report formats. You can use these reports as examples of how to use the Splunk platform to explore your other S3 data.
For more information on how to configure Billing inputs for the Splunk Add-on for AWS, see the Configure Billing inputs for the Splunk Add-on for AWS topic in this manual.
The Billing input does not collect billing reports for your AWS Marketplace charges.
Billing report types¶
See information about Monthly reports, Monthly cost allocation reports, Detailed billing reports, and Detailed billing reports with resources and tags.
Monthly report¶
The Monthly report lists AWS usage for each product dimension used by an account and its Identity Access Management (IAM) users in monthly line items. You can download this report from the Bills page of the Billing and Cost Management console.
This report takes the following file name format:
<AWS account number>-aws-billing-csv-yyyy-mm.csv
This report is small in size, so the add-on pulls the entire report once daily to get the latest snapshot.
Monthly cost allocation report¶
The Monthly cost allocation report contains the same data as the monthly
report as well as any cost allocation tags that you create. Monthly
reports have the event type aws_billing_monthly_report
. You must
obtain this report from the Amazon S3 bucket that you specify. Standard
AWS storage rates apply.
This report takes the following file name format:
<AWS account number>-aws-cost-allocation-yyyy-mm.csv
Detailed billing report¶
The Detailed billing report lists AWS usage for each product dimension
used by an account and its IAM users in hourly line items. Detailed
billing reports have the event type aws_billing_detail_report
. You
must obtain this report from the Amazon S3 bucket that you specify.
Standard AWS storage rates apply.
This report takes the following file name format:
<AWS account number>-aws-billing-detailed-line-items-yyyy-mm.csv.zip
Detailed billing report with resources and tags¶
The Detailed billing report with resources and tags contains the same data as the detailed billing report, but also includes any cost allocation tags you have created and ResourceIDs for the AWS resources used by your account. You must obtain this report from the Amazon S3 bucket that you specify. Standard AWS storage rates apply.
This report takes the following file name format:
<AWS account number>-aws-billing-detailed-line-items-with-resources-and-tags-yyyy-mm.csv.zip
Access preconfigured reports¶
The Splunk Add-on for AWS includes several reports based on the indexed
billing report data. You can find these saved reports in Splunk Web by
clicking Home > Reports and looking for items with the prefix
AWS Bill -
. Some of the saved searches return a table. Others return a
single value, such as AWS Bill - Total Cost till Now
.
The Splunk platform typically indexes multiple monthly report snapshots.
To obtain the most recent monthly report snapshot, click Home >
Reports and open the saved report called
AWS Bill - Monthly Latest Snapshot
. Or, search for it using the search
string: | savedsearch "AWS Bill - Monthly Latest Snapshot"
You can obtain the most recent detailed report by clicking Home >
Reports and opening the saved report called AWS Bill - Daily Cost
.
Or, search for it using the search string:
Search
| savedsearch "AWS Bill - Daily Cost"
Searching against detailed reports can be slow due to the volume of data in the report. Accelerate the searches against detailed reports.
Report sources¶
These saved reports are based on AWS Billing Reports instead of the
billing metric data in CloudWatch. By default, Total or Monthly reports
are based on data indexed from the AWS Monthly Reports
(*-aws-billing-csv-yyyy-mm.csv
or *-aws-cost-allocation-yyyy-mm.csv
)
on the S3 bucket, while Daily reports are based on AWS Detail Reports
(*-aws-billing-detailed-line-items-yyyy-mm.csv.zip
or
*-aws-billing-detailed-line-items-with-resources-and-tags-yyyy-mm.csv.zip
).
Default index behavior¶
By default, reports look for data in the default index, main
. If you
changed the default index when you configured the data input, the
reports will not work unless you include the index in the default search
indexes list or change the two reports so they filter to the custom
index.
To include a custom index in the default search indexes list, perform the following steps:
- Click Settings > Users and authentication > Access controls > Roles > [Role that uses the saved searches] > Indexes searched by default.
- Add the custom index to the default search indexes list.
- Repeat for each role that uses the saved searches.
To change the saved searches to filter to a custom index, perform the following steps:
- Open the saved search AWS Bill - Monthly Latest Snapshot.
- Add a filter to specify the index you configured. For example, index=new_index.
- Save your changes to the saved search.
- Repeat these steps for the other saved search, AWS Bill - Detailed Cost.
API reference for the Splunk Add-on for AWS¶
See the following sections for API reference information for the Splunk Add-on for AWS.
AWS Account¶
Manage or configure AWS accounts in the add-on.
API Endpoints
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_account
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_account/<account_name>
GET, POST, or DELETE
API for AWS Account settings.
Request URL parameters
Parameter | Default | Description |
---|---|---|
output_mode |
- | If output_mode=json, response is returned in JSON format. |
Request body parameters
Name | Required | Default | Description |
---|---|---|---|
name |
1 | - | Unique name for AWS account |
key_id |
1 | - | AWS account key id |
secret_key |
1 | - | AWS account secret key |
category |
1 | 1 | AWS account region category. Specify either 1, 2, or 4 (1 = Global, 2 = US Gov, 4 = China) |
Examples
GET | List of all accounts | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_account |
List specified account | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_account/test_account |
|
POST | Create account | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_account -d name=test_account -d key_id=<aws_account_key_id> -d secret_key=<aws_account_secret_key> -d category=1 |
Edit account | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_account/test_account -d key_id=<aws_account_key_id> -d secret_key=<aws_account_secret_key> -d category=1 |
|
DELETE | Delete account | curl -u admin:password -X DELETE https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_account/test_account |
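These endpoints also accept the output_mode URL parameter described above. For example, to list all accounts in JSON format (a sketch using the same placeholder credentials as the examples above):
curl -u admin:password "https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_account?output_mode=json"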
AWS Private Account¶
Manage or configure AWS private accounts in the add-on.
API Endpoints
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_private_account
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_private_account/<private_account_name>
GET, POST, or DELETE
API for AWS Private Account settings.
Request URL parameters
Parameter | Default | Description |
---|---|---|
output_mode | - | If output_mode=json, response is returned in JSON format. |
Request body parameters
Name | Required | Default | Description |
---|---|---|---|
name |
1 | - | Unique name for AWS private account |
key_id |
1 | - | AWS private account key id |
secret_key |
1 | - | AWS private account secret key |
category |
1 | - | AWS private account region category. Specify either 1, 2, or 4 (1 = Global, 2 = US Gov, 4 = China) |
sts_region |
1 if using private endpoint | - | AWS region to be used for api calls of STS service |
private_endpoint_enabled |
0 | - | Whether to use user provided AWS private endpoints for making api calls to AWS services. Specify either 0 or 1 |
sts_private_endpoint_url |
1 if using private endpoint | - | Required if private_endpoint_enabled=1. AWS private endpoint url |
Examples
GET | List of all accounts | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_private_account |
List specified account | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_private_account/test_private_account |
|
POST | Create account | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_private_account -d name=test_private_account -d key_id=<aws_account_key_id> -d secret_key=<aws_account_secret_key> -d category=1 -d sts_region=ap-south-1 -d private_endpoint_enabled=1 -d sts_private_endpoint_url=<encode from actual value → https://vpce-endpoint_id-unique_id.sts.region.vpce.amazonaws.com> |
Edit account | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_private_account/test_private_account -d key_id=<aws_account_key_id> -d secret_key=<aws_account_secret_key> -d category=1 -d sts_region=ap-northeast-1 -d private_endpoint_enabled=1 -d sts_private_endpoint_url=<encode from actual value → https://vpce-endpoint_id-unique_id.sts.region.vpce.amazonaws.com> |
|
DELETE | Delete account | curl -u admin:password -X DELETE https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_private_account/test_private_account |
AWS IAM Role¶
Manage or configure AWS IAM Role in the add-on.
API Endpoints
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_iam_roles
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_iam_roles/<iam_role_name>
GET, POST, or DELETE
API for AWS IAM Role Account settings.
Request URL parameters
Parameter | Default | Description |
---|---|---|
output_mode | - | If output_mode=json, response is returned in JSON format. |
Request body parameters
Name | Required | Default | Description |
---|---|---|---|
name |
1 | - | Unique name for AWS IAM role |
arn |
1 | - | AWS IAM role ARN |
Examples
GET | List of all iam roles | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_iam_roles |
List specified iam role | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_iam_roles/test_iam_role |
|
POST | Create iam role | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_iam_roles -d name=test_iam_role -d arn=<encode from actual value → arn:aws:iam::aws_account_id:role/AWSTestIAMRole> |
Edit iam role | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_iam_roles/test_iam_role -d arn=<encode from actual value → arn:aws:iam::aws_account_id:role/AWSTestIAMRole> |
|
DELETE | Delete iam role | curl -u admin:password -X DELETE https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_iam_roles/test_iam_role |
Billing (Cost and Usage Report)¶
Manage or configure Billing (Cost and Usage Report) inputs in the add-on.
API Endpoints
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_billing_cur
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_billing_cur/<billing_cur_input_name>
GET, POST, or DELETE
API for the AWS Billing (Cost and Usage) input.
Request URL parameters
Parameter | Default | Description |
---|---|---|
output_mode | - | If output_mode=json , response is returned in JSON format. |
Request body parameters
Parameter | Required | Default value | Description |
---|---|---|---|
name |
1 | - | Unique name for input. |
aws_account |
1 | - | AWS account name. |
aws_iam_role |
0 | - | AWS IAM role name |
aws_s3_region |
0 | - | Region to connect with s3 service using regional endpoint |
bucket_region |
0 | - | Region of AWS s3 bucket |
bucket_name |
1 | - | Name of s3 bucket where reports are delivered to |
report_prefix |
0 | - | Prefixes used to allow AWS to deliver reports into a specified folder |
report_names |
0 | - | Regex used to filter reports by name |
temp_folder |
0 | - | Full path to a non-default folder for temporarily storing downloaded detailed billing report .zip files |
start_date |
0 | 90 days before input is configured | Collect data after this time. Format = %Y-%m |
private_endpoint_enabled |
0 | - | Whether to use private endpoint. Specify either 0 or 1. |
s3_private_endpoint_url |
1 if private_endpoint_enabled =1 |
- | Private endpoint url to connect with s3 service. |
sts_private_endpoint_url |
1 if private_endpoint_enabled =1 |
- | Private endpoint url to connect with STS service. |
interval |
0 | 86400 | Data collection interval, in seconds |
sourcetype |
0 | aws:billing:cur | Sourcetype of collected data. |
index |
1 | default | Splunk index to ingest data. Default is main. |
Examples
GET | List of all inputs | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_billing_cur |
List specified input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_billing_cur/test_billing_cur_input |
|
POST | Create input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_billing_cur -d name=test_billing_cur_input -d aws_account=test_account -d aws_iam_role=test_iam_role -d aws_s3_region=ap-south-1 -d bucket_name=testing-bucket-05 -d bucket_region=ap-south-1 -d report_prefix=test_report -d report_names=test_report_name.* -d temp_folder=test_temp_folder -d interval=1800 -d start_date=2023-01 -d sourcetype=test_sourcetype -d index=default |
Edit input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_billing_cur/test_billing_cur_input -d aws_account=test_account -d aws_iam_role=test_iam_role -d aws_s3_region=ap-south-1 -d bucket_name=testing-bucket-05 -d bucket_region=ap-south-1 -d report_prefix=test_report -d report_names=test_report_name.* -d temp_folder=test_temp_folder -d interval=1800 -d start_date=2023-01 -d sourcetype=test_sourcetype -d index=default |
|
DELETE | Delete input | curl -u admin:password -X DELETE https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_billing_cur/test_billing_cur_input |
Billing (Legacy)¶
Manage or configure Billing (Legacy) inputs in the add-on.
API Endpoints
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_billing
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_billing/<billing_input_name>
GET, POST, or DELETE
API for the Billing (Legacy) input.
Request URL parameters
Parameter | Default | Description |
---|---|---|
output_mode | - | If output_mode=json , response is returned in JSON format. |
Request body parameters
Parameter | Required | Default value | Description |
---|---|---|---|
name |
1 | - | Unique name for input. |
aws_account |
1 | - | AWS account name. |
aws_iam_role |
0 | - | AWS IAM role name |
aws_s3_region |
0 | - | Region to connect with s3 service using regional endpoint |
host_name |
0 | - | Host name of s3 service (s3.amazonaws.com) |
bucket_name |
1 | - | S3 bucket name which is configured to hold billing reports |
monthly_report_type |
0 | Monthly cost allocation report | Monthly report type. Specify either of the following:
|
detail_report_type |
0 | Detailed billing report with resources and tags | Detail report type. Specify either of the following:
|
temp_folder |
0 | - | Full path to a non-default folder for temporarily storing downloaded detailed billing report .zip files |
report_file_match_reg |
0 | - | Regex for report selection. This expression overrides monthly_report_type and detail_report_type |
recursion_depth |
0 | - | Recursion depth in count when iterating child files and folders |
monthly_timestamp_select_column_list |
0 | - | Fields of timestamp extracted from monthly report, separated by | |
detail_timestamp_select_column_list |
0 | - | Fields of timestamp extracted from detail report, separated by | |
time_format_list |
0 | - | Time format to extract from the existing report, separated by |. For example, %Y-%m-%d %H:%M:%S |
max_file_size_csv_in_bytes |
0 | 50 MB | Max file size in CSV file format |
max_file_size_csv_zip_in_bytes |
0 | 1 GB | Max file size in CSV zip format |
header_look_up_max_lines |
0 | - | Max lines to look up header of billing report |
header_magic_regex |
0 | - | Regex of header to look up |
monthly_real_timestamp_extraction |
0 | - | For monthly report, regex to extract real timestamp in the report |
monthly_real_timestamp_format_reg_list |
0 | - | For monthly report, regex to match the format of real time string, separated by | |
initial_scan_datetime |
0 | - | Timestamp for initial scan. Format = %Y-%m-%dT%H:%M:%SZ |
interval |
0 | 86400 | Data collection interval, in seconds |
sourcetype |
0 | aws:billing | Sourcetype of collected data. |
index |
1 | default | Splunk index to ingest data. Default is main. |
Examples
GET | List of all inputs | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_billing |
List specified input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_billing/test_billing_input |
|
POST | Create input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_billing -d name=test_billing_input -d aws_account=test_account -d aws_iam_role=test_iam_role -d aws_s3_region=ap-south-1 -d host_name=s3.amazonaws.com -d bucket_name=testing-bucket-05 -d monthly_report_type="Monthly cost allocation report" -d detail_report_type="Detailed billing report with resources and tags" -d temp_folder=test_temp_folder -d recursion_depth=2 -d initial_scan_datetime=<encode from actual value → 2023-01-01T00:00:00Z> -d interval=3600 -d sourcetype=test_sourcetype -d index=default |
Edit input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_billing/test_billing_input -d aws_account=test_account -d aws_iam_role=test_iam_role -d aws_s3_region=ap-south-1 -d host_name=s3.amazonaws.com -d bucket_name=testing-bucket-05 -d monthly_report_type="Monthly cost allocation report" -d detail_report_type="Detailed billing report" -d temp_folder=test_temp_folder -d recursion_depth=1 -d interval=3600 -d sourcetype=test_sourcetype -d index=default |
|
DELETE | Delete input | curl -u admin:password -X DELETE https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_billing/test_billing_input |
Cloudtrail¶
Manage or configure Cloudtrail inputs in the add-on.
API Endpoints
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudtrail
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudtrail/<cloudtrail_input_name>
GET, POST, or DELETE
API for the AWS Cloudtrail input.
Request URL parameters
Parameter | Default | Description |
---|---|---|
output_mode | - | If output_mode=json , response is returned in JSON format. |
Request body parameters
Parameter | Required | Default value | Description |
---|---|---|---|
name |
1 | - | Unique name for input. |
aws_account |
1 | - | AWS account name. |
aws_region |
1 | - | AWS region to collect data from |
sqs_queue |
1 | - | Name of the queue where AWS sends new Cloudtrail log notifications |
remove_files_when_done |
0 | 0 | Boolean value indicating whether Splunk should delete log files from S3 bucket after indexing |
exclude_describe_events |
0 | 1 | Boolean value indicating whether or not to exclude certain events, such as read-only events that can produce high volume of data |
blacklist |
0 | - | A PCRE regex that specifies event names to exclude if exclude_describe_events is set to True. Leave blank to use default regex ^(?:Describe |
excluded_events_index |
0 | - | Splunk index to put excluded events. Default is empty which discards the events |
private_endpoint_enabled |
0 | - | Whether to use private endpoint. Specify either 0 or 1. |
s3_private_endpoint_url |
1 if private_endpoint_enabled =1 |
- | Private endpoint url to connect with s3 service. |
sts_private_endpoint_url |
1 if private_endpoint_enabled =1 |
- | Private endpoint url to connect with STS service. |
sqs_private_endpoint_url |
1 if private_endpoint_enabled =1 |
- | Private endpoint url to connect with the sqs service. |
interval |
0 | 30 | Data collection interval, in seconds |
sourcetype |
1 | aws:cloudtrail | Sourcetype of collected data. |
index |
1 | default | Splunk index to ingest data. Default is main. |
Examples
GET | List of all inputs | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudtrail |
List specified input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudtrail/test_cloudtrail_input |
|
POST | Create input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudtrail -d name=test_cloudtrail_input -d aws_account=test_account -d aws_region=ap-south-1 -d sqs_queue=test_queue -d remove_files_when_done=0 -d exclude_describe_events=1 -d blacklist=<encode from actual value → test/.*> -d interval=3600 -d sourcetype=test_sourcetype -d index=default |
Edit input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudtrail/test_cloudtrail_input -d aws_account=test_account -d aws_region=ap-south-1 -d sqs_queue=test_queue -d remove_files_when_done=1 -d exclude_describe_events=1 -d blacklist=<encode from actual value → test/.*> -d excluded_events_index=test_idx -d interval=3600 -d sourcetype=test_sourcetype -d index=default |
|
DELETE | Delete input | curl -u admin:password -X DELETE https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudtrail/test_cloudtrail_input |
Cloudtrail Lake¶
Manage or configure Cloudtrail Lake inputs in the add-on.
API Endpoints
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudtrail_lake
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudtrail_lake/<cloudtrail_input_name>
GET, POST, or DELETE
API for the Cloudtrail Lake input.
Request URL parameters
Parameter | Default | Description |
---|---|---|
output_mode | - | If output_mode=json, response is returned in JSON format. |
Request body parameters
Parameter | Required | Default value | Description |
---|---|---|---|
name |
1 | - | Unique name for input |
aws_account |
1 | - | AWS account name |
aws_iam_role |
0 | - | AWS IAM role |
aws_region |
1 | - | AWS region to collect data from |
input_mode |
1 | continuously_monitor | Input mode: whether to collect data continuously or only once. |
event_data_store |
1 | - | The cloudtrail lake event data store from which the data will be collected. |
start_date_time |
1 | 7 days ago | Start date/time to specify how far back to go when initially collecting data. |
end_date_time |
1 if input_mode is index_once | - | End date/time to specify up to which date the input should collect data. |
private_endpoint_enabled |
0 | - | Whether to use private endpoint. Specify either 0 or 1 |
cloudtrail_private_endpoint_url |
1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with cloudtrail service |
sts_private_endpoint_url |
1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with sts service |
query_window_size |
1 | 15 | This parameter is used to control the chunk size. |
delay_throttle |
0 | 5 | This parameter specifies how close to “now” the end date for a query may be (where “now” is the time that the input runs). |
interval |
0 | 3600 if input_mode is continuously_monitor, else -1 | Data collection interval, in seconds |
sourcetype |
0 | aws:cloudtrail:lake | Sourcetype of collected data |
index |
1 | default | Splunk index to ingest data. Default is main |
Examples
GET | List of all inputs | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudtrail_lake |
List specified input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudtrail_lake/test_cloudtrail_lake_input |
|
POST | Create input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudtrail_lake -d name=test_cloudtrail_lake_input -d aws_account=test_account -d aws_region=ap-south-1 -d input_mode=continuously_monitor -d event_data_store=test_data_store -d start_date_time=2024-04-22T06:50:03 -d query_window_size=15 -d interval=3600 -d sourcetype=test_sourcetype -d index=default |
Edit input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudtrail_lake/test_cloudtrail_lake_input -d name=test_cloudtrail_lake_input -d aws_account=test_account -d aws_region=ap-south-1 -d input_mode=continuously_monitor -d event_data_store=test_data_store -d start_date_time=2024-04-22T06:50:03 -d query_window_size=15 -d interval=3600 -d sourcetype=test_sourcetype -d index=default |
|
DELETE | Delete input | curl -u admin:password -X DELETE https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudtrail_lake/test_cloudtrail_lake_input |
Cloudwatch¶
Manage or configure Cloudwatch inputs in the add-on.
API Endpoints
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudwatch
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudwatch/<cloudwatch_input_name>
GET, POST, or DELETE
API for the AWS Cloudwatch input.
Request URL parameters
Parameter | Default | Description |
---|---|---|
output_mode | - | If output_mode=json , response is returned in JSON format. |
Request body parameters
Parameter | Required | Default value | Description |
---|---|---|---|
name | 1 | - | Unique name for input. |
aws_account | 1 | - | AWS account name. |
aws_iam_role | 0 | - | AWS IAM role. |
aws_region | 1 | - | AWS region to collect data from |
metric_namespace | 0 | - | Cloudwatch metric namespace, for example AWS/EBS |
metric_names | 0 | - | Cloudwatch metric names in JSON array |
only_after | 0 | 1800 | The input will query the CloudWatch Logs events no later than this value |
metric_dimensions | 0 | - | Cloudwatch metric dimensions |
statistics | 0 | - | Cloudwatch metric statistics. Specify one or more of Average, Sum, SampleCount, Maximum, Minimum |
period | 0 | 300 | Cloudwatch metrics granularity, in seconds |
use_metric_format | 0 | false | Boolean indicating whether to transform data to metric format |
metric_expiration | 0 | 3600 | How long the discovered metrics are cached for, in seconds |
query_window_size | 0 | 7200 | How far back to retrieve data points for, in number of data points |
private_endpoint_enabled | 0 | - | Whether to use private endpoint. Specify either 0 or 1. |
monitoring_private_endpoint_url | 1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with monitoring service. |
s3_private_endpoint_url | 1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with s3 service. |
ec2_private_endpoint_url | 1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with ec2 service. |
elb_private_endpoint_url | 1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with elb service. |
lambda_private_endpoint_url | 1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with lambda service. |
autoscaling_private_endpoint_url | 1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with autoscaling service. |
sts_private_endpoint_url | 1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with STS service. |
polling_interval | 0 | 600 | Data collection interval. |
sourcetype | 1 | aws:cloudwatch | Sourcetype of collected data. |
index | 1 | default | Splunk index to ingest data. Default is main. |
Examples
GET | List of all inputs | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudwatch |
List specified input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudwatch/test_cloudwatch_input |
|
POST | Create input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudwatch -d name=test_cloudwatch_input -d aws_account=test_account -d aws_iam_role=test_iam_role -d aws_region=<encode from actual value → ap-south-1,ap-northeast-1> -d metric_namespace=<encode from actual value → ["AWS/ApiGateway","AWS/ApiGateway","AWS/ApiGateway","AWS/EC2","AWS/EC2","AWS/EC2","AWS/EC2"]> -d metric_names=<encode from actual value → ["\".*\"","[\"CacheHitCount\",\"5XXError\"]","[\"4XXError\"]","\".*\"","[\"DiskReadBytes\",\"CPUUtilization\"]","[\"CPUUtilization\",\"DiskWriteBytes\"]","\".*\""]> -d metric_dimensions=<encode from actual value → ["[{\"ApiName\":[\".*\"],\"Stage\":[\".*\"]}]","[{\"ApiName\":[\".*\"],\"Method\":[\".*\"],\"Resource\":[\".*\"],\"Stage\":[\".*\"]}]","[{\"ApiName\":[\".*\"]}]","[{\"ImageId\":[\".*\"]}]","[{\"InstanceId\":[\".*\"]}]","[{\"AutoScalingGroupName\":[\".*\"]}]","[{\"InstanceType\":[\".*\"]}]"]> -d statistics=<encode from actual value → ["[\"Average\",\"Sum\",\"SampleCount\",\"Maximum\",\"Minimum\"]","[\"Average\",\"Sum\"]","[\"Maximum\",\"SampleCount\"]","[\"Average\",\"SampleCount\",\"Maximum\"]","[\"SampleCount\",\"Maximum\",\"Minimum\"]","[\"Average\"]","[\"Average\",\"Sum\",\"SampleCount\",\"Maximum\",\"Minimum\"]"]> -d period=300 -d use_metric_format=false -d metric_expiration=3600 -d query_window_size=7200 -d sourcetype=test_sourcetype -d index=default |
Edit input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudwatch/test_cloudwatch_input -d aws_region=<encode from actual value → ap-south-1,ap-northeast-1> -d metric_namespace=<encode from actual value → ["AWS/ApiGateway","AWS/ApiGateway","AWS/ApiGateway","AWS/EC2","AWS/EC2","AWS/EC2","AWS/EC2"]> -d metric_names=<encode from actual value → ["\".*\"","[\"CacheHitCount\",\"5XXError\"]","[\"4XXError\"]","\".*\"","[\"DiskReadBytes\",\"CPUUtilization\"]","[\"CPUUtilization\",\"DiskWriteBytes\"]","\".*\""]> -d metric_dimensions=<encode from actual value → ["[{\"ApiName\":[\".*\"],\"Stage\":[\".*\"]}]","[{\"ApiName\":[\".*\"],\"Method\":[\".*\"],\"Resource\":[\".*\"],\"Stage\":[\".*\"]}]","[{\"ApiName\":[\".*\"]}]","[{\"ImageId\":[\".*\"]}]","[{\"InstanceId\":[\".*\"]}]","[{\"AutoScalingGroupName\":[\".*\"]}]","[{\"InstanceType\":[\".*\"]}]"]> -d statistics=<encode from actual value → ["[\"Average\",\"Sum\",\"SampleCount\",\"Maximum\",\"Minimum\"]","[\"Average\",\"Sum\"]","[\"Maximum\",\"SampleCount\"]","[\"Average\",\"SampleCount\",\"Maximum\"]","[\"SampleCount\",\"Maximum\",\"Minimum\"]","[\"Average\"]","[\"Average\",\"Sum\",\"SampleCount\",\"Maximum\",\"Minimum\"]"]> -d period=300 -d use_metric_format=false -d metric_expiration=3600 -d query_window_size=7200 -d sourcetype=test_sourcetype -d index=default |
|
DELETE | Delete input | curl -u admin:password -X DELETE https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudwatch/test_cloudwatch_input |
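The <encode from actual value → ...> placeholders in the examples above indicate values that must be URL-encoded before they are sent. One way to avoid encoding values by hand is to let curl do it with --data-urlencode. The following sketch creates a simplified, hypothetical single-namespace CloudWatch input using that approach; the account and stanza values are placeholders:

curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudwatch \
  -d name=test_cloudwatch_input -d aws_account=test_account -d index=default \
  --data-urlencode "aws_region=ap-south-1,ap-northeast-1" \
  --data-urlencode 'metric_namespace=["AWS/EC2"]' \
  --data-urlencode 'metric_names=["\".*\""]' \
  --data-urlencode 'metric_dimensions=["[{\"InstanceId\":[\".*\"]}]"]' \
  --data-urlencode 'statistics=["[\"Average\",\"Sum\"]"]' \
  -d period=300 -d sourcetype=aws:cloudwatch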
Cloudwatch Logs¶
Manage or configure Cloudwatch Logs inputs in the add-on.
API Endpoints
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudwatch_logs
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudwatch_logs/<cloudwatch_logs_input_name>
GET, POST, or DELETE
API for the AWS Cloudwatch Logs input.
Request URL parameters
Parameter | Default | Description |
---|---|---|
output_mode | - | If output_mode=json , response is returned in JSON format. |
Request body parameters
Parameter | Required | Default value | Description |
---|---|---|---|
name | 1 | - | Unique name for input. |
account | 1 | - | AWS account name. |
aws_iam_role | 0 | - | AWS IAM role. |
region | 1 | - | AWS region to collect data from |
groups | 1 | - | Log group names to get data from, split by comma (,) |
only_after | 0 | 1970-01-01T00:00:00 | Only events after the specified GMT time will be collected. Format = %Y-%m-%dT%H:%M:%S |
stream_matcher | 0 | .* | Regex to match log stream names for ingesting events |
private_endpoint_enabled | 0 | - | Whether to use private endpoint. Specify either 0 or 1. |
sqs_private_endpoint_url | 1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with SQS service. |
logs_private_endpoint_url | 1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with logs service. |
sts_private_endpoint_url | 1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with STS service. |
interval | 0 | 600 | Data collection interval. |
sourcetype | 1 | aws:cloudwatchlogs | Sourcetype of collected data. |
index | 1 | default | Splunk index to ingest data. Default is main. |
metric_index_flag | 0 | No | Whether to use metric index or event index. |
Examples
GET | List of all inputs | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudwatch_logs |
List specified input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudwatch_logs/test_cloudwatch_logs_input |
|
POST | Create input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudwatch_logs -d name=test_cloudwatch_logs_input -d account=test_account -d region=ap-south-1 -d groups=<encode from actual value → test-group-1,test-group-2> -d only_after=<encode from actual value → 2023-01-01T00:00:00> -d stream_matcher=<encode from actual value → test-stream.*> -d interval=300 -d sourcetype=test_sourcetype -d index=default |
Edit input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudwatch_logs/test_cloudwatch_logs_input -d account=test_account -d region=ap-south-1 -d groups=<encode from actual value → test-group-1,test-group-2> -d only_after=<encode from actual value → 2023-01-01T00:00:00> -d delay=900 -d stream_matcher=<encode from actual value → test-stream.*> -d interval=300 -d sourcetype=test_sourcetype -d index=default |
|
DELETE | Delete input | curl -u admin:password -X DELETE https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_cloudwatch_logs/test_cloudwatch_logs_input |
Config inputs¶
Manage or configure Config inputs in the add-on.
API Endpoints
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_config
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_config/<config_input_name>
GET, POST, or DELETE
API for the AWS Config input.
Request URL parameters
Parameter | Default | Description |
---|---|---|
output_mode | - | If output_mode=json , response is returned in JSON format. |
Request body parameters
Parameter | Required | Default | Description |
---|---|---|---|
name | 1 | - | Unique name for input |
aws_account | 1 | - | AWS account name |
aws_region | 1 | - | AWS regions to collect data from |
sqs_queue | 1 | - | Sqs queue names where AWS sends Config notifications |
enable_additional_notifications | 0 | 0 | Deprecated |
polling_interval | 0 | 30 | Data collection interval, in seconds |
sourcetype | 0 | aws:config | Sourcetype of collected data |
index | 1 | default | Splunk index to ingest data. Default is main |
Examples
GET | List of all inputs | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_config |
List specified input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_config/test_config_input |
|
POST | Create input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_config -d name=test_config_input -d aws_account=test_account -d aws_region=<encode from actual value → ["ap-south-1","ap-south-1"]> -d sqs_queue=<encode from actual value → ["test-queue-1","-test-queue-2"]> -d polling_interval=30 -d sourcetype=test_sourcetype -d index=default |
Edit input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_config/test_config_input -d aws_account=test_account -d aws_region=<encode from actual value → ["ap-south-1","ap-northeast-1"]> -d sqs_queue=<encode from actual value → ["test-queue-1","-test-queue-2"]> -d polling_interval=30 -d sourcetype=test_sourcetype -d index=default |
|
DELETE | Delete input | curl -u admin:password -X DELETE https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_config/test_config_input |
Config Rules inputs¶
Manage or configure Config Rules inputs in the add-on.
API Endpoints
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_config_rule
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_config_rule/<config_rule_input_name>
GET, POST, or DELETE
API for the AWS Config Rules input.
Request URL parameters
Parameter | Default | Description |
---|---|---|
output_mode | - | If output_mode=json , response is returned in JSON format. |
Request body parameters
Name | Required | Default | Description |
---|---|---|---|
name | 1 | - | Unique name for input |
account | 1 | - | AWS Account |
aws_iam_role | 0 | - | AWS IAM role |
region | 1 | - | JSON array specifying list of regions |
rule_names | 0 | - | JSON array specifying rule names. Leave blank to select all rules |
polling_interval | 0 | 300 | Data collection interval, in seconds |
sourcetype | 0 | aws:config:rule | Sourcetype of collected data |
index | 1 | default | Splunk index to ingest data. Default is main |
Examples
GET | List of all inputs | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_config_rule |
List specified input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_config_rule/test_config_rule_input |
|
POST | Create input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_config_rule -d name=test_config_rule_input -d account=test_account -d aws_iam_role=test_iam_role -d region=<encode from actual value → ["ap-northeast-3","ap-south-1"]> -d rule_names=<encode from actual value → ["test-rule-1","test-rule-2"]> -d polling_interval=300 -d sourcetype=test_sourcetype -d index=default |
Edit input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_config_rule/test_config_rule_input -d account=test_account -d aws_iam_role=test_iam_role -d region=<encode from actual value → ["ap-northeast-3","ap-south-1"]> -d rule_names=<encode from actual value → ["test-rule-1","test-rule-2"]> -d polling_interval=300 -d sourcetype=test_sourcetype -d index=default |
|
DELETE | Delete input | curl -u admin:password -X DELETE https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_config_rule/test_config_rule_input |
Description input¶
API Endpoints
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_description
GET, POST, or DELETE
API for AWS Description inputs.
Request body parameters
Name | Required | Description |
---|---|---|
name | true | Name |
account | true | AWS Account |
aws_iam_role | false | Assume role |
regions | true | AWS Regions |
apis | true | APIs and collection intervals for the following information: ec2_volumes/3600, ec2_instances/3600, ec2_reserved_instances/3600, ebs_snapshots/3600, classic_load_balancers/3600, application_load_balancers/3600, vpcs/3600, vpc_network_acls/3600, cloudfront_distributions/3600, vpc_subnets/3600, rds_instances/3600, ec2_key_pairs/3600, ec2_security_groups/3600, ec2_images/3600, ec2_addresses/3600, lambda_functions/3600, s3_buckets/3600 |
sourcetype | true | Sourcetype of collected data (aws:description) |
index | true | Index |
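Unlike the other inputs, this section has no curl examples, but the endpoint follows the same REST pattern. The following create request is a hypothetical sketch; the account name, region list, and API values are illustrative and mirror the formats listed in the table above:

curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_description \
  -d name=test_description_input -d account=test_account \
  --data-urlencode "regions=ap-south-1,ap-northeast-1" \
  --data-urlencode "apis=ec2_instances/3600,ec2_volumes/3600,s3_buckets/3600" \
  -d sourcetype=aws:description -d index=default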
Generic S3 input¶
Manage or configure Generic S3 inputs in the add-on.
API Endpoints
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_s3
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_s3/<generic_s3_input_name>
GET, POST, or DELETE
API for the AWS S3 input.
Request URL parameters
Parameter | Default | Description |
---|---|---|
output_mode | - | If output_mode=json , response is returned in JSON format. |
Request body parameters
Parameter | Required | Default | Description |
---|---|---|---|
name | 1 | - | Unique name for input. |
aws_account | 1 | - | AWS account name. |
aws_iam_role | 0 | - | AWS IAM role. |
host_name | 0 | - | The host name of the S3 service. |
aws_s3_region | 0 | - | AWS region that contains the bucket. |
bucket_name | 1 | - | AWS S3 bucket name. |
key_name | 0 | - | S3 key prefix. |
parse_csv_with_header | 0 | 0 | If enabled, all files will be parsed considering the first line of each file as the header. Specify either 0 or 1. |
parse_csv_with_delimiter | 0 | , | Delimiter to consider while parsing csv files. |
initial_scan_datetime | 0 | - | Splunk relative time. Format = %Y-%m-%dT%H:%M:%SZ. |
terminal_scan_datetime | 0 | - | Only S3 keys which have been modified before this datetime will be considered. Format = %Y-%m-%dT%H:%M:%SZ. |
ct_blacklist | 0 | ^$ | Only valid if sourcetype is set to aws:cloudtrail. A PCRE regex that specifies event names to exclude. |
blacklist | 0 | - | Regex specifying S3 keys (folders) to ignore. |
whitelist | 0 | - | Regex specifying S3 keys (folders) to include. Overrides blacklist. |
ct_excluded_events_index | 0 | - | Name of index to put excluded events into. Keep empty to discard the events. |
max_retries | 0 | 3 | Max number of retry attempts to stream an incomplete item. |
recursion_depth | 0 | -1 | Number specifying the depth of subfolders to scan. -1 specifies all subfolders (unconstrained). |
max_items | 0 | 100000 | Max trackable items. |
character_set | 0 | auto | The character encoding used in your S3 files, for example UTF-8. |
is_secure | 0 | - | Whether to use a secure connection to AWS. |
private_endpoint_enabled | 0 | - | Whether to use private endpoint. Specify either 0 or 1. |
s3_private_endpoint_url | 1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with s3 service. |
sts_private_endpoint_url | 1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with sts service. |
polling_interval | 0 | 1800 | Data collection interval, in seconds. |
sourcetype | 1 | aws:s3 | Sourcetype of collected data. |
index | 1 | default | Splunk index to ingest data. Default is main. |
Examples
GET | List of all inputs | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_s3 |
List specified input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_s3/test_generic_s3_input |
|
POST | Create input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_s3 -d name=test_generic_s3_input -d aws_account=test_account -d aws_iam_role=test_iam_role -d host_name=s3.ap-south-1.amazonaws.com -d aws_s3_region=ap-south-1 -d bucket_name=test-bucket -d key_name=TestData -d parse_csv_with_header=0 -d parse_csv_with_delimiter=<encode from actual value → ,> -d initial_scan_datetime=<encode from actual value → 2023-01-01T00:00:00Z> -d terminal_scan_datetime=<encode from actual value → 2023-01-10T00:00:00Z> -d ct_blacklist=<encode from actual value → ^$> -d blacklist=<encode from actual value → Test/.*> -d whitelist=<encode from actual value → Data/.*> -d polling_interval=300 -d sourcetype=test_sourcetype -d index=default |
Edit input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_s3/test_generic_s3_input -d aws_account=test_account -d aws_iam_role=test_iam_role -d host_name=s3.ap-south-1.amazonaws.com -d aws_s3_region=ap-south-1 -d bucket_name=test-bucket -d key_name=TestData -d parse_csv_with_header=0 -d parse_csv_with_delimiter=<encode from actual value → ,> -d initial_scan_datetime=<encode from actual value → 2023-01-01T00:00:00Z> -d terminal_scan_datetime=<encode from actual value → 2023-01-10T00:00:00Z> -d ct_blacklist=<encode from actual value → ^$> -d blacklist=<encode from actual value → Test/.*> -d whitelist=<encode from actual value → Data/.*> -d polling_interval=300 -d sourcetype=test_sourcetype -d index=default -d recursion_depth=2 -d max_retries=5 |
|
DELETE | Delete input | curl -u admin:password -X DELETE https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_s3/test_generic_s3_input |
Incremental S3 input¶
Manage or configure Incremental S3 inputs in the add-on.
API Endpoints
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_splunk_ta_aws_logs
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_splunk_ta_aws_logs/<incremental_s3_input_name>
GET, POST, or DELETE
API for the AWS Incremental S3 input.
Request URL parameters
Parameter | Default | Description |
---|---|---|
output_mode | - | If output_mode=json , response is returned in JSON format. |
Request body parameters
Parameter | Required | Default value | Description |
---|---|---|---|
name | 1 | - | Unique name for input. |
aws_account | 1 | - | AWS account name. |
aws_iam_role | 0 | - | AWS IAM role. |
host_name | 0 | - | The host name of the S3 service. |
aws_s3_region | 0 | - | The AWS region that contains the S3 bucket. |
bucket_name | 1 | - | The AWS S3 bucket name. |
log_type | 1 | - | The type of logs to ingest. Available log types are cloudtrail, elb:accesslogs, cloudfront:accesslogs and s3:accesslogs. |
log_file_prefix | 0 | - | Configure the prefix of the log file, which, along with other path elements, forms the URL under which the add-on searches the log files. |
log_start_date | 0 | - | The start date of the log. Format = %Y-%m-%d. |
bucket_region | 0 | - | The AWS region where the S3 bucket exists. |
distribution_id | 0 | - | CloudFront distribution id. Specify only when creating an input for collecting CloudFront access logs. |
max_fails | 0 | 10000 | Stop discovering new keys if the number of failed files exceeds max_fails. |
max_number_of_process | 0 | 2 | Maximum number of processes. |
max_number_of_thread | 0 | 4 | Maximum number of threads. |
max_retries | 0 | -1 | Max number of retries to collect data upon failing requests. Specify -1 to retry until success. |
private_endpoint_enabled | 0 | - | Whether to use private endpoint. Specify 0 to disable, or 1 to enable. |
s3_private_endpoint_url | 1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with the S3 service. |
sts_private_endpoint_url | 1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with the STS service. |
interval | 0 | 1800 | Data collection interval, in seconds. |
sourcetype | 0 | aws:s3 | Sourcetype of collected data. |
index | 1 | default | Splunk index to ingest data. Default is main. |
Examples
GET | List of all inputs | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_splunk_ta_aws_logs |
List specified input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_splunk_ta_aws_logs/test_incremental_s3_input |
|
POST | Create input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_splunk_ta_aws_logs -d name=test_incremental_s3_input -d aws_account=test_account -d aws_iam_role=test_iam_role -d host_name=s3.amazonaws.com -d aws_s3_region=ap-south-1 -d bucket_name=testing-bucket-05 -d log_type=<encode from actual value → s3:accesslogs> -d log_file_prefix=test-prefix -d log_start_date=2023-01-01 -d bucket_region=ap-south-1 -d interval=1800 -d sourcetype=test_sourcetype -d index=default |
Edit input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_splunk_ta_aws_logs/test_incremental_s3_input -d aws_account=test_account -d aws_iam_role=test_iam_role -d host_name=s3.amazonaws.com -d aws_s3_region=ap-south-1 -d bucket_name=testing-bucket-05 -d log_type=<encode from actual value → s3:accesslogs> -d log_file_prefix=test-prefix -d log_start_date=2023-01-01 -d bucket_region=ap-south-1 -d interval=1800 -d sourcetype=test_sourcetype -d index=default |
|
DELETE | Delete input | curl -u admin:password -X DELETE https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_splunk_ta_aws_logs/test_incremental_s3_input |
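The create example above uses the s3:accesslogs log type. When log_type is cloudfront:accesslogs, the distribution_id parameter described in the table also applies. The following is an illustrative sketch only; the bucket, prefix, and distribution ID are placeholders:

curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_splunk_ta_aws_logs \
  -d name=test_cloudfront_logs_input -d aws_account=test_account -d bucket_name=test-cloudfront-logs-bucket \
  --data-urlencode "log_type=cloudfront:accesslogs" \
  -d log_file_prefix=cf-logs -d log_start_date=2023-01-01 -d bucket_region=ap-south-1 \
  -d distribution_id=EDFDVBD6EXAMPLE -d interval=1800 -d index=default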
Inspector input¶
Manage or configure Inspector inputs in the add-on.
API Endpoints
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_inspector
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_inspector/<inspector_input_name>
GET, POST, or DELETE
API for the Amazon Inspector input.
Request URL parameters
Parameter | Default | Description |
---|---|---|
output_mode | - | If output_mode=json , response is returned in JSON format. |
Request body parameters
Parameter | Required | Default value | Description |
---|---|---|---|
name | 1 | - | Unique name for input. |
account | 1 | - | AWS account name. |
aws_iam_role | 0 | - | AWS IAM role. |
regions | 1 | - | AWS regions that contain your data. Enter region IDs in a comma-separated list. |
polling_interval | 1 | 300 | Data collection interval, in seconds. |
sourcetype | 1 | aws:inspector | Sourcetype of collected data. |
index | 1 | default | Splunk index to ingest data. Default is main. |
Examples
GET | List of all inputs | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_inspector |
List specified input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_inspector/test_inspector_input |
|
POST | Create input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_inspector -d name=test_inspector_input -d account=test_account -d aws_iam_role=test_iam_role -d regions=<encode from actual value → ap-northeast-1,ap-south-1,ap-northeast-2> -d polling_interval=300 -d sourcetype=test_sourcetype -d index=default |
Edit input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_inspector/test_inspector_input -d account=test_account -d aws_iam_role=test_iam_role -d regions=<encode from actual value → ap-northeast-1,ap-south-1,ap-northeast-2> -d polling_interval=600 -d sourcetype=test_sourcetype -d index=default |
|
DELETE | Delete input | curl -u admin:password -X DELETE https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_inspector/test_inspector_input |
Inspector V2 input¶
Manage or configure Inspector V2 inputs in the add-on.
API Endpoints
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_inspector_v2
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_inspector_v2/<inspector_v2_input_name>
GET, POST, or DELETE
API for the Amazon Inspector V2 input.
Request URL parameters
Parameter | Default | Description |
---|---|---|
output_mode | - | If output_mode=json , response is returned in JSON format. |
Request body parameters
Parameter | Required | Default value | Description |
---|---|---|---|
name | 1 | - | Unique name for input. |
account | 1 | - | AWS account name. |
aws_iam_role | 0 | - | AWS IAM role. |
regions | 1 | - | AWS regions that contain your data. Enter region IDs in a comma-separated list. |
polling_interval | 1 | 300 | Data collection interval, in seconds. |
sourcetype | 1 | aws:inspector:v2:findings | Sourcetype of collected data. |
index | 1 | default | Splunk index to ingest data. Default is main. |
Examples
GET | List of all inputs | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_inspector_v2 |
List specified input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_inspector_v2/test_inspector_v2_input |
|
POST | Create input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_inspector_v2 -d name=test_inspector_v2_input -d account=test_account -d aws_iam_role=test_iam_role -d regions=<encode from actual value → ap-northeast-1,ap-south-1,ap-northeast-2> -d polling_interval=300 -d sourcetype=test_sourcetype -d index=default |
Edit input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_inspector_v2/test_inspector_v2_input -d account=test_account -d aws_iam_role=test_iam_role -d regions=<encode from actual value → ap-northeast-1,ap-south-1,ap-northeast-2> -d polling_interval=600 -d sourcetype=test_sourcetype -d index=default |
|
DELETE | Delete input | curl -u admin:password -X DELETE https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_inspector_v2/test_inspector_v2_input |
Kinesis input¶
Manage or configure Kinesis inputs in the add-on.
API Endpoints
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_kinesis
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_kinesis/<kinesis_input_name>
GET, POST, or DELETE
API for the AWS Kinesis input.
Request URL parameters
Parameter | Default | Description |
---|---|---|
output_mode | - | If output_mode=json , response is returned in JSON format. |
Request body parameters
Parameter | Required | Default value | Description |
---|---|---|---|
name | 1 | - | Unique name for input. |
account | 1 | - | AWS account name. |
aws_iam_role | 0 | - | AWS IAM role. |
region | 1 | - | AWS region for Kinesis stream. |
stream_names | 1 | - | Kinesis stream names in a comma-separated list. Leave empty to collect all streams. |
init_stream_position | 0 | LATEST | Stream position from where to start collecting data. Specify either TRIM_HORIZON (starting) or LATEST (recent live data). |
encoding | 0 | - | Encoding of stream data. Set to gzip or leave blank, which defaults to Base64. |
format | 0 | - | Format of the collected data. Specify CloudWatchLogs or leave empty. |
private_endpoint_enabled | 0 | - | Whether to use private endpoint. Specify 0 to disable, or 1 to enable. |
kinesis_private_endpoint_url | 1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with the Kinesis service. |
sts_private_endpoint_url | 1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with the STS service. |
sourcetype | 0 | aws:kinesis | Sourcetype of collected data. |
index | 1 | default | Splunk index to ingest data. Default is main. |
metric_index_flag | 0 | No | Whether to use metric index or event index. |
Examples
GET | List of all inputs | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_kinesis |
List specified input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_kinesis/test_kinesis_input |
|
POST | Create input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_kinesis -d name=test_kinesis_input -d account=test_account -d aws_iam_role=test_iam_role -d region=ap-south-1 -d stream_names=test-stream -d init_stream_position=LATEST -d encoding=gzip -d format=CloudwatchLogs -d sourcetype=test_sourcetype -d index=default |
Edit input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_kinesis/test_kinesis_input -d account=test_account -d aws_iam_role=test_iam_role -d region=ap-south-1 -d stream_names=test-stream -d init_stream_position=TRIM_HORIZON -d encoding=gzip -d format=CloudwatchLogs -d sourcetype=test_sourcetype -d index=default |
|
DELETE | Delete input | curl -u admin:password -X DELETE https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_kinesis/test_kinesis_input |
Metadata input¶
Manage or configure Metadata inputs in the add-on.
API Endpoints
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_metadata
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_metadata/<metadata_input_name>
GET, POST, or DELETE
API for the AWS Metadata input.
Request URL parameters
Parameter | Default | Description |
---|---|---|
output_mode | - | If output_mode=json , response is returned in JSON format. |
Request body parameters
Parameter | Required | Default value | Description |
---|---|---|---|
name | 1 | - | Unique name for input. |
account | 1 | - | AWS account name. |
aws_iam_role | 0 | - | AWS IAM role. |
regions | 1 | - | AWS regions from where to get data, split by ','. |
apis | 1 | - | APIs to collect data with, and intervals for each API, in the format of ec2_instances/3600, kinesis_stream/3600. |
sourcetype | 0 | aws:metadata | Sourcetype of collected data. |
index | 1 | default | Splunk index to ingest data. Default is main. |
Examples
GET | List of all inputs | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_metadata |
List specified input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_metadata/test_metadata_input |
|
POST | Create input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_metadata -d name=test_metadata_input -d account=test_account -d aws_iam_role=test_iam_role -d regions=<encode from actual value → ap-northeast-1,ap-south-1,ap-northeast-3> -d apis=<encode from actual value → ec2_instances/3600, lambda_functions/3600, s3_buckets/3600> -d sourcetype=test_sourcetype -d index=default |
Edit input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_metadata/test_metadata_input -d account=test_account -d aws_iam_role=test_iam_role -d regions=<encode from actual value → ap-northeast-1,ap-south-1,ap-northeast-3> -d apis=<encode from actual value → ec2_instances/3600, lambda_functions/3600, s3_buckets/3600> -d sourcetype=test_sourcetype -d index=default |
|
DELETE | Delete input | curl -u admin:password -X DELETE https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_metadata/test_metadata_input |
SQS input¶
Manage or configure SQS inputs in the add-on.
API Endpoints
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_splunk_ta_aws_sqs
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_splunk_ta_aws_sqs/<sqs_input_name>
GET, POST, or DELETE
API for the AWS SQS input.
Request URL parameters
Parameter | Default | Description |
---|---|---|
output_mode | - | If output_mode=json , response is returned in JSON format. |
Request body parameters
Parameter | Required | Default value | Description |
---|---|---|---|
name | 1 | - | Unique name for input. |
aws_account | 1 | - | AWS account name. |
aws_iam_role | 0 | - | AWS IAM role. |
aws_region | 0 | - | List of AWS regions containing SQS queues. |
sqs_queues | 1 | - | AWS SQS queue names list, split by ",". |
interval | 1 | 30 | Data collection interval. |
sourcetype | 0 | - | Sourcetype of collected data. |
index | 1 | default | Splunk index to ingest data. Default is main. |
Examples
GET | List of all inputs | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_splunk_ta_aws_sqs |
List specified input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_splunk_ta_aws_sqs/test_sqs_input |
|
POST | Create input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_splunk_ta_aws_sqs -d name=test_sqs_input -d aws_account=test_account -d aws_iam_role=test_iam_role -d aws_region=<encode from actual value → ["ap-south-1","ap-northeast-1"]> -d sqs_queues=<encode from actual value → ["test-queue-1,test-queue-2","test-queue-3,test-queue-4"]> -d interval=30 -d sourcetype=test_sourcetype -d index=default |
Edit input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_splunk_ta_aws_sqs/test_sqs_input -d aws_account=test_account -d aws_iam_role=test_iam_role -d aws_region=<encode from actual value → ["ap-south-1","ap-northeast-1"]> -d sqs_queues=<encode from actual value → ["test-queue-1,test-queue-2","test-queue-3,test-queue-4"]> -d interval=30 -d sourcetype=test_sourcetype -d index=default |
|
DELETE | Delete input | curl -u admin:password -X DELETE https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_splunk_ta_aws_sqs/test_sqs_input |
SQS-based S3 input¶
Manage or configure SQS-based S3 inputs in the add-on.
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_sqs_based_s3
https://<host>:<mgmt_port>/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_sqs_based_s3/<sqs_based_s3_input_name>
GET, POST, or DELETE
API for the AWS SQS-based S3 input.
Request URL parameters
Parameter | Default | Description |
---|---|---|
output_mode | - | If output_mode=json , response is returned in JSON format. |
Request body parameters
Parameter | Required | Default value | Description |
---|---|---|---|
name | 1 | - | Unique name for input. |
aws_account | 1 | - | AWS account name. |
aws_iam_role | 0 | - | AWS IAM role. |
using_dlq | 0 | 1 | Specify either 0 or 1 to disable or enable checking for dead letter queue (DLQ). |
sqs_sns_validation | 0 | 1 | Enable or disable SNS signature validation. Specify either 0 or 1. |
parse_csv_with_header | 0 | 0 | Enable parsing of CSV data with header. First line of file will be considered as header. Specify either 0 or 1. |
parse_csv_with_delimiter | 0 | , | Enable parsing of CSV data by chosen delimiter. Specify delimiter for parsing csv file. |
sqs_queue_region | 1 | - | Name of the AWS region in which the notification queue is located. |
sqs_queue_url | 1 | - | Name of SQS queue to which notifications of S3 file(s) creation are sent. |
sqs_batch_size | 0 | 10 | Max number of messages to pull from SQS in one batch. |
s3_file_decoder | 1 | - | Name of a decoder which decodes files into events: CloudTrail, Config, S3 Access Logs, ELB Access Logs, CloudFront Access Logs, and CustomLogs. |
private_endpoint_enabled | 0 | - | Whether to use private endpoint. Specify either 0 or 1. |
sqs_private_endpoint_url | 1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with SQS service. |
s3_private_endpoint_url | 1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with s3 service. |
sts_private_endpoint_url | 1 if private_endpoint_enabled=1 | - | Private endpoint url to connect with STS service. |
interval | 0 | 300 | Data collection interval. |
sourcetype | 1 | - | Sourcetype of collected data. |
index | 1 | default | Splunk index to ingest data. Default is main. |
metric_index_flag | 0 | No | Whether to use metric index or event index. |
Examples
GET | List of all inputs | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_sqs_based_s3 |
List specified input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_sqs_based_s3/test_sqs_based_s3_input |
|
POST | Create input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_sqs_based_s3 -d name=test_sqs_based_s3_input -d aws_account=test_account -d aws_iam_role=test_iam_role -d using_dlq=1 -d sqs_sns_validation=1 -d parse_csv_with_header=1 -d parse_csv_with_delimiter=<encode from actual value → ,> -d sqs_queue_region=ap-south-1 -d sqs_queue_url=<encode from actual value → https://sqs.ap-south-1.amazonaws.com/123456789012/test-queue > -d sqs_batch_size=10 -d s3_file_decoder=CustomLogs -d interval=300 -d sourcetype=test_sourcetype -d index=default |
Edit input | curl -u admin:password https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_sqs_based_s3/test_sqs_based_s3_input -d aws_account=test_account -d aws_iam_role=test_iam_role -d using_dlq=1 -d sqs_sns_validation=1 -d parse_csv_with_header=1 -d parse_csv_with_delimiter=<encode from actual value → |> -d sqs_queue_region=ap-south-1 -d sqs_queue_url=<encode from actual value → https://sqs.ap-south-1.amazonaws.com/123456789012/test-queue > -d sqs_batch_size=10 -d s3_file_decoder=Config -d interval=300 -d sourcetype=test_sourcetype -d index=default |
|
DELETE | Delete input | curl -u admin:password -X DELETE https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_sqs_based_s3/test_sqs_based_s3_input |
Lookups for the Splunk Add-on for AWS¶
Lookup files are located in $SPLUNK_HOME/etc/apps/Splunk_TA_aws/lookups on *nix systems and %SPLUNK_HOME%\etc\apps\Splunk_TA_aws\lookups on Windows systems.
Lookup files map fields from Amazon Web Services (AWS) to CIM-compliant
values in the Splunk platform. The Splunk Add-on for AWS has the
following lookups:
Lookup name | Purpose |
---|---|
aws_config_action_lookup_741.csv | Maps the status field to a CIM-compliant value for the action field. |
aws_config_object_category_lookup_741.csv | Sorts the various AWS Config object categories into CIM-compliant values for the object_category field. |
aws_cloudtrail_action_status_741.csv | Maps the eventName and errorCode fields to CIM-compliant values for action and status. |
aws_cloudtrail_changetype_741.csv | Maps the eventSource to a CIM-compliant value for the change_type field. |
aws_health_error_type_741.csv | Maps ErrorCode to ErrorDetail, ErrorCode, ErrorDetail. |
aws_log_sourcetype_modinput_741.csv | Maps sourcetype to modinput. |
cloudfront_edge_location_lookup_741.csv | Maps the x_edge_location value to a human-readable edge_location_name. |
aws_vendor_product_aws_cloudtrail_741.csv | Defines CIM-compliant values for the vendor, product, and app fields based on the source type. |
aws_vpcflow_action_lookup_741.csv | Maps the vpcflow_action field to a CIM-compliant action field. |
aws_network_traffic_protocol_code_lookup_760.csv | Maps the numerical protocol code to CIM-compliant protocol and transport fields and a human-readable protocol_full_name field. |
aws_vm_size_to_resources_741.csv | Maps the instance_type field to CIM-compliant cpu_cores and mem_capacity fields. |
aws_cloudwatch_guardduty_category_750.csv | Defines the value for the CIM category field based on the subject of the event. |
aws_network_traffic_tcp_flags_760.csv | Maps the numeric TCP flag value to predefined values of the tcp_flag field. |
Saved searches for the Splunk Add-on for AWS¶
To enable or disable a saved search, follow these steps:
- From the Settings menu, choose Searches, reports, and alerts.
- Locate the saved search by filtering the list or entering the name of the saved search in the filter field to search for it.
- Under the Actions column of the saved search list, select Edit > Enable/Disable to enable or disable the saved search.
Saved searches cannot be scheduled using a free license.
The “Addon Metadata - Summarize AWS Inputs” saved search is disabled by
default, but you must enable this saved search in order to aggregate
inputs and accounts data in the summary
index.
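You can also enable a saved search over the REST API instead of through Splunk Web. The following curl sketch assumes the standard Splunk saved/searches endpoint and the add-on's app namespace; adjust the owner and app context to match your deployment:

curl -u admin:password -X POST \
  "https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/saved/searches/Addon%20Metadata%20-%20Summarize%20AWS%20Inputs/enable"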
The Splunk Add-on for AWS includes the following saved searches:
- AWS Bill - Monthly Latest Snapshot
- AWS Bill - Detailed Cost Latest Snapshot
- AWS Bill - Total Cost until Now
- AWS Bill - Total Cost until Now by Service
- AWS Bill - Total Cost until Now by Linked Account
- AWS Bill - Monthly Cost
- AWS Bill - Monthly Cost by Service
- AWS Bill - Monthly Cost by Linked Account
- AWS Bill - Current Month Cost until Now
- AWS Bill - Current Month Cost until Now by Service
- AWS Bill - Current Month Cost until Now by Linked Account
- AWS Bill - Daily Cost through Last Month - Blended
- AWS Bill - Daily Cost through Last Month by Service - Blended
- AWS Bill - Daily Cost through Last Month by Linked Account - Blended
- AWS Bill - Total Cost through Last Month by Region - Blended
- AWS Bill - Monthly Cost through Last Month by Region - Blended
- AWS Bill - Daily Cost through Last Month by Region - Blended
- AWS Bill - Total Daytime Cost through Last Month - Blended
- AWS Bill - Total Nighttime Cost through Last Month - Blended
- AWS Bill - Daily Cost through Last Month - Unblended
- AWS Bill - Total Cost through Last Month by Region - Unblended
- AWS Bill - Daily Cost through Last Month by Service - Unblended
- AWS Bill - Daily Cost through Last Month by Linked Account - Unblended
- AWS Bill - Monthly Cost through Last Month by Region - Unblended
- AWS Bill - Daily Cost through Last Month by Region - Unblended
- AWS Bill - Total Daytime Cost through Last Month - Unblended
- AWS Bill - Total Nighttime Cost through Last Month - Unblended
- Addon Metadata - Migrate AWS Accounts
- Addon Metadata - Summarize AWS Inputs
AWS Health Check Dashboards¶
The Health Check dashboards in the Splunk Add-on for AWS let you monitor deployment performance and make it easier for users to troubleshoot and mitigate issues faster. They provide the following insights from your AWS add-on configuration and deployment.
Dashboard | Panels | Description |
---|---|---|
Health Overview (Provides information for all the errors and warnings generated from the inputs configured in AWS add-on) | Error count by categories | Displays the count of errors by categories like configuration error, network error, etc. Error count panels contain drilldowns which redirect to the Error Details dashboard containing information on possible reasons and resolutions for the errors. Thus, clicking on the error count will redirect to the Error Details dashboard, from where the user can identify and mitigate issues faster. |
Warning count | Displays the count of warning messages. The warning count panel contains drilldowns which redirect to the Warning Details dashboard containing information on possible reasons and resolutions for the warnings. Thus, clicking on the warning count will redirect to the Warning Details dashboard, from where the user can identify and eradicate unnecessary warnings. | |
Error count timechart | These timecharts display the count of errors over time based on hosts, input types, input names, and error categories. | |
Resource Utilization (Provides information regarding the resource utilization by different types of inputs configured in the AWS add-on) | CPU and Memory utilization | Displays the CPU and memory utilization over time for single instance and multi instance inputs configured in the AWS add-on (single instance inputs are the inputs where Splunk spawns a single process for all inputs, whereas multi instance inputs are the inputs where Splunk spawns individual process for each input). This can be useful to identify over-utilization of resources which may affect your Splunk platform environment. |
Inputs count (single instance and multi instance) | Displays the number of inputs (enabled/disabled) configured in the AWS add-on. Number of inputs help to identify resource utilization, and can be scaled up or down, based on the requirements. | |
KV Store calls count | Displays the number of key value store calls over time. This is useful to examine the load on Splunk KV store as some of the inputs in the AWS add-on use KV store-based checkpointing mechanism. The KV store panel contains a drilldown to KV store Utilization dashboard which lists the KV store calls count by collection name and KV store call method (GET/POST/DELETE). Thus, clicking on the KV store calls count panel within a particular time range will redirect to the KV Store Utilization dashboard for that time range, where the load can be analyzed based on collections used by AWS add-on, when compared to collections used by other apps and add-ons. Clicking on any collection name under the AWS add-on will display the timechart for average time taken by KV store calls on that particular collection. | |
S3 Inputs Health Details (Focuses on the Generic S3, Incremental S3, and SQS-based S3 input types) | Time lapse (delay) and throughput | Displays the delay (time taken) in fetching the data and throughput (size of data) over time. Useful to identify network latency or delay related issues. |
Error Message Details | Displays the error details encountered while input execution along with possible reasons and resolutions. |
In the Splunk Web UI, open the Splunk Add-on for AWS and click the Health Check tab. From the dropdown, select the dashboard that you want to monitor.
Performance reference for the Splunk Add-on for AWS data inputs¶
Many factors impact throughput performance. The rate at which the Splunk Add-on for AWS ingests input data varies depending on a number of variables: deployment topology, number of keys in a bucket, file size, file compression format, number of events in a file, event size, and hardware and networking conditions.
This section provides measured throughput data achieved under certain operating conditions and draws from the performance testing results some rough conclusions and guidelines on tuning AWS add-on throughput performance. Use the information here as a basis for estimating and optimizing the AWS add-on throughput performance in your own production environment. As performance varies based on user characteristics, application usage, server configurations, and other factors, specific performance results cannot be guaranteed. Consult Splunk Support for accurate performance tuning and sizing.
Reference hardware and software environment¶
The throughput data and conclusions provided here are based on performance testing using Splunk platform instances (dedicated heavy forwarders and indexers) running on the following environment:
Instance type | M4 Quadruple Extra Large (m4.4xlarge) |
---|---|
Memory | 64 GB |
Compute Units (ECU) | 53.5 |
vCPU | 16 |
Storage (GB) | 0 (EBS only) |
Arch | 64-bit |
EBS Optimized (Max Bandwidth) | 2000 Mbps |
Network performance | High |
The following settings are configured in outputs.conf
on the heavy forwarder:
useACK = true
maxQueueSize = 15MB
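For context, these settings live in an outputs.conf stanza on the heavy forwarder. The following is a minimal sketch under an assumed tcpout group name and indexer address; only useACK and maxQueueSize come from the test environment described above:

# outputs.conf on the heavy forwarder (group name and indexer address are placeholders)
[tcpout]
defaultGroup = primary_indexers
useACK = true
maxQueueSize = 15MB

[tcpout:primary_indexers]
server = <indexer_host>:9997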
Measured performance data¶
The throughput data provided here is the maximum performance for each single input achieved in performance testing under specific operating conditions and is subject to change when any of the hardware and software variables changes. Use this data as a rough reference only.
Single-input max throughput¶
Data Input | Sourcetype | Max Throughput (KBs) | Max EPS (events) | Max Throughput (GB/day) |
---|---|---|---|---|
Generic S3 | aws:elb:accesslogs(plain text, syslog, event size 250B, S3 key size 2MB) | 17,000 | 86,000 | 1,470 |
Generic S3 | aws:cloudtrail (gz, json, event size 720B, S3 key size 2MB) | 11,000 | 35,000 | 950 |
Incremental S3 | aws:elb:accesslogs(plain text, syslog, event size 250B, S3 key size 2MB) | 11,000 | 43,000 | 950 |
Incremental S3 | aws:cloudtrail (gz, json, event size 720B, S3 key size 2MB) | 7,000 | 10,000 | 600 |
SQS-based S3 | aws:elb:accesslogs (plain text, syslog, event size 250B, S3 key size 2MB) | 12,000 | 50,000 | 1,000 |
SQS-based S3 | aws:elb:accesslogs (gz, syslog, event size 250B, S3 key size 2MB) | 24,000 | 100,000 | 2,000 |
SQS-based S3 | aws:cloudtrail (gz, json, event size 720B, S3 key size 2MB) | 13,000 | 19,000 | 1,100 |
CloudWatch logs | aws:cloudwatchlog:vpcflow | 1,000 | 6,700 | 100 |
CloudWatch (ListMetric, 10,000 metrics) | aws:cloudwatch | 240 (metrics/s) | NA | NA |
CloudTrail | aws:cloudtrail (gz, json, sqs=1000, 9K events/key) | 5,000 | 7,000 | 400 |
Kinesis | aws:cloudwatchlog:vpcflow (json, 10 shards) | 15,000 | 125,000 | 1,200 |
SQS | aws:sqs (json, event size 2.8K) | N/A | 160 | N/A |
Multi-inputs max throughput¶
The following throughput data was measured with multiple inputs configured on a heavy forwarder in an indexer cluster distributed environment.
Configuring more AWS accounts increases CPU usage and lowers throughput performance due to increased API calls. It is recommended that you consolidate AWS accounts when configuring the Splunk Add-on for AWS.
Data Input | Sourcetype | Max Throughput (KBs) | Max EPS (events) | Max Throughput (GB/day) |
---|---|---|---|---|
Generic S3 | aws:elb:accesslogs(plain text, syslog, event size 250B, S3 key size 2MB) | 23,000 | 108,000 | 1,980 |
Generic S3 | aws:cloudtrail (gz, json, event size 720B, S3 key size 2MB) | 45,000 | 130,000 | 3,880 |
Incremental S3 | aws:elb:accesslogs(plain text, syslog, event size 250B, S3 key size 2MB) | 34,000 | 140,000 | 2,930 |
Incremental S3 | aws:cloudtrail (gz, json, event size 720B, S3 key size 2MB) | 45,000 | 65,000 | 3,880 |
SQS-based S3 | aws:elb:accesslogs (plain text, syslog, event size 250B, S3 key size 2MB) | 35,000 | 144,000 | 3,000 |
SQS-based S3 | aws:elb:accesslogs (gz, syslog, event size 250B, S3 key size 2MB) | 42,000 | 190,000 | 3,600 |
SQS-based S3 | aws:cloudtrail (gz, json, event size 720B, S3 key size 2MB) | 45,000 | 68,000 | 3,900 |
CloudWatch logs | aws:cloudwatchlog:vpcflow | 1,000 | 6,700 | 100 |
CloudWatch (ListMetric) | aws:cloudwatch (10,000 metrics) | 240 (metrics/s) | NA | NA |
CloudTrail | aws:cloudtrail (gz, json, sqs=100, 9K events/key) | 20,000 | 15,000 | 1,700 |
Kinesis | aws:cloudwatchlog:vpcflow (json, 10 shards) | 18,000 | 154,000 | 1,500 |
SQS | aws:sqs (json, event size 2.8K) | N/A | 670 | N/A |
Max inputs benchmark per heavy forwarder¶
The following input number ceiling was measured with multiple inputs configured on a heavy forwarder in an indexer cluster distributed environment, where CPU and memory resources were utilized to their fullest.
If you have a smaller event size, fewer keys per bucket, or more available CPU and memory resources in your environment, you can configure more inputs than the maximum input number indicated in the table.
Data Input | Sourcetype | Format | Number of Keys/Bucket | Event Size | Max Inputs |
---|---|---|---|---|---|
S3 | aws:s3 | zip, syslog | 100K | 100B | 300 |
S3 | aws:cloudtrail | gz, json | 1,300K | 1KB | 30 |
Incremental S3 | aws:cloudtrail | gz, json | 1,300K | 1KB | 20 |
SQS-based S3 | aws:cloudtrail, aws:config | gz, json | 1,000K | 1KB | 50 |
Memory usage benchmark for generic S3 inputs¶
Event Size | Number of Events per Key | Total Number of Keys | Archive Type | Number of Inputs | Memory Used |
---|---|---|---|---|---|
1K | 1,000 | 10,000 | zip | 20 | 20G |
1K | 1,000 | 1,000 | zip | 20 | 12G |
1K | 1,000 | 10,000 | zip | 10 | 18G |
100B | 1,000 | 10,000 | zip | 10 | 15G |
Performance tuning and sizing guidelines¶
If you do not achieve the expected AWS data ingestion throughput, follow these steps to tune the throughput performance:
- Identify the bottleneck in your system that prevents it from achieving a higher level of throughput performance. The bottleneck in AWS data ingestion may lie in one of the following components:
  - The Splunk Add-on for AWS: its capacity to pull in AWS data through API calls.
  - Heavy forwarder: its capacity to parse and forward data to the indexer tier, which involves the throughput of the parsing, merging, and typing pipelines.
  - Indexer: the index pipeline throughput. To troubleshoot the indexing performance on the heavy forwarder and indexer, refer to Troubleshooting indexing performance in the Capacity Planning Manual.
  A chain is only as strong as its weakest link: the capacity of the bottleneck is the capacity of the entire system as a whole. Only by identifying and tuning the performance of the bottleneck component can you improve the overall system performance.
- Tune the performance of the bottleneck component. If the bottleneck lies in heavy forwarders or indexers, refer to the Summary of performance recommendations in the Capacity Planning Manual. If the bottleneck lies in the Splunk Add-on for AWS, adjust the following key factors that usually impact AWS data input throughput:
  - Parallelization settings: To achieve optimal throughput performance, you can set the parallelIngestionPipelines value to 2 in server.conf if your resource capacity permits (see the sketch after this list). For information about parallelIngestionPipelines, see Parallelization settings in the Splunk Enterprise Capacity Planning Manual.
  - AWS data inputs: When there is no shortage of resources, adding more inputs in the add-on increases throughput, but it also consumes more memory and CPU. Increase the number of inputs to improve throughput until memory or CPU runs short. If you are using SQS-based S3 inputs, you can horizontally scale out data collection by configuring more inputs on multiple heavy forwarders to consume messages from the same SQS queue.
  - Number of keys in a bucket: For both the Generic S3 and Incremental S3 inputs, the number of keys (or objects) in a bucket impacts initial data collection performance. The first time a Generic or Incremental S3 input collects data from a bucket, the more keys the bucket contains, the longer the list operation takes to complete and the more memory is consumed. A large number of keys in a bucket requires a large amount of memory for S3 inputs during initial data collection and limits the number of inputs you can configure in the add-on. If applicable, you can use a log file prefix to subset keys in a bucket into smaller groups and configure different inputs to ingest them separately. For information about how to configure inputs to use a log file prefix, see Add an S3 input for Splunk Add-on for AWS. For SQS-based S3 inputs, the number of keys in a bucket is not a primary factor, since data collection can be horizontally scaled out based on messages consumed from the same SQS queue.
  - File format: Compressed files consume much more memory than plain text files.
- When you have resolved the bottleneck, see if the improved performance meets your requirements. If not, repeat the previous steps to identify the next bottleneck in the system and address it until the expected overall throughput performance is achieved.
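The parallelIngestionPipelines setting mentioned in the list above lives in server.conf. A minimal sketch, assuming you edit the local configuration on the heavy forwarder and have verified that CPU and memory headroom permit a second pipeline set:

# $SPLUNK_HOME/etc/system/local/server.conf
[general]
parallelIngestionPipelines = 2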
Performance reference for the Kinesis input in the Splunk Add-on for AWS¶
This page provides the reference information about Splunk’s performance testing of the Kinesis input in Splunk Add-on for AWS. The testing was performed on version 4.0.0, when the Kinesis input was first introduced. Use this information to enhance the performance of your own Kinesis data collection tasks.
Many factors impact performance results, including file size, file compression, event size, deployment architecture, and hardware. These results represent reference information and do not represent performance in all environments.
Summary¶
While results in different environments will vary, Splunk’s performance testing of the Kinesis input showed the following:
- Each Kinesis input can handle up to 6 MB/s of data, with a daily ingestion volume of 500 GB.
- More shards can slightly improve the performance. Three shards are recommended for large streams.
Testing architecture¶
Splunk tested the performance of the Kinesis input using a single-instance Splunk Enterprise 6.4.0 on an m4.4xlarge AWS EC2 instance to ensure CPU, memory, storage, and network did not introduce any bottlenecks. See the following instance specs:
Instance type | M4 Quadruple Extra Large (m4.4xlarge) |
---|---|
Memory | 64 GB |
ECU | 53.5 |
Cores | 16 |
Storage | 0 GB (EBS only) |
Architecture | 64-bit |
Network performance | High |
EBS Optimized: Max Bandwidth | 250 MB/s |
Test scenario¶
Splunk tested the following parameters to target the use case of high-volume VPC flow logs ingested through a Kinesis stream:
- Shard numbers: 3, 5, and 10 shards
- Event size: 120 bytes per event
- Number of events: 20,000,000
- Compression: gzip
- Initial stream position: TRIM_HORIZON
AWS reports that each shard is limited to 5 read transactions per second, up to a maximum read rate of 2 MB per second. Thus, with 10 shards, the theoretical upper limit is 20 MB per second.
Test results¶
Splunk observed a data ingestion rate of 6 million events per minute at peak, which is 100,000 events per second. Because each event is 120 bytes, this corresponds to a peak throughput of approximately 12 MB/s (100,000 events per second × 120 bytes).
Splunk observed an average throughput of 6 MB/s for a single Kinesis modular input, or a daily ingestion throughput of approximately 500 GB.
After reducing the shard number from 10 shards to 3 shards, Splunk observed a throughput downgrade of approximately 10%.
During testing, Splunk observed the following resource usage on the instance:
- Normalized CPU usage of approximately 30%
- Python memory usage of approximately 700 MB
The indexer is the largest consumer of CPU, and the modular input is the largest consumer of memory.
AWS throws a ProvisionedThroughputExceededException if a call returns 10 MB of data and subsequent calls are made within the next 5 seconds. While testing with three shards, Splunk observed this error only once every one to five minutes.
Ended: Reference
Release Notes ↵
Release notes for the Splunk Add-on for AWS¶
Version 7.8.0 of the Splunk Add-on for Amazon Web Services was released on November 26, 2024.
The Billing (Legacy) input is deprecated as of add-on version 7.6.0. Configure Billing (Cost and Usage Report) inputs to collect billing data.
The file-based checkpoint mechanism was migrated to the Splunk KV Store for the following inputs in the versions listed below. These inputs must be disabled whenever the Splunk software is restarted; otherwise, data is duplicated for your already configured inputs. Input disablement is not applicable to the Kinesis inputs.
Version 7.1.0
- Billing Cost and Usage Report
- CloudWatch Metrics
- Incremental S3
Version 7.3.0
- Inspector
- InspectorV2
- Config Rules
- Cloudwatch Logs
- Kinesis
Version 7.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Security Lake. Configure the Splunk Add-on for AWS to ingest data from all supported AWS data sources into your Splunk platform deployment.
If you use both the Splunk Add-on for Amazon Security Lake as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Security Lake before upgrading the Splunk Add-on for AWS to version 7.0.0 or later in order to avoid any data duplication and discrepancy issues.
Version 6.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Kinesis Firehose. Configure the Splunk Add-on for AWS to ingest data from all supported AWS data sources into your Splunk platform deployment.
If you use both the Splunk Add-on for Amazon Kinesis Firehose as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Kinesis Firehose after upgrading the Splunk Add-on for AWS to version 6.0.0 or later in order to avoid any data duplication and discrepancy issues.
Data that you previously onboarded through the Splunk Add-on for Amazon Kinesis Firehose will still be searchable, and your existing searches will be compatible with version 6.0.0 of the Splunk Add-on for AWS.
If you are not currently using the Splunk Add-on for Amazon Kinesis Firehose, but plan to use it in the future, then the best practice is to download and configure version 6.0.0 or later of the Splunk Add-on for AWS, instead of the Splunk Add-on for Amazon Kinesis Firehose.
Compatibility¶
Version 7.8.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 9.1.x, 9.2.x, 9.3.x |
CIM | 5.1.1 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, EventBridge (CloudWatch API, S3 Event Notifications using EventBridge), Inspector Classic, Inspector, Kinesis, S3, VPC Flow Logs, Transit Gateway Flow Logs, Billing Cost and Usage Report, Metadata, SQS, SNS, AWS Identity and Access Management (IAM) Access Analyzer, AWS Security Hub findings, and Amazon Security Lake events |
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 7.8.0 of the Splunk Add-on for AWS contains the following new and changed features:
- Provided support for data collection of multiple CSV files inside ZIP compression in Generic S3 input.
- Provided support for input creation, listing, and deletion in CLI tool for all the inputs.
- Provided support for parsing Amazon S3 Event Notifications using Amazon EventBridge in SQS-Based S3 input.
- Provided support for single space delimited files data collection in SQS-Based S3 and Generic S3 inputs.
- Enhanced the data pulling logic in Inspector (v2) input. Findings with “Resolved” state will also be collected.
- For the SQS-based S3 input, enhanced the SNS signature validation and deprecated the `SNS message max age` parameter.
Fixed issues¶
Version 7.8.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 7.8.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 7.8.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Third-party software attributions for the Splunk Add-on for Amazon Web Services
Release history for the Splunk Add-on for AWS¶
Latest release¶
The latest version of the Splunk Add-on for Amazon Web Services is version 7.8.0. See Release notes for the Splunk Add-on for AWS for the release notes of this latest version.
Version 7.7.1¶
Compatibility¶
Version 7.7.1 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 9.1.x, 9.2.x, 9.3.x |
CIM | 5.1.1 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, EventBridge (CloudWatch API), Inspector Classic, Inspector, Kinesis, S3, VPC Flow Logs, Transit Gateway Flow Logs, Billing Cost and Usage Report, Metadata, SQS, SNS, AWS Identity and Access Management (IAM) Access Analyzer, AWS Security Hub findings, and Amazon Security Lake events |
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 7.7.1 of the Splunk Add-on for AWS contains the following new and changed features:
- Fixed the OS compatibility issue for the Security Lake input.
Fixed issues¶
Version 7.7.1 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 7.7.1 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 7.7.1 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Third-party software attributions for the Splunk Add-on for Amazon Web Services
Version 7.7.0¶
Compatibility¶
Version 7.7.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 9.1.x, 9.2.x, 9.3.x |
CIM | 5.1.1 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, EventBridge (CloudWatch API), Inspector Classic, Inspector, Kinesis, S3, VPC Flow Logs, Transit Gateway Flow Logs, Billing Cost and Usage Report, Metadata, SQS, SNS, AWS Identity and Access Management (IAM) Access Analyzer, AWS Security Hub findings, and Amazon Security Lake events |
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 7.7.0 of the Splunk Add-on for AWS contains the following new and changed features:
- Provided support for metric index in the VPC Flow Logs input.
  - The fields `bytes` and `packets` are considered as metric measures, and the rest of the fields as dimensions.
  - Events that have the `log_status` field set to `NODATA` or `SKIPDATA` are not ingested into the metric index, because they don't contain any values for the metric measures.
- Enhanced the time window calculation mechanism for CloudWatch Logs input data collection. Added the `Query Window Size` parameter in the UI. For more information, see CloudWatch Log inputs.
- Fixed the data miss issue in the Generic S3 input.
- Provided compatibility for IPv6.
- Security fixes.
Fixed issues¶
Version 7.7.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 7.7.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 7.7.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Third-party software attributions for the Splunk Add-on for Amazon Web Services
Version 7.6.0¶
Compatibility¶
Version 7.6.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.2.x, 9.0.x, 9.1.x, 9.2.x |
CIM | 5.1.1 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, EventBridge (CloudWatch API), Inspector Classic, Inspector, Kinesis, S3, VPC Flow Logs, Transit Gateway Flow Logs, Billing Cost and Usage Report, Metadata, SQS, SNS, AWS Identity and Access Management (IAM) Access Analyzer, AWS Security Hub findings, and Amazon Security Lake events |
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 7.6.0 of the Splunk Add-on for AWS contains the following new and changed features:
- Security Lake input now supports OCSF schema v1.1 for AWS log sources Version 2. For more information, see the OCSF source identification section of the Open Cybersecurity Schema Framework (OCSF) topic in the AWS documentation.
- Deprecated Billing (Legacy) input. Configure Billing (Cost and Usage Report) input to collect billing data. For more information, see Cost and Usage Report inputs.
- Provided support for Transit Gateway Flow Logs.
  - The default log format in text file format is supported.
  - The SQS-based S3 input type is supported in the pull-based mechanism.
  - Transit Gateway Flow Logs can also be collected through the push-based mechanism.
- Enhanced the input execution of the CloudWatch input to improve performance. Added the `CloudWatch Max Threads` parameter on the Configuration > Add-on Global Settings page. For more information, see Add-on Global Settings.
- CIM fields `protocol`, `transport`, and `protocol_full_name` were updated as per CIM best practices. This change impacts the following source types:
  - aws:cloudwatchlogs:vpcflow
  - aws:cloudtrail
  - aws:transitgateway:flowlogs
- Security fixes.
- Minor bug fixes.
Fixed issues¶
Version 7.6.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 7.6.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 7.6.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Third-party software attributions for the Splunk Add-on for Amazon Web Services
Version 7.5.1¶
Compatibility¶
Version 7.5.1 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.2.x, 9.0.x, 9.1.x |
CIM | 5.1.1 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, EventBridge (CloudWatch API), Inspector Classic, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, Metadata, SQS, SNS, AWS Identity and Access Management (IAM) Access Analyzer, AWS Security Hub findings, and Amazon Security Lake events |
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 7.5.1 of the Splunk Add-on for AWS contains the following new and changed features:
- Provided support for Python 3.9.
- Minor UI Improvements for a more streamlined upgrade experience.
Fixed issues¶
Version 7.5.1 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 7.5.1 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 7.5.1 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Third-party software attributions for the Splunk Add-on for Amazon Web Services
Version 7.5.0¶
Version 7.5.0 of the Splunk Add-on for Amazon Web Services was released on April 2, 2024.
Compatibility¶
Version 7.5.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.2.x, 9.0.x, 9.1.x |
CIM | 5.1.1 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, EventBridge (CloudWatch API), Inspector Classic, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, Metadata, SQS, SNS, AWS Identity and Access Management (IAM) Access Analyzer, AWS Security Hub findings, and Amazon Security Lake events |
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 7.5.0 of the Splunk Add-on for AWS contains the following new and changed features:
- Provided support for the CloudTrail Lake input.
- Provided support for Assume Role in the CloudWatch Logs input.
- Enhanced the CIM support for the `aws:cloudwatch:guardduty` and `aws:cloudwatchlogs:guardduty` source types.
  - The CIM field `mitre_technique_id` was removed in this release because:
    - The existing values were found to be inaccurate and misleading.
    - The vendor does not provide the MITRE technique IDs in the GuardDuty events.
  - The fields `src_type` and `dest_type` were corrected or added for all the GuardDuty events.
  - The `transport` field was corrected for a few events.
  - The `signature` and `signature_id` fields were corrected for the GuardDuty events.
  - The `src` field was added or corrected for certain events.
  - See the following table to review the data model information based on the `service.actionType` field in your GuardDuty events.

Alerts Data Model | Intrusion Detection Data Model |
---|---|
`AWS_API_CALL`, `KUBERNETES_API_CALL`, `RDS_LOGIN_ATTEMPT`, or `service.actionType` is `null` | `NETWORK_CONNECTION`, `DNS_REQUEST`, `PORT_PROBE` |

- Enhanced data collection for the Metadata input. Previously, if an input was configured with multiple regions, the input collected global services data from every configured region, which caused data duplication within a single input execution. In this version, that duplication is removed: global services data is collected from only one region when multiple regions are configured in the same input.
- Minor bug fixes.
Fixed issues¶
Version 7.5.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 7.5.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 7.5.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Third-party software attributions for the Splunk Add-on for Amazon Web Services
Version 7.4.1¶
Version 7.4.1 of the Splunk Add-on for Amazon Web Services was released on February 21, 2024.
Starting in version 7.1.0 of the Splunk Add-on for AWS, the file based checkpoint mechanism was migrated to the Splunk KV Store for Billing Cost and Usage Report, CloudWatch Metrics, and Incremental S3 inputs. The inputs must be disabled whenever the Splunk software is restarted. Otherwise, it will result in data duplication against your already configured inputs.
Version 7.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Security Lake. Configure the Splunk Add-on for AWS to ingest across all AWS data sources for ingesting AWS data into your Splunk platform deployment.
If you use both the Splunk Add-on for Amazon Security Lake as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Security Lake before upgrading the Splunk Add-on for AWS to version 7.0.0 or later in order to avoid any data duplication and discrepancy issues.
Version 6.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Kinesis Firehose. Configure the Splunk Add-on for AWS to ingest across all AWS data sources for ingesting AWS data into your Splunk platform deployment.
If you use both the Splunk Add-on for Amazon Kinesis Firehose as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Kinesis Firehose after upgrading the Splunk Add-on for AWS to version 6.0.0 or later in order to avoid any data duplication and discrepancy issues.
Data that you previously onboarded through the Splunk Add-on for Amazon Kinesis Firehose will still be searchable, and your existing searches will be compatible with version 6.0.0 of the Splunk Add-on for AWS.
If you are not currently using the Splunk Add-on for Amazon Kinesis Firehose, but plan to use it in the future, then the best practice is to download and configure version 6.0.0 or later of the Splunk Add-on for AWS, instead of the Splunk Add-on for Amazon Kinesis Firehose.
Compatibility¶
Version 7.4.1 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.2.x, 9.0.x, 9.1.x |
CIM | 5.1.1 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, EventBridge (CloudWatch API), Inspector Classic, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, Metadata, SQS, SNS, AWS Identity and Access Management (IAM) Access Analyzer, AWS Security Hub findings, and Amazon Security Lake events |
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 7.4.1 of the Splunk Add-on for AWS contains the following new and changed features:
- Fixed an API loading issue for the Metadata Input.
Fixed issues¶
Version 7.4.1 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 7.4.1 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 7.4.1 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Third-party software attributions for the Splunk Add-on for Amazon Web Services
Version 7.4.0¶
Version 7.4.0 of the Splunk Add-on for Amazon Web Services was released on December 21, 2023.
Starting in version 7.1.0 of the Splunk Add-on for AWS, the file based checkpoint mechanism was migrated to the Splunk KV Store for Billing Cost and Usage Report, CloudWatch Metrics, and Incremental S3 inputs. The inputs must be disabled whenever the Splunk software is restarted. Otherwise, it will result in data duplication against your already configured inputs.
Version 7.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Security Lake. Configure the Splunk Add-on for AWS to ingest across all AWS data sources for ingesting AWS data into your Splunk platform deployment.
If you use both the Splunk Add-on for Amazon Security Lake as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Security Lake before upgrading the Splunk Add-on for AWS to version 7.0.0 or later in order to avoid any data duplication and discrepancy issues.
Version 6.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Kinesis Firehose. Configure the Splunk Add-on for AWS to ingest across all AWS data sources for ingesting AWS data into your Splunk platform deployment.
If you use both the Splunk Add-on for Amazon Kinesis Firehose as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Kinesis Firehose after upgrading the Splunk Add-on for AWS to version 6.0.0 or later in order to avoid any data duplication and discrepancy issues.
Data that you previously onboarded through the Splunk Add-on for Amazon Kinesis Firehose will still be searchable, and your existing searches will be compatible with version 6.0.0 of the Splunk Add-on for AWS.
If you are not currently using the Splunk Add-on for Amazon Kinesis Firehose, but plan to use it in the future, then the best practice is to download and configure version 6.0.0 or later of the Splunk Add-on for AWS, instead of the Splunk Add-on for Amazon Kinesis Firehose.
Compatibility¶
Version 7.4.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.2.x, 9.0.x |
CIM | 5.1.1 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, EventBridge (CloudWatch API), Inspector Classic, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, Metadata, SQS, SNS, AWS Identity and Access Management (IAM) Access Analyzer, AWS Security Hub findings, and Amazon Security Lake events |
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 7.4.0 of the Splunk Add-on for AWS contains the following new and changed features:
- Added decoding support for parsing Kinesis Firehose error data. This eliminates the need to use a Lambda function for decoding, so Lambda is no longer required for re-ingesting failed events from S3, which streamlines the process and reduces complexity.
- Enhanced UI experience features.
- Minor bug fixes.
Fixed issues¶
Version 7.4.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 7.4.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 7.4.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Third-party software attributions for the Splunk Add-on for Amazon Web Services
Version 7.3.0¶
Version 7.3.0 of the Splunk Add-on for Amazon Web Services was released on November 10, 2023.
Starting in version 7.1.0 of the Splunk Add-on for AWS, the file based checkpoint mechanism was migrated to the Splunk KV Store for Billing Cost and Usage Report, CloudWatch Metrics, and Incremental S3 inputs. The inputs must be disabled whenever the Splunk software is restarted. Otherwise, it will result in data duplication against your already configured inputs.
Version 7.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Security Lake. Configure the Splunk Add-on for AWS to ingest across all AWS data sources for ingesting AWS data into your Splunk platform deployment.
If you use both the Splunk Add-on for Amazon Security Lake as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Security Lake before upgrading the Splunk Add-on for AWS to version 7.0.0 or later in order to avoid any data duplication and discrepancy issues.
Version 6.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Kinesis Firehose. Configure the Splunk Add-on for AWS to ingest across all AWS data sources for ingesting AWS data into your Splunk platform deployment.
If you use both the Splunk Add-on for Amazon Kinesis Firehose as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Kinesis Firehose after upgrading the Splunk Add-on for AWS to version 6.0.0 or later in order to avoid any data duplication and discrepancy issues.
Data that you previously onboarded through the Splunk Add-on for Amazon Kinesis Firehose will still be searchable, and your existing searches will be compatible with version 6.0.0 of the Splunk Add-on for AWS.
If you are not currently using the Splunk Add-on for Amazon Kinesis Firehose, but plan to use it in the future, then the best practice is to download and configure version 6.0.0 or later of the Splunk Add-on for AWS, instead of the Splunk Add-on for Amazon Kinesis Firehose.
Compatibility¶
Version 7.3.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.2.x, 9.0.x |
CIM | 5.1.1 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, EventBridge (CloudWatch API), Inspector Classic, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, Metadata, SQS, SNS, AWS Identity and Access Management (IAM) Access Analyzer, AWS Security Hub findings, and Amazon Security Lake events |
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 7.3.0 of the Splunk Add-on for AWS contains the following new and changed features:
- The file checkpoint mechanism was migrated to the Splunk KV store for Inspector, InspectorV2, ConfigRule, CloudwatchLogs and Kinesis inputs.
- Updated existing Health Check Dashboards in order to enhance troubleshooting and performance monitoring. See AWS Health Check Dashboards for more information.
Fixed issues¶
Version 7.3.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 7.3.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 7.3.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Third-party software attributions for the Splunk Add-on for Amazon Web Services
Version 7.2.0¶
Version 7.2.0 of the Splunk Add-on for Amazon Web Services was released on October 17, 2023.
Compatibility¶
Version 7.2.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.2.x, 9.0.x |
CIM | 5.1.1 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, EventBridge (CloudWatch API), Inspector Classic, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, Metadata, SQS, SNS, AWS Identity and Access Management (IAM) Access Analyzer, AWS Security Hub findings, and Amazon Security Lake events |
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 7.2.0 of the Splunk Add-on for AWS contains the following new and changed features:
- Removed the AWS Lambda function dependency for the `aws:cloudwatchlogs:vpcflow` sourcetype.
- Added support to fetch logs from AWS Organization-level directory structures using the CloudTrail Incremental S3 input.
- Enhanced throttle support for the Metadata input in order to mitigate throttling and limit errors.
- Added the SNS message Max Age parameter to the SQS-based S3 input. This can be used to improve the efficiency of your data collection of messages within specified age limits.
Fixed issues¶
Version 7.2.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 7.2.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 7.2.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Third-party software attributions for the Splunk Add-on for Amazon Web Services
Version 7.1.0¶
Version 7.1.0 of the Splunk Add-on for Amazon Web Services was released on July 25, 2023.
Starting in version 7.1.0 of the Splunk Add-on for AWS, the file based checkpoint mechanism was migrated to the Splunk KV Store for Billing Cost and Usage Report, CloudWatch Metrics, and Incremental S3 inputs. The inputs must be disabled whenever the Splunk software is restarted. Otherwise, it will result in data duplication against your already configured inputs.
Version 7.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Security Lake. Configure the Splunk Add-on for AWS to ingest across all AWS data sources for ingesting AWS data into your Splunk platform deployment.
If you use both the Splunk Add-on for Amazon Security Lake as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Security Lake before upgrading the Splunk Add-on for AWS to version 7.0.0 or later in order to avoid any data duplication and discrepancy issues.
Version 6.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Kinesis Firehose. Configure the Splunk Add-on for AWS to ingest across all AWS data sources for ingesting AWS data into your Splunk platform deployment.
If you use both the Splunk Add-on for Amazon Kinesis Firehose as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Kinesis Firehose after upgrading the Splunk Add-on for AWS to version 6.0.0 or later in order to avoid any data duplication and discrepancy issues.
Data that you previously onboarded through the Splunk Add-on for Amazon Kinesis Firehose will still be searchable, and your existing searches will be compatible with version 6.0.0 of the Splunk Add-on for AWS.
If you are not currently using the Splunk Add-on for Amazon Kinesis Firehose, but plan to use it in the future, then the best practice is to download and configure version 6.0.0 or later of the Splunk Add-on for AWS, instead of the Splunk Add-on for Amazon Kinesis Firehose.
Compatibility¶
Version 7.1.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.2.x, 9.0.x |
CIM | 5.1.1 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, EventBridge (CloudWatch API), Inspector Classic, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, Metadata, SQS, SNS, AWS Identity and Access Management (IAM) Access Analyzer, AWS Security Hub findings, and Amazon Security Lake events |
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 7.1.0 of the Splunk Add-on for AWS contains the following new and changed features:
- Added support for the following services for the AWS metadata input:
- EKS
- ElasticCache
- EMR
- GuardDuty
- Network Firewall
- Route 53
- WAF
- WAF v2
- Enhancements have been made to the AWS metadata input for the following services:
- CloudFront
- EC2
- ELB
- IAM
- Kinesis Data Firehose
- VPC
- The file checkpoint mechanism was migrated to the Splunk KV store for Billing Cost and Usage Report, CloudWatch Metrics, and Incremental S3 inputs.
Fixed issues¶
Version 7.1.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 7.1.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 7.1.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Third-party software attributions for the Splunk Add-on for Amazon Web Services
Version 7.0.0¶
Version 7.0.0 of the Splunk Add-on for Amazon Web Services was released on May 18th, 2023.
Version 7.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Security Lake. Configure the Splunk Add-on for AWS to ingest across all AWS data sources for ingesting AWS data into your Splunk platform deployment.
If you use both the Splunk Add-on for Amazon Security Lake as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Security Lake before upgrading the Splunk Add-on for AWS to version 7.0.0 or later in order to avoid any data duplication and discrepancy issues.
Version 6.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Kinesis Firehose. Configure the Splunk Add-on for AWS to ingest across all AWS data sources for ingesting AWS data into your Splunk platform deployment.
If you use both the Splunk Add-on for Amazon Kinesis Firehose as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Kinesis Firehose after upgrading the Splunk Add-on for AWS to version 6.0.0 or later in order to avoid any data duplication and discrepancy issues.
Data that you previously onboarded through the Splunk Add-on for Amazon Kinesis Firehose will still be searchable, and your existing searches will be compatible with version 6.0.0 of the Splunk Add-on for AWS.
If you are not currently using the Splunk Add-on for Amazon Kinesis Firehose, but plan to use it in the future, then the best practice is to download and configure version 6.0.0 or later of the Splunk Add-on for AWS, instead of the Splunk Add-on for Amazon Kinesis Firehose.
Compatibility¶
Version 7.0.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.2.x, 9.0.x |
CIM | 5.1.1 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, EventBridge (CloudWatch API), Inspector Classic, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, Metadata, SQS, SNS, AWS Identity and Access Management (IAM) Access Analyzer, AWS Security Hub findings, and Amazon Security Lake events |
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 7.0.0 of the Splunk Add-on for AWS contains the following new and changed features:
- Data input support for the Amazon Security Lake service. Users will now be able to ingest security events from Amazon Security Lake, normalized to the Open Cybersecurity Schema Framework (OCSF) schema. The Amazon Security Lake service makes AWS security events available as multi-event Apache Parquet objects in an S3 bucket. Each object has a corresponding SQS notification, once ready for download. Open Cybersecurity Schema Framework (OCSF) is an open-source project, delivering an extensible framework for developing schemas, along with a vendor-agnostic core security schema.
Fixed issues¶
Version 7.0.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 7.0.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 7.0.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Third-party software attributions for the Splunk Add-on for Amazon Web Services
Version 6.4.0¶
Version 6.4.0 of the Splunk Add-on for Amazon Web Services was released on April 19th, 2023.
Version 6.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Kinesis Firehose. Configure the Splunk Add-on for AWS to ingest across all AWS data sources for ingesting AWS data into Splunk.
If you use both the Splunk Add-on for Amazon Kinesis Firehose as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Kinesis Firehose after upgrading the Splunk Add-on for AWS to version 6.0.0 or later in order to avoid any data duplication and discrepancy issues.
Data that you previously onboarded through the Splunk Add-on for Amazon Kinesis Firehose will still be searchable, and your existing searches will be compatible with version 6.0.0 of the Splunk Add-on for AWS.
If you are not currently using the Splunk Add-on for Amazon Kinesis Firehose, but plan to use it in the future, then the best practice is to download and configure version 6.0.0 or later of the Splunk Add-on for AWS, instead of the Splunk Add-on for Amazon Kinesis Firehose.
Compatibility¶
Version 6.4.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.1.x, 8.2.x, 9.0.x |
CIM | 5.1.1 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector Classic, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, Metadata, SQS, SNS, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Security Hub findings events |
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 6.4.0 of the Splunk Add-on for AWS contains the following new and changed features:
- Enhanced CIM support of the `aws:securityhub:findings` source type in order to support the new event format (consolidated controls feature).
- Fixed CIM extractions for the app and user fields and added extractions for user_name in the `aws:securityhub:findings` source type.
Fixed issues¶
Version 6.4.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 6.4.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 6.4.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Third-party software attributions for the Splunk Add-on for Amazon Web Services
Version 6.3.2¶
Version 6.3.2 of the Splunk Add-on for Amazon Web Services was released on February 23, 2023.
Version 6.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Kinesis Firehose. Configure the Splunk Add-on for AWS to ingest across all AWS data sources for ingesting AWS data into Splunk.
If you use both the Splunk Add-on for Amazon Kinesis Firehose as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Kinesis Firehose after upgrading the Splunk Add-on for AWS to version 6.0.0 or later in order to avoid any data duplication and discrepancy issues.
Data that you previously onboarded through the Splunk Add-on for Amazon Kinesis Firehose will still be searchable, and your existing searches will be compatible with version 6.0.0 of the Splunk Add-on for AWS.
If you are not currently using the Splunk Add-on for Amazon Kinesis Firehose, but plan to use it in the future, then the best practice is to download and configure version 6.0.0 or later of the Splunk Add-on for AWS, instead of the Splunk Add-on for Amazon Kinesis Firehose.
Compatibility¶
Version 6.3.2 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.1.x, 8.2.x, 9.0.x |
CIM | 4.20 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector Classic, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, Metadata, SQS, SNS, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Security Hub findings events |
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 6.3.2 of the Splunk Add-on for AWS contains the following new and changed features:
- Security related bug fixes. No new features added.
Fixed issues¶
Version 6.3.2 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 6.3.2 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 6.3.2 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Third-party software attributions for the Splunk Add-on for Amazon Web Services
Version 6.3.1¶
Version 6.3.1 of the Splunk Add-on for Amazon Web Services was released on January 23, 2023.
Version 6.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Kinesis Firehose. Configure the Splunk Add-on for AWS to ingest across all AWS data sources for ingesting AWS data into Splunk.
If you use both the Splunk Add-on for Amazon Kinesis Firehose as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Kinesis Firehose after upgrading the Splunk Add-on for AWS to version 6.0.0 or later in order to avoid any data duplication and discrepancy issues.
Data that you previously onboarded through the Splunk Add-on for Amazon Kinesis Firehose will still be searchable, and your existing searches will be compatible with version 6.0.0 of the Splunk Add-on for AWS.
If you are not currently using the Splunk Add-on for Amazon Kinesis Firehose, but plan to use it in the future, then the best practice is to download and configure version 6.0.0 or later of the Splunk Add-on for AWS, instead of the Splunk Add-on for Amazon Kinesis Firehose.
Compatibility¶
Version 6.3.1 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.1.x, 8.2.x, 9.0.x |
CIM | 4.20 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector Classic, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, Metadata, SQS, SNS, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Security Hub findings events |
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 6.3.1 of the Splunk Add-on for AWS contains the following new and changed features:
- Returned support for the AWS VPC default log format (v1-v2 fields only)
- Fix for generic S3 upgrade issue
Fixed issues¶
Version 6.3.1 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 6.3.1 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 6.3.1 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Third-party software attributions for the Splunk Add-on for Amazon Web Services
Version 6.3.0¶
Version 6.3.0 of the Splunk Add-on for Amazon Web Services was released on December 12, 2022.
Version 6.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Kinesis Firehose. Configure the Splunk Add-on for AWS to ingest across all AWS data sources for ingesting AWS data into Splunk.
If you use both the Splunk Add-on for Amazon Kinesis Firehose as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Kinesis Firehose after upgrading the Splunk Add-on for AWS to version 6.0.0 or later in order to avoid any data duplication and discrepancy issues.
Data that you previously onboarded through the Splunk Add-on for Amazon Kinesis Firehose will still be searchable, and your existing searches will be compatible with version 6.0.0 of the Splunk Add-on for AWS.
If you are not currently using the Splunk Add-on for Amazon Kinesis Firehose, but plan to use it in the future, then the best practice is to download and configure version 6.0.0 or later of the Splunk Add-on for AWS, instead of the Splunk Add-on for Amazon Kinesis Firehose.
Compatibility¶
Version 6.3.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.1.x, 8.2.x, 9.0.x |
CIM | 4.20 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector Classic, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, Metadata, SQS, SNS, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Security Hub findings events |
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 6.3.0 of the Splunk Add-on for AWS contains the following new and changed features:
Starting in version 6.3.0 of the Splunk Add-on for AWS, the VPC Flow log extraction format has been updated to include v3-v5 fields. Before upgrading to versions 6.3.0 and higher of the Splunk Add-on for AWS, Splunk platform deployments ingesting AWS VPC Flow Logs must update the log format in AWS VPC to include v3-v5 fields in order to ensure successful field extractions. For more information on updating the log format in AWS VPC, see the Configure VPC Flow Logs inputs for the Splunk Add-on for AWS topic in this manual.
- Expanded support for VPC Flow Logs, sourcetype `aws:cloudwatchlogs:vpcflow`:
  - Ingestion of VPC Flow Logs via SQS-based S3.
  - Support for the parsing of v3-v5 fields defined by AWS for VPC Flow Logs, for both the Splunk-defined custom log format and the select-all log format.
  - Validation of the native delivery of VPC Flow Logs through Kinesis Firehose.
- The addition of an `iam_list_policy` API to the Metadata input to fetch data related to:
  - Fetching all policies related to IAM using `iam:ListPolicy`.
  - Fetching permissions data using `iam:GetPolicyVersion`.
  - Linking users with policies: the `iam:ListUserPolicies` and `iam:ListAttachedUserPolicies` policies were added to `Iam_users` data.
- Support for the ingestion of `OversizedChangeNotification` events via the AWS Config > Config input.
- Expanded support for Network Load Balancer (NLB) access logs. The new field `elb_type` was created to distinguish between ELB, ALB, and NLB access logs.
- UI input page support to enable or disable CSV parsing and custom delimiter definition for Generic S3 and SQS-based S3 inputs.
Fields added and fields removed¶
See the following list of fields added and fields removed between the Splunk Add-on for AWS 6.2.0 and 6.3.0:
Source-type | app | Fields added | Fields removed |
---|---|---|---|
[u'aws:elb:accesslogs'] | AWS ELB | alpn_client_preference_list, destination_ip, connection_time, tls_named_group, log_version, chosen_cert_arn, alpn_be_protocol, domain_name, listener, tls_cipher, chosen_cert_serial, tls_handshake_time, elb_type, tls_protocol_version, destination_port, type, alpn_fe_protocol, incoming_tls_alert | |
Source-type | action | Fields added | Fields removed |
---|---|---|---|
[u'aws:cloudwatchlogs:vpcflow'] | unknown | tcp_flags, flow_direction, pkt_dstaddr, subnet_id, instance_id, traffic_path, pkt_srcaddr, sublocation_type, pkt_dst_aws_service, sublocation_id, vpc_id, type, az_id, pkt_src_aws_service | timestamp |
[u'aws:cloudwatchlogs:vpcflow'] | blocked, allowed | tcp_flags, flow_direction, pkt_dstaddr, subnet_id, instance_id, traffic_path, pkt_srcaddr, sublocation_type, pkt_dst_aws_service, sublocation_id, vpc_id, type, az_id, pkt_src_aws_service | |
Fixed issues¶
Version 6.3.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 6.3.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 6.3.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Third-party software attributions for the Splunk Add-on for Amazon Web Services
Version 6.2.0¶
Version 6.2.0 of the Splunk Add-on for Amazon Web Services was released on July 28th, 2022.
Version 6.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Kinesis Firehose. Configure the Splunk Add-on for AWS to ingest across all AWS data sources for ingesting AWS data into Splunk.
If you use both the Splunk Add-on for Amazon Kinesis Firehose as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Kinesis Firehose after upgrading the Splunk Add-on for AWS to version 6.0.0 or later in order to avoid any data duplication and discrepancy issues.
Data that you previously onboarded through the Splunk Add-on for Amazon Kinesis Firehose will still be searchable, and your existing searches will be compatible with version 6.0.0 of the Splunk Add-on for AWS.
If you are not currently using the Splunk Add-on for Amazon Kinesis Firehose, but plan to use it in the future, then the best practice is to download and configure version 6.0.0 or later of the Splunk Add-on for AWS, instead of the Splunk Add-on for Amazon Kinesis Firehose.
Compatibility¶
Version 6.2.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.0 and later |
CIM | 4.20 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector Classic, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, Metadata, SQS, SNS, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Security Hub findings events |
Versions 5.0.0 and above of the Splunk Add-on for AWS are Python 3 releases, and only compatible with Splunk platform versions 8.0.0 and later. To use version 5.0.0 or later of this add-on, upgrade your Splunk platform deployment to version 8.0.0 or later. For users of Splunk platforms 6.x.x and Splunk 7.x.x, the Splunk Add-on for Amazon Web Services version 4.6.1 is supported. Do not upgrade to Splunk Add-on for AWS 5.0.0 or above on these versions of the Splunk platform.
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 6.2.0 of the Splunk Add-on for AWS contains the following new and changed features:
- Support for the Inspector v2 API ingestion method (see the sketch after this list).
- Added Common Information Model (CIM) mappings for Inspector v2.
- Deprecation of the Description input.
- Added UI warning message and warning logs for Generic S3 inputs.
- Bug fixes.
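For orientation, the Inspector v2 ingestion method is built on the Inspector v2 API; the following boto3 sketch lists active findings. The region and filter are example assumptions, and this is not the add-on's internal implementation.

```python
import boto3

# Illustration only: the Inspector v2 ingestion method polls API calls of
# this kind. Region and filter values are example assumptions.
inspector2 = boto3.client("inspector2", region_name="us-east-1")

paginator = inspector2.get_paginator("list_findings")
for page in paginator.paginate(
    filterCriteria={"findingStatus": [{"comparison": "EQUALS", "value": "ACTIVE"}]}
):
    for finding in page["findings"]:
        print(finding["findingArn"], finding.get("severity"))
```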
If you use both the Splunk Add-on for Amazon Kinesis Firehose as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Kinesis Firehose after upgrading the Splunk Add-on for AWS to version 6.0.0 or later in order to avoid any data duplication and discrepancy issues. Data that you previously onboarded through the Splunk Add-on for Amazon Kinesis Firehose will still be searchable, and your existing searches will be compatible with version 6.0.0 of the Splunk Add-on for AWS. If you are not currently using the Splunk Add-on for Amazon Kinesis Firehose, but plan to use it in the future, then the best practice is to download and configure version 6.0.0 or later of the Splunk Add-on for AWS, instead of the Splunk Add-on for Amazon Kinesis Firehose.
Fixed issues¶
Version 6.2.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 6.2.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 6.2.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Third-party software attributions for the Splunk Add-on for Amazon Web Services
Version 6.1.0¶
Version 6.1.0 of the Splunk Add-on for Amazon Web Services was released on July 11, 2022.
Version 6.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Kinesis Firehose. Configure the Splunk Add-on for AWS to ingest data from all supported AWS data sources into the Splunk platform.
If you use both the Splunk Add-on for Amazon Kinesis Firehose as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Kinesis Firehose after upgrading the Splunk Add-on for AWS to version 6.0.0 or later in order to avoid any data duplication and discrepancy issues.
Data that you previously onboarded through the Splunk Add-on for Amazon Kinesis Firehose will still be searchable, and your existing searches will be compatible with version 6.0.0 of the Splunk Add-on for AWS.
If you are not currently using the Splunk Add-on for Amazon Kinesis Firehose, but plan to use it in the future, then the best practice is to download and configure version 6.0.0 or later of the Splunk Add-on for AWS, instead of the Splunk Add-on for Amazon Kinesis Firehose.
Compatibility¶
Version 6.1.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.0 and later |
CIM | 4.20 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, Metadata, SQS, SNS, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Security Hub findings events |
Versions 5.0.0 and above of the Splunk Add-on for AWS are Python 3 releases, and only compatible with Splunk platform versions 8.0.0 and later. To use version 5.0.0 or later of this add-on, upgrade your Splunk platform deployment to version 8.0.0 or later. For users of Splunk platforms 6.x.x and Splunk 7.x.x, the Splunk Add-on for Amazon Web Services version 4.6.1 is supported. Do not upgrade to Splunk Add-on for AWS 5.0.0 or above on these versions of the Splunk platform.
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 6.1.0 of the Splunk Add-on for AWS contains the following new and changed features:
- Support for the parsing of CSV files from AWS S3 (Generic S3 and SQS-based S3 ingestion methods). A conceptual sketch follows this list.
- Bug fixes.
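Conceptually, this feature parses delimited objects fetched from S3 into fields. The sketch below shows the idea with boto3 and Python's csv module; the bucket, key, and delimiter are hypothetical, and the add-on's own parser is configured through its input settings rather than code like this.

```python
import csv
import io

import boto3

# Conceptual sketch only: download an object from S3 and parse it as CSV.
# Bucket name, key, and delimiter below are hypothetical examples.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-bucket", Key="logs/example.csv")
body = obj["Body"].read().decode("utf-8")

reader = csv.DictReader(io.StringIO(body), delimiter="|")  # custom delimiter
for row in reader:
    # Each row becomes a field/value mapping, similar to an indexed event.
    print(row)
```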
If you use both the Splunk Add-on for Amazon Kinesis Firehose as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Kinesis Firehose after upgrading the Splunk Add-on for AWS to version 6.0.0 or later in order to avoid any data duplication and discrepancy issues. Data that you previously onboarded through the Splunk Add-on for Amazon Kinesis Firehose will still be searchable, and your existing searches will be compatible with version 6.0.0 of the Splunk Add-on for AWS. If you are not currently using the Splunk Add-on for Amazon Kinesis Firehose, but plan to use it in the future, then the best practice is to download and configure version 6.0.0 or later of the Splunk Add-on for AWS, instead of the Splunk Add-on for Amazon Kinesis Firehose.
Fixed issues¶
Version 6.1.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 6.1.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 6.1.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Third-party software attributions for the Splunk Add-on for Amazon Web Services
Version 6.0.0¶
Version 6.0.0 of the Splunk Add-on for Amazon Web Services was released on May 3, 2022.
Version 6.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Kinesis Firehose. Configure the Splunk Add-on for AWS to ingest data from all supported AWS data sources into the Splunk platform.
If you use both the Splunk Add-on for Amazon Kinesis Firehose as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Kinesis Firehose after upgrading the Splunk Add-on for AWS to version 6.0.0 or later in order to avoid any data duplication and discrepancy issues.
Data that you previously onboarded through the Splunk Add-on for Amazon Kinesis Firehose will still be searchable, and your existing searches will be compatible with version 6.0.0 of the Splunk Add-on for AWS.
If you are not currently using the Splunk Add-on for Amazon Kinesis Firehose, but plan to use it in the future, then the best practice is to download and configure version 6.0.0 or later of the Splunk Add-on for AWS, instead of the Splunk Add-on for Amazon Kinesis Firehose.
Compatibility¶
Version 6.0.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.0 and later |
CIM | 4.20 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, Metadata, SQS, SNS, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Security Hub findings events |
Versions 5.0.0 and above of the Splunk Add-on for AWS are Python 3 releases, and only compatible with Splunk platform versions 8.0.0 and later. To use version 5.0.0 or later of this add-on, upgrade your Splunk platform deployment to version 8.0.0 or later. For users of Splunk platforms 6.x.x and Splunk 7.x.x, the Splunk Add-on for Amazon Web Services version 4.6.1 is supported. Do not upgrade to Splunk Add-on for AWS 5.0.0 or above on these versions of the Splunk platform.
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 6.0.0 of the Splunk Add-on for AWS contains the following new and changed features:
- Version 6.0.0 of the Splunk Add-on for AWS includes a merge of all the capabilities of the Splunk Add-on for Amazon Kinesis Firehose:
  - Support for the following vendor products that were previously supported in the Splunk Add-on for Amazon Kinesis Firehose: AWS Identity and Access Management (IAM) Access Analyzer and AWS Security Hub findings events.
  - Support for HTTP Event Collector (HEC) data collection for AWS CloudTrail, AWS VPC Flow Logs, AWS GuardDuty, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Security Hub findings.
  - Support for the aws:cloudwatch:guardduty sourcetype from the Splunk Add-on for Kinesis Firehose. Support for the aws:cloudwatchlogs:guardduty sourcetype will be added in a future release of the Splunk Add-on for Amazon Web Services.
- Improved Common Information Model (CIM) mappings.
- UI component upgrades for compatibility with future versions of the Splunk software (a fast and intuitive UI with an improved look and feel).
- Added signature validation for SNS/SQS messages (see the sketch after this list).
- Added a Data Manager banner on the Splunk Add-on for AWS home page.
- Updated the source for the Metadata data input to match Data Manager functionality.
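For background on what signature validation of SNS messages involves, the following is a rough, self-contained sketch of verifying an SNS SignatureVersion 1 notification in Python with the cryptography package; it is an illustration under stated assumptions, not the add-on's implementation.

```python
import base64
import json
import urllib.request

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding


def verify_sns_notification(message_json: str) -> bool:
    """Rough sketch: verify an SNS 'Notification' message (SignatureVersion 1)."""
    msg = json.loads(message_json)

    # Build the canonical string AWS signs for Notification messages:
    # alternating field names and values, each followed by a newline.
    fields = ["Message", "MessageId", "Subject", "Timestamp", "TopicArn", "Type"]
    canonical = ""
    for field in fields:
        if field in msg:  # Subject is optional
            canonical += f"{field}\n{msg[field]}\n"

    # Fetch the signing certificate named in the message. A real validator
    # should also confirm the URL points at an amazonaws.com host.
    with urllib.request.urlopen(msg["SigningCertURL"]) as resp:
        cert = x509.load_pem_x509_certificate(resp.read())

    signature = base64.b64decode(msg["Signature"])
    try:
        # SignatureVersion 1 uses SHA1 with RSA; version 2 uses SHA256.
        cert.public_key().verify(
            signature, canonical.encode("utf-8"), padding.PKCS1v15(), hashes.SHA1()
        )
        return True
    except Exception:
        return False
```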
If you use both the Splunk Add-on for Amazon Kinesis Firehose as well as the Splunk Add-on for AWS on the same Splunk instance, then you must uninstall the Splunk Add-on for Amazon Kinesis Firehose after upgrading the Splunk Add-on for AWS to version 6.0.0 or later in order to avoid any data duplication and discrepancy issues. Data that you previously onboarded through the Splunk Add-on for Amazon Kinesis Firehose will still be searchable, and your existing searches will be compatible with version 6.0.0 of the Splunk Add-on for AWS. If you are not currently using the Splunk Add-on for Amazon Kinesis Firehose, but plan to use it in the future, then the best practice is to download and configure version 6.0.0 or later of the Splunk Add-on for AWS, instead of the Splunk Add-on for Amazon Kinesis Firehose.
Fixed issues¶
Version 6.0.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 6.0.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Added/Removed Common Information Model Fields¶
See the following tables for the CIM fields added or removed between Splunk Add-on for Amazon Web Services v5.2.2 and v6.0.0:
Sourcetype | eventName | Fields added in AWS 5.2.2 | Fields removed in AWS 6.0.0
---|---|---|---
aws:cloudtrail | DeleteNetworkInterface | object_id, action, status, user, src_user_type, object_attrs, src_user, user_id, object |
aws:cloudtrail | UpdateUser | user_id |

Source-type | source | Fields added in AWS 5.2.2 | Fields removed in AWS 6.0.0
---|---|---|---
aws:metadata | All | image_id |
See the following table for a list of fields added/removed between Splunk Add-on for Amazon Kinesis Firehose v1.3.2 and Splunk Add-on for Amazon Web Services v6.0.0:
Source-type | eventName | Fields added in Kinesis 1.3.2 | Fields removed in AWS 6.0.0
---|---|---|---
aws:cloudtrail | ListAliases | object_attrs |

Source-type | State | Fields added in Kinesis 1.3.2 | Fields removed in AWS 6.0.0
---|---|---|---
aws:metadata | All | availability_zone, instance_tenancy, currency_code, instance_count, duration, fixed_price, end, region, description, vm_os, vendor_region, start, vendor_product, offering_type, state, mem_capacity, vm_size, cpu_cores, usage_price, aws_account_id, vendor_account, id |

Source-type | source | Fields added in Kinesis 1.3.2 | Fields removed in AWS 6.0.0
---|---|---|---
aws:securityhub:finding | aws_eventbridgeevents_securityhub | instance_extract, vpc_extract, accesskey_extract, volume_extract, security_group_extract, managed_instance_extract, s3bucket_extract |
See the following table for a list of fields modified between Splunk Add-on for Amazon Web Services v5.2.2 and v6.0.0:
Sourcetype | CIM Field | eventName, Resources{}.Type | Vendor Field in AWS 5.2.2 | Vendor Field in AWS 6.0.0
---|---|---|---|---
 | user | eventName: ConsoleLogin | userIdentity.principalId | userIdentity.principalId OR userIdentity.userName
 | user_id | eventName: CreateUser, DeleteUser | userIdentity.principalId OR userIdentity.accountId OR userIdentity.sessionContext.sessionIssuer.principalId | userIdentity.principalId OR userIdentity.accountId OR userIdentity.sessionContext.sessionIssuer.principalId OR userIdentity.userName
 | action | eventName: DeleteLoginProfile | Static Value: deleted,unknown | Static Value: modified
 | object_category | eventName: DeleteNetworkInterface | Static Value: unknown | Static Value: network_interface
 | user_type | eventName: DeleteNetworkInterface | userIdentity.type | sessionContext.sessionIssuer.type
See the following table for a list of fields modified between Splunk Add-on for Amazon Kinesis Firehose v1.3.2 and Splunk Add-on for Amazon Web Services v6.0.0:
Sourcetype | CIM Field | eventName, Resources{}.Type | Vendor Field in Kinesis 1.3.2 | Vendor Field in AWS 6.0.0
---|---|---|---|---
 | action | eventName: DeleteLoginProfile | Static Value: deleted, unknown | Static Value: modified
 | status | eventName: DeleteNetworkInterface | Static Value: failure | Static Value: failure, success
 | dvc | All | Static Value: VPC Flow | interface_id
 | account_id | All | account_id | OwnerId
 | dest | Resources{}.Type: AwsEc2Instance, AwsEc2Volume, AwsIamAccessKey, AwsS3Bucket, AwsEc2Volume, AwsEc2Vpc | Resources.Details.AwsEc2Instance.IpV4Addresses | Resources{}.Id
 | dest_name | Resources{}.Type: AwsEc2Instance, AwsEc2Volume, AwsIamAccessKey, AwsS3Bucket, AwsEc2Volume, AwsEc2Vpc | Resources{}.Id |
CIM model changes¶
Source | eventName | Previous CIM model in AWS 5.2.2 | New CIM model in AWS 6.0.0 |
---|---|---|---|
Sourcetype | State | Previous CIM model in Kinesis 1.3.2 | New CIM model in AWS 6.0.0 |
---|---|---|---|
aws:metadata | All | Inventory.All_Inventory.Virtual_OS.Snapshot |
Third-party software attributions¶
Version 6.0.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Third-party software attributions for the Splunk Add-on for Amazon Web Services
Version 5.2.0¶
Version 5.2.0 of the Splunk Add-on for Amazon Web Services was released on October 4, 2021.
Compatibility¶
Version 5.2.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.0 and later |
CIM | 4.20 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, Metadata, SQS, and SNS. |
Versions 5.0.0 and above of the Splunk Add-on for AWS are Python 3 releases, and only compatible with Splunk platform versions 8.0.0 and later. To use version 5.0.0 or later of this add-on, upgrade your Splunk platform deployment to version 8.0.0 or later. For users of Splunk platforms 6.x.x and Splunk 7.x.x, the Splunk Add-on for Amazon Web Services version 4.6.1 is supported. Do not upgrade to Splunk Add-on for AWS 5.0.0 or above on these versions of the Splunk platform.
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 5.2.0 of the Splunk Add-on for AWS contains the following new and changed features:
- CIM 4.20 compatibility and enhanced CIM mapping
- UI component upgrades (jQuery) that are compatible with future versions of the Splunk software.
- The aws:cloudtrail sourcetype is updated for app field mapping.
See the following tables for information on field changes between 5.1.0 and 5.2.0:
Source-type | Fields added | Fields removed
---|---|---
aws:cloudfront:accesslogs | action, app, bytes, bytes_in, bytes_out, c_port, category, cs_protocol_version, dest, duration, fle_encrypted_fields, fle_status, http_content_type, http_method, http_referrer, http_referrer_domain, http_user_agent, http_user_agent_length, response_time, sc_content_len, sc_content_type, sc_range_end, sc_range_start, src, src_ip, src_port, status, time_to_first_byte, uri_path, url, url_domain, url_length, vendor_product, x_edge_detail_result_type |
aws:cloudtrail | action, authentication_method, change_type, dest, men_free, object, object_attrs, object_id, rule_action, src_user, src_user_name, src_user_type, status, user_name, vendor_account, vendor_product | user_agent, user_id, user_type
aws:cloudwatchlogs:guardduty | body, findingType |
aws:cloudwatchlogs:vpcflow | app, protocol_version, user_id, vendor_product |
aws:config | object_id, object_path, result, vendor_account, vendor_product |
aws:config:notification | object_attrs, object_path, result, user, vendor_product |
aws:description | enabled, user_id, family, status, description, time, type, snapshot |
aws:elb:accesslogs | ActionExecuted, ChosenCertArn, ClientPort, DomainName, ELB, ELBStatusCode, ErrorReason, MatchedRulePriority, ReceivedBytes, RedirectUrl, Request, RequestCreationTime, RequestProcessingTime, RequestTargetIP, RequestTargetPort, RequestType, ResponseProcessingTime, ResponseTime, SSLCipher, SSLProtocol, SentBytes, TargetGroupArn, TargetPort, TargetProcessingTime, TargetStatusCode, TraceId, UserAgent, action, app, bytes, bytes_in, bytes_out, category, dest, dest_port, http_method, http_user_agent, http_user_agent_length, response_time, src, src_ip, src_port, status, url, url_length, vendor_product |
aws:metadata | enabled, region, snapshot, status, time, user_id, vendor_region |
aws:s3 | AuthType, BucketCreationTime, BucketName, BucketOwner, BytesSent, CipherSuite, ErrorCode, HTTPMethod, HTTPStatus, HostHeader, HostId, ObjectSize, OperationKey, Referer, RemoteIp, RequestID, RequestKey, RequestURI, RequestURIPath, Requester, SignatureVersion, TLSVersion, TotalTime, TurnAroundTime, UserAgent, VersionId, action, bytes, bytes_out, category, dest, error_code, http_method, http_user_agent, http_user_agent_length, operation, response_time, src, src_ip, status, storage_name, url, url_domain, url_length, user, vendor_product |
aws:s3:accesslogs | action, category, http_referrer, http_referrer_domain, http_user_agent_length, src_ip, status, storage_name, url, url_length, vendor_product |
See the following table for a list of fields modified between 5.1.0 and 5.2.0:
Sourcetype |
CIM Field |
eventName, resourceID, resourceType, or source |
Vendor Field in 5.1.0 |
Vendor Field in 5.2.0 |
---|---|---|---|---|
aws:cloudtrail |
app |
eventName: All |
eventSource, |
eventType, |
user |
eventName: AssumeRole |
userIdentity.principalId, |
requestParameters.roleArn OR
responseElements.assumedRoleUser.arn, |
|
eventNames: AssumeRoleWithSAML, AssumeRoleWithWebIdentity |
userIdentity.principalId, |
requestParameters.roleArn, |
||
eventNames: AttachVolume, AuthorizeSecurityGroupEgress, AuthorizeSecurityGroupIngress, CheckMfa, ConsoleLogin, CreateAccessKey, CreateBucket, CreateChangeSet, CreateDeliveryStream, CreateFunction20150331, CreateKeyspace, CreateLoadBalancerListeners, CreateLoadBalancerPolicy, CreateLogGroup, CreateLogStream, CreateLoginProfile, CreateNetworkAcl, CreateNetworkAclEntry, CreateNetworkInterface, CreateQueue, CreateSecurityGroup, CreateTable, CreateUser, CreateVirtualMFADevice, CreateVolume, DeleteNetworkAcl, DeleteNetworkAclEntry, DeleteSecurityGroup, DeleteVolume, DetachVolume, GetFederationToken, GetSessionToken, PutBucketAcl, PutBucketPublicAccessBlock, PutObject, RebootInstances, RevokeSecurityGroupEgress, ReplaceNetworkAclAssociation, ReplaceNetworkAclEntry, RevokeSecurityGroupIngress |
userIdentity.principalId, |
userIdentity.userName, |
||
eventNames: GetAccountSummary, GetUser, ListAccessKeys, ListAccountAliases, ListSigningCertificates - Failure Event |
userIdentity.principalId, |
errorMessage, |
||
eventNames: GetBucketEncryption, ListAliases, ListRoles |
userIdentity.principalId, |
userIdentity.sessionContext.sessionIssuer.userName, |
||
eventName: PutBucketAcl |
requestParameters.AccessControlPolicy.AccessControlList.Grant{}.Grantee.DisplayName
OR
requestParameters.AccessControlPolicy.AccessControlList.Grant{}.Grantee.URI, |
userIdentity.userName, |
||
eventNames: RunInstances, StartInstances, StopInstances, TerminateInstances |
userIdentity.principalId, |
userIdentity.userName OR
userIdentity.sessionContext.sessionIssuer.userName, |
||
eventName: UpdateUser |
requestParameters.userName, |
requestParameters.newUserName, |
||
user_type |
eventNames: AssumeRole, AssumeRoleWithSAML, AssumeRoleWithWebIdentity |
userIdentity.type, |
resources{}.type OR responseElements.assumedRoleUser.arn, |
|
eventNames: ListAliases, ListRoles |
userIdentity.type, |
userIdentity.sessionContext.sessionIssuer.type, |
||
eventName: PutBucketAcl |
requestParameters.AccessControlPolicy.AccessControlList.Grant{}.Grantee.xsi:type, |
userIdentity.type, |
||
src_user |
eventNames: AssumeRole, AssumeRoleWithSAML, AssumeRoleWithWebIdentity |
userIdentity.principalId, |
userIdentity.userName OR requestParameters.sourceIdentity OR
userIdentity.sessionContext.sessionIssuer.userName, |
|
eventName: CreateUser |
userIdentity.principalId, |
userIdentity.principalId, |
||
eventNames: DeleteUser, GetUser, PutBucketAcl, UpdateUser |
userIdentity.principalId, |
userIdentity.userName, |
||
src_user_id |
eventNames: AssumeRole, AssumeRoleWithSAML |
userIdentity.principalId, |
userIdentity.principalId OR
userIdentity.sessionContext.sessionIssuer.principalId, |
|
user_id |
eventNames: AssumeRole, AssumeRoleWithSAML, AssumeRoleWithWebIdentity |
userIdentity.principalId, |
responseElements.assumedRoleUser.assumedRoleId |
|
eventNames: AttachVolume, AuthorizeSecurityGroupEgress, AuthorizeSecurityGroupIngress, CreateAccessKey, CreateBucket, CreateChangeSet, CreateDeliveryStream, CreateFunction20150331, CreateNetworkAcl, CreateNetworkAclEntry, CreateSecurityGroup, CreateTable, CreateVirtualMFADevice, DeleteBucket, DeleteNetworkAcl, DeleteSecurityGroup, DeleteVolume, GetAccountSummary, ListSigningCertificates, PutBucketPublicAccessBlock, RebootInstances, ReplaceNetworkAclEntry, RevokeSecurityGroupEgress, RevokeSecurityGroupIngress, RunInstances, StartInstances, StopInstances, TerminateInstances |
userIdentity.principalId, |
userIdentity.userName, |
||
eventName: ConsoleLogin |
userIdentity.principalId, |
userIdentity.principalId OR userIdentity.accountId OR
userIdentity.sessionContext.sessionIssuer.principalId, |
||
eventNames: ListAliases, ListRoles |
userIdentity.principalId, |
userIdentity.sessionContext.sessionIssuer.principalId, |
||
object_category |
eventNames: AttachVolume, DeleteVolume, DetachVolume |
Static Value: disk |
Static Value: volume |
|
eventNames: AuthorizeSecurityGroupEgress, AuthorizeSecurityGroupIngress, CreateSecurityGroup, DeleteSecurityGroup, RevokeSecurityGroupEgress, RevokeSecurityGroupIngress |
Static Value: firewall |
Static Value: security_group |
||
eventNames: CreateAccessKey, CreateLoginProfile, CreateVirtualMFADevice, GetAccountSummary, GetUser, ListAccessKeys, ListAccountAliases, ListRoles, ListSigningCertificates |
Static Value: unknown |
Static Value: user |
||
eventNames: CreateBucket, DeleteBucket, PutBucket, PublicAccessBlock, PutObject |
Static Value: storage |
Static Value: bucket |
||
eventName: CreateChangeSet |
Static Value: unknown |
Static Value: stack |
||
eventName: CreateDeliveryStream |
Static Value: unknown |
Static Value: delivery_stream |
||
eventName: CreateFunction20150331 |
Static Value: unknown |
Static Value: function |
||
eventName: CreateKeyspace |
Static Value: unknown |
Static Value: keyspace |
||
eventNames: CreateLoadBalancerListeners, CreateLoadBalancerPolicy |
Static Value: unknown |
Static Value: load_balancer |
||
eventName: CreateLogGroup |
Static Value: unknown |
Static Value: log_group |
||
eventName: CreateLogStream |
Static Value: unknown |
Static Value: log_stream |
||
eventNames: CreateNetworkAcl, CreateNetworkAclEntry, DeleteNetworkAcl, DeleteNetworkAclEntry, ReplaceNetworkAclAssociation, ReplaceNetworkAclEntry |
Static Value: unknown |
Static Value: ACL |
||
eventName: CreateNetworkInterface |
Static Value: unknown |
Static Value: network_interface |
||
eventName: CreateQueue |
Static Value: unknown |
Static Value: message_queue |
||
eventName: CreateTable |
Static Value: unknown |
Static Value: table |
||
eventNames: GetBucketEncryption, PutBucketAcl |
Static Value: unknown |
Static Value: bucket |
||
eventName: ListAliases |
Static Value: unknown |
Static Value: alias |
||
change_type |
eventNames: AttachVolume, CreateVolume, DeleteVolume, DetachVolume |
Static Value: EC2 |
Static Value: storage |
|
eventNames: AuthorizeSecurityGroupEgress, AuthorizeSecurityGroupIngress, CreateNetworkAcl, CreateNetworkAclEntry, CreateNetworkInterface, CreateSecurityGroup, DeleteNetworkAcl, DeleteNetworkAclEntry, DeleteSecurityGroup, ReplaceNetworkAclAssociation, ReplaceNetworkAclEntry, RevokeSecurityGroupEgress, RevokeSecurityGroupIngress |
Static Value: EC2 |
Static Value: firewall |
||
eventNames: CreateAccessKey, CreateLoginProfile, CreateUser, CreateVirtualMFADevice, DeleteUser, GetAccountSummary, GetUser, ListAccessKeys, ListAccountAliases, ListRoles, ListSigningCertificates, ListSigningCertificates, UpdateUser |
Static Value: IAM |
Static Value: AAA |
||
eventNames: GetFederationToken, GetSessionToken |
Static Value: STS |
Static Value: AAA |
||
eventNames: RunInstances, RebootInstances, StartInstances, StopInstances, TerminateInstances |
Static Value: EC2 |
Static Value: virtual_server |
||
dest |
eventName: AttachVolume |
requestParameters.volumeId, |
requestParameters.instanceId, |
|
eventNames: AuthorizeSecurityGroupEgress, AuthorizeSecurityGroupIngress, CreateSecurityGroup, RevokeSecurityGroupEgress, RevokeSecurityGroupIngress |
requestParameters.groupId, |
eventSource, |
||
eventName: ConsoleLogin |
eventSource, |
additionalEventData.LoginTo OR eventSource, |
||
eventNames: CreateBucket, DeleteBucket, GetBucketEncryption, PutBucketAcl, PutBucketPublicAccessBlock, PutObject |
requestParameters.bucketName, |
requestParameters.Host OR requestParameters.host{}, |
||
eventNames: CreateNetworkAcl, CreateNetworkAclEntry |
requestParameters.networkAclId OR
responseElements.networkAcl.networkAclId, |
eventSource, |
||
eventName: CreateUser |
responseElements.user.userId, |
eventSource, |
||
eventNames: CreateVolume, DeleteVolume |
responseElements.volumeId, |
eventSource, |
||
eventNames: DeleteUser, UpdateUser |
requestParameters.userName, |
eventSource, |
||
eventName: DetachVolume |
responseElements.volumeId, |
responseElements.instanceId, |
||
eventNames: RunInstances, StartInstances |
responseElements.instancesSet.items{}.instanceId, |
responseElements.instancesSet.items{}.instanceId OR
eventSource, |
||
action |
eventNames: CreateAccessKey, CreateLoginProfile, CreateNetworkAclEntry, CreateVirtualMFADevice, DeleteNetworkAclEntry |
Static Value: created |
Static Value: modified |
|
eventNames: GetAccountSummary, GetUser, ListAccessKeys, ListAccountAliases, ListSigningCertificates |
Static Value: unknown |
Static Value: read |
||
protocol |
eventName: CreateNetworkAclEntry |
Static Value: TCP |
Static Value: IP |
|
object_attrs |
eventName: PutBucketAcl |
requestParameters.AccessControlPolicy.AccessControlList.Grant{}.Permission, |
Static value: AccessControlList |
|
object |
eventName: RunInstances |
responseElements.instancesSet.items{}.instanceId, |
responseElements.instancesSet.items{}.instanceId OR
eventSource, |
|
eventName: StartInstances |
requestParameters.instancesSet.items{}.instanceId, |
requestParameters.instancesSet.items{}.instanceId OR
eventSource, |
||
eventName: UpdateUser |
requestParameters.userName, |
requestParameters.newUserName, |
||
object_id |
eventName: StartInstances |
requestParameters.instancesSet.items{}.instanceId, example: i-pjk4yh53x5xy3kldx |
requestParameters.instancesSet.items{}.instanceId OR eventSource, example: i-pjk4yh53x5xy3kldx |
|
eventName: UpdateUser |
requestParameters.userName, |
requestParameters.newUserName, |
||
aws:config |
object_category |
resourceIDs: AWS::Redshift::ClusterSnapshot, AWS::Config::ResourceCompliance |
Static Value: unknown |
Static Value: file |
object_id |
resourceIDs: AWS::Redshift::ClusterSnapshot, AWS::EC2::NetworkInterface |
ARN, |
resourceId, |
|
aws:config:notification |
object_category |
resourceTypes: AWS::Config::ResourceCompliance, AWS::Redshift::ClusterSnapshot |
Static Value: unknown |
Static Value: file |
object_id |
resourceTypes: All |
N/A |
resourceId, |
|
aws:description |
user_id |
source: All |
UserId, |
UserID, |
status |
source: *ec2_instances |
status, |
image.attributes.state OR state OR status, |
|
aws:cloudwatchlogs:guardduty |
dest_type |
N/A |
Static value from lookup, |
detail.resource.resourceType, |
user |
N/A |
detail.resource.accessKeyDetails.principleId, |
detail.resource.accessKeyDetails.userName, |
|
severity |
N/A |
Static Value: LOW, MEDIUM, HIGH |
Static Value: low, medium, high |
|
aws:s3:accesslogs |
bytes |
N/A |
bytes, |
bytes_sent, |
response_time |
N/A |
turn_around_time, |
total_time, |
CIM model changes¶
See the following CIM model changes between 5.1.0 and 5.2.0:
Sourcetype | metric_name | Previous CIM model | New CIM model
---|---|---|---
aws:cloudwatch | FreeableMemory | Database:Stats, All_Performance:Memory | All_Performance:Memory

Sourcetype | eventName | Previous CIM model | New CIM model
---|---|---|---
aws:cloudtrail | AssumeRole, AssumeRoleWithSAML, AssumeRoleWithWebIdentity, GetFederationToken, GetSessionToken | Authentication:Default_Authentication |
aws:cloudtrail | GetBucketEncryption, PutBucketAcl | Change:Account_Management | Change:All_Changes
aws:cloudtrail | ListRoles, ListAliases | Change:All_Changes |
aws:cloudtrail | RunInstances | Change:Endpoint_Changes, Change:Instance_Changes | Change:Instance_Changes

Sourcetype | source | Previous CIM model | New CIM model
---|---|---|---
aws:description | *:ec2_instances, *:ec2_images | All_Inventory | All_Inventory:Virtual_OS:Snapshot
aws:description | *:ec2_instances | All_Inventory | All_Inventory:Virtual_OS:Snapshot
aws:inspector | *:inspector:assessmentRun | All_Inventory:Network, All_Inventory:User, All_Inventory:Virtual_OS:Snapshot |

Sourcetype | Previous CIM model | New CIM model
---|---|---
aws:cloudfront:accesslogs, aws:elb:accesslogs | Web |
aws:cloudwatchlogs:guardduty | Alerts, Malware_Attacks | Alerts
aws:config:rule | All_Inventory:Network, All_Inventory:Virtual_OS:Snapshot | Alerts
aws:s3 | Web:Storage |
Fixed issues¶
Version 5.2.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 5.2.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 5.2.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Version 5.1.0¶
Version 5.1.0 of the Splunk Add-on for Amazon Web Services was released on July 2, 2021.
Compatibility¶
Version 5.1.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.0 and later |
CIM | 4.18 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, Metadata, SQS, and SNS. |
Versions 5.0.0 and above of the Splunk Add-on for AWS are Python 3 releases, and only compatible with Splunk platform versions 8.0.0 and later. To use version 5.0.0 or later of this add-on, upgrade your Splunk platform deployment to version 8.0.0 or later. For users of Splunk platforms 6.x.x and Splunk 7.x.x, the Splunk Add-on for Amazon Web Services version 4.6.1 is supported. Do not upgrade to Splunk Add-on for AWS 5.0.0 or above on these versions of the Splunk platform.
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 5.1.0 of the Splunk Add-on for AWS contains the following new and changed features:
- A new data input called Metadata. The Metadata input, which can be accessed in Splunk Web by clicking Create New Input > Description > Metadata, uses the boto3 package to collect Description data. See the Metadata input topic in this manual for more information.
- Migrated the following data inputs from the boto2 package to the boto3 package:
  - CloudTrail
  - Config
  - CloudWatch Logs
  - Generic S3
- Support for regional endpoints for all data inputs. Each API call can be made to a region-specific endpoint instead of a public endpoint (see the sketch below).
- Support for private endpoints for the following data inputs:
  - Billing Cost and Usage Reports (CUR)
  - CloudTrail
  - CloudWatch
  - CloudWatch Logs
  - Generic S3
  - Incremental S3
  - Kinesis
  - SQS-based S3
  Private endpoints can perform account authentication and data collection for each supported input, for example, from a Splunk instance within a Virtual Private Cloud (VPC) infrastructure.
- Support for disabling the DLQ (Dead Letter Queue) check for SQS-based S3 CrowdStrike event inputs.
The Description input will be deprecated in a future release. The Metadata input has been added as a replacement. The best practice is to begin moving your workloads to the Metadata input.
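To make the endpoint terminology concrete, the boto3 sketch below creates one client against a region-specific endpoint and one against a private (VPC interface) endpoint; the endpoint URL is a hypothetical example, not a value the add-on requires, and the add-on itself takes these settings through its input configuration.

```python
import boto3

# Regional endpoint: boto3 routes calls to the endpoint for the named
# region (for example, s3.eu-west-1.amazonaws.com) rather than a global one.
regional_s3 = boto3.client("s3", region_name="eu-west-1")

# Private endpoint: calls are sent to a VPC interface endpoint instead of
# the public endpoint. The URL below is a hypothetical example.
private_s3 = boto3.client(
    "s3",
    region_name="eu-west-1",
    endpoint_url="https://vpce-0123456789abcdef0-example.s3.eu-west-1.vpce.amazonaws.com",
)

for client in (regional_s3, private_s3):
    print(client.meta.endpoint_url)
```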
Fixed issues¶
Version 5.1.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 5.1.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 5.1.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
- atomicwrites
- babel-polyfill
- Bootstrap
- boto - AWS for Python
- boto3
- botocore
- dateutils
- docutils
- jmespath
- jqBootstrapValidation
- jquery-cookie
- jquery.ui.autocomplete
- Httplib2
- Python SortedContainer
- remote-pdb
- requests
- s3transfer
- select2
- six.py
- SortedContainers
- u-msgpack-python
- urllib3
Version 5.0.4¶
Version 5.0.4 of the Splunk Add-on for Amazon Web Services was released on June 2, 2021.
Compatibility¶
Version 5.0.4 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.0 and later |
CIM | 4.18 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, SQS, and SNS. |
Versions 5.0.0 and above of the Splunk Add-on for AWS are Python 3 releases, and only compatible with Splunk platform versions 8.0.0 and later. To use version 5.0.0 or later of this add-on, upgrade your Splunk platform deployment to version 8.0.0 or later. For users of Splunk platforms 6.x.x and Splunk 7.x.x, the Splunk Add-on for Amazon Web Services version 4.6.1 is supported. Do not upgrade to Splunk Add-on for AWS 5.0.0 or above on these versions of the Splunk platform.
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 5.0.4 of the Splunk Add-on for AWS contains the following new and changed features:
- Simple Queue Service (SQS) modular input support for Crowdstrike Falcon Data Replicator (FDR)
- Bug fixes.
Fixed issues¶
Version 5.0.4 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 5.0.4 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
The Splunk Add-on for AWS version 5.x.x is incompatible with Splunk Enterprise versions 7.x.x and earlier.
Third-party software attributions¶
Version 5.0.4 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
- atomicwrites
- babel-polyfill
- Bootstrap
- boto
- boto3
- botocore
- dateutils
- docutils
- jmespath
- jqBootstrapValidation
- jquery-cookie
- jquery.ui.autocomplete
- Python SortedContainer
- remote-pdb
- requests
- s3transfer
- select2
- six.py
- SortedContainers
- u-msgpack-python
- urllib3
Version 5.0.3¶
Version 5.0.3 of the Splunk Add-on for Amazon Web Services was released on October 8, 2020.
Compatibility¶
Version 5.0.3 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.0 and later |
CIM | 4.3 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, SQS, and SNS. |
Versions 5.0.0 and above of the Splunk Add-on for AWS are Python 3 releases, and only compatible with Splunk platform versions 8.0.0 and later. To use version 5.0.0 or later of this add-on, upgrade your Splunk platform deployment to version 8.0.0 or later. For users of Splunk platforms 6.x.x and Splunk 7.x.x, the Splunk Add-on for Amazon Web Services version 4.6.1 is supported. Do not upgrade to Splunk Add-on for AWS 5.0.0 or above on these versions of the Splunk platform.
The field alias functionality is compatible with the current version of this add-on. The current version of this add-on does not support older field alias configurations.
For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.
New features¶
Version 5.0.3 of the Splunk Add-on for AWS contains the following new and changed features:
- Bug fix for proxy behavior not working as expected.
- Bug fix for no_proxy taking effect with HTTPS.
- SQS modular input proxy configuration code fix (Microsoft Windows only).
Fixed issues¶
Version 5.0.3 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 5.0.3 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
The Splunk Add-on for AWS version 5.x.x is incompatible with Splunk Enterprise versions 7.x.x and earlier.
Third-party software attributions¶
Version 5.0.3 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
- atomicwrites
- babel-polyfill
- Bootstrap
- boto - AWS for Python
- boto3
- botocore
- dateutils
- docutils
- jmespath
- jqBootstrapValidation
- jquery-cookie
- jquery.ui.autocomplete
- Python SortedContainer
- remote-pdb
- s3transfer
- select2
- six.py
- SortedContainers
- u-msgpack-python210
- urllib3
Version 5.0.2¶
Version 5.0.2 of the Splunk Add-on for Amazon Web Services was released on August 22, 2020.
Compatibility¶
Version 5.0.2 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.0 and later |
CIM | 4.3 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, SQS, and SNS. |
Versions 5.0.0 and above of the Splunk Add-on for AWS are Python 3 releases, and only compatible with Splunk platform versions 8.0.0 and later. To use version 5.0.0 or later of this add-on, upgrade your Splunk platform deployment to version 8.0.0 or later. For users of Splunk platforms 6.x.x and Splunk 7.x.x, the Splunk Add-on for Amazon Web Services version 4.6.1 is supported. Do not upgrade to Splunk Add-on for AWS 5.0.0 or above on these versions of the Splunk platform.
New features¶
Version 5.0.2 of the Splunk Add-on for AWS contains the following new and changed features:
- Increased Network Traffic CIM data model compatibility.
- Increased Change CIM data model compatibility.
- Improved support for the Splunk Enterprise Security Assets and Identities Framework Interface
Fixed issues¶
Version 5.0.2 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 5.0.2 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
The Splunk Add-on for AWS version 5.x.x is incompatible with Splunk Enterprise versions 7.x.x and earlier.
Third-party software attributions¶
Version 5.0.2 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
- atomicwrites
- babel-polyfill
- Bootstrap
- boto - AWS for Python
- boto3
- botocore
- dateutils
- docutils
- jmespath
- jqBootstrapValidation
- jquery-cookie
- jquery.ui.autocomplete
- Httplib2
- Python SortedContainer
- remote-pdb
- s3transfer
- select2
- six.py
- SortedContainers
- u-msgpack-python210
- urllib3
Version 5.0.1¶
Version 5.0.1 of the Splunk Add-on for Amazon Web Services was released on May 13, 2020.
Compatibility¶
Version 5.0.1 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.0 and later |
CIM | 4.3 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, SQS, and SNS. |
Versions 5.0.0 and above of the Splunk Add-on for AWS are Python 3 releases, and only compatible with Splunk platform versions 8.0.0 and later. To use version 5.0.0 or later of this add-on, upgrade your Splunk platform deployment to version 8.0.0 or later. For users of Splunk platforms 6.x.x and Splunk 7.x.x, the Splunk Add-on for Amazon Web Services version 4.6.1 is supported. Do not upgrade to Splunk Add-on for AWS 5.0.0 or above on these versions of the Splunk platform.
New features¶
Version 5.0.1 of the Splunk Add-on for AWS contains the following new and changed features:
- FIPS compliance release for Python 3
- Improved Support for the Authentication CIM Model.
Fixed issues¶
Version 5.0.1 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 5.0.1 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
The Splunk Add-on for AWS version 5.x.x is incompatible with Splunk Enterprise versions 7.x.x and earlier.
Third-party software attributions¶
Version 5.0.1 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
- atomicwrites
- babel-polyfill
- Bootstrap
- boto - AWS for Python
- boto3
- botocore
- dateutils
- docutils
- jmespath
- jqBootstrapValidation
- jquery-cookie
- jquery.ui.autocomplete
- Httplib2
- Python SortedContainer
- remote-pdb
- s3transfer
- select2
- six.py
- SortedContainers
- u-msgpack-python210
- urllib3
Version 5.0.0¶
Version 5.0.0 of the Splunk Add-on for Amazon Web Services was released on December 19, 2019.
Compatibility¶
Version 5.0.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 8.0 and later |
CIM | 4.3 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, SQS, and SNS. |
Version 5.0.0 of the Splunk Add-on for AWS is a Python 3 release and is only compatible with Splunk platform versions 8.0.0 and later. To use version 5.0.0 or later of this add-on, upgrade your Splunk platform deployment to version 8.0.0 or later. For users of Splunk platforms 6.x.x and Splunk 7.x.x, the Splunk Add-on for Amazon Web Services version 4.6.1 is supported. Do not upgrade to Splunk Add-on for AWS 5.0.0 on these versions of the Splunk platform.
New features¶
Version 5.0.0 of the Splunk Add-on for AWS contains the following new and changed features:
- Support for Python 3.
- Python 2 is no longer supported, starting in version 5.0.0 of the Splunk Add-on for AWS.
Fixed issues¶
Version 5.0.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 5.0.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
The Splunk Add-on for AWS version 5.x.x is incompatible with Splunk Enterprise versions 7.x.x and earlier.
Third-party software attributions¶
Version 5.0.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
- atomicwrites
- babel-polyfill
- Bootstrap
- boto - AWS for Python
- boto3
- botocore
- dateutils
- docutils
- jmespath
- jqBootstrapValidation
- jquery-cookie
- jquery.ui.autocomplete
- Httplib2
- Python SortedContainer
- remote-pdb
- s3transfer
- select2
- six.py
- SortedContainers
- u-msgpack-python210
- urllib3
Version 4.6.1¶
Version 4.6.1 of the Splunk Add-on for Amazon Web Services was released on December 10, 2019.
Compatibility¶
Version 4.6.1 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 6.5 and later |
CIM | 4.3 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, SQS, and SNS. |
New features¶
Version 4.6.1 of the Splunk Add-on for AWS contains the following new and changed features:
- FIPS compliance
- Updated third party components
Fixed issues¶
Version 4.6.1 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 4.6.1 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 4.6.1 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
- Bootstrap
- boto - AWS for Python
- boto3
- botocore
- dateutils
- docutils
- jmespath
- jqBootstrapValidation
- jquery-cookie
- Httplib2
- remote-pdb
- requests
- SortedContainers
- select2
- splunksdk
- u-msgpack-python
- urllib3
Version 4.6.0¶
Version 4.6.0 of the Splunk Add-on for Amazon Web Services was released on October 3, 2018.
Compatibility¶
Version 4.6.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms:
Splunk platform versions | 6.5 and later |
CIM | 4.3 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, SQS, and SNS. |
New features¶
Version 4.6.0 of the Splunk Add-on for AWS contains the following new and changed features:
- CloudWatch Metrics input to enable discovery of new entities without a Splunk restart (see the sketch after this list).
- Metrics store support (requires a Splunk forwarder version 7.2.0 or above).
- Ability to detect configuration of SSL on the management port.
- Line/event breaking enforcement for ELB/S3 access logs.
- Support for Splunk Enterprise 7.2.0.
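As a rough illustration of the discovery the CloudWatch Metrics input performs, the following boto3 sketch enumerates the metrics currently present in a namespace, which is how newly created entities (for example, new EC2 instances) can surface without restarting anything; the namespace and region are example assumptions and this is not the add-on's code.

```python
import boto3

# Example assumption: discover metrics (and therefore new entities such as
# instances) in the AWS/EC2 namespace of one region.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

paginator = cloudwatch.get_paginator("list_metrics")
seen_dimensions = set()
for page in paginator.paginate(Namespace="AWS/EC2"):
    for metric in page["Metrics"]:
        dims = tuple(sorted((d["Name"], d["Value"]) for d in metric["Dimensions"]))
        if dims not in seen_dimensions:
            seen_dimensions.add(dims)
            print(metric["MetricName"], dict(dims))
```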
Fixed issues¶
Version 4.6.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 4.6.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 4.6.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
- Bootstrap
- boto - AWS for Python
- boto3
- botocore
- dateutils
- docutils
- jmespath
- jqBootstrapValidation
- jquery-cookie
- Httplib2
- remote-pdb
- SortedContainers
- select2
- urllib3
Version 4.5.0¶
Version 4.5.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms.
Splunk platform versions | 6.5 and later |
CIM | 4.3 and later |
Supported OS for data collection | Platform independent |
Vendor products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, SQS, and SNS. |
New features¶
Version 4.5.0 of the Splunk Add-on for AWS contains the following new and changed features:
- Support for the configuration of billing inputs to collect Cost and Usage Report data (sourcetype: aws:billing:cur). A conceptual sketch of reading a report file follows this list.
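As a rough illustration of the data behind the aws:billing:cur sourcetype, the sketch below downloads and reads one Cost and Usage Report object from S3, assuming, purely as an example, that the report is delivered as a gzip-compressed CSV; the bucket, key, and column names shown are hypothetical examples.

```python
import csv
import gzip
import io

import boto3

# Hypothetical example: one gzip-compressed CSV report object in the bucket
# where AWS delivers the Cost and Usage Report.
s3 = boto3.client("s3")
obj = s3.get_object(
    Bucket="example-billing-bucket",
    Key="cur/example-report/20190101-20190201/example-report-00001.csv.gz",
)

with gzip.GzipFile(fileobj=io.BytesIO(obj["Body"].read())) as gz:
    reader = csv.DictReader(io.TextIOWrapper(gz, encoding="utf-8"))
    for row in reader:
        # Each row is one line item; column names depend on the report definition.
        print(row.get("lineItem/UsageStartDate"), row.get("lineItem/UnblendedCost"))
```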
Fixed issues¶
Version 4.5.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 4.5.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 4.5.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
- Bootstrap
- boto - AWS for Python
- boto3
- botocore
- dateutils
- docutils
- jmespath
- jqBootstrapValidation
- jquery-cookie
- Httplib2
- remote-pdb
- SortedContainers
- select2
- urllib3
Version 4.4.0¶
Version 4.4.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms.
Splunk platform versions | 6.5 and later |
CIM | 4.3 and later |
Platforms | Platform independent |
Vendor Products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, SQS, and SNS. |
New features¶
Version 4.4.0 of the Splunk Add-on for AWS contains the following new and changed features:
- Splunk Add-on for AWS 4.4.0 is only compatible with Splunk App for AWS 5.1.0. Previous versions of the Splunk App for AWS are not supported.
- Optimized Web UI for better usability and a more streamlined configuration workflow.
- The Create New Input menu has been redesigned with all the menu options organized by the type of data to collect.
- Two separate configuration pages are now available for Generic S3 and Incremental S3 input types respectively. Previously, the two different input types were configured in one configuration page.
- Input configuration fields are now grouped into AWS Input Configuration, Splunk-related Configuration, and Advanced Settings sections on the Web UI.
- Redesigned input configuration UIs for CloudWatch and Config input types let you create multiple inputs all at once.
- Added a new Temp Folder setting to the Billing input type configuration, which lets you specify a non-default folder for temporarily storing downloaded detailed billing report .zip files when the system default temp folder does not provide sufficient space.
- You can now configure SQS-based S3 inputs to index non-AWS custom logs in plain text in addition to its supported AWS log types.
- SQS-based S3 input type now supports CloudTrail and Config SQS notifications.
- Assume Role is now supported in SQS, Config Rule, and Inspector input types.
- The Description input type now supports the iam_users service.
Upgrade¶
To upgrade from versions 4.3 and below, AWS users must be given permission to use the ec2:RunInstances API action and, depending on deployment, the following API actions:
API Action | Description
---|---
ec2:DescribeImages | Allows users to view and select an AMI.
ec2:DescribeVpcs | Allows users to view the available EC2-Classic and virtual private clouds (VPCs) network options. This API action is required even if you are not launching into a VPC.
ec2:DescribeSubnets | Allows users to view all available subnets for the chosen VPC, when launching into a VPC.
ec2:DescribeSecurityGroups | Allows users to view the security groups page in the wizard. Users can select an existing security group.
ec2:DescribeKeyPairs or ec2:CreateKeyPair | Allows users to select an existing key pair, or create a new key pair.
See the Configure Description permissions topic in this manual for more information on how to configure AWS permissions.
See the AWS documentation for more information on the DescribeImages function. https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html.
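For orientation only, the following boto3 sketch creates an IAM policy that grants the API actions listed in the table above; the policy name and broad resource scope are hypothetical examples, not an official or required policy for the add-on.

```python
import json

import boto3

# Hypothetical policy covering the API actions listed above. Review and
# scope it to your own deployment before using anything like it.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:RunInstances",
                "ec2:DescribeImages",
                "ec2:DescribeVpcs",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeKeyPairs",
                "ec2:CreateKeyPair",
            ],
            "Resource": "*",
        }
    ],
}

iam = boto3.client("iam")
response = iam.create_policy(
    PolicyName="example-splunk-aws-upgrade-policy",  # hypothetical name
    PolicyDocument=json.dumps(policy_document),
)
print(response["Policy"]["Arn"])
```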
Fixed issues¶
Version 4.4.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 4.4.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 4.4.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
- Bootstrap
- boto - AWS for Python
- boto3
- botocore
- dateutils
- docutils
- jmespath
- jqBootstrapValidation
- jquery-cookie
- Httplib2
- remote-pdb
- SortedContainers
- select2
- urllib3
Version 4.3.0¶
Version 4.3.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms.
Splunk platform versions | 6.4 and later |
CIM | 4.3 and later |
Platforms | Platform independent |
Vendor Products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Logs, Billing services, SQS, and SNS. |
New features¶
Version 4.3.0 of the Splunk Add-on for AWS contains the following new and changed features:
- SQS-based S3 input type: a multi-purpose input type that collects several types of logs in response to messages polled from SQS queues. A scalable and higher-performing alternative to the generic S3 and incremental S3 input types. See Multi-purpose input types.
- Health Check dashboards: Health Overview and S3 Health dashboards to help you troubleshoot data collection errors and performance issues. See Health Check dashboards.
- Optimized logging. See Internal logs.
Fixed issues¶
Version 4.3.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 4.3.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 4.3.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
- Bootstrap
- boto - AWS for Python
- boto3
- botocore
- dateutils
- docutils
- jmespath
- jqBootstrapValidation
- jquery-cookie
- Httplib2
- remote-pdb
- SortedContainers
- select2
- urllib3
Version 4.2.3¶
Version 4.2.3 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms.
Splunk platform versions | 6.4 and later |
CIM | 4.3 and later |
Platforms | Platform independent |
Vendor Products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Log, Billing services, SQS, and SNS. |
New features¶
Version 4.2.3 of the Splunk Add-on for AWS does not contain any new features.
Fixed issues¶
Version 4.2.3 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 4.2.3 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 4.2.3 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
- Bootstrap
- boto - AWS for Python
- boto3
- botocore
- dateutils
- docutils
- jmespath
- jqBootstrapValidation
- jquery-cookie
- Httplib2
- remote-pdb
- SortedContainers
- select2
- urllib3
Version 4.2.2¶
Version 4.2.2 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms.
Splunk platform versions | 6.3 and later |
CIM | 4.3 and later |
Platforms | Platform independent |
Vendor Products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Log, Billing services, SQS, and SNS. |
New features¶
Version 4.2.2 of the Splunk Add-on for AWS does not contain any new features.
Fixed issues¶
Version 4.2.2 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 4.2.2 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 4.2.2 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
- Bootstrap
- boto - AWS for Python
- boto3
- botocore
- dateutils
- docutils
- jmespath
- jqBootstrapValidation
- jquery-cookie
- Httplib2
- remote-pdb
- SortedContainers
- select2
- urllib3
Version 4.2.1¶
Version 4.2.1 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms.
Splunk platform versions | 6.3 and later |
CIM | 4.3 and later |
Platforms | Platform independent |
Vendor Products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Log, Billing services, SQS, and SNS. |
New features¶
Added support for two new AWS regions: EU (London) and Canada (Central).
Fixed issues¶
Version 4.2.1 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 4.2.1 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 4.2.1 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
- Bootstrap
- boto - AWS for Python
- boto3
- botocore
- dateutils
- docutils
- jmespath
- jqBootstrapValidation
- jquery-cookie
- Httplib2
- remote-pdb
- SortedContainers
- select2
- urllib3
Version 4.2.0¶
Version 4.2.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms.
Splunk platform versions | 6.3 and later |
CIM | 4.3 and later |
Platforms | Platform independent |
Vendor Products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Log, Billing services, SQS, and SNS. |
New features¶
Version 4.2.0 of the Splunk Add-on for Amazon Web Services supports the AWS Security Token Service (AWS STS) AssumeRole API action, which lets you use IAM roles to delegate permissions to IAM users for access to AWS resources. You can configure accounts to use AssumeRole in these data inputs: S3 (general and incremental), Billing, Description, CloudWatch, and Kinesis.
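The add-on performs role assumption internally once an IAM role is configured for an account; the boto3 sketch below only illustrates what the underlying STS AssumeRole call looks like. The role ARN and session name are placeholders, not values used by the add-on.

```python
import boto3

sts = boto3.client("sts")

# Exchange the configured account's credentials for temporary credentials
# scoped to the delegated role. ARN and session name are placeholders.
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ExampleSplunkCollectionRole",
    RoleSessionName="splunk-aws-addon-example",
)
credentials = assumed["Credentials"]

# Use the temporary credentials for subsequent API calls, for example S3.
s3 = boto3.client(
    "s3",
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)
print([bucket["Name"] for bucket in s3.list_buckets()["Buckets"]])
```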
Fixed issues¶
Version 4.2.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 4.2.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 4.2.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
- Bootstrap
- boto - AWS for Python
- boto3
- botocore
- dateutils
- docutils
- jmespath
- jqBootstrapValidation
- jquery-cookie
- Httplib2
- remote-pdb
- SortedContainers
- select2
- urllib3
Version 4.1.2¶
Version 4.1.2 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms.
Splunk platform versions | 6.3, 6.4, 6.5 |
CIM | 4.3 or later |
Platforms | Platform independent |
Vendor Products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Log, Billing services, SQS, and SNS. |
New features¶
Version 4.1.2 of the Splunk Add-on for Amazon Web Services contains no new features.
Fixed issues¶
Version 4.1.2 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Known issues¶
Version 4.1.2 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 4.1.2 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
- Bootstrap
- boto - AWS for Python
- boto3
- botocore
- dateutils
- docutils
- jmespath
- jqBootstrapValidation
- jquery-cookie
- Httplib2
- remote-pdb
- SortedContainers
- select2
- urllib3
Version 4.1.1¶
Version 4.1.1 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms.
Splunk platform versions | 6.3, 6.4 and 6.5 |
CIM | 4.3 or later |
Platforms | Platform independent |
Vendor Products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Log, Billing services, SQS, and SNS. |
New features¶
Version 4.1.1 of the Splunk Add-on for Amazon Web Services contains no new features.
Fixed issues¶
Version 4.1.1 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Resolved date | Issue number | Description |
---|---|---|
2016-10-12 | ADDON-11604 | Incremental S3 fails to collect data using the IAM role. |
2016-09-30 | ADDON-11470 | The inputs page cannot display more than 30 inputs (S3 as input). |
2016-10-11 | ADDON-11498, ADDON-11488 | Ingesting data from aws:cloudwatchlogs results in invalid JSON format with extraneous trailing angle brackets. |
2016-10-04 | ADDON-11482 | Cloudtrail/SQS fails to collect data using the IAM role. |
Known issues¶
Version 4.1.1 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 4.1.1 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
- Bootstrap
- boto - AWS for Python
- boto3
- botocore
- dateutils
- docutils
- jmespath
- jqBootstrapValidation
- jquery-cookie
- Httplib2
- remote-pdb
- SortedContainers
- select2
- urllib3
Version 4.1.0¶
Version 4.1.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms.
Splunk platform versions | 6.3, 6.4 |
CIM | 4.3 or later |
Platforms | Platform independent |
Vendor Products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Log, Billing services, SQS, and SNS. |
New features¶
Version 4.1.0 of the Splunk Add-on for Amazon Web Services has the following new features.
Date | Issue number | Description |
---|---|---|
2016-09-22 | ADDON-6145 | Add AWS SQS modular input for Splunk add-on for AWS. |
2016-09-22 | ADDON-6146 | Add custom alert to AWS SNS for Splunk add-on for AWS. |
2016-09-22 | ADDON-10952 | Performance enhancement for AWS Cloudtrail modular input. |
2016-09-22 | ADDON-11149 | Add Record Format field for AWS Kinesis modular input. |
2016-09-22 | ADDON-10917 | Mapping to ITSI IaaS data module. |
2016-09-22 | ADDON-10941 | Add new incremental data collection for S3 modular input. |
2016-09-22 | ADDON-10414 | Checkpoint and performance enhancement for S3 modular input. |
2016-09-22 | ADDON-10906 | Performance and API call enhancement for Cloudwatch modular input. |
Fixed issues¶
Version 4.1.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Resolved date | Issue number | Description |
---|---|---|
2016-09-20 | ADDON-11251 | Data loss can occur in the AWS S3 input if the network connection is unreliable. |
2016-09-20 | ADDON-11196 | If there is a blank space at the beginning or end of the input name (or both), the input name displayed in the UI is not consistent with the one saved in the configuration file. |
2016-09-20 | ADDON-11056 | In the AWS Region list, it displays ap-northeast-2 instead of Seoul. |
2016-09-20 | ADDON-10980 | Line breaker error for AWS S3 input. |
2016-09-14 | ADDON-10186 | AWS Config fails to fetch S3 object in AWS GovCloud (US) region. |
2016-09-09 | ADDON-11009 | Data is not collected from one of several configured S3 inputs. |
2016-08-18 | ADDON-10137 | If the number of AWS inputs exceeds 30, some of the inputs cannot run successfully. |
2016-09-14 | ADDON-9778 | The AWS Kinesis modular input produces errors if the request from HEC exceeds its maximum limit. |
2016-09-05 | ADDON-9732 | Failed to get proxy credentials when password includes # character. |
2016-08-28 | ADDON-9533 | The default Dimension Name is empty square brackets for Autoscaling and EBS namespaces. |
2016-08-08 | ADDON-9328 | CloudWatch data input encounters API rate limit for large metrics. |
2016-09-09 | ADDON-8758 | Mixing log types or gzip with plain text in the same stream causes knowledge extraction to fail for CloudWatch Logs data collected through Kinesis |
Known issues¶
Version 4.1.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 4.1.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
- Bootstrap
- boto - AWS for Python
- boto3
- botocore
- dateutils
- docutils
- jmespath
- jqBootstrapValidation
- jquery-cookie
- Httplib2
- remote-pdb
- SortedContainers
- select2
- urllib3
Version 4.0.0¶
Version 4.0.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms.
Splunk platform versions | 6.2.X and later |
CIM | 4.0 and later |
Platforms | Platform independent |
Vendor Products | Amazon Web Services CloudTrail, CloudWatch, CloudWatch Logs, Config, Config Rules, Inspector, Kinesis, S3, VPC Flow Log, and Billing services |
Upgrade¶
If you are upgrading from a previous version of the Splunk Add-on for AWS, be aware of the following changes, which might require action to preserve the functionality of your existing accounts and inputs:
- This release includes three new inputs that each require new IAM permissions. Be sure to adjust the IAM permissions of your existing accounts if you want to use them to collect these new data sources.
- If you are upgrading directly from version 2.0.0 or earlier of this add-on to the 4.0.0 version, you need to open and resave the AWS accounts using the Splunk Add-on for AWS account UI.
- In this version, the CloudWatch input is rearchitected for better performance and improved stability. One result of this new architecture is that the input has a built-in four-minute delay after a polling period has ended for any given metric before the actual data collection occurs. This change ensures that there is no data loss due to latency on the AWS side. See the illustrative sketch after this list.
- This version requires a single selection for the Region Category for each AWS account. If you added accounts before region category selection was required, or if you added accounts and selected more than one region category for a single account, the upgrade to version 4.0.0 will put these accounts into an error state until you edit them to select a single region category. On your data collection node, open the add-on and check your Configuration tab to see if any of your existing accounts are missing a region category. If they are, edit the account to add the region category. Any inputs using accounts that were determined to be in error stop collecting data until the account has a region category assigned. Once the account error is resolved, the affected inputs start collecting data again automatically starting from the point when data collection stopped.
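The four-minute delay is built into the add-on itself; the sketch below only illustrates the idea of ending each CloudWatch polling window a few minutes in the past so that late-arriving data points are picked up by the next poll instead of being lost. The namespace, metric, region, and five-minute polling period are assumptions for illustration, not the add-on's implementation.

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region

# End the collection window four minutes in the past so that metrics still
# propagating on the AWS side are not missed.
end_time = datetime.now(timezone.utc) - timedelta(minutes=4)
start_time = end_time - timedelta(minutes=5)  # assumed 5-minute polling period

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    StartTime=start_time,
    EndTime=end_time,
    Period=300,
    Statistics=["Average"],
)
for point in stats["Datapoints"]:
    print(point["Timestamp"], point["Average"])
```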
New features¶
Version 4.0.0 of the Splunk Add-on for Amazon Web Services has the following new features.
Resolved date | Issue number | Description |
---|---|---|
2016-04-29 | ADDON-7042 | CloudWatch input configuration UI now provides auto-filled correct default JSON for metrics and dimensions in each namespace. |
2016-04-08 | ADDON-7587 | Support for AWS Signature V.4 managed keys for S3 related data collection. |
2016-04-05 | ADDON-7818 | New input and CIM mapping for Amazon Inspector data. |
2016-04-05 | ADDON-7817 | New input and CIM mapping for AWS Config rules data. |
2016-04-05 | ADDON-5391 | New input for data from Kinesis streams, including high volume VPC flow log data. |
2016-03-31 | ADDON-6811 | Support for using an EC2 IAM role instead of an AWS account when the add-on’s collection node is on your own managed AWS instance. |
2016-03-23 | ADDON-7872 | Support for the Seoul region. |
2016-01-08 | ADDON-7311 | Support for setting an initial scan time in the Billing input if configuring using the conf files. |
Fixed issues¶
Version 4.0.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Resolved date | Defect number | Description |
---|---|---|
2016-05-04 | ADDON-9169 | Monthly Billing is not indexed using the UTC timezone |
2016-04-19 | ADDON-8801 | Billing initial scan time should not use last modified time of S3 key |
2016-04-15 | ADDON-8721 | Sourcetype="aws:cloudwatchlogs:vpcflow" handles src and dest incorrectly |
2016-04-11 | ADDON-8686 | S3 input UI cannot display custom source types when user edits the input. |
2016-04-03 | ADDON-8547 | S3 modular input loses data if new keys are generated during the key listing process |
2016-04-02 | ADDON-8546 | S3 logging is unclear, should include indication of which input stanza is involved. |
2016-03-31 | ADDON-8548 | CloudWatch collection failing with "Failed to get proxy information Empty" |
2016-03-15 | ADDON-8299 | S3 input cannot progress if keys are deleted during the data collection. |
2016-02-29 | ADDON-8705 | Add-on throws "is not JSON serializable" error when calling AWS API for ELB information |
2016-02-25 | ADDON-7969 | CloudWatch has performance problems in large AWS accounts. |
2016-02-24 | ADDON-7957 | Unnecessary tag expansion slows performance. |
2016-02-24 | ADDON-7926 | Default value of max_file_size_csv_zip_in_bytes is too small to handle large detailed billing reports |
2016-02-22 | ADDON-7897 | s3util.py list_cloudwatch_namespaces has performance issue |
2016-02-19 | ADDON-7877 | Upon upgrade from version 2.0.X, S3 inputs experience two problems. |
2016-02-14 | ADDON-7777 | Not all fields are parsed for CloudFront |
2016-02-13 | ADDON-7778 | Cannot create new input when Splunk does not have a user named "admin" |
2016-02-13 | ADDON-7776 | CloudFront logs should be urldecoded |
2016-01-25 | ADDON-7573 | CloudWatch input requests too many data points in long time windows. |
2016-01-18 | ADDON-7701 | CloudWatch fails to gather data when no metrics appear in a namespace for more than 12 hours. |
2015-09-11 | ADDON-5498 | Unclear error: Unexpected error "<class 'socket.error'>" from python handler: " Connection refused" when user specifies all regions in CloudWatch for one namespace, saves the configuration, and reloads it. |
2015-09-10 | ADDON-5469 | Missing or improper default value for un-required fields. |
Known issues¶
Version 4.0.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Third-party software attributions¶
Version 4.0.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
- Bootstrap
- boto - AWS for Python
- boto3
- botocore
- dateutils
- docutils
- jmespath
- jqBootstrapValidation
- jquery-cookie
- Httplib2
- remote-pdb
- SortedContainers
- select2
- urllib3
Version 3.0.0¶
Version 3.0.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms.
Splunk platform versions | 6.2.X and later |
CIM | 4.0 and later |
Platforms | Platform independent |
Vendor Products | AWS CloudTrail, CloudWatch, CloudWatch Logs, Config, Billing, S3 |
Upgrade guide¶
This release includes some changes to the S3 input configuration that break backwards compatibility. If you are upgrading from a previous version and had previously used any of the following parameters, review the new behavior noted here and make any necessary changes in your existing S3 inputs:
- interval now refers to how long splunkd should wait before checking the health of the modular input and restarting it if it has crashed. The new argument polling_interval, still shown as Interval in the UI, handles the data collection interval. If you had a custom value configured, the 3.0.0 version of the add-on copies your custom setting to the polling_interval value so that your data collection behavior does not change. However, you may wish to tune the interval value to enable splunkd to check for input crashes more frequently.
- is_secure is deprecated and removed, but the parameter is retained in default/inputs.conf to avoid spec file violations. All traffic is over https. If you have this parameter in your local/inputs.conf, it will have no effect.
- max_items is deprecated and removed, but the parameter is retained in default/inputs.conf to avoid spec file violations. It is set to 100000 items. If you have this parameter in your local/inputs.conf, it will have no effect.
- queueSize is deprecated and removed. If you have this parameter in your local/inputs.conf, remove it to avoid potential data loss.
- persistentQueueSize is deprecated and removed. If you have this parameter in your local/inputs.conf, remove it to avoid potential data loss.
- recursion_depth is deprecated and removed, but the parameter is retained in default/inputs.conf to avoid spec file violations. The input recursively scans all subdirectories. If you have this parameter in your local/inputs.conf, it will have no effect.
- ct_excluded_events_index is deprecated and removed, but the parameter is retained in default/inputs.conf to avoid spec file violations. Excluded events will be discarded. If you have this parameter in your local/inputs.conf, it will have no effect.
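As a quick pre-upgrade check, a small script can report whether any of the deprecated parameters above are still present in a local configuration file. The file path, the treatment of Splunk .conf files as INI-like, and the use of Python's configparser are assumptions for illustration; always verify the result against your own deployment.

```python
import configparser

# Assumed path; point this at the add-on's local/inputs.conf on your deployment.
CONF_PATH = "local/inputs.conf"

DEPRECATED = {
    "is_secure",
    "max_items",
    "queueSize",
    "persistentQueueSize",
    "recursion_depth",
    "ct_excluded_events_index",
}

config = configparser.ConfigParser(strict=False, allow_no_value=True)
config.optionxform = str  # preserve parameter case
config.read(CONF_PATH)

for stanza in config.sections():
    present = DEPRECATED.intersection(config[stanza])
    if present:
        print(f"[{stanza}] still sets deprecated parameters: {', '.join(sorted(present))}")
    if "interval" in config[stanza] and "polling_interval" not in config[stanza]:
        print(f"[{stanza}] has a custom interval; 3.0.0 copies it to polling_interval")
```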
New features¶
Version 3.0.0 of the Splunk Add-on for Amazon Web Services has the following new features.
Resolved date | Issue number | Description |
---|---|---|
2015-11-16 | ADDON-6690 | Add-on configuration screen serves a warning message when you access it on a Splunk search head to remind you to configure it on heavy forwarders as a best practice. |
2015-12-23 | ADDON-6870 | Support for GovCloud and China regions in the configuration UI. |
2015-12-22 | ADDON-6862 | Support in the configuration UI and backend for new source types: aws:s3:accesslogs, aws:cloudfront:accesslogs, aws:elb:accesslogs |
2015-12-17 | ADDON-6190 | CloudWatch input refreshes the resource ID list every few hours so as to include additional resources to a wildcarded statement. |
2015-12-17 | ADDON-6187 | CloudWatch collects S3 key count and total size of all keys in buckets. |
2015-12-15 | ADDON-6864 | S3 modular input backend automatically detects the region, thus supporting bucket names with dots in them without users needing to specify a region-specific endpoint. |
2015-12-15 | ADDON-6854 | Deprecation of character_set parameter for S3 input. Input supports auto-detection among UTF-8 with/without BOM, UTF-16LE/BE with BOM, UTF-32BE/LE with BOM. Other character sets are not supported. |
2015-12-15 | ADDON-6189 | Support for collecting ELB access logs using the aws:elb:accesslogs source type. |
2015-12-14 | ADDON-6869 | Support for S3 buckets in the Frankfurt region with V4 signature only. |
2015-12-14 | ADDON-6866 | Improved auditing information for log enrichment. |
2015-12-14 | ADDON-6859 | S3 input blacklist has improved performance. |
2015-12-14 | ADDON-6857 | S3 input whitelist has improved performance. |
2015-12-14 | ADDON-6860 | Improved handling of process failures without duplication or loss of data. |
2015-12-14 | ADDON-6861 | Support for checkpoint deletion behavior for the S3 input to avoid running into collection limits. |
2015-12-14 | ADDON-6865 | Support for initial scan time in the S3 input, as well as in the new aws:s3:accesslogs, aws:cloudfront:accesslogs, and aws:elb:accesslogs source types. |
2015-12-14 | ADDON-6863 | Improved collection behavior in the S3 input: if the key is updated without content changes, the add-on indexes the key again. If the key is changed during data collection, the add-on starts over with the data collection. |
2015-12-14 | ADDON-6868 | The S3 input supports standard server-side KMS encrypted objects. |
2015-12-14 | ADDON-6855 | The S3 input supports bin files. |
2015-12-14 | ADDON-6852 | Improved performance for S3 input. Approximately 300% performance enhancement against 2.0.1 release. Over 8000% performance improvement for small files. See Performance reference for the S3 input in the Splunk Add-on for AWS for details. |
2015-12-14 | ADDON-6434 | UI support for configuring alternate source types within the S3 input. |
2015-12-14 | ADDON-6196 | Support for collecting CloudFront access logs with the aws:cloudfront:accesslogs source type. |
2015-12-14 | ADDON-6526 | S3 input recognizes and skips S3 buckets with contents that have been moved to Glacier. |
2015-12-14 | ADDON-6188 | New source type for S3 access logs: aws:s3:accesslogs. |
2015-12-03 | ADDON-6433 | Improvements to the Description input’s API and interval configuration UI. |
2015-12-01 | ADDON-6519 | Improved timeout behavior in the configuration UI. |
2015-11-26 | ADDON-6194 | Improvements to field aliasing for AWS regions. |
2015-11-26 | ADDON-6207 | Gather metadata through the Description input for EBS, VPC, Security Group, Subnet, Network ACL, Key Pairs, ELB, CloudFront, RDS. |
Fixed issues¶
Version 3.0.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Resolved date | Defect number | Description |
---|---|---|
2016-01-14 | ADDON-7291 | S3 data input only shows 30 entries at maximum. |
2016-01-03 | ADDON-7258 | Configuration screen needs to show better error message when user may be trying to use an invalid AWS account. |
2015-12-31 | ADDON-7253 | Default initial_scan_datetime should be ISO8601 instead of the current default of current time minus 7 days. |
2015-12-16 | ADDON-7031 | UI errors when using the base URL via reverse proxy. |
2015-12-15 | ADDON-6754 | Typo in aws_cloudtrail.py script throws critical error in aws_cloudtrail.log with "NameError: global name 'taaw' is not defined". |
2015-12-15 | ADDON-7008 | Add-on is not indexing ELB data through Description input. |
2015-12-14 | ADDON-6308 | S3 input should validate key name does not include invalid characters such as leading or trailing whitespace. |
2015-11-26 | ADDON-6698 | AWS Billing account ID should be payer's account ID instead of linked account ID. |
2015-12-22 | ADDON-5491 | The add-on configuration UI displays all regions instead of those within the selected account's permission scope. |
2015-12-20 | ADDON-6958 / ADDON-5474 | No detailed error shown while getting S3 buckets via REST endpoint with wrong proxy or account settings. |
2015-01-22 | ADDON-3050/ | S3 input is breaking lines incorrectly and inconsistently indexing only partial events due to use of |
2014-08-14 | ADDON-1827 | Checkpoints are not cleared after data inputs are removed or the add-on is uninstalled, thus if you create a new input with the same name as the deleted one, the add-on uses the checkpoint from the old input. |
Known issues¶
Version 3.0.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Date filed | Defect number | Description |
---|---|---|
2016-05-04 | ADDON-9169 | Monthly Billing is not indexed by using UTC timezone |
2016-04-28 | ADDON-9145 | Error message shown on input creation screen has logic issues and is not as specific as we could be |
2016-04-19 | ADDON-8801 | Billing initial scan time should not use last modified time of S3 key |
2016-04-15 | ADDON-8721 | Sourcetype="aws:cloudwatchlogs:vpcflow" handles src and dest incorrectly |
2016-04-11 | ADDON-8686 | S3 input UI cannot display custom source types when user edits the input. |
2016-04-03 | ADDON-8547 | S3 modular input loses data if new keys are generated during the key listing process |
2016-04-02 | ADDON-8546 | S3 logging is unclear, should include indication of which input stanza is involved. |
2016-03-31 | ADDON-8548 | Cloudwatch Collection failing with Failed to get proxy information Empty |
2016-03-15 | ADDON-8299 | S3 input cannot progress if keys are deleted during the data collection. |
2016-02-29 | ADDON-8705 | Add-on throws "is not JSON serializable" error when calling AWS API for ELB information |
2016-02-25 | ADDON-7969 | CloudWatch has performance problems in large AWS accounts. |
2016-02-24 | ADDON-7957 | Unnecessary tag expansion slows performance. |
2016-02-24 | ADDON-7926 | Default value of max_file_size_csv_zip_in_bytes is too small to handle large detailed billing reports |
2016-02-22 | ADDON-7897 | s3util.py list_cloudwatch_namespaces has performance issue |
2016-02-19 | ADDON-7877 | Upon upgrade from version 2.0.X, S3 inputs experience two problems. |
2016-02-14 | ADDON-7777 | Not all fields are parsed for CloudFront |
2016-02-13 | ADDON-7778 | Cannot create new input when Splunk does not have a user named "admin" |
2016-02-13 | ADDON-7776 | CloudFront logs should be urldecoded |
2016-02-11 | ADDON-7764 | FIPS mode is not supported by this add-on. |
2016-01-25 | ADDON-7573 | CloudWatch input requests too many data points in long time windows. |
2016-01-18 | ADDON-7701 | CloudWatch fails to gather data when no metrics appear in a namespace for more than 12 hours. |
2016-01-13 | ADDON-7448 | In the Description data input, the port range defaults to null in vpc_network_acls if no range is specified, which is confusing, because it actually has a range of "all". |
2015-12-29 | ADDON-7239 | Using "/" in data input name causes exceptions. UI does not accept this character in the input names, but if you configure your input using conf files, you will find exceptions in logs. |
2015-12-22 | ADDON-7160 | Add-on throws a timeout error in the UI when user attempts to create a new S3 input, but successfully creates the input in the backend, causing errors if the user tries to create the same input again. |
2015-12-22 | ADDON-7159 | After removing all search peers, add-on still shows performance warnings. |
2015-12-21 | ADDON-7077 | Infrequent Access storage type not supported |
2015-12-16 | ADDON-7035 | Add-on ingests the header line of the CloudFront access log, but it should be skipped. |
2015-11-26 | ADDON-6701 | EC2, RDS, ELB, and EC2 APIs do not consider pagination. |
2015-10-14 | ADDON-6056 | S3 logging errors on Windows. |
2015-10-13 | ADDON-6043 | SQS message mistakenly deleted when the add-on throws an error retrieving data from an S3 bucket. |
2015-09-11 | ADDON-5500 | Preconfigured reports for billing data cannot handle reports that have a mix of different currencies. The report will use the first currency found and apply that to all costs. |
2015-09-11 | ADDON-5499 | CloudWatch: Previous selected Metric namespace always exists in the list regardless of the region change |
2015-09-11 | ADDON-5498 | Unclear error: Unexpected error "<class 'socket.error'>" from python handler: " Connection refused" when user specifies all regions in CloudWatch for one namespace, saves the configuration, and reloads it. |
2015-09-10 | ADDON-5481 | The add-on configuration UI does not handle insufficient Splunk user permissions gracefully. |
2015-09-10 | ADDON-5471 | Deleting a CloudWatch data input takes too long. |
2015-09-10 | ADDON-5469 | Missing or improper default value for un-required fields. |
2015-09-07 | ADDON-5355 | Different error message for same error when creating duplicated data inputs. |
2015-09-06 | ADDON-5354 | Using keyboard to delete selections from configuration dropdown multi-select field causes drop-down list to appear in corner of screen. |
2015-09-01 | ADDON-5309 | UI default value is not read from default input config file |
2015-09-01 | ADDON-5295 | Description inconsistent in the GUI for CloudTrail service and CloudTrail from S3 service blacklist behavior. |
2015-08-31 | ADDON-5212 | Chrome highlights "misspelling" of configuration text in the GUI. |
2015-07-06 | ADDON-6177 | When tmp file system runs out of space, aws_billing.py fails with IOError: No space left on device. |
2015-04-02 | ADDON-3578 | S3: uppercase bucket names cause an error |
2015-03-25 | ADDON-3460 | On OSs (like Debian and Ubuntu) that use dash for shell scripts, aws_cloudwatch.py spawns zombie processes. |
2014-09-28 | ADDON-2135 | The list of regions shown in inputs configuration in Splunk Web shows all Amazon regions regardless of the permissions associated with the selected AWS account. |
2014-09-26 | ADDON-2118 | Data inputs continue to work after user deletes the account used for that input. |
2014-09-25 | ADDON-2113 | The app.conf file includes a stanza for a proxy server configuration with a hashed password even if the user has not configured a proxy or password. |
2014-09-16 | ADDON-2029 | In saved search "Monthly Cost till *" _time is displayed per day rather than per month. |
Third-party software attributions¶
Version 3.0.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
- Bootstrap
- boto - AWS for Python
- jqBootstrapValidation
- jquery-cookie
- Httplib2
- remote-pdb
- SortedContainers
- select2
Version 2.0.1¶
Version 2.0.1 of the Splunk Add-on for Amazon Web Services has the same compatibility specifications as version 3.0.0.
Fixed issues¶
Version 2.0.1 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Resolved date | Defect number | Description |
---|---|---|
2015-11-04 | ADDON-5813 | S3 input cannot handle bucket names with “.” in them. See “Add an S3 input for the Splunk Add-on for AWS” for details of the solution. |
2015-10-28 | ADDON-6125 | Add-on makes too many unnecessary get_log_event API calls, causing inefficiencies in environments with many spot instances. |
2015-10-26 | ADDON-5785 | Corrupt VPC Flow checkpointer file in race condition. |
2015-10-20 | ADDON-5612 | When CloudTrail userName is null, add-on coalesces the userName to “root” instead of “unknown”. |
2015-10-15 | ADDON-6004 | Add-on GUI does not allow user to select an index that is only defined on the indexers. |
2015-10-11 | ADDON-6003 | Incorrect regions shown in region drop-down list. |
2015-10-11 | ADDON-6001 | Config fails to fetch events from an S3 bucket in a different region. |
2015-10-09 | ADDON-5833 | AWS CloudWatch log formatting exception. |
2015-10-09 | ADDON-4505 | Cloudwatchlog deadlocks due to throttling exceptions when an input task includes a large number of log groups. |
2015-10-09 | ADDON-5782 | A corrupted checkpointer file for VPC Flow blocks other logstreams. |
Known issues¶
Version 2.0.1 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Date filed | Defect number | Description |
---|---|---|
2015-12-15 | ADDON-7930 | Data collection for Cloudwatch S3 metrics does not support wildcard in BucketName or array length > 1. |
2015-11-09 | ADDON-6371 | In some cases, Splunk Cloud does not save the AWS account credentials after they are correctly entered. Workaround: File a support request to redeploy the add-on and restart the instance. |
2015-10-14 | ADDON-6056 | S3 logging errors on Windows. |
2015-10-13 | ADDON-6043 | SQS message mistakenly deleted when the add-on throws an error retrieving data from an S3 bucket. |
2015-09-11 | ADDON-5500 | Preconfigured reports for billing data cannot handle reports that have a mix of different currencies. The report will use the first currency found and apply that to all costs. |
2015-09-11 | ADDON-5499 | CloudWatch: Previous selected Metric namespace always exists in the list regardless of the region change. |
2015-09-11 | ADDON-5498 | Unclear error message: Failed to load options for Metric namespace. Detailed Error: Unexpected error "<class 'socket.error'>" from python handler: "[Errno 111] Connection refused" when user specifies all regions in CloudWatch for one namespace, saves the configuration, and reloads it. |
2015-09-10 | ADDON-5481 | The add-on configuration UI does not handle insufficient Splunk user permissions gracefully. |
2015-09-10 | ADDON-5474 | No detailed error shown while getting S3 buckets via REST endpoint with wrong proxy or account settings. |
2015-09-10 | ADDON-5471 | Deleting a CloudWatch data input takes too long. |
2015-09-10 | ADDON-5469 | Missing or improper default value for un-required fields. |
2015-09-10 | ADDON-5491 | The add-on configuration UI displays all regions instead of those within the selected account's permission scope. |
2015-09-07 | ADDON-5355 | Different error message for same error when creating duplicated data inputs. |
2015-09-06 | ADDON-5354 | Using keyboard to delete selections from configuration dropdown multi-select field causes drop-down list to appear in corner of screen. |
2015-09-01 | ADDON-5309 | UI default value is not read from default input config file. |
2015-09-01 | ADDON-5295 | Description inconsistent in the GUI for CloudTrail service and CloudTrail from S3 service blacklist behavior. |
2015-08-31 | ADDON-5212 | Chrome highlights "misspelling" of configuration text in the GUI. |
2015-07-09 | ADDON-3460 / | On OSs (like Debian and Ubuntu) that use dash for shell scripts, aws_cloudwatch.py spawns zombie processes. |
2015-07-06 | ADDON-6177 | aws_billing.py fails with IOError: [Errno 28] No space left on device. |
2015-04-03 | ADDON-3578 | Uppercase bucket name causes errors. |
2015-01-22 | ADDON-3050/ | S3 input is breaking lines incorrectly and inconsistently indexing only partial events. |
2015-01-25 | ADDON-3070 | The add-on does not index the Configuration.State.Code change from SQS that is reported to users on the AWS Config UI. Splunk Enterprise only indexes configuration snapshots from S3 as new events, and only after a "ConfigurationHistoryDeliveryCompleted" notification is received by SQS. |
2014-09-26 | ADDON-2118 | Data inputs continue to work after user deletes the account used for that input. Workaround: Restart Splunk Enterprise after deleting or modifying an AWS account. |
2014-09-28 | ADDON-2135 | The list of regions shown in inputs configuration in Splunk Web shows all Amazon regions regardless of the permissions associated with the selected AWS account. |
2014-09-26 | ADDON-2116/ | On Windows 2012, Splunk Web shows a timeout error when a user attempts to add or delete an AWS account on the setup page. Workaround: Refresh the page. |
2014-09-25 | ADDON-2113 | The app.conf file includes a stanza for a proxy server configuration with a hashed password even if the user has not configured a proxy or password. |
2014-09-16 | ADDON-2029 | In saved search "Monthly Cost till *" _time is displayed per day rather than per month. |
2014-09-09 | ADDON-1983 / | Errors can occur in checkpointing if modular input stdout is prematurely closed during termination. |
2014-08-26 | ADDON-1919 | If a user changes the configuration to use a different AWS account, Splunk Web continues to list buckets for the previously configured account. |
2014-08-17 | ADDON-1854 | After initial configuration, adjusting Max trackable items might cause data loss. |
2014-08-14 | ADDON-1827 | Checkpoints are not cleared after data inputs are removed or the add-on is uninstalled, thus if you create a new input with the same name as the deleted one, the add-on uses the checkpoint from the old input. Workaround: create unique input names to avoid picking up old checkpoint files. |
Third-party software attributions¶
Version 2.0.1 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Version 2.0.0¶
Version 2.0.0 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms.
Splunk platform versions | 6.3, 6.2 |
CIM | 4.0 and above |
Platforms | Platform independent |
Vendor Products | AWS CloudTrail, CloudWatch, CloudWatch Logs, Config, Billing, S3 |
New features¶
Version 2.0.0 of the Splunk Add-on for Amazon Web Services has the following new features.
Resolved date | Defect number | Description |
---|---|---|
2015-09-08 | ADDON-1671 | New configuration UI. |
2015-09-08 | ADDON-2126 / ADDON-5466 | Ability to manually enter S3 bucket names, SQS queue names, and metric namespaces in Splunk Web fields, in case connection to AWS is poor or user account lacks permissions to list buckets. |
2015-07-14 | ADDON-4543 | Added unified field for AWS account ID across all data inputs: aws_account_id . |
2015-07-06 | ADDON-3189 | Currency field added to AWS billing report data, allowing users to more accurately judge financial impact. |
2015-07-03 | ADDON-4260 / ADDON-1665 | Support for data ingestion from AWS CloudWatch Logs service, including VPC Flow Logs. |
2015-07-03 | ADDON-4259 | CIM mapping for VPC Flow Logs data. |
2015-06-30 | ADDON-4158 | Support for Config snapshot collection. |
2015-06-29 | ADDON-2364 | Support for collecting archives of CloudTrail data via S3 buckets by configuring the sourcetype aws:cloudtrail in an S3 input. |
2015-06-29 | ADDON-4413 | Support for multiple regions in a single CloudWatch input. |
2015-06-29 | ADDON-3235 | Support for disabling SSL proxies using the is_secure parameter in local/aws_global_settings.conf to alter the behavior of connections to AWS. |
2015-06-29 | ADDON-4180 | Support for inventory metadata collection from AWS. |
Fixed issues¶
Version 2.0.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Resolved date | Defect number | Description |
---|---|---|
2015-09-14 | ADDON-5158 | CloudTrail data missing some CIM tagging. |
2015-08-31 | ADDON-5200 | CloudWatch input calls AWS API inefficiently, using separate API call for each instance-metric combination. |
2015-08-31 | ADDON-2006 | Unfriendly error message when user specifies invalid account. |
2015-08-31 | ADDON-1932 | Unfriendly error message when configuring proxy incorrectly. |
2015-08-31 | ADDON-1926 | Splunk Web allows you to update and delete an AWS account for the add-on simultaneously. |
2015-09-09 | ADDON-4822 / CO-4912 | Some instances of Splunk Cloud show blank screens for all data input pages. Workaround: Set up a heavy forwarder on-prem to handle data inputs and forward the data to Splunk Cloud. |
Known issues¶
Version 2.0.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Date filed | Defect number | Description |
---|---|---|
2015-10-14 | ADDON-6056 | S3 logging errors on Windows. |
2015-10-13 | ADDON-6043 | SQS message mistakenly deleted when the add-on throws an error retrieving data from an S3 bucket. |
2015-10-09 | ADDON-6004 | Add-on GUI does not allow user to select an index that is only defined on the indexers. |
2015-10-09 | ADDON-6003 | Incorrect regions shown in region drop-down list. |
2015-10-09 | ADDON-6001 | Config fails to fetch events from an S3 bucket in a different region. |
2015-10-03 | ADDON-5833 | AWS CloudWatch log formatting exception. |
2015-09-28 | ADDON-5813 | S3 input cannot handle bucket names with "." in them. |
2015-09-24 | ADDON-5785 | Corrupt VPC Flow checkpointer file in race condition. |
2015-09-24 | ADDON-5782 | A corrupted checkpointer file for VPC Flow blocks other logstreams. |
2015-09-17 | ADDON-5612 | When CloudTrail userName is null, add-on coalesces the userName to "root" instead of "unknown". |
2015-09-11 | ADDON-5500 | Preconfigured reports for billing data cannot handle reports that have a mix of different currencies. The report will use the first currency found and apply that to all costs. |
2015-09-11 | ADDON-5499 | CloudWatch: Previous selected Metric namespace always exists in the list regardless of the region change. |
2015-09-11 | ADDON-5498 | Unclear error message: Failed to load options for Metric namespace. Detailed Error: Unexpected error "<class 'socket.error'>" from python handler: "[Errno 111] Connection refused" when user specifies all regions in CloudWatch for one namespace, saves the configuration, and reloads it. |
2015-09-10 | ADDON-5481 | The add-on configuration UI does not handle insufficient Splunk user permissions gracefully. |
2015-09-10 | ADDON-5491 | The add-on configuration UI displays all regions instead of those within the selected account's permission scope. |
2015-09-10 | ADDON-5474 | No detailed error shown while getting S3 buckets via REST endpoint with wrong proxy or account settings. |
2015-09-10 | ADDON-5471 | Deleting a CloudWatch data input takes too long. |
2015-09-10 | ADDON-5469 | Missing or improper default value for un-required fields. |
2015-09-07 | ADDON-5355 | Different error message for same error when creating duplicated data inputs. |
2015-09-06 | ADDON-5354 | Using keyboard to delete selections from configuration dropdown multi-select field causes drop-down list to appear in corner of screen. |
2015-09-01 | ADDON-5309 | UI default value is not read from default input config file. |
2015-09-01 | ADDON-5295 | Description inconsistent in the GUI for CloudTrail service and CloudTrail from S3 service blacklist behavior. |
2015-08-31 | ADDON-5212 | Chrome highlights "misspelling" of configuration text in the GUI. |
2015-07-10 | ADDON-4505 | Cloudwatchlog deadlocks due to throttling exceptions when an input task includes a large number of log groups. |
2015-07-09 | ADDON-3460 / CO-4749 / SPL-55904 | On OSs (like Debian and Ubuntu) that use dash for shell scripts, aws_cloudwatch.py spawns zombie processes. |
2015-07-06 | ADDON-6177 | aws_billing.py fails with IOError: [Errno 28] No space left on device. |
2015-04-03 | ADDON-3578 | Uppercase bucket name causes errors. |
2015-01-22 | ADDON-3050/ | S3 input is breaking lines incorrectly and inconsistently indexing only partial events. |
2015-01-25 | ADDON-3070 | The add-on does not index the Configuration.State.Code change from SQS that is reported to users on the AWS Config UI. Splunk Enterprise only indexes configuration snapshots from S3 as new events, and only after a "ConfigurationHistoryDeliveryCompleted" notification is received by SQS. |
2014-09-26 | ADDON-2118 | Data inputs continue to work after user deletes the account used for that input. Workaround: Restart Splunk Enterprise after deleting or modifying an AWS account. |
2014-09-28 | ADDON-2135 | The list of regions shown in inputs configuration in Splunk Web shows all Amazon regions regardless of the permissions associated with the selected AWS account. |
2014-09-26 | ADDON-2116/ | On Windows 2012, Splunk Web shows a timeout error when a user attempts to add or delete an AWS account on the setup page. Workaround: Refresh the page. |
2014-09-25 | ADDON-2113 | The app.conf file includes a stanza for a proxy server configuration with a hashed password even if the user has not configured a proxy or password. |
2014-09-16 | ADDON-2029 | In saved search "Monthly Cost till *" _time is displayed per day rather than per month. |
2014-09-09 | ADDON-1983 / ADDON-1938 / SPL-81771 | Errors can occur in checkpointing if modular input stdout is prematurely closed during termination. |
2014-08-26 | ADDON-1919 | If a user changes the configuration to use a different AWS account, Splunk Web continues to list buckets for the previously configured account. |
2014-08-17 | ADDON-1854 | After initial configuration, adjusting Max trackable items might cause data loss. |
2014-08-14 | ADDON-1827 | Checkpoints are not cleared after data inputs are removed or the add-on is uninstalled, thus if you create a new input with the same name as the deleted one, the add-on uses the checkpoint from the old input. Workaround: create unique input names to avoid picking up old checkpoint files. |
Third-party software attributions¶
Version 2.0.0 of the Splunk Add-on for Amazon Web Services incorporates the following third-party libraries.
Version 1.1.1¶
Version 1.1.1 of the Splunk Add-on for Amazon Web Services is compatible with the following software, CIM versions, and platforms.
Splunk Enterprise versions | 6.2, 6.1 |
CIM | 4.2, 4.1, 4.0 |
Platforms | Platform independent |
Vendor Products | AWS Billing, CloudTrail, CloudWatch, Config, S3 |
Fixed issues¶
Version 1.1.1 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Resolved date | Defect number | Description |
04/24/15 | ADDON-3512 | Timeout error on new account definition. Users can now set splunkdConnectionTimeout = 3000 in $SPLUNK_HOME/etc/system/local/web.conf to avoid setup timeout problems. |
04/21/15 | ADDON-3612 | Add-on cannot parse multi-account message format from SQS and CloudTrail. |
04/21/15 | ADDON-3577 | Input configuration timeout on retrieving bucket/key list from S3. |
03/01/15 | ADDON-3119 | Add-on fails to collect payloads from GovCloud region. |
Known issues¶
Version 1.1.1 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Date | Defect number | Description |
08/27/15 | ADDON-5158 | CloudTrail data missing some CIM tagging. |
08/06/15 | ADDON-4822 / CO-4581 | Some instances of Splunk Cloud show blank screens for all data input pages. Workaround: Set up a heavy forwarder on-prem to handle data inputs and forward the data to Splunk Cloud. |
04/10/15 | ADDON-3652 | Billing reports are not performant. |
04/03/15 | ADDON-3578 | Uppercase bucket name causes errors. |
01/22/15 | ADDON-3050/ | S3 input is breaking lines incorrectly and inconsistently indexing only partial events. |
01/25/15 | ADDON-3070 | The add-on does not index the Configuration.State.Code change from SQS that is reported to users on the AWS Config UI. Splunk Enterprise only indexes configuration snapshots from S3 as new events, and only after a "ConfigurationHistoryDeliveryCompleted" notification is received by SQS. |
01/06/15 | ADDON-2910 | Splunk Cloud customers cannot access props.conf to configure line breaking on S3 events. |
10/10/14 | ADDON-2154 | Billing input data has a non-ISO-8601 timestamp appended to the source field of each event. Workaround: Add a new field named "source2" in the suggested format: |
09/26/14 | ADDON-2118 | Data inputs continue to work after user deletes the account used for that input. Workaround: Restart Splunk Enterprise after deleting or modifying an AWS account. |
09/28/14 | ADDON-2135 | The list of regions shown in inputs configuration in Splunk Web shows all Amazon regions regardless of the permissions associated with the selected AWS account. |
09/26/14 | ADDON-2116/ | On Windows 2012, Splunk Web shows a timeout error when a user attempts to add or delete an AWS account on the setup page. Workaround: Refresh the page. |
09/26/14 | ADDON-2115 | If user does not provide a friendly name when configuring an AWS account in the setup screen, account is not configured but no error message appears |
09/25/14 | ADDON-2113 | The app.conf file includes a stanza for a proxy server configuration with a hashed password even if the user has not configured a proxy or password. |
09/25/14 | ADDON-2110 | In Splunk 6.2, when network is unstable, some input configuration fields fail to display in Splunk Web and no error message is shown. |
09/16/14 | ADDON-2029 | In saved search "Monthly Cost till *" _time is displayed per day rather than per month. |
09/11/14 | ADDON-2006 | Unfriendly error message when user specifies invalid account. |
09/09/14 | ADDON-1983 | If Splunk Enterprise restarts while indexing S3 data, data duplication might occur. Workaround: Use AWS command line tools. |
08/28/14 | ADDON-1938 | Checkpoint and retry time do not log correctly when Splunkd stops. |
08/28/14 | ADDON-1932 | Unfriendly error message when configuring proxy incorrectly. |
08/26/14 | ADDON-1926 | Splunk Web allows you to update and delete an AWS account for the add-on simultaneously. |
08/26/14 | ADDON-1919 | If a user changes the configuration to use a different AWS account, Splunk Web continues to list buckets for the previously configured account. |
08/24/14 | ADDON-1895 | If user tries to update a billing report manually using Microsoft Excel, the add-on cannot process the modified file and throws "failed to parse key" error. |
08/21/14 | ADDON-1885 | Splunk Enterprise does not validate Amazon Web Services credentials during add-on setup. |
08/17/14 | ADDON-1854 | After initial configuration, adjusting Max trackable items might cause data loss. |
08/14/14 | ADDON-1827 | Checkpoints are not cleared after data inputs are removed or the add-on is uninstalled, thus if you create a new input with the same name as the deleted one, the add-on uses the checkpoint from the old input. Workaround: create unique input names to avoid picking up old checkpoint files. |
03/12/14 | SPL-81771 | Errors can occur in checkpointing if modular input stdout is prematurely closed during termination. |
Third-party software attributions¶
Version 1.1.1 of the Splunk Add-on for Amazon Web Services incorporates boto - AWS for Python.
Version 1.1.0¶
Version 1.1.0 had the same compatibility specifications as Version 1.1.1.
New features¶
Version 1.1.0 of the Splunk Add-on for Amazon Web Services has the following new features.
Date | Issue number | Description |
02/12/15 | ADDON-3148 | Support for the SNS Subscription attributes for Raw Message Delivery for AWS Config and CloudTrail. |
02/09/15 | ADDON-1644 | Pre-built panels for CloudWatch, CloudTrail, and Billing data. |
12/18/14 | ADDON-2678 | Allow users to configure the log level. |
11/12/14 | ADDON-2202 | New modular input for AWS Config data. |
Fixed issues¶
Version 1.1.0 of the Splunk Add-on for Amazon Web Services fixes the following, if any, issues.
Resolved date | Defect number | Description |
02/11/15 | ADDON-2533 | Internal logs are source typed as “this-too-small”. |
02/10/15 | ADDON-2679 | Process for fetching logs runs in a loop. |
02/09/15 | ADDON-3154 | Support AssumedRole user name for CloudTrail. |
Known issues¶
Version 1.1.0 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
Date | Defect number | Description |
01/22/15 | ADDON-3050 | S3 input is breaking lines incorrectly. |
01/25/15 | ADDON-3070 | The add-on does not index the Configuration.State.Code change from SQS that is reported to users on the AWS Config UI. Splunk Enterprise only indexes configuration snapshots from S3 as new events, and only after a “ConfigurationHistoryDeliveryCompleted” notification is received by SQS. |
01/06/15 | ADDON-2910 | Splunk Cloud customers cannot access props.conf to configure line breaking on S3 events. |
09/28/14 | ADDON-2135 | The list of regions shown in inputs configuration in Splunk Web shows all Amazon regions regardless of the permissions associated with the selected AWS account. |
09/26/14 | ADDON-2116 | On Windows 2012, Splunk Web shows a timeout error when a user attempts to add or delete an AWS account on the setup page. Workaround: Refresh the page. |
09/26/14 | ADDON-2115 | If user does not provide a friendly name when configuring an AWS account in the setup screen, account is not configured but no error message appears |
09/25/14 | ADDON-2113 | The app.conf file includes a stanza for a proxy server configuration with a hashed password even if the user has not configured a proxy or password. This behavior is expected because Splunk Enterprise automatically sets the proxy field to 0 and saves an encrypted entry in app.conf . |
09/25/14 | ADDON-2110 | In Splunk 6.2, when network is unstable, some input configuration fields fail to display in Splunk Web and no error message is shown. |
09/16/14 | ADDON-2029 | In saved search “Monthly Cost till *” _time is displayed per day rather than per month. |
09/11/14 | ADDON-2006 | Unfriendly error message when user specifies invalid account. |
09/09/14 | ADDON-1983 | If Splunk Enterprise restarts while indexing S3 data, data duplication might occur. Workaround: Use AWS command line tools. |
08/28/14 | ADDON-1938 | Checkpoint and retry time do not log correctly when Splunkd stops. |
08/28/14 | ADDON-1932 | Unfriendly error message when configuring proxy incorrectly. |
08/26/14 | ADDON-1926 | Splunk Web allows you to update and delete an AWS account for the add-on simultaneously. |
08/26/14 | ADDON-1919 | If a user changes the configuration to use a different AWS account, Splunk Web continues to list buckets for the previously configured account. |
08/24/14 | ADDON-1895 | If user tries to update a billing report manually using Microsoft Excel, the add-on cannot process the modified file and throws “failed to parse key” error. |
08/21/14 | ADDON-1885 | Splunk Enterprise does not validate Amazon Web Services credentials during add-on setup. |
08/17/14 | ADDON-1854 | After initial configuration, adjusting Max trackable items might cause data loss. |
08/14/14 | ADDON-1827 | Checkpoints are not cleared after data inputs are removed or the add-on is uninstalled, thus if you create a new input with the same name as the deleted one, the add-on uses the checkpoint from the old input. Workaround: create unique input names to avoid picking up old checkpoint files. |
03/12/14 | SPL-81771 | Errors can occur in checkpointing if modular input stdout is prematurely closed during termination. |
Third-party software attributions¶
Version 1.1.0 of the Splunk Add-on for Amazon Web Services incorporates boto - AWS for Python.
Version 1.0.1¶
Version 1.0.1 of the Splunk Add-on for Amazon Web Services was compatible with the following software, CIM versions, and platforms.
Splunk Enterprise versions | 6.2, 6.1 |
CIM | 4.1, 4.0, 3.0 |
Platforms | Platform independent |
Vendor Products | AWS Billing, CloudTrail, CloudWatch, S3 |
Fixed issues¶
Version 1.0.1 of the Splunk Add-on for Amazon Web Services fixed the following issues.
Resolved date | Defect number | Description |
12/16/14 | ADDON-2530 | New version of boto library required to support eu-central-1 region. |
12/11/14 | ADDON-2359 | Unexpected SQS messages can block inputs. |
Known issues¶
Version 1.0.1 of the Splunk Add-on for Amazon Web Services has the following, if any, known issues.
- Internal log files are incorrectly sourcetyped as N-too-small. (ADDON-2533)
- Errors can occur in checkpointing if modular input stdout is prematurely closed during termination. (SPL-81771)
- After initial configuration, adjusting Max trackable items might cause data loss. (ADDON-1854)
- Splunk Enterprise does not validate Amazon Web Services credentials during add-on setup. (ADDON-1885)
- If user tries to update a billing report manually using Microsoft Excel, the add-on cannot process the modified file and throws “failed to parse key” error. (ADDON-1895)
- If a user changes the configuration to use a different AWS account, Splunk Web continues to list buckets for the previously configured account. (ADDON-1919)
- Splunk Web allows you to update and delete an AWS account for the add-on simultaneously. (ADDON-1926)
- Setup and configuration pages in Splunk Web give unfriendly error messages when given invalid inputs (ADDON-1932, ADDON-2006)
- If Splunk Enterprise restarts while indexing S3 data, data duplication might occur. Workaround: Use AWS command line tools. (ADDON-1983 and ADDON-1938)
- In saved search “Monthly Cost till *” _time is displayed per day rather than per month. (ADDON-2029)
- The app.conf file includes a stanza for a proxy server configuration with a hashed password even if the user has not configured a proxy or password. This behavior is expected because Splunk Enterprise automatically sets the proxy field to 0 and saves an encrypted entry in app.conf. (ADDON-2113)
- If user does not provide a friendly name when configuring an AWS account in the setup screen, account is not configured but no error message appears (ADDON-2115)
- On Windows 2012, Splunk Web shows a timeout error when a user attempts to add or delete an AWS account on the setup page. Workaround: Refresh the page. (ADDON-2116)
- The list of regions shown in inputs configuration in Splunk Web shows all Amazon regions regardless of the permissions associated with the selected AWS account. (ADDON-2135)
- In Splunk 6.2, when network is unstable, some input configuration fields fail to display in Splunk Web and no error message is shown. (ADDON-2110)
Third-party software attributions¶
Version 1.0.1 of the Splunk Add-on for Amazon Web Services incorporated boto - AWS for Python.
Version 1.0.0¶
Version 1.0.0 of the Splunk Add-on for Amazon Web Services had the same compatibility specifications as version 1.0.1.
Known issues¶
Version 1.0.0 of the Splunk Add-on for Amazon Web Services had the following known issues:
- Errors can occur in checkpointing if modular input stdout is prematurely closed during termination. (SPL-81771)
- After initial configuration, adjusting Max trackable items might cause data loss. (ADDON-1854)
- Splunk Enterprise does not validate Amazon Web Services credentials during add-on setup. (ADDON-1885)
- If user tries to update a billing report manually using Microsoft Excel, the add-on cannot process the modified file and throws “failed to parse key” error. (ADDON-1895)
- If a user changes the configuration to use a different AWS account, Splunk Web continues to list buckets for the previously configured account. (ADDON-1919)
- Splunk Web allows you to update and delete an AWS account for the add-on simultaneously. (ADDON-1926)
- Setup and configuration pages in Splunk Web give unfriendly error messages when given invalid inputs (ADDON-1932, ADDON-2006)
- If Splunk Enterprise restarts while indexing S3 data, data duplication might occur. Workaround: Use AWS command line tools. (ADDON-1983 and ADDON-1938)
- In saved search “Monthly Cost till *” _time is displayed per day rather than per month. (ADDON-2029)
- The app.conf file includes a stanza for a proxy server configuration with a hashed password even if the user has not configured a proxy or password. This behavior is expected because Splunk Enterprise automatically sets the proxy field to 0 and saves an encrypted entry in app.conf. (ADDON-2113)
- If user does not provide a friendly name when configuring an AWS account in the setup screen, account is not configured but no error message appears (ADDON-2115)
- On Windows 2012, Splunk Web shows a timeout error when a user attempts to add or delete an AWS account on the setup page. Workaround: Refresh the page. (ADDON-2116)
- The list of regions shown in inputs configuration in Splunk Web shows all Amazon regions regardless of the permissions associated with the selected AWS account. (ADDON-2135)
- In Splunk 6.2, when network is unstable, some input configuration fields fail to display in Splunk Web and no error message is shown. (ADDON-2110)
Third-party software attributions¶
Version 1.0.0 of the Splunk Add-on for Amazon Web Services incorporated boto - AWS for Python.