Sizing, performance, and cost considerations for the Splunk Add-on for AWS¶
Before you configure the Splunk Add-on for Amazon Web Services (AWS), review these sizing, performance, and cost considerations.
General¶
See the following table for the recommended maximum daily indexing volume per clustered indexer for different AWS source types. These figures are based on a generic Splunk hardware configuration; adjust the number of indexers in your cluster based on your actual system performance. Add indexers to a cluster to improve indexing and search retrieval performance. Remove indexers from a cluster to reduce within-cluster data replication traffic.
Source type | Daily indexing volume per indexer (GB) |
---|---|
aws:cloudwatchlogs:vpcflow | 25-30 |
aws:s3:accesslogs | 80-120 |
aws:cloudtrail | 150-200 |
aws:billing | 50-100 |
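As a worked example of applying the table above, dividing an expected daily volume by the per-indexer figure gives a rough indexer count. The 600 GB/day input volume used here is an assumption for illustration:

```python
import math

def estimate_indexers(daily_volume_gb, per_indexer_gb):
    """Rough indexer count: expected daily volume divided by the
    table's per-indexer daily capacity, rounded up."""
    return math.ceil(daily_volume_gb / per_indexer_gb)

# Example: 600 GB/day of aws:cloudtrail at ~150 GB/day per indexer
print(estimate_indexers(600, 150))  # 4
```

This is a starting point only; validate against observed indexing and search performance before finalizing cluster size.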
These sizing recommendations are based on the Splunk platform hardware configurations in the following table. You can also use the System requirements for use of Splunk Enterprise on-premises in the Splunk Enterprise Installation Manual as a reference.
Splunk platform type | CPU cores | RAM | EC2 instance type |
---|---|---|---|
Search head | 8 | 16 GB | c4.xlarge |
Indexer | 16 | 64 GB | m4.4xlarge |
Input configuration screens require data transfer from AWS to populate the services, queues, and buckets available to your accounts. If your connection to AWS is slow, these screens might take a long time to load. If you encounter timeout issues, you can type in resource names manually.
Performance for the Splunk Add-on for AWS data inputs¶
The rate of data ingestion for this add-on depends on several factors: deployment topology, number of keys in a bucket, file size, file compression format, number of events in a file, event size, and hardware and networking conditions.
See the following tables for measured throughput data achieved under certain operating conditions. Use the information to optimize the Splunk Add-on for AWS in your own production environment. Because performance varies based on user characteristics, application usage, server configurations, and other factors, specific performance results cannot be guaranteed. Contact Splunk Support for accurate performance tuning and sizing.
The Kinesis input for the Splunk Add-on for AWS has its own performance data. See Configure Kinesis inputs for the Splunk Add-on for AWS.
Reference hardware and software environment¶
Throughput data and conclusions are based on performance testing using Splunk platform instances (dedicated heavy forwarders and indexers) running on the following environment:
Instance type | m4.4xlarge |
Memory | 64 GB |
Compute Units (ECU) | 53.5 |
vCPU | 16 |
Storage (GB) | 0 (EBS only) |
Arch | 64-bit |
EBS optimized (max bandwidth) | 2000 Mbps |
Network performance | High |
The following settings are configured in the outputs.conf file on the heavy forwarder:
useACK = true
maxQueueSize = 15MB
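For context, these settings belong in a `[tcpout]` stanza of outputs.conf. A minimal sketch, assuming placeholder indexer addresses (only useACK and maxQueueSize come from the test environment above):

```ini
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# Placeholder addresses; replace with your receiving indexers.
server = indexer1.example.com:9997, indexer2.example.com:9997
# Settings used in the performance tests:
useACK = true
maxQueueSize = 15MB
```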
Measured performance data¶
The throughput data is the maximum performance for each single input achieved in performance testing under specific operating conditions and is subject to change when any of the hardware and software variables changes. Use this data as a rough reference only.
Single-input max throughput¶
Data input | Source type | Max throughput (KB/s) | Max EPS (events/s) | Max throughput (GB/day) |
---|---|---|---|---|
Generic S3 | aws:elb:accesslogs (plain text, syslog, event size 250 B, S3 key size 2 MB) | 17,000 | 86,000 | 1,470 |
Generic S3 | aws:cloudtrail (gz, json, event size 720 B, S3 key size 2 MB) | 11,000 | 35,000 | 950 |
Incremental S3 | aws:elb:accesslogs (plain text, syslog, event size 250 B, S3 key size 2 MB) | 11,000 | 43,000 | 950 |
Incremental S3 | aws:cloudtrail (gz, json, event size 720 B, S3 key size 2 MB) | 7,000 | 10,000 | 600 |
SQS-based S3 | aws:elb:accesslogs (plain text, syslog, event size 250 B, S3 key size 2 MB) | 12,000 | 50,000 | 1,000 |
SQS-based S3 | aws:elb:accesslogs (gz, syslog, event size 250 B, S3 key size 2 MB) | 24,000 | 100,000 | 2,000 |
SQS-based S3 | aws:cloudtrail (gz, json, event size 720 B, S3 key size 2 MB) | 13,000 | 19,000 | 1,100 |
CloudWatch logs [1] | aws:cloudwatchlogs:vpcflow | 1,000 | 6,700 | 100 |
CloudWatch (ListMetric, 10,000 metrics) | aws:cloudwatch | 240 (metrics/s) | N/A | N/A |
CloudTrail | aws:cloudtrail (gz, json, sqs=1,000, 9,000 events/key) | 5,000 | 7,000 | 400 |
Kinesis | aws:cloudwatchlogs:vpcflow (json, 10 shards) | 15,000 | 125,000 | 1,200 |
SQS | aws:sqs (json, event size 2,800 B) | N/A | 160 | N/A |
[1] An API throttling error occurs if there are more than 1,000 input streams.
Multi-input max throughput¶
The following throughput data was measured with multiple inputs configured on a heavy forwarder in an indexer cluster distributed environment.
Consolidate AWS accounts during add-on configuration to reduce CPU usage and increase throughput performance.
Data input | Source type | Max throughput (KB/s) | Max EPS (events/s) | Max throughput (GB/day) |
---|---|---|---|---|
Generic S3 | aws:elb:accesslogs (plain text, syslog, event size 250 B, S3 key size 2 MB) | 23,000 | 108,000 | 1,980 |
Generic S3 | aws:cloudtrail (gz, json, event size 720 B, S3 key size 2 MB) | 45,000 | 130,000 | 3,880 |
Incremental S3 | aws:elb:accesslogs (plain text, syslog, event size 250 B, S3 key size 2 MB) | 34,000 | 140,000 | 2,930 |
Incremental S3 | aws:cloudtrail (gz, json, event size 720 B, S3 key size 2 MB) | 45,000 | 65,000 | 3,880 |
SQS-based S3 [1] | aws:elb:accesslogs (plain text, syslog, event size 250 B, S3 key size 2 MB) | 35,000 | 144,000 | 3,000 |
SQS-based S3 [1] | aws:elb:accesslogs (gz, syslog, event size 250 B, S3 key size 2 MB) | 42,000 | 190,000 | 3,600 |
SQS-based S3 [1] | aws:cloudtrail (gz, json, event size 720 B, S3 key size 2 MB) | 45,000 | 68,000 | 3,900 |
CloudWatch logs | aws:cloudwatchlogs:vpcflow | 1,000 | 6,700 | 100 |
CloudWatch (ListMetric) | aws:cloudwatch (10,000 metrics) | 240 (metrics/s) | N/A | N/A |
CloudTrail | aws:cloudtrail (gz, json, sqs=100, 9,000 events/key) | 20,000 | 15,000 | 1,700 |
Kinesis | aws:cloudwatchlogs:vpcflow (json, 10 shards) | 18,000 | 154,000 | 1,500 |
SQS | aws:sqs (json, event size 2,800 B) | N/A | 670 | N/A |
[1] Performance testing of the SQS-based S3 input indicates that throughput peaks at four inputs on a single heavy forwarder instance. To scale beyond this bottleneck, create multiple heavy forwarder instances, each configured with up to four SQS-based S3 inputs that concurrently consume messages from the same SQS queue.
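The scale-out pattern in the note above (several identically configured inputs draining one shared queue) can be sketched with standard-library stand-ins. Here queue.Queue plays the role of the SQS queue; all names are illustrative, not the add-on's internals:

```python
import queue
import threading

# Stand-in for the shared SQS queue: each message points at one S3 key.
messages = queue.Queue()
for i in range(12):
    messages.put(f"s3-key-{i}")

ingested = []
lock = threading.Lock()

def consumer(name):
    """One SQS-based S3 input: pull a message, 'ingest' the key, repeat."""
    while True:
        try:
            key = messages.get_nowait()
        except queue.Empty:
            return
        with lock:
            ingested.append((name, key))

# Up to four inputs per heavy forwarder; extra forwarders add more consumers.
threads = [threading.Thread(target=consumer, args=(f"input-{n}",)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(ingested))  # 12: every key is consumed exactly once
```

In the real deployment, SQS message visibility timeouts (rather than a thread-safe queue) are what prevent two consumers from ingesting the same key.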
Max inputs benchmark per heavy forwarder¶
The following maximum input counts were measured with multiple inputs configured on a heavy forwarder in an indexer cluster distributed environment, with CPU and memory resources fully utilized.
It is possible to configure more inputs than the maximum number indicated in the table if you have a smaller event size, fewer keys per bucket, or more available CPU and memory resources in your environment.
Data input | Source type | Format | Number of keys/bucket | Event size | Max inputs |
---|---|---|---|---|---|
S3 | aws:s3 | zip, syslog | 100,000 | 100 B | 300 |
S3 | aws:cloudtrail | gz, json | 1,300,000 | 1 KB | 30 |
Incremental S3 | aws:cloudtrail | gz, json | 1,300,000 | 1 KB | 20 |
SQS-based S3 | aws:cloudtrail, aws:config | gz, json | 1,000,000 | 1 KB | 50 |
Memory usage benchmark for generic S3 inputs¶
Event size | Number of events per key | Total number of keys | Archive type | Number of inputs | Memory used |
---|---|---|---|---|---|
1,000 B | 1,000 | 10,000 | zip | 20 | 20 GB |
1,000 B | 1,000 | 1,000 | zip | 20 | 12 GB |
1,000 B | 1,000 | 10,000 | zip | 10 | 18 GB |
100 B | 1,000 | 10,000 | zip | 10 | 15 GB |
If you do not achieve the expected AWS data ingestion throughput, see Troubleshoot the Splunk Add-on for AWS.
Billing¶
The following table provides general guidance on sizing, performance, and cost considerations for the Billing data input:
Consideration | Notes |
---|---|
Sizing and performance | Detailed billing reports can be very large, depending on your environment. If you configure the add-on to collect detailed reports, it collects all historical reports available in the bucket by default. In addition, for each newly finalized monthly and detailed report, the add-on collects a new copy of the same report once per interval until the report's ETag stops changing. Configure separate inputs for each billing report type that you want to collect, and use the regex and interval parameters on the input configuration page to limit the number of reports that each input collects. |
AWS cost | Billing reports themselves do not incur charges, but standard Amazon S3 charges apply. See the Amazon S3 pricing page. |
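As an illustration of the regex parameter mentioned above, the following sketch filters hypothetical report keys down to one report type and date range. The key-naming convention shown is an assumption for illustration, not the add-on's documented format:

```python
import re

# Hypothetical report keys; the naming convention is illustrative only.
keys = [
    "123456789012-aws-billing-detailed-line-items-2024-01.csv.zip",
    "123456789012-aws-billing-detailed-line-items-2024-02.csv.zip",
    "123456789012-aws-billing-csv-2024-01.csv",
]

# Keep only detailed line-item reports from 2024.
pattern = re.compile(r"detailed-line-items-2024-\d{2}\.csv\.zip$")
selected = [k for k in keys if pattern.search(k)]
print(selected)  # the two detailed line-item keys
```

A tighter pattern means fewer reports collected per input, which keeps both data transfer and indexing volume down.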
Billing Cost and Usage Report¶
The following table provides general guidance on sizing, performance, and cost considerations for the Billing Cost and Usage Report data input. Testing was conducted using version 7.1.0 of the Splunk Add-on for AWS.
Splunk Platform Environment | Architecture setup | Number of inputs | Event Count | Data Collection time | Max CPU % | Max RAM % |
---|---|---|---|---|---|---|
Customer Managed Platform (CMP) | m5.4xlarge (16 vCPU / 64 GiB RAM) | 1 | 2,828,618 | ~8 min | 12.13% | 0.20% |
Classic Inputs Data Manager (IDM) | m5.4xlarge (16 vCPU / 64 GiB RAM) | 1 | 2,828,618 | ~11 min | 12.09% | 0.21% |
Victoria Search Head Cluster (SHC) | m5.4xlarge (16 vCPU / 64 GiB RAM) | 1 | 2,828,618 | ~6 min | 12.01% | 0.20% |
CloudTrail¶
The following table provides general guidance on sizing, performance, and cost considerations for the CloudTrail data input:
Consideration | Notes |
---|---|
Sizing and performance | None. |
AWS cost | Using CloudTrail itself does not incur charges, but standard S3, SNS, and SQS charges apply. See the AWS pricing pages for those services. |
Config¶
The following table provides general guidance on sizing, performance, and cost considerations for the Config data input:
Consideration | Notes |
---|---|
Sizing and performance | None. |
AWS cost | Using AWS Config incurs charges from AWS. In addition, standard S3, SNS, and SQS charges apply. See the AWS pricing pages for those services. |
Config Rules¶
The following table provides general guidance on sizing, performance, and cost considerations for the Config Rules data input:
Consideration | Notes |
---|---|
Sizing and performance | None. |
AWS cost | None. |
CloudWatch¶
The following table provides general guidance on sizing, performance, and cost considerations for the CloudWatch data input:
Consideration | Notes |
---|---|
Sizing and performance | The smaller the granularity you configure, the more events you collect. Create separate inputs that match your needs for different regions, services, and metrics. For each input, configure a granularity that matches the precision you require, setting a larger granularity value where indexing fewer, less-granular events is acceptable. You can temporarily increase granularity when a problem is detected. AWS rate-limits the number of free API calls against the CloudWatch API. With a period of 300 and a polling interval of 1,800, collecting data for 2 million metrics does not, by itself, exceed the current default rate limit, but collecting 4 million metrics does. If you have millions of metrics to collect, consider paying to have your API limit raised, or remove less essential metrics from your input and configure larger granularities to make fewer API calls. |
AWS cost | Using CloudWatch and making requests against the CloudWatch API incur charges from AWS. See the Amazon CloudWatch pricing page. |
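As a rough sanity check of the rate-limit guidance above, assume one CloudWatch API call per metric per polling interval (a simplifying assumption; the add-on's actual call pattern may differ). The sustained call rate then scales linearly with the metric count:

```python
def api_calls_per_second(num_metrics, polling_interval_s):
    """One call per metric per polling interval, spread evenly over
    the interval."""
    return num_metrics / polling_interval_s

# With a 1,800 s polling interval:
print(api_calls_per_second(2_000_000, 1800))  # ~1,111 calls/s
print(api_calls_per_second(4_000_000, 1800))  # ~2,222 calls/s
```

Under this model, doubling the metric count from 2 million to 4 million doubles the sustained call rate, which matches the observation that the larger collection exceeds the default limit while the smaller one does not.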
CloudWatch Logs (VPC Flow Logs)¶
The following table provides general guidance on sizing, performance, and cost considerations for the CloudWatch Logs (VPC Flow Logs) data input:
Consideration | Notes |
---|---|
Sizing and performance | AWS limits each account to 10 requests per second, and each request returns no more than 1 MB of data, so the data ingestion rate is at most 10 MB/s. The add-on modular input can process up to 4,000 events per second in a single log stream. |
AWS cost | Using CloudWatch Logs and transferring data out of CloudWatch Logs incur charges from AWS. See the Amazon CloudWatch pricing page. |
CloudWatch Metrics¶
The following table provides general guidance on sizing, performance, and cost considerations for the CloudWatch Metrics data input. Testing was conducted using version 7.1.0 of the Splunk Add-on for AWS.
The number of API calls is m × n, where m is the number of unique metric dimensions and n is the number of unique metric names.
Splunk Platform Environment | Architecture setup | Number of inputs | Number of API calls | Event Count | Data Collection time | Max CPU % | Max RAM % |
---|---|---|---|---|---|---|---|
Customer Managed Platform (CMP) | m5.4xlarge (16 vCPU / 64 GiB RAM) | 1 | 200,000 | 400,000 | ~28 min | 16.03% | 1.67% |
Classic Inputs Data Manager (IDM) | m5.4xlarge (16 vCPU / 64 GiB RAM) | 1 | 200,000 | 400,000 | ~35 min | 14.89% | 1.83% |
Victoria Search Head Cluster (SHC) | m5.4xlarge (16 vCPU / 64 GiB RAM) | 1 | 200,000 | 400,000 | ~32 min | 15.09% | 1.70% |
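The m × n relationship in the note above can be expressed directly. The 400 by 500 split used here is an assumed breakdown that reproduces the 200,000-call figure from the table:

```python
def num_api_calls(unique_dimensions, unique_metric_names):
    """Total CloudWatch API calls = m * n, per the note above."""
    return unique_dimensions * unique_metric_names

# e.g. 400 unique dimension sets x 500 unique metric names (assumed split)
print(num_api_calls(400, 500))  # 200000
```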
Incremental S3¶
The following table provides general guidance on sizing, performance, and cost considerations for the Incremental S3 data input. Testing was conducted using version 7.1.0 of the Splunk Add-on for AWS.
Splunk Platform Environment | Architecture setup | Number of inputs | Event count | Data Collection time | Max CPU % | Max RAM % |
---|---|---|---|---|---|---|
Customer Managed Platform (CMP) | m5.4xlarge (16 vCPU / 64 GiB RAM) | 1 | 10,491,968 | ~104 min | 0.05% | 0.10% |
Classic Inputs Data Manager (IDM) | m5.4xlarge (16 vCPU / 64 GiB RAM) | 1 | 10,491,968 | ~105 min | 0.21% | 0.11% |
Victoria Search Head Cluster (SHC) | m5.4xlarge (16 vCPU / 64 GiB RAM) | 1 | 10,491,968 | ~104 min | 0.02% | 0.11% |
Inspector¶
The following table provides general guidance on sizing, performance, and cost considerations for the Inspector data input:
Consideration | Notes |
---|---|
Sizing and performance | None. |
AWS cost | Using Amazon Inspector incurs charges from AWS. See https://aws.amazon.com/inspector/pricing/. |
Kinesis¶
The following table provides general guidance on sizing, performance, and cost considerations for the Kinesis data input:
Consideration | Notes |
---|---|
Sizing and performance | See Performance reference for the Kinesis input in the Splunk Add-on for AWS. |
AWS cost | Using Amazon Kinesis incurs charges from AWS. See https://aws.amazon.com/kinesis/streams/pricing/. |
S3¶
The following table provides general guidance on sizing, performance, and cost considerations for the S3 data input:
Consideration | Notes |
---|---|
Sizing and performance | AWS throttles S3 data collection at the bucket level, so expect some delay before all data arrives in your Splunk platform. You can configure multiple S3 inputs for a single S3 bucket to improve performance. The Splunk platform dedicates one process to each data input, so provided that your system has sufficient processing power, performance improves with multiple inputs. See Performance reference for the S3 input in the Splunk Add-on for AWS for details. |
AWS cost | Using S3 incurs charges from AWS. See the Amazon S3 pricing page. |
Security Lake¶
The following tables provide general guidance on sizing, performance, and cost considerations for the Amazon Security Lake data input. Files ranging in size from 20 KB to 200 MB were used to collect the performance statistics.
Splunk Platform Environment | Architecture setup | Number of indexers | Number of inputs | Batch size | Heavy forwarder/IDM CPU % | Heavy forwarder/IDM RAM % | Expected average throughput indexed |
---|---|---|---|---|---|---|---|
Customer Managed Platform (CMP) | | N/A | 1 | 5 | 11.77% | 3.37% | 3.33 GB/h |
Splunk Cloud Classic | | 3 | 1 | 5 | 99.90% | 22.06% | 2.82 GB/h |
Splunk Cloud Victoria | | 3 | 1 | 5 | 54.28% | 22.78% | 2.58 GB/h |
Customer Managed Platform (CMP) | | N/A | 2 | 5 | 9.93% | 3.20% | 7.72 GB/h |
Splunk Cloud Classic | | 6 | 2 | 5 | 99.95% | 22.71% | 5.60 GB/h |
Splunk Cloud Victoria | | 6 | 2 | 5 | 55.68% | 24.13% | 5.28 GB/h |
Customer Managed Platform (CMP) | | N/A | 5 | 5 | 85.42% | 13.65% | 277 GB/h |
Splunk Cloud Classic | | 9 | 5 | 5 | 99.95% | 27.29% | 96.93 GB/h |
Splunk Cloud Victoria | | 9 | 5 | 5 | 66.45% | 21.18% | 214 GB/h |
Customer Managed Platform (CMP) | | N/A | 1 | 10 | 10.03% | 3.07% | 5.07 GB/h |
Splunk Cloud Classic | | 3 | 1 | 10 | 99.95% | 23.20% | 5.32 GB/h |
Splunk Cloud Victoria | | 3 | 1 | 10 | 54.69% | 21.14% | 5.22 GB/h |
Customer Managed Platform (CMP) | | N/A | 2 | 10 | 15.02% | 3.31% | 8.99 GB/h |
Splunk Cloud Classic | | 6 | 2 | 10 | 99.95% | 25.89% | 10.78 GB/h |
Splunk Cloud Victoria | | 6 | 2 | 10 | 57.58% | 20.83% | 9.58 GB/h |
Customer Managed Platform (CMP) | | N/A | 5 | 10 | 82.09% | 16.00% | 278 GB/h |
Splunk Cloud Classic | | 9 | 5 | 10 | 99.93% | 22.96% | 100 GB/h |
Splunk Cloud Victoria | | 9 | 5 | 10 | 61.59% | 19.63% | 325 GB/h |
Performance reference notes:
- The Amazon Security Lake data input is stateless, so multiple inputs can be configured against the same SQS queue.
- The following configuration settings are used to scale:
  - Batch size: the number of threads spawned by a single input. For example, n=10 processes 10 messages in parallel.
  - Number of Amazon Security Lake inputs.
- If you have horizontally scaled the SQS-based S3 input by configuring multiple inputs against the same SQS queue, and the file sizes in your S3 bucket are inconsistent, the best practice is to decrease the batch size (to a minimum of 1), because batches are processed sequentially.
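The batch-size behavior described above, where one input spawns n threads to handle n messages in parallel, can be sketched with a thread pool (the message processing is a placeholder, not the add-on's actual pipeline):

```python
from concurrent.futures import ThreadPoolExecutor

def process_message(msg):
    """Placeholder for downloading and indexing one S3 object."""
    return f"indexed:{msg}"

def run_batch(messages, batch_size):
    """One input: up to batch_size messages are handled in parallel;
    the next batch starts only after this one completes."""
    with ThreadPoolExecutor(max_workers=batch_size) as pool:
        return list(pool.map(process_message, messages))

results = run_batch([f"msg-{i}" for i in range(10)], batch_size=10)
print(len(results))  # 10
```

Because each batch completes before the next begins, one unusually large file stalls the whole batch; that is why a smaller batch size is preferable when file sizes vary widely.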
Transit Gateway Flow Logs¶
The following tables provide general guidance on sizing, performance, and cost considerations for the Transit Gateway Flow Logs data input. Files of 1 MB were used to collect the performance statistics. The batch size for all inputs was 10.
Splunk Platform Environment | Number of indexers | Number of inputs | Heavy forwarder/IDM CPU % | Heavy forwarder/IDM RAM % | Expected average throughput indexed |
---|---|---|---|---|---|
Customer Managed Platform (CMP) | N/A | 1 | 38.40% | 7.26% | 26,578 KB/min |
Customer Managed Platform (CMP) | N/A | 5 | 50.75% | 6.98% | 40,116 KB/min |
Victoria Search Head Cluster | 3 | 1 | 24.34% | 9.20% | 40,483 KB/min |
Victoria Search Head Cluster | 9 | 5 | 41.12% | 18.05% | 61,498 KB/min |
Classic Cluster (1 IDM) | 3 | 1 | 22.37% | 7.52% | 45,048 KB/min |
Classic Cluster (1 IDM) | 9 | 5 | 29.05% | 20.40% | 53,792 KB/min |
SQS¶
The following table provides general guidance on sizing, performance, and cost considerations for the SQS data input:
Consideration | Notes |
---|---|
Sizing and performance | None. |
AWS cost | Using SQS incurs charges from AWS. See https://aws.amazon.com/sqs/pricing/. |
SNS¶
The following table provides general guidance on sizing, performance, and cost considerations for the SNS data input:
Consideration | Notes |
---|---|
Sizing and performance | None. |
AWS cost | Using SNS incurs charges from AWS. See https://aws.amazon.com/sns/pricing/. |