
Configure SQS-based S3 inputs for the Splunk Add-on for AWS

Complete the steps to configure SQS-based S3 inputs for the Splunk Add-on for Amazon Web Services (AWS):

  1. You must manage accounts for the add-on as a prerequisite. See Manage accounts for the Splunk Add-on for AWS.
  2. Configure AWS services for the SQS-based S3 input.
  3. Configure AWS permissions for the SQS-based S3 input.
  4. (Optional) Configure VPC Interface Endpoints for STS, SQS, and S3 services from your AWS Console if you want to use private endpoints for data collection and authentication. For more information, see the Interface VPC endpoints (AWS PrivateLink) topic in the Amazon Virtual Private Cloud documentation.
  5. Configure SQS-based S3 inputs either through Splunk Web or configuration files.

Configuration prerequisites

Delimited Files parsing prerequisites if parse_csv_with_header is enabled

  • The SQS-based S3 custom data types input processes Delimited Files (.csv, .psv, .tsv) according to the status of the fields parse_csv_with_header and parse_csv_with_delimiter.
    • When parse_csv_with_header is enabled, all files ingested by the input, whether delimited or not, will be processed as if they were delimited files with the value of parse_csv_with_delimiter used to split the fields. The first line of each file will be considered the header.
    • When parse_csv_with_header is disabled, events will be indexed line by line without any CSV processing.
  • The parse_csv_with_delimiter field is a comma by default, but can be changed to a different delimiter. The delimiter can be any character except an alphanumeric character, a single quote, or a double quote.
  • This data input supports the following compression types:
    • A single delimited file, or delimited files in ZIP, GZIP, TAR, or TAR.GZ format.
  • Ensure that each delimited file contains a header. The CSV parsing functionality will take the first non-empty line of the file as a header before parsing.
  • Ensure that all files have a carriage return at the end of each file. Otherwise, the last line of the CSV file will not be indexed.
  • Ensure there are no duplicate values in the header of the CSV file(s) to avoid missing data.
  • Some illegal sequences of string characters will raise a UnicodeDecodeError.
    • For example, VI,Visa,Cabela�s
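The failure mode can be reproduced in a few lines of Python. In this sketch, the byte 0x92 stands in for the Windows-1252 right single quote that often appears in strings like the example above; the sample data is illustrative:

```python
# A byte string containing 0x92, which is not valid UTF-8.
raw = b"VI,Visa,Cabela\x92s"

try:
    raw.decode("utf-8")
    result = "decoded"
except UnicodeDecodeError as err:
    # err.start is the offset of the offending byte.
    result = f"UnicodeDecodeError at byte {err.start}"

print(result)
```

Re-encoding such files as UTF-8 before upload avoids the error.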

Starting in version 6.3.0 of the Splunk Add-on for AWS, the VPC Flow log extraction format has been updated to include v3-v5 fields. Before upgrading to versions 6.3.0 and higher of the Splunk Add-on for AWS, Splunk platform deployments ingesting AWS VPC Flow Logs must update the log format in AWS VPC to include v3-v5 fields in order to ensure successful field extractions. For more information on updating the log format in AWS VPC, see the Create a flow log section of the Work with flow logs topic in the AWS documentation. For more information on the list of v1-v5 fields to add in the given order when selecting Custom Format, or selecting Custom Format and Select All, see the Available fields section of the Logging IP traffic using VPC Flow Logs topic in the AWS documentation.

Processing outcomes

  • The end result after CSV parsing is a JSON object with the header values mapped to the corresponding row values.
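As an illustrative sketch (the sample data and field names are hypothetical), Python's csv module shows the header-to-row mapping the decoder performs: the first non-empty line becomes the header, and each later row becomes a JSON object keyed by the header values.

```python
import csv
import io
import json

# Hypothetical two-row delimited sample; the add-on's real input is a
# delimited file pulled from S3.
sample = "name,status\nqueue-1,active\nqueue-2,idle\n"
delimiter = ","  # the parse_csv_with_delimiter default

reader = csv.reader(io.StringIO(sample), delimiter=delimiter)
header = next(row for row in reader if row)  # first non-empty line
events = [json.dumps(dict(zip(header, row))) for row in reader if row]
for event in events:
    print(event)
```

Each printed line is one JSON event, for example `{"name": "queue-1", "status": "active"}`.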

Configure AWS services for the SQS-based S3 input

Configure SQS-based S3 inputs to collect events

Configure SQS-based S3 inputs to collect the following events:

  • CloudFront Access Logs
  • Config
  • ELB Access logs
  • CloudTrail
  • S3 Access Logs
  • VPC Flow Logs
  • Transit Gateway Flow Logs
  • Custom data types

AWS service configuration prerequisites

Before you configure SQS-based S3 inputs, perform the following tasks:

  • Create an SQS Queue to receive notifications and a second SQS Queue to serve as a dead letter queue.
  • Create an SNS Topic.
  • Configure S3 to send notifications for All object create events to an SNS Topic. This lets S3 notify the add-on that new events were written to the S3 bucket.
  • Subscribe the main SQS Queue to the corresponding SNS Topic.
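The S3-to-SNS step above can be sketched as follows. The topic ARN is hypothetical, and the dictionary is the shape of the NotificationConfiguration argument you would pass to boto3's s3.put_bucket_notification_configuration call:

```python
import json

# Hypothetical ARN; substitute your own region, account, and topic name.
topic_arn = "arn:aws:sns:us-east-1:123456789012:my-topic"

# Configuration asking S3 to publish "All object create events"
# (s3:ObjectCreated:*) to the SNS topic.
notification_config = {
    "TopicConfigurations": [
        {
            "TopicArn": topic_arn,
            "Events": ["s3:ObjectCreated:*"],
        }
    ]
}
print(json.dumps(notification_config, indent=4))
```

The SNS topic's access policy must also allow the bucket to publish to it; see the sample SNS policy later in this topic.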

Best practices

Keep the following in mind as you configure your inputs:

  • The SQS-based S3 input only collects AWS service logs that meet the following criteria:
    • Near-real time
    • Newly created
    • Stored into S3 buckets
    • Have event notifications sent to SQS

Events that occurred in the past, or events with no notifications sent through SNS to SQS end up in the Dead Letter Queue (DLQ), and no corresponding event is created by the Splunk Add-on for AWS. To collect historical logs stored into S3 buckets, use the generic S3 input instead. The S3 input lets you set the initial scan time parameter to collect data generated after a specified time in the past.

  • To collect the same types of logs from multiple S3 buckets, even across regions, set up one input to collect data from all the buckets. To do this, configure these buckets to send notifications to the same SQS queue from which the SQS-based S3 input polls messages.
  • To achieve high throughput data ingestion from an S3 bucket, configure multiple SQS-based S3 inputs for the S3 bucket to scale out data collection.
  • After configuring an SQS-based S3 input, you might need to wait for a few minutes before new events are ingested and can be searched. Also, a more verbose logging level causes longer data ingestion time. Debug mode is extremely verbose and is not recommended on production systems.
  • The SQS-based input allows you to ingest data from S3 buckets by optimizing the API calls made by the add-on and relying on SQS/SNS to collect events upon receipt of notification.
  • The SQS-based S3 input is stateless, which means that when multiple inputs are collecting data from the same bucket, if one input goes down, the other inputs continue to collect data and take over the load from the failed input. This lets you enhance fault tolerance by configuring multiple inputs to collect data from the same bucket.
  • The SQS-based S3 input supports signature validation. If S3 notifications are set up to send through SNS, AWS will create a signature for every message. The SQS-based S3 input will validate each message with the associated certificate, provided by AWS. For more information, see the Verifying the signatures of Amazon SNS messages topic in the AWS documentation.
  • If any messages with a signature are received, all following messages will require valid SNS signatures, no matter your input’s SNS signature setting.
  • Set up a Dead Letter Queue for the SQS queue used by the input to store invalid messages. For information about SQS Dead Letter Queues and how to configure them, see the Amazon SQS dead-letter queues topic in the AWS documentation.
  • Configure the SQS visibility timeout to prevent multiple inputs from receiving and processing messages in a queue more than once. Set your SQS visibility timeout to 5 minutes or longer. If the visibility timeout for a message is reached before the message is fully processed by the SQS-based S3 input, the message reappears in the queue and is retrieved and processed again, resulting in duplicate data.

For information about SQS visibility timeout and how to configure it, see the Amazon SQS visibility timeout topic in the AWS documentation.
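As a sketch of both recommendations, the following shows an attribute payload you might pass to boto3's sqs.set_queue_attributes call; the DLQ ARN and maxReceiveCount are hypothetical:

```python
import json

# Attribute payload for sqs.set_queue_attributes(QueueUrl=..., Attributes=...):
# a 5-minute visibility timeout (the minimum recommended above) plus a
# redrive policy routing repeatedly failing messages to a dead letter queue.
attributes = {
    "VisibilityTimeout": "300",  # seconds; SQS attribute values are strings
    "RedrivePolicy": json.dumps({
        "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:my-dlq",
        "maxReceiveCount": "5",
    }),
}
print(json.dumps(attributes, indent=4))
```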

Supported message types for the SQS-based S3 input

The following message types are supported by the SQS-based S3 input:

  • ConfigurationHistoryDeliveryCompleted
  • ConfigurationSnapshotDeliveryCompleted

Configure AWS permissions for the SQS-based S3 input

Configure AWS permissions for SQS access

The following permissions are required for SQS access:

  • GetQueueUrl
  • ReceiveMessage
  • SendMessage
  • DeleteMessage
  • ChangeMessageVisibility
  • GetQueueAttributes
  • ListQueues

Required permissions for S3 buckets and objects:

  • GetObject (if Bucket Versioning is disabled).
  • GetObjectVersion (if Bucket Versioning is enabled).

Required permissions for KMS:

  • Decrypt

See the following sample inline policy to configure input permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
        "Effect": "Allow",
        "Action": [
            "sqs:GetQueueUrl",
            "sqs:ReceiveMessage",
            "sqs:SendMessage",
            "sqs:DeleteMessage",
            "sqs:ChangeMessageVisibility",
            "sqs:GetQueueAttributes",
            "sqs:ListQueues",
            "s3:GetObject",
            "s3:GetObjectVersion",
            "kms:Decrypt"
        ],
        "Resource": "*"
        }
    ]
}

For more information and sample policies, see http://docs.aws.amazon.com/AmazonS3/latest/dev/using-iam-policies.html.

Configure SNS policy to receive notifications from S3 buckets

See the following sample inline SNS policy to allow your S3 bucket to send notifications to an SNS topic.

    {
        "Version": "2008-10-17",
        "Id": "example-ID",
        "Statement": [
            {
                "Sid": "example-statement-ID",
                "Effect": "Allow",
                "Principal": {"AWS": "*"},
                "Action": ["SNS:Publish"],
                "Resource": "<SNS-topic-ARN>",
                "Condition": {"ArnLike": {"aws:SourceArn": "arn:aws:s3:*:*:<bucket-name>"}}
            }
        ]
    }

For more information and sample policies, see http://docs.aws.amazon.com/AmazonS3/latest/dev/using-iam-policies.html.

Configure AWS services for SNS alerts

If you plan to use the SQS-based S3 input, you must enable Amazon S3 bucket events to send notification messages to an SQS queue whenever the events occur. This queue cannot be first-in-first-out (FIFO). For instructions on setting up S3 bucket event notifications, see https://docs.aws.amazon.com/AmazonS3/latest/UG/SettingBucketNotifications.html in the AWS documentation.

Configure an SQS-based S3 input using Splunk Web

To configure inputs in Splunk Web, click Splunk Add-on for AWS in the navigation bar on Splunk Web home, then choose one of the following menu paths depending on which data type you want to collect:

  • Create New Input > CloudTrail > SQS-based S3
  • Create New Input > CloudFront Access Log > SQS-based S3
  • Create New Input > Config > SQS-based S3
  • Create New Input > ELB Access Logs > SQS-based S3
  • Create New Input > S3 Access Logs > SQS-based S3
  • Create New Input > VPC Flow Logs > SQS-based S3
  • Create New Input > Transit Gateway Flow Logs > SQS-based S3
  • Create New Input > Custom Data Type > SQS-based S3
  • Create New Input > Custom Data Type > SQS-based S3 > Delimited Files S3 File Decoder

You must have the admin_all_objects role enabled in order to add new inputs.

Choose the menu path that corresponds to the data type you want to collect. The system automatically sets the source type and displays relevant field settings on the subsequent configuration page.

Use the following table to complete the fields for the new input in the .conf file or in Splunk Web:

Argument in configuration file

Field in Splunk Web

Description

aws_account

AWS Account

The AWS account or EC2 IAM role the Splunk platform uses to access the keys in your S3 buckets. In Splunk Web, select an account from the drop-down list. In inputs.conf, enter the friendly name of one of the AWS accounts that you configured on the Configuration page or the name of the automatically discovered EC2 IAM role.
If the region of the AWS account you select is GovCloud, you may encounter errors such as "Failed to load options for S3 Bucket". You need to manually add the AWS GovCloud endpoint in the S3 Host Name field. See http://docs.aws.amazon.com/govcloud-us/latest/UserGuide/using-govcloud-endpoints.html for more information.

aws_iam_role

Assume Role

The IAM role to assume. See Manage accounts for the Splunk Add-on for AWS.

using_dlq

Force using DLQ (Recommended)

Uncheck the checkbox to remove the checking of the DLQ (Dead Letter Queue) for ingestion of specific data. In inputs.conf, enter 0 or 1 to disable or enable the checking, respectively. The default value is 1.

sqs_queue_region

AWS Region

AWS region that the SQS queue is in.

private_endpoint_enabled

Use Private Endpoints

Check the checkbox to use private endpoints of the AWS Security Token Service (STS) and AWS Simple Storage Service (S3) for authentication and data collection. In inputs.conf, enter 0 or 1 to disable or enable use of private endpoints, respectively.

sqs_private_endpoint_url

Private Endpoint (SQS)

Private Endpoint (Interface VPC Endpoint) of your SQS service, which can be configured from your AWS console.
Supported formats:
<protocol>://vpce-<endpoint-id>-<unique-id>.sqs.<region>.vpce.amazonaws.com
<protocol>://vpce-<endpoint-id>-<unique-id>-<availability-zone>.sqs.<region>.vpce.amazonaws.com

sqs_sns_validation

SNS Signature Validation

SNS validation of your SQS messages, which can be configured from your AWS console. If selected, all messages are validated. If unselected, messages are not validated until a signed message is received; thereafter, all messages are validated for an SNS signature. For new SQS-based S3 inputs, this feature is enabled by default.
Supported formats:
1 is enabled, 0 is disabled. Default is 1.

parse_firehose_error_data

Parse Firehose Error Data

Parse raw data (all events) or failed Kinesis Firehose stream error data sent to the Splunk HTTP Event Collector (HEC). Decoding of error data is performed for failed Kinesis Firehose streams. For new SQS-based S3 inputs, this feature is disabled by default.

Versions 7.4.0 and higher of this add-on support the collection of data in the default uncompressed text format.


Supported formats:
1 is enabled, 0 is disabled. Default is 0.

s3_private_endpoint_url

Private Endpoint (S3)

Private Endpoint (Interface VPC Endpoint) of your S3 service, which can be configured from your AWS console.
Supported formats:
<protocol>://bucket.vpce-<endpoint-id>-<unique-id>.s3.<region>.vpce.amazonaws.com
<protocol>://bucket.vpce-<endpoint-id>-<unique-id>-<availability-zone>.s3.<region>.vpce.amazonaws.com

sts_private_endpoint_url

Private Endpoint (STS)

Private Endpoint (Interface VPC Endpoint) of your STS service, which can be configured from your AWS console.
Supported formats:
<protocol>://vpce-<endpoint-id>-<unique-id>.sts.<region>.vpce.amazonaws.com
<protocol>://vpce-<endpoint-id>-<unique-id>-<availability-zone>.sts.<region>.vpce.amazonaws.com

sqs_queue_url

SQS Queue Name

The SQS queue URL.

sqs_batch_size

SQS Batch Size

The maximum number of messages to pull from the SQS queue in one batch. Enter an integer between 1 and 10 inclusive. Set a larger value for small files, and a smaller value for large files. The default SQS batch size is 10. If you are dealing with large files and your system memory is limited, set this to a smaller value.

s3_file_decoder

S3 File Decoder

The decoder to use to parse the corresponding log files. The decoder is set according to the Data Type you select. If you select a Custom Data Type, choose one of CloudTrail, Config, ELB Access Logs, S3 Access Logs, CloudFront Access Logs, or Amazon Security Lake.

sourcetype

Source Type

The source type for the events to collect, automatically filled in based on the decoder chosen for the input.

This add-on does not support custom sourcetypes for Cloudtrail, Config, ELB Access Logs, S3 Access Logs, and CloudFront Access Logs.

interval

Interval

The length of time in seconds between two data collection runs. The default is 300 seconds.

index

Index

The index name where the Splunk platform puts the SQS-based S3 data. The default is main.

polling_interval

Polling Interval

The number of seconds to wait before the Splunk platform runs the command again. The default is 1,800 seconds.

parse_csv_with_header

Parse all files as CSV

If selected, all files are parsed as delimited files, with the first line of each file considered the header. Leave this checkbox unselected for delimited files without a header. For new SQS-based S3 inputs, this feature is disabled by default.

Supported Formats:

  • 1 is enabled.
  • 0 is disabled and default.

parse_csv_with_delimiter

CSV field delimiter

The delimiter must be a single character, and cannot be alphanumeric, a single quote, or a double quote. For tab-delimited files, use \t. The default delimiter is a comma.

sns_max_age

SNS message max age

The maximum age of the SNS message, in hours. The SNS message max age must be between 1 and 336 hours (14 days). The default value is 96 hours (4 days). Only messages whose age is within the specified max age are ingested.

Configure an SQS-based S3 input using configuration files

When you configure inputs manually in inputs.conf, create a stanza using the following template and add it to $SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/inputs.conf. If the file or path does not exist, create it.

    [aws_sqs_based_s3://<stanza_name>]
    aws_account = <value>
    using_dlq = <value>
    private_endpoint_enabled = <value>
    sqs_private_endpoint_url = <value>
    s3_private_endpoint_url = <value>
    sts_private_endpoint_url = <value>
    parse_firehose_error_data = <value>
    interval = <value>
    s3_file_decoder = <value>
    sourcetype = <value>
    sqs_batch_size = <value>
    sqs_queue_region = <value>
    sqs_queue_url = <value>
    sns_max_age = <value>

Some of these settings have default values that can be found in $SPLUNK_HOME/etc/apps/Splunk_TA_aws/default/inputs.conf:

    [aws_sqs_based_s3]
    using_dlq = 1

The previous values correspond to the default values in Splunk Web, as well as some internal values that are not exposed in Splunk Web for configuration. If you copy this stanza to $SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/ and use it as a starting point to configure your inputs.conf manually, change the stanza title from aws_sqs_based_s3 to aws_sqs_based_s3://<name> and add the additional parameters that you need for your deployment.

Valid values for s3_file_decoder are CustomLogs, CloudTrail, ELBAccessLogs, CloudFrontAccessLogs, S3AccessLogs, Config, DelimitedFilesDecoder, TransitGatewayFlowLogs.

If you want to ingest custom logs other than the natively supported AWS log types, you must set s3_file_decoder = CustomLogs. This setting lets you ingest custom logs into the Splunk platform instance, but it does not parse the data. To process custom logs into meaningful events, you need to perform additional configurations in props.conf and transforms.conf to parse the collected data to meet your specific requirements.
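For example, a minimal props.conf stanza for a hypothetical custom source type might look like the following. The stanza name, line-breaking, and timestamp settings are illustrative, not part of the add-on; adjust them to match your log format:

```
[my_custom:logs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z
```

Pair it with transforms.conf stanzas if you need field extractions or index-time routing.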

For more information on these settings, see /README/inputs.conf.spec under your add-on directory.

Configure an SQS based S3 input for CrowdStrike Falcon Data Replicator (FDR) events using Splunk Web

To configure an SQS based S3 input for CrowdStrike Falcon Data Replicator (FDR) events, perform the following steps:

  1. On the Inputs page, select Create New Input > Custom Data Type > SQS-based S3.
  2. Select your account from the AWS Account dropdown list.
  3. Uncheck the check box Force Using DLQ (Recommended).
  4. Select the region in which the SQS Queue is present from the AWS Region dropdown.
  5. In the SQS Queue Name box, enter the full SQS queue URL. This creates an option for the SQS queue URL in the dropdown menu.
  6. Select the newly created SQS queue URL option from the SQS Queue Name dropdown menu.
  7. Use the table in the Configure an SQS-based S3 input using Splunk Web section of this topic to add any additional configuration file arguments.
  8. Save your changes.

Migrate from the Generic S3 input to the SQS-based S3 input

SQS-based S3 is the recommended input type for real-time data collection from S3 buckets because it is scalable and provides better ingestion performance than the other S3 input types.

If you are already using a generic S3 input to collect data, use the following steps to switch to the SQS-based S3 input:

  1. Perform the prerequisite AWS service configurations described in the Configure AWS services for the SQS-based S3 input and Configure AWS permissions for the SQS-based S3 input sections of this topic.
  2. Add an SQS-based S3 input using the SQS queue you just configured. After the setup, make sure the new input is enabled and starts collecting data from the bucket.


  • Edit your old generic S3 input and set the End Date/Time field to the current system time to phase it out.
  • Wait until all the task executions of the old input are complete. As a best practice, wait at least double your polling frequency.
  • Disable the old generic S3 input.
  • Run the following searches to delete any duplicate events collected during the transition.

    For CloudTrail events:

        index=xxx sourcetype=aws:cloudtrail | streamstats count by source, eventID | search count > 1 | eval indexed_time=strftime(_indextime, "%+") | eval dup_id=source.eventID.indexed_time | table dup_id | outputcsv dupes.csv
        index=xxx sourcetype=aws:cloudtrail | eval indexed_time=strftime(_indextime, "%+") | eval dup_id=source.eventID.indexed_time | search [|inputcsv dupes.csv | format "(" "" "" "" "OR" ")"] | delete

    For S3 access logs:

        index=xxx sourcetype=aws:s3:accesslogs | streamstats count by source, request_id | search count > 1 | eval indexed_time=strftime(_indextime, "%+") | eval dup_id=source.request_id.indexed_time | table dup_id | outputcsv dupes.csv
        index=xxx sourcetype=aws:s3:accesslogs | eval indexed_time=strftime(_indextime, "%+") | eval dup_id=source.request_id.indexed_time | search [|inputcsv dupes.csv | format "(" "" "" "" "OR" ")"] | delete

    For CloudFront access logs:

        index=xxx sourcetype=aws:cloudfront:accesslogs | streamstats count by source, x_edge_request_id | search count > 1 | eval indexed_time=strftime(_indextime, "%+") | eval dup_id=source.x_edge_request_id.indexed_time | table dup_id | outputcsv dupes.csv
        index=xxx sourcetype=aws:cloudfront:accesslogs | eval indexed_time=strftime(_indextime, "%+") | eval dup_id=source.x_edge_request_id.indexed_time | search [|inputcsv dupes.csv | format "(" "" "" "" "OR" ")"] | delete

    For classic load balancer (ELB) access logs, events do not have unique IDs, so use a hash function to remove duplicates:

        index=xxx sourcetype=aws:elb:accesslogs | eval hash=sha256(_raw) | streamstats count by source, hash | search count > 1 | eval indexed_time=strftime(_indextime, "%+") | eval dup_id=source.hash.indexed_time | table dup_id | outputcsv dupes.csv
        index=xxx sourcetype=aws:elb:accesslogs | eval hash=sha256(_raw) | eval indexed_time=strftime(_indextime, "%+") | eval dup_id=source.hash.indexed_time | search [|inputcsv dupes.csv | format "(" "" "" "" "OR" ")"] | delete
  • Optionally, delete the old generic S3 input.
Automatically scale data collection with SQS-based S3 inputs

With the SQS-based S3 input type, you can take full advantage of the auto-scaling capability of the AWS infrastructure to scale out data collection by configuring multiple inputs to ingest logs from the same S3 bucket without creating duplicate events. This is particularly useful if you are ingesting logs from a very large S3 bucket and hit a bottleneck in your data collection inputs.

  1. Create an AWS auto scaling group for the heavy forwarder instances where your SQS-based S3 inputs are running. To create an auto scaling group, either specify a launch configuration or create an AMI to provision new EC2 instances that host heavy forwarders, and use a bootstrap script to install the Splunk Add-on for AWS and configure SQS-based S3 inputs. For detailed information about auto scaling groups and how to create them, see http://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroup.html.
  2. Set CloudWatch alarms for one of the following Amazon SQS metrics:
     • ApproximateNumberOfMessagesVisible: The number of messages available for retrieval from the queue.
     • ApproximateAgeOfOldestMessage: The approximate age (in seconds) of the oldest non-deleted message in the queue.
     For instructions on setting CloudWatch alarms for Amazon SQS metrics, see http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/SQS_AlarmMetrics.html.
  3. Use the CloudWatch alarm as a trigger to provision new heavy forwarder instances with SQS-based S3 inputs configured to consume messages from the same SQS queue to improve ingestion performance.
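As a sketch of the CloudWatch alarm step, the following shows keyword arguments you might pass to boto3's cloudwatch.put_metric_alarm call; the alarm name, queue name, and threshold are hypothetical and should be tuned to your ingestion rate:

```python
import json

# Alarm when the backlog stays above 1000 visible messages for two
# consecutive 5-minute periods.  Pass these as keyword arguments to
# cloudwatch.put_metric_alarm(**alarm_kwargs) in boto3.
alarm_kwargs = {
    "AlarmName": "sqs-backlog-my-queue",
    "Namespace": "AWS/SQS",
    "MetricName": "ApproximateNumberOfMessagesVisible",
    "Dimensions": [{"Name": "QueueName", "Value": "my-queue"}],
    "Statistic": "Average",
    "Period": 300,
    "EvaluationPeriods": 2,
    "Threshold": 1000.0,
    "ComparisonOperator": "GreaterThanThreshold",
}
print(json.dumps(alarm_kwargs, indent=4))
```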