Genesys Cloud Terraform Module for Splunk¶
Terraform module for Genesys Cloud to Splunk via AWS EventBridge.
This module allows users to extend the data collection provided by the Genesys Cloud Add-on for Splunk through the ingestion of additional events.
See the Requirements and Use Cases sections below for more information and to get started with the integration.
Requirements¶
- Terraform
- A Genesys Cloud organization accessible via an OAuth Client (Client Credentials) with the master admin role
- An AWS account with permissions to access the following services:
    - AWS Identity and Access Management (IAM)
    - AWS S3 (for backup purposes)
    - AWS Kinesis
    - AWS EventBridge
- A Splunk instance with either:
    - the Splunk Add-on for Amazon Web Services (AWS) installed, or
    - the Splunk HTTP Event Collector (HEC) configured
Use Cases Overview¶
This module supports three types of deployment: via AWS Kinesis, via AWS S3, or directly via the Splunk HTTP Event Collector (HEC), as covered in the Configure and Deploy sections below.
Each scenario has pros and cons; choose the deployment that best fits your requirements.
Integrate Genesys Cloud with Splunk HEC¶
Excluding AWS from the solution may appear viable, but consider that:

- Splunk and Genesys Cloud instances should be in close proximity, i.e. running on the same continent; otherwise you may experience latency, retransmissions, and higher packet loss.
    - AWS EventBridge retransmits events in case of failures.
- Splunk must be adequately sized to handle event spikes that could increase the load on the instance.
Genesys Cloud Topics¶
By default, this module configures Genesys Cloud to monitor the following topics:

| Topic | Description |
| --- | --- |
| v2.operations.events.{id} | Operational events from various services |
| v2.users.{id}.persistentconnection | Notifications for changes in a user's persistent connection state. |
| v2.analytics.conversation.{id}.metrics | Topic for analytics conversation metrics |
| v2.users.{id}.activity | Notification for changes to routing, presence, out of office, and active queues for a queue member |
| v2.users.{id}.geolocation | Notifications for changes in a user's geolocation. |
| v2.users.{id}.routingStatus | Raised when the routing status of a user is changed. |
| v2.conversations.{id}.transcription | Notification containing conversation transcripts. |
| v2.detail.events.conversation.{id}.acd.end | Topic for analytics detail event type: AcdEndEvent. |
| v2.detail.events.conversation.{id}.acd.start | Topic for analytics detail event type: AcdStartEvent. |
| v2.detail.events.conversation.{id}.acw | Topic for analytics detail event type: AfterCallWorkEvent, containing agent wrapups. |
| v2.detail.events.conversation.{id}.attributes | Topic for analytics detail event type: AttributeUpdateEvent. |
| v2.detail.events.conversation.{id}.contact | Topic for analytics detail event type: ContactUpdateEvent. |
| v2.detail.events.conversation.{id}.customer.end | Topic for analytics detail event type: CustomerEndEvent. |
| v2.detail.events.conversation.{id}.customer.start | Topic for analytics detail event type: CustomerStartEvent. |
| v2.detail.events.conversation.{id}.flow.end | Topic for analytics detail event type: FlowEndEvent. |
| v2.detail.events.conversation.{id}.flow.outcome | Topic for analytics detail event type: FlowOutcomeEvent. |
| v2.detail.events.conversation.{id}.flow.start | Topic for analytics detail event type: FlowStartEvent. |
| v2.detail.events.conversation.{id}.outbound | Topic for analytics detail event type: OutboundInitEvent. |
| v2.detail.events.conversation.{id}.user.end | Topic for analytics detail event type: UserEndEvent. |
| v2.detail.events.conversation.{id}.user.start | Topic for analytics detail event type: UserStartEvent. |
| v2.detail.events.conversation.{id}.voicemail.end | Topic for analytics detail event type: VoicemailEndEvent. |
| v2.detail.events.conversation.{id}.voicemail.start | Topic for analytics detail event type: VoicemailStartEvent. |
| v2.detail.events.conversation.{id}.wrapup | Topic for analytics detail event type: WrapupEvent, containing system wrapups. |
A full list of available topics can be found here. Please select the Event Bridge
checkbox to filter the supported topics.
Advanced Configuration¶
To add or remove topics from the configuration, edit the topic_filters variable
in variables.tf. This configuration applies to all use case scenarios.
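For reference, the topic_filters definition in variables.tf might look roughly like the sketch below; the description text and the defaults shown are illustrative, the authoritative definition and the full default list are in the file itself:

```hcl
# Illustrative sketch only; refer to variables.tf in the repository for the
# actual definition and the complete default list of topics.
variable "topic_filters" {
  description = "Genesys Cloud notification topics forwarded to AWS EventBridge"
  type        = list(string)

  default = [
    "v2.operations.events.{id}",
    "v2.analytics.conversation.{id}.metrics",
    "v2.detail.events.conversation.{id}.acd.start",
    "v2.detail.events.conversation.{id}.acd.end"
    # ...remaining topics from the table above
  ]
}
```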
To configure only a specific use case, edit the examples/<your_use_case>/main.tf
as shown below.
module "main" {
source = "../.."
aws_account_region = "eu-west-2"
<!-- Update the Genesys Cloud topics to monitor -->
topic_filters = [
"v2.routing.queues.{id}.users"
]
}
Please note that these values will override their definitions in variables.tf.
Configure and Deploy via Kinesis¶
Set the Environment Variables¶
Clone the repository and execute:
```sh
$~ cd examples/kinesis
$~ cp .env-example .env
# Add your values to the required variables
$~ vi .env
# Load the environment variables
$~ source .env
```
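The authoritative list of variables is in .env-example. As a rough illustration only (the variable names below are assumed from the Genesys Cloud and AWS Terraform providers, not copied from this repository), the file typically exports the OAuth client credentials and AWS credentials:

```sh
# Illustrative only; use the variable names given in .env-example.
# Genesys Cloud OAuth client (Client Credentials) read by the genesyscloud provider
export GENESYSCLOUD_OAUTHCLIENT_ID="<your-oauth-client-id>"
export GENESYSCLOUD_OAUTHCLIENT_SECRET="<your-oauth-client-secret>"
export GENESYSCLOUD_REGION="<your-genesys-cloud-region>"
# AWS credentials read by the aws provider
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
```

The same pattern applies to the S3 and HEC examples below, each of which ships its own .env-example.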
Configure the Integration¶
Edit examples/kinesis/main.tf as explained below
module "main" {
source = "../.."
<!-- Set the AWS Account Region field to match youre region -->
aws_account_region = "eu-west-2"
}
Usage¶
```sh
# Initialize the terraform providers
$~ terraform init
# Preview the changes that terraform plans to make to your infrastructure
$~ terraform plan
# Deploy (--auto-approve to avoid being prompted to confirm the changes)
$~ terraform apply
```
Configure and Deploy via S3¶
Set the Environment Variables¶
Clone the repository and execute:
```sh
$~ cd examples/s3
$~ cp .env-example .env
# Add your values to the required variables
$~ vi .env
# Load the environment variables
$~ source .env
```
Configure the Integration¶
Edit examples/s3/main.tf as explained below
module "main" {
source = "../.."
<!-- Set the AWS Account Region field to match youre region -->
aws_account_region = "eu-west-2"
<!-- Set S3 bucket name. Must be unique within the specified AWS region -->
s3_bucket_name = "test-bucket"
}
Usage¶
```sh
# Initialize the terraform providers
$~ terraform init
# Preview the changes that terraform plans to make to your infrastructure
$~ terraform plan
# Deploy (--auto-approve to avoid being prompted to confirm the changes)
$~ terraform apply
```
Configure and Deploy via HEC¶
To avoid losing data, the Splunk HTTP Event Collector (HEC) must already be configured and ready to receive data before you deploy.
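As an optional sanity check (not part of the module), you can send a test event to the collector endpoint with curl, using the same URL and token you will pass to the module below:

```sh
# Optional connectivity check against the Splunk HEC event endpoint.
# Replace the URL and token with your own values; the ones shown here are the
# placeholders used in the example configuration below.
curl -k "https://http-inputs-mydomain.splunkcloud.com:443/services/collector/event" \
  -H "Authorization: Splunk B5A79AAD-D822-46CC-80D1-819F80D7BFB0" \
  -d '{"event": "connectivity test"}'
# A working collector replies with: {"text":"Success","code":0}
```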
Set the Environment Variables¶
Clone the repository and execute:
```sh
$~ cd examples/hec
$~ cp .env-example .env
# Add your values to the required variables
$~ vi .env
# Load the environment variables
$~ source .env
```
Configure the Deployment¶
Edit examples/hec/main.tf as explained below
module "main" {
source = "../.."
<!-- Set the AWS Account Region field to match your region -->
aws_account_region = "eu-west-2"
<!-- Set S3 bucket name. Must be unique within the specified AWS region. -->
s3_bucket_name = "test-bucket"
<!-- Set your Splunk URL -->
splunk_url = "https://http-inputs-mydomain.splunkcloud.com:443"
<!-- Set your Splunk HEC -->
splunk_hec = "B5A79AAD-D822-46CC-80D1-819F80D7BFB0"
}
Usage¶
```sh
# Initialize the terraform providers
$~ terraform init
# Preview the changes that terraform plans to make to your infrastructure
$~ terraform plan
# Deploy (--auto-approve to avoid being prompted to confirm the changes)
$~ terraform apply
```
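If you later want to remove the integration, the same working directory can tear everything down using the standard Terraform workflow (shown here for completeness):

```sh
# Remove all resources created by this deployment
$~ terraform destroy
```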
Troubleshooting¶
- 409 error (BucketAlreadyExists): the name of the bucket Terraform is trying to create is not unique within your AWS region. Set a different s3_bucket_name in variables.tf.
- The script is stuck, or you get a 403 error: there is a permission issue. Make sure you have set the environment variables by running source .env, and that you have the required permissions to create the required resources in AWS.