Auto-Instrumentation
The first part of our workshop demonstrates how auto-instrumentation with OpenTelemetry allows the OpenTelemetry Collector to automatically detect the language your function is written in and start capturing traces for it.
The Auto-Instrumentation Workshop Directory & Contents
First, let us take a look at the o11y-lambda-workshop/auto directory and some of its files. This is where all the content for the auto-instrumentation portion of our workshop resides.
The auto Directory
Run the following command to get into the o11y-lambda-workshop/auto directory:
cd ~/o11y-lambda-workshop/auto
Inspect the contents of this directory:
ls
The output should include the following files and directories:
get_logs.py handler main.tf outputs.tf send_message.py terraform.tf terraform.tfvars variables.tf
The main.tf file
- Take a closer look at the main.tf file:
cat main.tf
- Can you identify which AWS resources are being created by this template?
- Can you identify where OpenTelemetry instrumentation is being set up?
  - Hint: study the Lambda function definitions
- Can you determine which instrumentation information is being provided by the environment variables we set earlier?
You should see a section where the environment variables for each lambda function are being set.
environment {
  variables = {
    SPLUNK_ACCESS_TOKEN      = var.o11y_access_token
    SPLUNK_REALM             = var.o11y_realm
    OTEL_SERVICE_NAME        = "producer-lambda"
    OTEL_RESOURCE_ATTRIBUTES = "deployment.environment=${var.prefix}-lambda-shop"
    AWS_LAMBDA_EXEC_WRAPPER  = "/opt/nodejs-otel-handler"
    KINESIS_STREAM           = aws_kinesis_stream.lambda_streamer.name
  }
}
By using these environment variables, we are configuring our auto-instrumentation in a few ways:
We are setting environment variables to inform the OpenTelemetry Collector of which Splunk Observability Cloud organization we would like to have our data exported to.
SPLUNK_ACCESS_TOKEN = var.o11y_access_token
SPLUNK_REALM = var.o11y_realm
We are also setting variables that help OpenTelemetry identify our function/service, as well as the environment/application it is a part of.
OTEL_SERVICE_NAME = "producer-lambda"  # consumer-lambda in the case of the consumer function
OTEL_RESOURCE_ATTRIBUTES = "deployment.environment=${var.prefix}-lambda-shop"
We are setting an environment variable that lets OpenTelemetry know what wrappers it needs to apply to our function’s handler so as to capture trace data automatically, based on our code language.
AWS_LAMBDA_EXEC_WRAPPER = "/opt/nodejs-otel-handler"
In the case of the producer-lambda function, we are setting an environment variable to let the function know what Kinesis Stream to put our record to.
KINESIS_STREAM = aws_kinesis_stream.lambda_streamer.name
These values are sourced from the environment variables we set in the Prerequisites section, as well as resources that will be deployed as a part of this Terraform configuration file.
You should also see an argument for setting the Splunk OpenTelemetry Lambda layer on each function:
layers = var.otel_lambda_layer
The OpenTelemetry Lambda layer is a package that contains the libraries and dependencies necessary to collect, process, and export telemetry data for Lambda functions at the moment of invocation.
While there is a general OTel Lambda layer that has all the libraries and dependencies for all OpenTelemetry-supported languages, there are also language-specific Lambda layers, to help make your function even more lightweight.
- You can see the relevant Splunk OpenTelemetry Lambda layer ARNs (Amazon Resource Names) and latest versions for each AWS region HERE
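Later, once you have deployed the functions, you may want to confirm that the layer and exec wrapper actually ended up on a function. A minimal boto3 sketch for that check is shown below; it is not part of the workshop files, the function name is a placeholder for your own prefixed function, and it assumes your AWS credentials and region are already configured.

```python
# check_layer.py - a minimal sketch (not part of the workshop files) that confirms
# which Lambda layers and exec wrapper are attached to a deployed function.
import boto3

FUNCTION_NAME = "CHANGEME-producer"  # placeholder: substitute your own prefixed function name

client = boto3.client("lambda")
config = client.get_function_configuration(FunctionName=FUNCTION_NAME)

# List every layer attached to the function, including the OpenTelemetry layer ARN
for layer in config.get("Layers", []):
    print("layer:", layer["Arn"])

# The wrapper script the OpenTelemetry layer uses to instrument the handler
env_vars = config.get("Environment", {}).get("Variables", {})
print("AWS_LAMBDA_EXEC_WRAPPER:", env_vars.get("AWS_LAMBDA_EXEC_WRAPPER"))
```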
The producer.mjs file
Next, let’s take a look at the producer-lambda function code:
- Run the following command to view the contents of the producer.mjs file:
cat ~/o11y-lambda-workshop/auto/handler/producer.mjs
- This Node.js module contains the code for the producer function.
- Essentially, this function receives a message and puts that message as a record onto the targeted Kinesis Stream (a rough sketch of this logic follows below).
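The real handler is written in Node.js, but its core logic can be sketched in Python roughly as follows. This is purely illustrative, not the workshop code; the actual producer.mjs may structure things differently.

```python
# Illustrative Python equivalent of the producer logic (the real handler is Node.js).
# It reads the target stream name from the KINESIS_STREAM environment variable set
# in main.tf and puts the incoming message onto that stream as a single record.
import json
import os
import boto3

kinesis = boto3.client("kinesis")

def handler(event, context):
    # The request body carries the message sent to the producer endpoint
    message = event.get("body", "{}")
    stream_name = os.environ["KINESIS_STREAM"]

    kinesis.put_record(
        StreamName=stream_name,
        Data=message.encode("utf-8"),
        PartitionKey="message",
    )

    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Message placed in the Event Stream: {stream_name}"}),
    }
```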
Deploying the Lambda Functions & Generating Trace Data
Now that we are familiar with the contents of our auto directory, we can deploy the resources for our workshop and generate some trace data from our Lambda functions.
Initialize Terraform in the auto directory
In order to deploy the resources defined in the main.tf file, you first need to make sure that Terraform is initialized in the same folder as that file.
Ensure you are in the auto directory:
pwd
- The expected output would be ~/o11y-lambda-workshop/auto
If you are not in the auto directory, run the following command:
cd ~/o11y-lambda-workshop/auto
Run the following command to initialize Terraform in this directory:
terraform init
- This command will create a number of elements in the same folder:
  - .terraform.lock.hcl file: to record the providers it will use to provide resources
  - .terraform directory: to store the provider configurations
- In addition to the above files, when terraform is run using the apply subcommand, the terraform.tfstate file will be created to track the state of your deployed resources.
- These enable Terraform to manage the creation, state, and destruction of resources, as defined within the main.tf file of the auto directory.
Deploy the Lambda functions and other AWS resources
Once we’ve initialized Terraform in this directory, we can go ahead and deploy our resources.
First, run the terraform plan command to ensure that Terraform will be able to create your resources without encountering any issues.
terraform plan
- This will result in a plan to deploy resources and output some data, which you can review to ensure everything will work as intended.
- Do note that a number of the values shown in the plan will be known post-creation, or are masked for security purposes.
Next, run the terraform apply command to deploy the Lambda functions and other supporting resources from the main.tf file:
terraform apply
Respond yes when you see the Enter a value: prompt
This will result in the following outputs:
Outputs:

base_url = "https://______.amazonaws.com/serverless_stage/producer"
consumer_function_name = "_____-consumer"
consumer_log_group_arn = "arn:aws:logs:us-east-1:############:log-group:/aws/lambda/______-consumer"
consumer_log_group_name = "/aws/lambda/______-consumer"
environment = "______-lambda-shop"
lambda_bucket_name = "lambda-shop-______-______"
producer_function_name = "______-producer"
producer_log_group_arn = "arn:aws:logs:us-east-1:############:log-group:/aws/lambda/______-producer"
producer_log_group_name = "/aws/lambda/______-producer"
- Terraform outputs are defined in the outputs.tf file.
- These outputs will be used programmatically in other parts of our workshop as well; one way a script could read them is sketched below.
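For example, a script could pick these values up by parsing terraform output -json rather than having you copy and paste them. The sketch below is one way that could look; the workshop’s own scripts may read the values differently.

```python
# A minimal sketch of reading Terraform outputs programmatically.
# Run it from the auto directory, after terraform apply has completed.
import json
import subprocess

# "terraform output -json" prints every output defined in outputs.tf as JSON
raw = subprocess.check_output(["terraform", "output", "-json"])
outputs = json.loads(raw)

base_url = outputs["base_url"]["value"]
producer_log_group = outputs["producer_log_group_name"]["value"]

print("Producer endpoint:", base_url)
print("Producer log group:", producer_log_group)
```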
Send some traffic to the producer-lambda URL (base_url)
To start getting some traces from our deployed Lambda functions, we need to generate some traffic. We will send a message to our producer-lambda function’s endpoint, which should be put as a record into our Kinesis Stream, and then pulled from the Stream by the consumer-lambda function.
Ensure you are in the auto directory:
pwd
- The expected output would be ~/o11y-lambda-workshop/auto
If you are not in the auto directory, run the following command:
cd ~/o11y-lambda-workshop/auto
The send_message.py script is a Python script that will take input at the command line, add it to a JSON dictionary, and send it to your producer-lambda function’s endpoint repeatedly, as part of a while loop.
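As a rough idea of what that looks like, here is a stripped-down sketch. The endpoint URL is a placeholder for your own base_url output, and the real send_message.py in the repo is more complete (for example, in how it resolves the endpoint and logs responses).

```python
# A stripped-down sketch of the send_message.py idea; the actual workshop script
# is more complete. The endpoint below is a placeholder for your base_url output.
import argparse
import json
import time
import urllib.request

parser = argparse.ArgumentParser()
parser.add_argument("--name", required=True)
parser.add_argument("--superpower", required=True)
args = parser.parse_args()

BASE_URL = "https://CHANGEME.amazonaws.com/serverless_stage/producer"  # placeholder

payload = json.dumps({"name": args.name, "superpower": args.superpower}).encode("utf-8")

# Repeatedly POST the message to the producer-lambda endpoint and log each response
while True:
    request = urllib.request.Request(
        BASE_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        body = response.read().decode("utf-8")
    with open("response.logs", "a") as log_file:
        log_file.write(body + "\n")
    time.sleep(1)
```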
Run the send_message.py script as a background process
- It requires the --name and --superpower arguments
nohup ./send_message.py --name CHANGEME --superpower CHANGEME &
- You should see an output similar to the following if your message is successful:
[1] 79829
user@host manual % appending output to nohup.out
- The two most important bits of information here are:
  - The process ID on the first line (79829 in the case of my example), and
  - The appending output to nohup.out message
- The nohup command ensures the script will not hang up when sent to the background. It also captures the curl output from our command in a nohup.out file in the same folder as the one you’re currently in.
- The & tells our shell process to run this process in the background, thus freeing our shell to run other commands.
Next, check the contents of the response.logs file to ensure your output confirms your requests to your producer-lambda endpoint are successful:
cat response.logs
- You should see the following output among the lines printed to your screen if your message is successful:
{"message": "Message placed in the Event Stream: {prefix}-lambda_stream"}
- If unsuccessful, you will see:
{"message": "Internal server error"}
If this occurs, ask one of the workshop facilitators for assistance.
View the Lambda Function Logs
Next, let’s take a look at the logs for our Lambda functions.
To view your producer-lambda logs, check the producer.logs file:
cat producer.logs
To view your consumer-lambda logs, check the consumer.logs file:
cat consumer.logs
Examine the logs carefully.
- Do you see OpenTelemetry being loaded? Look out for the lines with splunk-extension-wrapper.
  - Consider running head -n 50 producer.logs or head -n 50 consumer.logs to see the splunk-extension-wrapper being loaded.
  - A small Python helper for filtering out just those lines is sketched below.