Splunk4Ninjas AppDynamics


Introduction

Splunk AppDynamics is a full-stack performance monitoring solution for your critical business applications that offers the following features:

  • Consistent end-to-end application monitoring, regardless of environment, whether traditional, hybrid, or cloud-native.
  • Accelerated cloud migration and enterprise-grade, end-to-end insights for your applications regardless of where they are deployed.
  • Unified monitoring that enables you to quickly resolve performance issues before they become business problems, with three clicks to root cause.

You can optimize the total cost of ownership by leveraging existing personnel, processes, and training on the AppDynamics platform for traditional, cloud, or hybrid deployments.

Screenshot of AppDynamics Dashboard Screenshot of AppDynamics Dashboard

Workshop Overview

In this workshop, we’ll cover the fundamentals of Splunk AppDynamics. We’ll demonstrate how Splunk AppDynamics enables you to monitor the health of your application services, web applications, databases, and more. When you have completed this workshop, you will be able to:

  • Download and install the AppDynamics Java APM Agent.
  • Configure collection settings in the Controller.
  • Monitor and troubleshoot your application’s performance health.
  • Monitor alerts in AppDynamics' monitoring service based on data captured by AppDynamics.
  • Monitor server health and troubleshoot issues.
  • Monitor the health of your browser-based application with Browser Real User Monitoring (BRUM).
  • Monitor and troubleshoot database performance.
  • Gain deeper visibility into your users with Splunk AppDynamics Business iQ.

Additional Work to be Done

  • Add a section covering health rules: how to view existing ones and how to create a new health rule.
  • Application Security

Subsections of Splunk4Ninjas AppDynamics

Application Performance Monitoring (APM)


Objectives

In this lab you will learn how to use AppDynamics to monitor the health of your application services. You must complete this lab before starting the other labs in this workshop.

When you have completed this lab, you will be able to:

  • Download the AppDynamics Java APM Agent.
  • Install the AppDynamics Java APM Agent.
  • Initialize the sample application with load.
  • Understand the core concepts of AppDynamics APM
  • Configure collection settings in the Controller.
  • Monitor your application’s health.
  • Troubleshoot application performance issues to find root cause.
  • Monitor alerts in AppDynamics' monitoring service based on data captured by AppDynamics.

Workshop Environment

The workshop environment has two hosts:

  • The first host runs the AppDynamics Controller and will be referred to from this point on as the Controller.
  • The second host runs the Supercar Trader application used in the labs. It will be the host where you will install the AppDynamics agents and will be referred to from this point on as the Application VM.

Controller

You will be using the AppDynamics SE Lab Controller for this workshop.

Controller Controller

Application VM

Supercar Trader is a Java-based web application.

The purpose of the Supercar-Trader application is to generate dynamic traffic (Business Transactions) for the AppDynamics Controller.

Application VM Application VM


Subsections of Application Performance Monitoring (APM)

1. Download Java Agent

In this exercise you will access the AppDynamics Controller from a web browser and download the Java APM agent from there.

Login to the Controller

Log into the AppDynamics SE Lab Controller using your Cisco credentials.

Configure your Application

  1. Select Overview on the left navigation panel
  2. Click on Getting Started tab
  3. Click on Getting Started Wizard button

Getting Started Wizard Getting Started Wizard

Select the Java Application Type

Java Application Java Application

Download the Java Agent

  1. Select Sun/JRockit - Legacy for the JVM type.
  2. Accept the defaults for the Controller connection.
  3. Under Set Application and Tier, select Create a new Application.
  4. Enter Supercar-Trader-YOURINITIALS as the application name.
  5. Enter Web-Portal for the new Tier.
  6. Enter Web-Portal_Node-01 for the Node Name.
  7. Click Continue.
  8. Click Click Here to Download.
Warning

The application name must be unique. Make sure to append your initials or another unique identifier to the application name.

Agent Configuration1 Agent Configuration1

Agent Configuration2 Agent Configuration2

Your browser should indicate that the agent is being downloaded to your local file system. Make note of where the file was downloaded and its full file name.

Agent Bundle Agent Bundle


2. Install the Java Agent

In this exercise you will perform the following actions:

  • Upload the Java agent file to your EC2 instance
  • Unzip the file into a specific directory
  • Update the Java agent's XML configuration file (optional)
  • Modify the Apache Tomcat startup script to add the Java agent

Upload Java Agent to Application VM

By this point you should have received the information regarding the EC2 instance that you will be using for this workshop. Ensure you have the IP address of your EC2 instance and the username and password required to SSH into the instance.

On your local machine, open a terminal window and change into the directory where the Java agent file was downloaded. Upload the file to the EC2 instance using the following command. This may take some time to complete.

  • Update the IP address or public DNS for your instance.
  • Update the filename to match your exact version.
cd ~/Downloads
scp -P 2222 AppServerAgent-22.4.0.33722.zip splunk@i-0b6e3c9790292be66.splunk.show:/home/splunk
(splunk@44.247.206.254) Password:
AppServerAgent-22.4.0.33722.zip                                                                    100%   22MB 255.5KB/s   01:26

Unzip the Java Agent

SSH into your EC2 instance using the username and password assigned to you by the instructor.

ssh -p 2222 splunk@i-0b6e3c9790292be66.splunk.show

Unzip the Java agent bundle into a new directory.

cd /opt/appdynamics
mkdir javaagent
cp /home/splunk/AppServerAgent-*.zip /opt/appdynamics/javaagent
cd /opt/appdynamics/javaagent
unzip AppServerAgent-*.zip
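
You can confirm the agent unpacked correctly by checking for javaagent.jar in the new directory; this is the file you will reference in the Tomcat startup script in a later step.

ls -l /opt/appdynamics/javaagent/javaagent.jar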
Tip

We pre-configured the Java agent using the Controller’s Getting Started Wizard. If you download the agent from the AppDynamics Portal, you will need to manually update the Java agent’s XML configuration file.

There are three primary ways to set the configuration properties of the Java agent. These take precedence in the following order:

  1. System environment variables.
  2. JVM properties passed on the command line.
  3. Properties within the controller-info.xml file.
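
For reference, here is a minimal sketch (with placeholder values in angle brackets) of the same connection settings expressed as -D system properties on a JVM command line. The property names match the ones you will see later in this lab when the sample application spawns its additional JVMs; you do not need to run this for the workshop.

java -javaagent:/opt/appdynamics/javaagent/javaagent.jar \
  -Dappdynamics.controller.hostName=<controller-host> \
  -Dappdynamics.controller.port=443 \
  -Dappdynamics.controller.ssl.enabled=true \
  -Dappdynamics.agent.applicationName=Supercar-Trader-YOURINITIALS \
  -Dappdynamics.agent.tierName=Web-Portal \
  -Dappdynamics.agent.nodeName=Web-Portal_Node-01 \
  -Dappdynamics.agent.accountName=<account-name> \
  -Dappdynamics.agent.accountAccessKey=<access-key> \
  -jar <your-application>.jar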

Add the Java Agent to the Tomcat Server

First we want to make sure that the Tomcat server is not running

cd /usr/local/apache/apache-tomcat-9/bin
./shutdown.sh

We will now modify the catalina script to set an environment variable with the java agent.

cd /usr/local/apache/apache-tomcat-9/bin
nano catalina.sh

Add the following line at line 125 (after the initial comments) and save the file.

export CATALINA_OPTS="$CATALINA_OPTS -javaagent:/opt/appdynamics/javaagent/javaagent.jar"
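
To confirm the line was added, you can search the file for it before restarting (the line number reported may differ slightly depending on your catalina.sh version).

grep -n "javaagent" catalina.sh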

Restart the server

./startup.sh

Validate that the Tomcat server is running; this can take a few minutes.

curl localhost:8080
<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="UTF-8" />
        <title>Apache Tomcat/9.0.50</title>
        <link href="favicon.ico" rel="icon" type="image/x-icon" />
        <link href="tomcat.css" rel="stylesheet" type="text/css" />
    </head>

    <body>
        <div id="wrapper"
....
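
As an optional extra check, you can confirm the Java agent started and registered with the Controller by looking at its log. The log lives in a version-specific folder under the agent directory (the exact path varies by agent version and node name), so the quickest way is to locate it with find:

# Locate the agent log file (path varies by agent version and node name)
find /opt/appdynamics/javaagent -name "agent*.log" 2>/dev/null
# Tail the log and look for messages indicating the agent registered with the Controller
tail -n 50 $(find /opt/appdynamics/javaagent -name "agent*.log" 2>/dev/null | head -n 1)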

3. Generate Application Load

In this exercise you will perform the following actions:

  • Verify the sample app is running.
  • Start the load generation for the sample application.
  • Confirm the transaction load in the Controller.

Verify that the Sample Application is Running

The sample application home page is accessible through your web browser with a URL in the format seen below. Enter that URL in your browser’s navigation bar, substituting the IP Address of your EC2 instance.

http://[ec2-ip-address]:8080/Supercar-Trader/home.do

You should be able to see the home page of the Supercar Trader application. Supercar Trade Home Page Supercar Trade Home Page
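
If you prefer to verify from a terminal on the Application VM, a quick curl of the same page should return an HTTP 200 status code (this is an optional check and assumes you are still connected over SSH):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/Supercar-Trader/home.do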

Start the Load Generation

SSH into your EC2 instance and start the load generation. It may take a few minutes for all the scripts to run.

cd /opt/appdynamics/lab-artifacts/phantomjs
./start_load.sh
Cleaning up artifacts from previous load...
Starting home-init-01
Waiting for additional JVMs to initialize... 1
Waiting for additional JVMs to initialize... 2
Waiting for additional JVMs to initialize... 3
Waiting for additional JVMs to initialize... 4
Waiting for additional JVMs to initialize... 5
Waiting for additional JVMs to initialize... 6
Waiting for additional JVMs to initialize... 7
Waiting for additional JVMs to initialize... 8
Waiting for additional JVMs to initialize... 9
Waiting for additional JVMs to initialize... 10
Waiting for additional JVMs to initialize... 11
Waiting for additional JVMs to initialize... 12
Waiting for additional JVMs to initialize... 13
Waiting for additional JVMs to initialize... 14
Waiting for additional JVMs to initialize... 15
Waiting for additional JVMs to initialize... 16
Waiting for additional JVMs to initialize... 17
Waiting for additional JVMs to initialize... 18
Waiting for additional JVMs to initialize... 19
Waiting for additional JVMs to initialize... 20
Starting slow-query-01
Starting slow-query-02
Starting slow-query-03
Starting slow-query-04
Starting sessions-01
Starting sessions-02
Starting sell-car-01
Starting sell-car-02
Starting sessions-03
Starting sessions-04
Starting search-01
Starting request-error-01
Starting mem-leak-insurance
Finished starting load generator scripts
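
You will not need it yet, but the same directory also contains a script to stop the load generation; it is used again later in the Business iQ lab when restarting the application.

cd /opt/appdynamics/lab-artifacts/phantomjs
./stop_load.sh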

Confirm transaction load in the Controller

If you still have the Getting Started Wizard open in your web browser, you should see that the agent is now connected and that the Controller is receiving data.

Agent Connected Agent Connected

Click Continue and you will be taken to the Application Flow Map (you can jump to the Flow Map image below).

If you previously closed the Controller browser window, log back into the Controller.

  1. From the Overview page (Landing Page). Click on the Applications tab on the left navigation panel.

    Controller Overview Page Controller Overview Page

  2. Within the Applications page you can manually search for your application or you can use the search bar in the top right corner to narrow down your search.

    Applications Search Applications Search

Click on your application's name. This should bring you into the Application Flow Map, where you should see all the application components appear after about twelve minutes.

If you don’t see all the application components after twelve minutes, try waiting a few more minutes and refresh your browser tab.

FlowMap FlowMap

During the agent download step we assigned the Tier name and Node name for the Tomcat server.

<tier-name>Web-Portal</tier-name>
<node-name>Web-Portal_Node-01</node-name>

You might be wondering how the other four services had their Tier and Node names assigned. The sample application dynamically creates four additional JVMs from the initial Tomcat JVM and assigns the Tier and Node names by passing those properties into the JVM startup command as -D properties for each of the four services. Any -D properties included on the JVM startup command line will supersede the properties defined in the Java agent's controller-info.xml file.

To see the JVM startup parameters used for each of the four services that were dynamically started, issue the following command in a terminal window on your EC2 instance.

ps -ef | grep appdynamics.agent.tierName
splunk     47131   46757  3 15:34 pts/1    00:08:17 /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -javaagent:/opt/appdynamics/javaagent/javaagent.jar -Dappdynamics.controller.hostName=se-lab.saas.appdynamics.com -Dappdynamics.controller.port=443 -Dappdynamics.controller.ssl.enabled=true -Dappdynamics.agent.applicationName=Supercar-Trader-AppD-Workshop -Dappdynamics.agent.tierName=Api-Services -Dappdynamics.agent.nodeName=Api-Services_Node-01 -Dappdynamics.agent.accountName=se-lab -Dappdynamics.agent.accountAccessKey=hj6a4d7h2cuq -Xms64m -Xmx512m -XX:MaxPermSize=256m supercars.services.api.ApiService
splunk     47133   46757  2 15:34 pts/1    00:08:11 /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -javaagent:/opt/appdynamics/javaagent/javaagent.jar -Dappdynamics.controller.hostName=se-lab.saas.appdynamics.com -Dappdynamics.controller.port=443 -Dappdynamics.controller.ssl.enabled=true -Dappdynamics.agent.applicationName=Supercar-Trader-AppD-Workshop -Dappdynamics.agent.tierName=Inventory-Services -Dappdynamics.agent.nodeName=Inventory-Services_Node-01 -Dappdynamics.agent.accountName=se-lab -Dappdynamics.agent.accountAccessKey=hj6a4d7h2cuq -Xms64m -Xmx512m -XX:MaxPermSize=256m supercars.services.inventory.InventoryService
splunk     47151   46757  1 15:34 pts/1    00:04:58 /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -javaagent:/opt/appdynamics/javaagent/javaagent.jar -Dappdynamics.controller.hostName=se-lab.saas.appdynamics.com -Dappdynamics.controller.port=443 -Dappdynamics.controller.ssl.enabled=true -Dappdynamics.agent.applicationName=Supercar-Trader-AppD-Workshop -Dappdynamics.agent.tierName=Insurance-Services -Dappdynamics.agent.nodeName=Insurance-Services_Node-01 -Dappdynamics.agent.accountName=se-lab -Dappdynamics.agent.accountAccessKey=hj6a4d7h2cuq -Xms64m -Xmx68m -XX:MaxPermSize=256m supercars.services.insurance.InsuranceService
splunk     47153   46757  3 15:34 pts/1    00:08:17 /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -javaagent:/opt/appdynamics/javaagent/javaagent.jar -Dappdynamics.controller.hostName=se-lab.saas.appdynamics.com -Dappdynamics.controller.port=443 -Dappdynamics.controller.ssl.enabled=true -Dappdynamics.agent.applicationName=Supercar-Trader-AppD-Workshop -Dappdynamics.agent.tierName=Enquiry-Services -Dappdynamics.agent.nodeName=Enquiry-Services_Node-01 -Dappdynamics.agent.accountName=se-lab -Dappdynamics.agent.accountAccessKey=hj6a4d7h2cuq -Xms64m -Xmx512m -XX:MaxPermSize=256m supercars.services.enquiry.EnquiryService
splunk    144789   46722  0 20:09 pts/1    00:00:00 grep --color=auto appdynamics.agent.tierName

Once all of the components appear on the flow map, you should see an HTTP cloud icon that represents the three HTTP backends called by the Insurance-Services Tier.

Ungroup the three HTTP backends by following these steps.

  1. Right-click the HTTP cloud icon labeled 3 HTTP backends
  2. From the drop-down menu, select Ungroup Backends

Ungroup Http Ungroup Http

Once the HTTP backends have been ungrouped, you should see all three HTTP backends as shown in the following image.

Ungroup flow Ungroup flow


4. AppDynamics Core Concepts

In this section you will learn about the core concepts of Splunk AppDynamics APM. By the end of the section you will understand the following concepts:

  • Application Flow Maps
  • Business Transactions (BTs)
  • Snapshots
  • Call Graphs

Flow Maps

AppDynamics app agents automatically discover the most common application frameworks and services. Using built-in application detection and configuration settings, agents collect application data and metrics to build Flow Maps.

AppDynamics automatically captures and scores every transaction. Flow Maps present a dynamic visual representation of the components and activities of your monitored application environment in direct context of the time frame that you have selected.

Familiarize yourself with some of the different features of the Flow Map.

  1. Try using the different layout options (you can also click and drag each icon on the Flow Map to reposition it).
  2. Try using the slider and mouse scrollwheel to adjust the zoom level.
  3. Look at the Transaction Scorecard.
  4. Explore the options for editing the Flow Map.

You can read more about Flow Maps here

Flow Map Components Flow Map Components

Business Transactions

In the AppDynamics model, a Business Transaction represents the data processing flow for a request, most often a user request. In real-world terms, many different components in your application may interact to provide services to fulfill the following types of requests:

  • In an e-commerce application, a user logging in, searching for items, or adding items to the cart.
  • In a content portal, a user requesting content such as sports, business, or entertainment news.
  • In a stock trading application, operations such as receiving a stock quote or buying or selling stocks.

Because AppDynamics orients performance monitoring around Business Transactions, you can focus on the performance of your application components from the user perspective. You can quickly identify whether a component is readily available or if it is having performance issues. For instance, you can check whether users are able to log in, check out, or view their data. You can see response times for users, and the causes of problems when they occur.

You can read more about Business Transactions here and here

Verifying Business Transactions

Verify that Business Transactions are being automatically detected by following these steps.

  1. Click the Business Transactions option on the left menu.
  2. Look at the list of Business Transactions and their performance.

Business Transactions Business Transactions

Snapshots

AppDynamics monitors every execution of a Business Transaction in the instrumented environment, and the metrics reflect all such executions. However, for troubleshooting purposes, AppDynamics takes snapshots (containing deep diagnostic information) of specific instances of transactions that are having problems.

Verify that transaction snapshots are being automatically collected by following these steps.

  1. Click the Application Dashboard option on the left menu.
  2. Click the Transaction Snapshots tab.
  3. Click the Exe Time (ms) column header to sort the snapshots so those with the greatest execution time appear first.
  4. Double-click a Business Transaction snapshot to display the snapshot viewer.

Snapshots Snapshots

A transaction snapshot gives you a cross-tier view of the processing flow for a single invocation of a transaction.

The Potential Issues panel highlights slow methods and slow remote service calls and helps you investigate the root cause of performance issues.

Drill Downs & Call Graphs

Call graphs and drill downs provide key information, including slowest methods, errors, and remote service calls for the transaction execution on a tier. A drill down may include a partial or complete call graph. Call graphs reflect the code-level view of the processing of the Business Transaction on a particular tier.

In the Flow Map for a Business Transaction snapshot, a tier with a Drill Down link indicates AppDynamics has taken a call graph for that tier.

Drill down into a call graph of the transaction snapshot by following these steps.

  1. Click on a slow call in the Potential Issues list on the left.
  2. Click Drill Down into Call Graph.

Snapshot Drill Down Snapshot Drill Down

The call graph view shows you the following details.

  1. The method execution sequence shows the names of the classes and methods that participated in processing the Business Transaction on this node, in the order in which the flow of control proceeded.
  2. For each method, you can see the time and percentage spent processing and the line number in the source code, enabling you to pinpoint the location in the code that could be affecting the performance of the transaction.
  3. The call graph displays exit call links for methods that make outbound calls to other components such as database queries and web service calls.

You can read more about Transaction Snapshots here

You can read more about Call Graphs here

Call Graph Call Graph


5. Configure Controller Settings

In this exercise you will complete the following tasks:

  • Adjust Business Transaction settings.
  • Adjust Call Graph settings.
  • Observe Business Transaction changes.

Adjust Business Transaction Settings

In the last exercise, you validated that Business Transactions were being auto-detected. There are times when you want to adjust the Business Transaction auto-detection rules to get them to an optimal state. This is the case with our sample application, which is built on an older Apache Struts framework.

The business transactions highlighted in the following image show that each pair has a Struts Action (.execute) and a Servlet type (.jsp). You will be adjusting the settings of the transaction detection rules so that these two types of transactions will be combined into one.

Anytime the time frame selector is visible in the AppDynamics UI, the view represents the context of the selected time frame. You can choose one of the pre-defined time frames or create your own custom time frame with the specific date and time range you want to view.

  1. Select the last 1 hour time frame.
  2. Use your mouse to hover over the blue icons to see the Entry Point Type of the transaction.

List of Business Transactions List of Business Transactions

Optimize the transaction detection by following these steps:

  1. Click the Configuration option toward the bottom left menu.

  2. Click the Instrumentation link.

    Configure Instrumentation Configure Instrumentation

  3. Select Transaction Detection from the Instrumentation menu.

  4. Select the Java Auto Discovery Rule.

  5. Click Edit.

    Edit Java Rules Edit Java Rules

  6. Select the Rule Configuration tab on the Rule Editor.

  7. Uncheck all the boxes in the Struts Action section.

  8. Uncheck all the boxes in the Web Service section.

  9. Scroll down to find the Servlet settings.

  10. Check the Enable Servlet Filter Detection box (all three boxes in the Servlet settings should be checked).

  11. Click Save to save your changes.

You can read more about Transaction Detection Rules here.

Rule Configuration Rule Configuration
Rule Configuration Cont Rule Configuration Cont

Adjust Call Graph settings

You can control the data captured in call graphs within transaction snapshots with the Call Graph Settings window seen below. In this step you will change the SQL Capture settings so the parameters of each SQL query are captured along with the full query. You can change the SQL Capture settings by following these steps.

  1. Select the Call Graph Settings tab from the Instrumentation window. This is within the Instrumentation settings that you navigated to in the previous exercise.
  2. Ensure you have the Java tab selected within the settings.
  3. Scroll down until you see the SQL Capture Settings.
  4. Click the Capture Raw SQL option.
  5. Click Save.

You can read more about Call Graph settings here.

Call Graph Configuration Call Graph Configuration

Observe Business Transaction changes

It may take up to 30 minutes for the new business transactions to replace the prior transactions. The list of business transactions should look like the following example after the new transactions are detected.

  1. Click on Business Transactions on the left menu.
  2. Adjust your time range picker to look at the last 15 minutes

Updated BTs Updated BTs


6. Troubleshooting Slow Transactions

In this exercise you will complete the following tasks:

  • Monitor the application dashboard and flow map.
  • Troubleshoot a slow transaction snapshot.

Monitor the application dashboard and flow map

In the previous exercises we looked at some of the basic features of the Application Flow Map. Let’s take a deeper look at how we can use the Application Dashboard and Flow Map to immediately identify issues within the application.

  1. Health Rule Violations, Node Health issues, and the health of the Business Transactions will always show up in this area for the time frame you have selected. You can click the links available here to drill down to the details.

  2. The Transaction Scorecard shows you the number and percentage of transactions that are normal, slow, very slow, stalled, and have errors. The scorecard also gives you the high level categories of exception types. You can click the links available here to drill down to the details.

  3. Left-click (single-click) on any of the blue lines connecting the different application components to bring up an overview of the interactions between the two components.

  4. Left-click (single-click) within the colored ring of a Tier to bring up detailed information about that Tier while remaining on the Flow Map.

  5. Hover over the time series on one of the three charts at the bottom of the dashboard (Load, Response Time, Errors) to see the detail of the recorded metrics.

    Flow Map Components Flow Map Components

Now let’s take a look at Dynamic Baselines and options for the charts at the bottom of the dashboard.

  1. Compare the metrics on the charts to the Dynamic Baseline that has been automatically calculated for each of the metrics.

  2. The Dynamic Baseline is shown in the load and response time charts as the blue dotted line seen in the following image.

  3. Left-click and hold down your mouse button while dragging from left to right to highlight a spike seen in any of the three charts at the bottom of the dashboard.

  4. Release your mouse button and select one of the three options in the pop-up menu.

    Flow Map Components Flow Map Components

The precision of AppDynamics' Dynamic Baselining increases over time, giving you an accurate picture of the state of your applications, their components, and their Business Transactions. This means you can be alerted proactively before things reach a critical state and take action before your end users are impacted.

You can read more about AppDynamics Dynamic Baselines here.

Troubleshoot a slow transaction snapshot

Let’s look at our business transactions and find the one that has the highest number of very slow transactions by following these steps.

  1. Click the Business Transactions option on the left menu.

  2. Click the View Options button.

  3. Check and uncheck the boxes on the options to match what you see in the following image:

    BTs Column Config BTs Column Config

  4. Find the Business Transaction named /Supercar-Trader/car.do and drill into the very slow transaction snapshots by clicking on the number of Very Slow Transactions for the business transaction.

Tip

If the /Supercar-Trader/car.do BT does not have any Very Slow Transactions, find a Business Transaction that has some and click on the number under that column. The screenshots may look slightly different from here on, but the concepts remain the same.

![Very Slow Transaction](images/very-slow-transaction.png)
  1. You should see the list of very slow transaction snapshots. Double-click on the snapshot that has the highest response time as seen below.

    snapshot list snapshot list

    When the transaction snapshot viewer opens, we see the flow map view of all the components that were part of this specific transaction. This snapshot shows that the transaction traversed the components below, in order.

    • The Web-Portal Tier.
    • The Api-Services Tier.
    • The Enquiry-Services Tier.
    • The MySQL Database.

    The Potential Issues panel on the left highlights slow methods and slow remote services. While we can use the Potential Issues panel to drill straight into the call graph, we will use the Flow Map within the snapshot to follow the complete transaction in this example.

  2. Click on Drill Down on the Web-Portal Tier shown on the Flow Map of the snapshot.

    Web Portal Drilldown Web Portal Drilldown

    The tab that opens shows the call graph of the Web-Portal Tier. We can see that most of the time was from an outbound HTTP call.

  3. Click on the block to drill down to the segment where the issue is happening. Click the HTTP link to see the details of the downstream call.

    Call Graph Call Graph

    The detail panel for the downstream call shows that the Web-Portal Tier made an outbound HTTP call to the Api-Services Tier. Follow the HTTP call into the Api-Services Tier.

  4. Click Drill Down into Downstream Call.

    Call Graph Downstream Call Graph Downstream

    The next tab that opens shows the call graph of the Api-Services Tier. We can see that 100% of the time was due to an outbound HTTP call.

  5. Click the HTTP link to open the detail panel for the downstream call.

    Downstream Call Graph Downstream Call Graph

    The detail panel for the downstream call shows that the Api-Services Tier made an outbound HTTP call to the Enquiry-Services Tier. Follow the HTTP call into the Enquiry-Services Tier.

  6. Click Drill Down into Downstream Call.

    API service downstream API service downstream

    The next tab that opens shows the call graph of the Enquiry-Services Tier. We can see that there were JDBC calls to the database that caused issues with the transaction.

  7. Click the JDBC link with the largest time to open the detail panel for the JDBC calls.

    JDBC Callgraph JDBC Callgraph

    The detail panel for the JDBC exit calls shows the specific query that took most of the time. We can see the full SQL statement along with the SQL parameter values.

    DB Call Details DB Call Details

Summary

In this lab, we first used Business Transactions to identify a very slow transaction that required troubleshooting. We then examined the call graph to pinpoint the specific part of the code causing delays. Following that, we drilled down into downstream services and the database to further analyze the root cause of the slowness. Finally, we successfully identified the exact inefficient SQL query responsible for the performance issue. This comprehensive approach demonstrates how AppDynamics helps in isolating and resolving transaction bottlenecks effectively.


7. Troubleshooting Errors & Exceptions

In this exercise, you will learn how to effectively detect and diagnose errors within your application to identify their root causes. Additionally, you will explore how to pinpoint specific nodes that may be underperforming or experiencing errors, and apply troubleshooting techniques to resolve these performance issues. This hands-on experience will enhance your ability to maintain application health and ensure optimal performance.

Find Specific Errors Within Your Application

AppDynamics makes it easy to find errors and exceptions within your application. You can use the Errors dashboard to see transactions snapshots with errors and find the exceptions that are occurring most often. Identifying errors quickly helps prioritize fixes that improve application stability and user experience. Understanding the types and frequency of exceptions allows you to focus on the most impactful issues.

  1. Click on the Troubleshoot option on the left menu.

  2. Click on the Errors option on the left menu. This navigates you to the Errors dashboard where you can quickly identify business transactions with errors

  3. Explore a few of the error transaction snapshots. Reviewing snapshots helps you see the exact context and flow when errors occurred.

  4. Click on the Exceptions tab to see exceptions grouped by type. Grouping by exception type helps identify recurring problems and patterns.

    Errors Dashboard Errors Dashboard

    The Exceptions tab shows you what types of exceptions are occurring the most within the application so you can prioritize remediating the ones having the most impact.

  5. Observe the Exceptions per minute and Exception count (6) to understand error frequency. High frequency exceptions often indicate critical issues needing immediate attention.

  6. Note the Tier where exceptions occur to localize the problem within your application architecture. Knowing the affected tier helps narrow down the root cause.

  7. Double-click on the MySQLIntegrityConstraintViolationException type to drill deeper.

    Exception Dashboard Exception Dashboard

  8. Review the overview dashboard showing snapshots that experienced this exception type.

  9. The tab labeled Stack Traces for this Exception shows you an aggregated list of the unique stack traces generated by this exception type. Stack traces provide the exact code paths causing the error, essential for debugging.

  10. Double-click a snapshot to open it and see the error in context. This shows the transaction flow and pinpoints where the error happened.

    MySQL Exception MySQL Exception

    When you open an error snapshot from the exceptions screen, the snapshot opens to the specific segment within the snapshot where the error occurred.

  11. Notice exit calls in red text indicating errors or exceptions.

  12. Drill into the exit call to view detailed error information.

  13. Click Error Details to see the full stack trace. Full stack traces are critical for developers to trace and fix bugs.

Tip

If you want to learn more about error handling and exceptions, refer to the official AppDynamics documentation in the following link: here.

Call Graph Error Call Graph Error

Troubleshoot Node Issues

Node health directly impacts application performance and availability. Early detection of node issues prevents outages and ensures smooth operation. AppDynamics provides visual indicators throughout the UI, making it easy to quickly identify issues.

You can see indicators of Node issues in three areas on the Application Dashboard.

  1. Observe the Application Dashboard for visual indicators of node problems. Color changes and icons provide immediate alerts to issues

  2. The Events panel shows Health Rule Violations, including those related to Node Health.

  3. The Node Health panel tells you how many critical or warning issues are occurring for Nodes. Click on the Node Health link in the Node Health panel to drill into the Tiers & Nodes dashboard.

    Application Dashboard Application Dashboard

  4. Alternatively, you can click Tiers & Nodes on the left menu to reach the Tiers & Nodes dashboard.

  5. Switch to Grid View for an organized list of nodes. Grid view makes it easier to scan and find nodes with warnings.

  6. Click on the warning icon for the Insurance-Services_Node-01 Node.

    Tiers and Nodes List Tiers and Nodes List

  7. Review the Health Rule Violations summary and click on violation descriptions.

  8. Click on the Details button to see the details.

    Health Rule Violation Health Rule Violation

    The Health Rule Violation details viewer shows you:

  9. The current state of the violation.

  10. The timeline of when the violation was occurring.

  11. The specifics of what the violation is and the conditions that triggered it.

  12. Click on the View Dashboard During Health Rule Violation to see node metrics at the time of the issue. Correlating violations with performance metrics aids diagnosis.

    Health Rule Violation Details Health Rule Violation Details

    When you click on the View Dashboard During Health Rule Violation button, it opens the Server tab of the Node dashboard by default.

    If you haven’t installed the AppDynamics Server Visibility Monitoring agent yet then you won’t see the resource metrics for the host of the Node. You will be able to see those metrics in the next lab. The AppDynamics Java agent collects memory metrics from the JVM via JMX.

    Investigate the JVM heap data using the steps below.

  13. Click on the Memory tab.

  14. Look at the current heap utilization.

  15. Notice the Major Garbage Collections that have been occurring.

Note: If you have an issue seeing the Memory screen, try using an alternate browser (Firefox should render correctly for Windows, Linux, and Mac).

![Memory Dashboard](images/memory-dashboard.png)  
  1. Use the outer scroll bar to scroll to the bottom of the screen.
  2. Note high PS Old Gen memory usage as a potential sign of memory leaks or inefficient garbage collection. Identifying memory pressure early can prevent outages.
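
If you want to corroborate what the Memory tab shows directly from the Application VM, the JDK's jstat utility reports heap-generation usage (including Old Gen) and garbage-collection counts. This is an optional sketch and assumes the JDK tools are available on the VM; find the Insurance-Services JVM's process ID first.

# Find the PID of the Insurance-Services JVM (second column of the ps output)
ps -ef | grep InsuranceService | grep -v grep
# Print heap utilization percentages and GC counts every 5 seconds (replace <pid>)
jstat -gcutil <pid> 5000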

You can read more about Node and JVM monitoring here and here.

PS Old Gen PS Old Gen

Summary

In this lab, you learned how to effectively use AppDynamics to identify and troubleshoot application errors and node health issues. You started by locating specific errors and exceptions using the Errors dashboard, understanding their frequency, types, and impact on your application. You drilled down into error snapshots and stack traces to pinpoint the root cause of failures.

Next, you explored node health monitoring by interpreting visual indicators on the Application Dashboard and investigating Health Rule Violations. You learned to analyze JVM memory metrics to detect potential performance bottlenecks related to garbage collection and heap usage.

Together, these skills enable proactive monitoring and rapid troubleshooting to maintain application performance and reliability.


Server Visibility Monitoring

Prerequisites

This is a continuation of the Application Performance Monitoring lab. Verify that your application is running and has had load for the past hour. If needed, return to the Generate Application Load section to restart the load generator.

Objectives

In this Lab you will learn about AppDynamics Server Visibility Monitoring and Service Availability Monitoring.

When you have completed this lab, you will be able to:

  • Download the AppDynamics Server Visibility Agent.
  • Install the AppDynamics Server Visibility Agent.
  • Monitor server health.
  • Understand the agent’s extended hardware metrics.
  • Quickly see underlying infrastructure issues impacting your application performance.

Workshop Environment

The lab environment has two hosts:

  • The first host runs the AppDynamics Controller and will be referred to from this point on as the Controller.
  • The second host runs the Supercar Trader application used in the labs. It will be the host where you will install the AppDynamics agents and will be referred to from this point on as the Application VM.

Controller

You will be using the AppDynamics SE Lab Controller for this workshop.

Controller Controller

Application VM

Supercar Trader is a Java-based web application.

The purpose of the Supercar-Trader application is to generate dynamic traffic (Business Transactions) for the AppDynamics Controller.

Application VM Application VM


Subsections of Server Visibility Monitoring

Deploy Machine Agent


In this exercise you will perform the following actions:

  1. Run a script that will install the Machine agent
  2. Configure the Machine agent
  3. Start the Machine agent
Note

We will use a script to download the Machine Agent onto your EC2 instance. Normally, you would download the Machine Agent by logging into https://accounts.appdynamics.com/, but due to potential access limitations we will use a script that downloads it directly from the portal. If you have access to the AppDynamics portal and would like to download the Machine Agent yourself, follow the steps below and then use the steps from the Install Agent section of the APM lab to SCP it into your VM.

  1. Log into the AppDynamics Portal
  2. On the left side menu click on Downloads
  3. Under Type select Machine Agent
  4. Under Operating System Select Linux
  5. Find the Machine Agent Bundle - 64-bit linux (zip) and click on the Download button.
  6. Follow the steps in the Install Agent section to SCP the downloaded file into your EC2 instance.
  7. Unbundle the zip file into the /opt/appdynamics/machineagent directory and proceed to the configuration section of this lab

Run the Install Script

Use the command below to change to the directory where the script is located. The script will download and unbundle the Machine Agent.

cd /opt/appdynamics/lab-artifacts/machineagent/

Use the command below to run the install script.

chmod +x install_machineagent.sh
./install_machineagent.sh

You should see output similar to the following image.

Install Output Install Output
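
After the script completes, you can confirm the Machine Agent was unbundled into the expected directory before moving on.

ls /opt/appdynamics/machineagent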

Configure the Server Agent

Obtain the configuration property values listed below from the Java agent's controller-info.xml file and have them available for the next step.

cat /opt/appdynamics/javaagent/conf/controller-info.xml
  • controller-host
  • controller-port
  • controller-ssl-enabled
  • account-name
  • account-access-key
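
If you prefer to pull out just those values rather than scanning the whole file, a quick grep on the element names works as well.

grep -E "controller-host|controller-port|controller-ssl-enabled|account-name|account-access-key" /opt/appdynamics/javaagent/conf/controller-info.xml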

Edit the controller-info.xml file of the Machine Agent and insert the values you obtained from the Java agent's configuration file for the properties listed below.

  • controller-host
  • controller-port
  • controller-ssl-enabled
  • account-name
  • account-access-key

You will also need to set the sim-enabled property to true and then save the file, which should look similar to the image below.

cd /opt/appdynamics/machineagent/conf
nano controller-info.xml

Example Config Example Config

Start the Server Visibility agent

Use the following commands to start the Server Visibility agent and verify that it started.

cd /opt/appdynamics/machineagent/bin
nohup ./machine-agent &
ps -ef | grep machine

You should see output similar to the following image.

Example Output Example Output
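
If the agent does not appear in the ps output, or you want to confirm it connected to the Controller, check its log file. By default the Machine Agent writes logs to a logs directory under its home; the exact file name can vary by version, so adjust if needed.

tail -n 50 /opt/appdynamics/machineagent/logs/machine-agent.log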


Monitor Server Health


In this exercise you will complete the following tasks:

  • Review the Server Main dashboard
  • Review the Server Processes dashboard
  • Review the Server Volumes dashboard
  • Review the Server Network dashboard
  • Navigate between Server and Application contexts

Review the Server Main Dashboard

Now that you have the Machine agent installed, let’s take a look at some of the features available in the Server Visibility module. From your Application Dashboard, click on the Servers tab and drill into the servers main dashboard by following these steps.

  1. Click the Servers tab on the left menu.
  2. Check the checkbox on the left for your server.
  3. Click View Details.

Server Dashboard Server Dashboard

You can now explore the server dashboard. This dashboard enables you to perform the following tasks:

See charts of key performance metrics for the selected monitored servers, including:

  • Server availability
  • CPU, memory, and network usage percentages
  • Server properties
  • Disk, partition, and volume metrics
  • Top 10 processes consuming CPU resources and memory.

You can read more about the Server Main dashboard here.

Review the Top Pane of the dashboard which provides you the following information:

  • Host Id: An ID for the server that is unique to the Splunk AppDynamics Controller.
  • Health: Shows the overall health of the server.
  • Hierarchy: An arbitrary hierarchy to group your servers together. See the documentation for additional details here.
  1. Click on the server health icon to view the Violations & Anomalies panel. Review the panel to identify potential issues.
  2. Click on the Current Health Rule Evaluation Status to see if there are any current issues being alerted on for this server.

Server Health Server Health Server violations Server violations

  1. Click on the CPU Usage too high rule
  2. Click on Edit Health Rule. This will open the Edit Health Rule panel

Edit Health Rule Edit Health Rule

This panel gives us the ability to configure the Health Rule. A different lab will go into more detail on creating and customizing health rules. For now, we will just review the existing rule.

  1. Click on the Warning Criteria

Edit Health Rule - Warning Edit Health Rule - Warning

In this example we can see that the warning criterion triggers when CPU usage is above 5%. This is why our health rule shows a warning rather than a healthy state. Cancel out of the Edit Health Rule panel to return to the Server Dashboard.

Review the Server Processes Dashboard

  1. Click the Processes tab.
  2. Click View Options to select different data columns. Review the KPIs available to view

You can now explore the server processes dashboard. This dashboard enables you to perform the following tasks:

  • View all the processes active during the selected time period. The processes are grouped by class as specified in the ServerMonitoring.yml file.
  • View the full command line that started this process by hovering over the process entry in the Command Line column.
  • Expand a process class to see the processes associated with that class.
  • Use View Options to configure which columns to display in the chart.
  • Change the time period of the metrics displayed.
  • Sort the chart using the columns as a sorting key. You can not sort on sparkline charts: CPU Trend and Memory Trend.
  • See CPU and Memory usage trends at a glance.

You can read more about the Server Processes dashboard here.

Dashboard Processes Dashboard Processes

Review the Server Volumes Dashboard

  1. Click the Volumes tab.

You can now explore the server volumes dashboard. This dashboard enables you to perform the following tasks:

  • See the list of volumes, the percentage used and total storage space available on the disk, partition or volume.
  • See disk usage and I/O utilization, rate, operations per second, and wait time.
  • Change the time period of the metrics collected and displayed.
  • Click on any point on a chart to see the metric value for that time.

You can read more about the Server Volumes dashboard here.

Dashboard Example Dashboard Example

Review the Server Network Dashboard

  1. Click the Network tab.

You can now explore the Server Network dashboard. This dashboard enables you to perform the following tasks:

  • See the MAC, IPv4, and IPv6 address for each network interface.
  • See whether the network interface is enabled and functional, its operational state (whether an Ethernet cable is plugged in), whether it is operating in full- or half-duplex mode, the maximum transmission unit (MTU, the size in bytes of the largest protocol data unit the interface can pass), and the speed of the Ethernet connection in Mbit/sec.
  • View network throughput in kilobytes/sec and packet traffic.
  • Change the time period of the metrics displayed.
  • Hover over on any point on a chart to see the metric value for that time.

You can read more about the Server Network dashboard here.

Network Dashboard Network Dashboard


Correlate Between Server and APM


The Server Visibility Monitoring agent automatically associates itself with any Splunk AppDynamics APM agents running on the same host.

With Server Visibility enabled, you can access server performance metrics in the context of your applications. You can switch between server and application contexts in different ways. Follow these steps to navigate from the server main dashboard to one of the Nodes running on the server.

  1. Click the Dashboard tab to return to the main Server Dashboard.
  2. Click the APM Correlation link.

Server to APM Server to APM

  1. Click the down arrow on one of the listed Tiers.
  2. Click the Node of the Tier link.

Dashboard Example Dashboard Example

You are now on the Node Dashboard.

  1. Click the Server tab to see the related host metrics

Dashboard Example Dashboard Example

When you have the Server Visibility Monitoring agent installed, the host metrics are always available within the context of the related Node.

You can read more about navigating between Server and Application Contexts here.


Business iQ


Objectives

In this lab you will learn about AppDynamics Business iQ.

When you have completed this lab, you will be able to:

  • Enable analytics with the new Agentless Analytics Java Agent (v 4.5.15 +).
  • Configure HTTP data collectors.
  • Configure method invocation data collectors.
  • Understand dashboard components.
  • Build a business dashboard.

Workshop Environment

The lab environment has two hosts:

  • The first host runs the AppDynamics Controller and will be referred to from this point on as the Controller.
  • The second host runs the Supercar Trader application used in the labs. It will be the host where you will install the AppDynamics agents and will be referred to from this point on as the Application VM.

Controller VM

image image

Application VM

image image


Subsections of Business iQ

Lab Prerequisite


In this exercise you will complete the following tasks:

  • Access your AppDynamics Controller from your web browser.
  • Verify transaction load to the application.
  • Restart the application and transaction load if needed.

Login to the Controller

Log into the AppDynamics SE Lab Controller using your Cisco credentials.

Verify transaction load to the application

Check the application flow map:

  1. Select the last 1 hour time frame.
  2. Verify you see the five different Tiers on the flow map.
  3. Verify there has been consistent load over the last 1 hour.

Verify Load 1 Verify Load 1

Check the list of business transactions:

  1. Click the Business Transactions option on the left menu.
  2. Verify you see the eleven business transactions seen below.
  3. Verify that they have some number of calls during the last hour.

Note: If you don’t see the Calls column, you can click the View Options toolbar button to show that column.

Verify Business transactions Verify Business transactions

Check the agent status for the Nodes:

  1. Click the Tiers & Nodes option on the left menu.
  2. Click Grid View.
  3. Verify that the App Agent Status for each Node is greater than 90% during the last hour.

Verify Agents Verify Agents

Restart the Application and Load Generation if Needed

If any of the checks you performed in the previous steps could not be verified, SSH into your Application VM and follow these steps to restart the application and transaction load.

Use the following commands to stop the running instance of Apache Tomcat.

cd /usr/local/apache/apache-tomcat-9/bin
./shutdown.sh

Use the command below to check for remaining application JVMs still running.

ps -ef | grep Supercar-Trader

If you find any remaining application JVMs still running, kill the remaining JVMs using the command below.

sudo pkill -f Supercar-Trader
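
Re-run the earlier check to confirm that no application JVMs remain before continuing.

ps -ef | grep Supercar-Trader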

Use the following commands to stop the load generation for the application. Wait until all processes are stopped.

cd /opt/appdynamics/lab-artifacts/phantomjs
./stop_load.sh

Restart the Tomcat server:

cd /usr/local/apache/apache-tomcat-9/bin
./startup.sh

Wait for two minutes and use the following command to ensure Apache Tomcat is running on port 8080.

curl localhost:8080
<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="UTF-8" />
        <title>Apache Tomcat/9.0.50</title>
        <link href="favicon.ico" rel="icon" type="image/x-icon" />
        <link href="tomcat.css" rel="stylesheet" type="text/css" />
    </head>

    <body>
        <div id="wrapper"
....

Use the following commands to start the load generation for the application.

cd /opt/appdynamics/lab-artifacts/phantomjs
./start_load.sh

You should see output similar to the following image.

Restart App 3 Restart App 3


Enable Analytics on the Application


Analytics formerly required a separate agent that was bundled with the Machine Agent. However, Analytics is now agentless and embedded in the APM agent for both the .NET Agent (>= 20.10) and the Java Agent (>= 4.5.15) on Controllers >= 4.5.16.

In this exercise you will access your AppDynamics Controller from your web browser and enable the Agentless Analytics from there.

Login to the Controller

Log into the AppDynamics SE Lab Controller using your Cisco credentials.

  1. Select the Analytics tab at the top left of the screen.
  2. Select the Configuration tab on the left.
  3. Select the Transaction Analytics - Configuration tab.
  4. Check the box next to your application, Supercar-Trader-YOURINITIALS.
  5. Click the Save button.

Enable Analytics Enable Analytics

Validate Transaction Summary

You want to verify that Analytics is working for that application and showing transactions.

  1. Select the Analytics tab on the left menu.
  2. Select the Home tab.
  3. Under Transactions, filter to your application Supercar-Trader-YOURINITIALS.

Validate Analytics Validate Analytics


Configure HTTP Data Collectors


Data collectors enable you to supplement business transaction and transaction analytics data with application data. The application data can add context to business transaction performance issues. For example, they can show the values of particular parameters or return values for business transactions affected by performance issues, such as the specific user, order, or product.

HTTP data collectors capture the URLs, parameter values, headers, and cookies of HTTP messages that are exchanged in a business transaction.

In this exercise you will perform the following tasks:

  • Enable all HTTP data collectors.
  • Observe and Select relevant HTTP data collectors.
  • Capture Business Data in Analytics using HTTP Params.
  • Validate Analytics on HTTP Parameters.

Enable all HTTP data collectors

Initially, you can capture all HTTP parameters to learn which ones are useful to capture into Analytics and use in your dashboards.

Tip

It is strongly recommended that you perform this step on a UAT environment, not production.

  1. Select the Applications tab at the top left of the screen.
  2. Select the Supercar-Trader-YOURINITIALS Application.
  3. Select the Configuration Left tab.
  4. Click on the Instrumentation Link.
  5. Select the Data Collectors tab.
  6. Click on the Add Button in the HTTP Request Data Collectors

HTTPDataCollectors 1 HTTPDataCollectors 1

You will now configure an HTTP data collector to capture all HTTP Parameters. You will only enable it for Transaction Snapshots to avoid any overhead until you identify the precise parameters that you need for Transaction Analytics.

  1. For the Name, specify All HTTP Param.
  2. Under Enable Data Collector for, check the box for Transaction Snapshots.
  3. Do not enable Transaction Analytics.
  4. Click on + Add in the HTTP Parameters section.
  5. For the new Parameter, specify All as the Display Name
  6. Then specify an asterisk * in the HTTP Parameter name.
  7. Click Save

HTTPDataCollectors 2 HTTPDataCollectors 2

  1. Click “Ok” to confirm the data collector.
  2. Enable /Supercar-Trader/sell.do Transaction
  3. Click Save

HTTPDataCollectors 2 HTTPDataCollectors 2

Observe and Select Relevant HTTP Data Collectors

  1. Apply load on the Application, specifically the SellCar transaction. Open one of its snapshots with Full Call Graph, and select the Data Collectors Tab.

You can now see all HTTP Parameters. You will see a number of key metrics, such as Car Price, Color, Year, and more.

  1. Note the exact Parameter names to add them again in the HTTP Parameters list and enable them in Transaction Analytics.
  2. Once they are added, delete the All HTTP Param HTTP data collector.

HTTPDataCollectors 2 HTTPDataCollectors 2

Capture Business Data in Analytics with HTTP Params

You will now configure the HTTP data collector again, but this time you will capture only the useful HTTP Parameters and enable them in Transaction Analytics. Add a new HTTP Data Collector: Application -> Configuration -> Instrumentation -> Data Collector tab -> Click Add Under the HTTP Request Data Collectors section

  1. In the Name, specify CarDetails.
  2. Enable Transaction Snapshots.
  3. Enable Transaction Analytics.
  4. Click + Add in the HTTP Parameters section.
  5. For the new Parameter, specify CarPrice_http as the Display Name
  6. Then specify carPrice as the HTTP Parameter name.
  7. Repeat for the rest of the Car Parameters as shown below.
  8. Click Save
  9. Click Ok to acknowledge the Data Collector implementation

SaveHttpDataCollectors SaveHttpDataCollectors Car Params Car Params

  1. Enable /Supercar-Trader/sell.do Transaction
  2. Click Save

HTTPDataCollectors 2 HTTPDataCollectors 2

  1. Delete the All HTTP Param data collector by clicking on it, then click the Delete button.

Validate Analytics on HTTP Parameters

You will now validate whether the business data was captured by HTTP data collectors in AppDynamics Analytics.

  1. Select the Analytics tab at the top left of the screen.
  2. Select the Searches tab
  3. Click the + Add button and create a new Drag and Drop Search.

Drag and Drop Search Drag and Drop Search

  1. Click + Add Criteria
  2. Select Application and Search For Your Application Name Supercar-Trader-YOURINITIALS
  3. Under the Fields panel verify that the Business Parameters appear as a field in the Custom HTTP Request Data.
  4. Check the box for CarPrice_http and Validate that the field has data.

ValidateHttpDataCollectors ValidateHttpDataCollectors

Last Modified Oct 13, 2025

Configure Method Data Collectors

2 minutes  

Method invocation data collectors capture code data such as method arguments, variables, and return values. If HTTP data collectors don’t provide sufficient business data, you can still capture this information from the code execution.

In this exercise you will perform the following tasks:

  • Discover methods.
  • Open a discovery session.
  • Discover method parameters.
  • Drill down to an object within the code.
  • Create a method invocation data collector.
  • Validate analytics on method invocation data collectors.

Open a Discovery Session

You may not have an application developer available to identify the method or parameters from the source code. However, you can discover the application methods and objects directly from AppDynamics.

  1. Select the Applications tab at the top left of the screen.
  2. Select Supercar-Trader-YOURINITIALS application
  3. Select the Configuration tab.
  4. Click on the Instrumentation link.
  5. Select the Transaction Detection tab.
  6. Click on the Live Preview button on the right.

OpenDiscoverySession OpenDiscoverySession

  1. Click the Start Discovery Session button.
  2. Select the Web-Portal Node in the pop-up window. It should be the same node that the method you are investigating runs on.
  3. Click Ok

OpenDiscoverySession OpenDiscoverySession

  1. Select the Tools toggle on the right.
  2. Select Classes/Methods in the drop-down list.
  3. Select Classes with name in the Search section.
  4. Type in the class name supercars.dataloader.CarDataLoader in the text box. To find the class name you can search through call graphs, or ideally find it in the source code.
  5. Click Apply to search for the matching class methods.
  6. Once the results appear, expand the class that matches your search.
  7. Look for the saveCar method.

OpenDiscoverySession OpenDiscoverySession

Note that the saveCar method takes a CarForm object as an input parameter.

Drill Down to the Object

Now that you have found the method, explore its parameters to find out where you can pull the car detail properties from.

You saw that the saveCar method takes the complex object CarForm as an input parameter. This object will hold the form data that was entered on the application webpage. Next, you need to inspect that object and find out how you can pull the car details from it.

  1. Type in the class name of the input object supercars.form.CarForm in the text box
  2. Click Apply to search for the class methods.
  3. When the results appear, expand the supercars.form.CarForm class that matches the search.
  4. Look for the methods that will return the car details that you want. You will find get methods for price, model, color, and more.

ObjectDrillDown ObjectDrillDown

Create Method Invocation Data Collector

With the findings from the previous exercises, you can now configure a method invocation data collector to pull the car details directly from the running code at runtime.

  1. Select the Applications tab.
  2. Select Supercar-Trader-YOURINITIALS Application
  3. Select the Configuration tab.
  4. Click on the Instrumentation link.
  5. Select the Data Collectors tab.
  6. Click Add in the Method Invocation Data Collectors section.

MIDCDataCollector MIDCDataCollector

We will create a method invocation data collector to capture the car details.

  1. For the Name, specify SellCarMI-YOURINITIALS.
  2. Enable Transaction Snapshots.
  3. Enable Transaction Analytics.
  4. For the class match criteria, select with a Class Name that Equals.
  5. Add supercars.dataloader.CarDataLoader as the Class Name.
  6. Add saveCar as the Method Name.

NewMIDCDataCollector NewMIDCDataCollector

As observed, the input parameter at index 0 of the saveCar method is an object of class CarForm, and that object contains getter methods, such as getPrice(), that return the car detail properties.

To fetch those values in the MIDC, do the following:

  1. Click on Add at the bottom of the MIDC panel, to specify the new data that you want to collect.
  2. In the Display Name, specify CarPrice_MIDC
  3. In the Collect Data From, select Method Parameter of Index 0, which is our CarForm Object.
  4. For the Operation on Method Parameter, select Use Getter Chain. You will be calling a method inside CarForm to return the car details.
  5. Then specify getPrice(), the Getter method inside the CarForm class that will return the price.
  6. Click Save.

CreateMIDCDataCollector1 CreateMIDCDataCollector1

  1. Repeat the above steps for all the properties, including color, model, and any others that you want to collect data for.

CreateMIDCDataCollector2 CreateMIDCDataCollector2

  1. Save the MIDC and apply it to the /Supercar-Trader/sell.do business transaction.

The implementation of the MIDC requires that we restart the JVM:

  1. SSH into your EC2 instance.
  2. Shut down the Tomcat server:

cd /usr/local/apache/apache-tomcat-9/bin
./shutdown.sh

  3. Check for any application JVMs that are still running, and if you find any, kill them using the commands below:

ps -ef | grep Supercar-Trader
sudo pkill -f Supercar-Trader

  4. Restart the Tomcat server:

./startup.sh

  5. Validate that the Tomcat server is running; this can take a few minutes:

curl localhost:8080
<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="UTF-8" />
        <title>Apache Tomcat/9.0.50</title>
        <link href="favicon.ico" rel="icon" type="image/x-icon" />
        <link href="tomcat.css" rel="stylesheet" type="text/css" />
    </head>

    <body>
        <div id="wrapper"
....

Validate Analytics on MIDC Parameters

Go to the website and apply some manual load on the Sell Car Page by submitting the form a couple of times.

You will now verify whether the business data was captured by the method invocation data collectors in AppDynamics Analytics.

  1. Select the Analytics tab.
  2. Select the Searches tab.
  3. Click the + Add button and create a new Drag and Drop Search.
  4. Click + Add Criteria
  5. Select Application and Search For Your Application Name Supercar-Trader-YOURINITIALS
  6. Verify that the Business Parameters appear as a field in the Custom Method Data.
  7. Verify that the CarPrice Field has data.

ValidateMIDCDataCollector ValidateMIDCDataCollector

Conclusion

You have now captured the business data from the Sell Car transaction from the node at runtime. This data can be used in the Analytics and Dashboard features within AppDynamics to provide more context to the business and measure IT impact on the business.

Last Modified Oct 13, 2025

Dashboard Components

2 minutes  

The ability to build dashboards is a vital component of the AppDynamics capabilities and value. In this exercise, you will work with some of the dashboard components that can be used to build compelling dashboards.

Create a new dashboard

  1. Select the Dashboard & Reports tab.
  2. Click Create Dashboard.
  3. Enter a dashboard name such as SuperCar-Dashboard-YOURINITIALS.
  4. Select Absolute Layout as the Canvas Type.
  5. Click OK.

NewDashboard NewDashboard

Now open the newly created empty dashboard. You will now add various widget types.

Dashboard components: Custom widget builder

The custom widget builder is a highly flexible tool that can generate representations of data, including numeric views, time series, pie charts, and more. It is based on the AppDynamics Query Language (ADQL).

To create a widget, follow these steps:

  1. Toggle the Edit Mode at the upper left corner of the dashboard.
  2. Click Add Widget.
  3. Select the Analytics tab on the left.
  4. Click Custom Widget Builder.

NewCustomWidgetBuilder NewCustomWidgetBuilder

There are many chart types that you can create in the Custom Widget Builder. You can simply drag and drop information or create an ADQL query in the Advanced pane.

NewCustomWidgetBuilder NewCustomWidgetBuilder

For now, we will cover Numeric, Bar and Pie Charts.

Numeric charts

Exercise: Quantifying the dollar amount impacted by errors enables you to show the impact of IT performance on the business revenue.

  1. Select the Numeric chart type.
  2. Add a Filter on the Application field and Select your application name: Supercar-Trader-YOURINITIALS
  3. Add a filter on the /Supercar-Trader/sell.do business transactions.
  4. Add a filter on the User Experience field, selecting only ERROR, to show the impact of errors.
  5. Find the CarPrice_MIDC field on the left panel and drag and drop it into the Y-Axis. Notice that SUM is the aggregation used to capture the total dollar amount impacted.
  6. Change the font color to red for better visibility.
  7. Click Save.

NumericChartWidget NumericChartWidget

Note that you could do the same for the $ Amount Transacted Successfully criterion by changing the user experience filter to only include NORMAL, SLOW and VERY SLOW.

You could also baseline this metric by creating a custom metric in the Analytics module and defining a health rule that indicates if the $ Amount Impacted is equal to or higher than the baseline. You can also add a label for the currency.

NumericChartSamples NumericChartSamples

Bar charts

Exercise: You will now create a bar chart to visualize Top Impacted Car Models. The chart will show the car models of all of the SellCar transactions, categorized by the User Experience.

  1. Create a new widget by clicking + Add Widget, then Analytics, and then Custom Widget Builder.
  2. Select the Column chart type.
  3. Add the following filters: Application = Supercar-Trader-YOURINITIALS and Business Transaction = /Supercar-Trader/sell.do.
  4. Add CarModel_MIDC and User Experience to the X-Axis.
  5. Click Save.

BarChartWidget BarChartWidget

This chart type can be adjusted based on your need. For example, you could group the X-AXIS by Customer Type, Company, Organization, and more. Refer to the following example.

BarChartSamples BarChartSamples

Pie charts

You will now create a pie chart that shows all the car models reported by the sellCar transaction and the sum of prices per model. This will show the most highly-demanded model in the application.

  1. Create a new Widget
  2. Select the Pie chart type.
  3. Add the following filters: Application = Supercar-Trader-YOURINITIALS and Business Transaction = /Supercar-Trader/sell.do.
  4. Add CarModel_MIDC in the X-Axis
  5. Add CarPrice_MIDC in the Y-Axis. Note that SUM is the aggregation used to capture the total price per model.
  6. Add a Title Sold by Car Model
  7. Click Save.

PieChartWidget PieChartWidget

Refer to the following example for more uses of the pie chart widget.

PieChartSamples PieChartSamples

Dashboard components: Conversion funnels

Conversion funnels help visualize the flow of users or events through a multi-step process. This enables you to better understand which steps can be optimized for more successful conversion. You can also use conversion funnels to examine the IT performance of every step, to understand how they impact the user experience and identify the cause of user drop-offs.

Note that the funnel is filtered according to the users who executed this path in that specific order, not the total visits per step.

The first step of funnel creation is to select a unique identifier of the transaction that can represent each user navigation through the funnel. Usually, the Session ID is the best choice, since it persists through each step in the funnel.

A Session ID can be captured from the transactions. You’ll need a SessionId data collector to use it as a counter for the Funnel transactions.

For Java applications, AppDynamics has the capability to capture Session IDs in the default HTTP data collector. You’ll ensure that it is enabled and apply it to all business transactions to capture the Session ID for every transaction.

  1. Select the Applications tab.
  2. Select Supercar-Trader-YOURINITIALS Application.
  3. Select the Configuration Left tab.
  4. Click Instrumentation.
  5. Select the Data Collectors tab.
  6. Edit the Default HTTP Request Data Collector.
  7. Select Transaction Analytics.
  8. Verify that SessionID is selected.
  9. Click Save.

EnableSessionId EnableSessionId

Now apply some load by navigating multiple times to the /Supercar-Trader/home.do page, then navigating directly to the /Supercar-Trader/sell.do page in the application.

Now return to your dashboard to create the funnel widget.

  1. Toggle the Edit slider.
  2. Click Add Widget.
  3. Select the Analytics tab.
  4. Click Funnel Analysis.
  5. Select Transactions from the drop-down list.
  6. Under Count Distinct of, select uniqueSessionId from the drop-down list.
  7. Click Add Step. Name it Home Page.
  8. Click on Add Criteria. Add the following criteria: Application: Supercar-Trader-YOURINITIALS & Business Transactions: /Supercar-Trader/home.do.
  9. Click Add Step. Name it SellCar Page.
  10. Click on Add Criteria. Add the following criteria: Application: Supercar-Trader-YOURINITIALS & Business Transactions: /Supercar-Trader/sell.do.
  11. Select the Show Health Checkbox on the right panel to visualize the transaction health in the flow map.
  12. Click Save

FunnelWidget FunnelWidget

Last Modified Oct 13, 2025

Build Your Dashboard

20 minutes  

Exercise - Build Your Own Dashboard

To conclude this Learning Lab, use the business data that was captured in the earlier exercise using method invocation data collectors and your understanding of the dashboard components to build an IT Business Impact Dashboard.

Refer to the following example and build your own dashboard, using the same data and widgets.

DiscoverCallGraphMethods 1 DiscoverCallGraphMethods 1

Congratulations! You have completed the BusinessIQ Fundamentals Learning Lab!

Last Modified Oct 13, 2025

Browser Real User Monitoring (BRUM)

2 minutes  

Objectives

In this Learning Lab you learn how to use AppDynamics to monitor the health of your browser-based application.

When you have completed this lab, you will be able to:

  • Create a browser application in the Controller
  • Configure the Browser Real User Monitoring (BRUM) agent to monitor your web application’s health.
  • Troubleshoot performance issues and find the root cause, whether it occurs on the browser side or the server side of the transaction.

Workshop Environment

The workshop environment has two hosts:

  • The first host runs the AppDynamics Controller and will be referred to from this point on as the Controller.
  • The second host runs the Supercar Trader application used in the labs. It will be the host where you will install the AppDynamics agents and will be referred to from this point on as the Application VM.

Controller

You will be using the AppDynamics SE Lab Controller for this workshop.

Controller Controller

Application VM

Supercar Trader is a Java-based Web Application

The purpose of Supercar-Trader collection is to generate dynamic traffic (business transactions) for AppDynamics Controller.

Application VM Application VM

Last Modified Oct 13, 2025

Subsections of Browser Real User Monitoring (BRUM)

BRUM Lab Prerequisites

2 minutes  

In this exercise you will complete the following tasks:

  • Access your AppDynamics Controller from your web browser.
  • Verify transaction load to the application.
  • Restart the application and transaction load if needed.

Login to the Controller

Log into the AppDynamics SE Lab Controller using your Cisco credentials.

Verify transaction load to the application

Check the application flow map:

  1. Select the last 1 hour time frame.
  2. Verify you see the five different Tiers on the flow map.
  3. Verify there has been consistent load over the last 1 hour.

Verify Load 1 Verify Load 1

Check the list of business transactions:

  1. Click the Business Transactions option on the left menu.
  2. Verify you see the eleven business transactions seen below.
  3. Verify that they have some number of calls during the last hour.

Note: If you don’t see the Calls column, you can click the View Options toolbar button to show that column.

Verify Business transactions Verify Business transactions

Check the agent status for the Nodes:

  1. Click the Tiers & Nodes option on the left menu.
  2. Click Grid View.
  3. Verify that the App Agent Status for each Node is greater than 90% during the last hour.

Verify Agents Verify Agents

Restart the Application and Load Generation if Needed

If any of the checks you performed in the previous steps could not be verified, SSH into your Application VM and follow these steps to restart the application and transaction load.

Use the following commands to stop the running instance of Apache Tomcat.

cd /usr/local/apache/apache-tomcat-9/bin
./shutdown.sh

Use the command below to check for remaining application JVMs still running.

ps -ef | grep Supercar-Trader

If you find any remaining application JVMs still running, kill the remaining JVMs using the command below.

sudo pkill -f Supercar-Trader

Use the following commands to stop the load generation for the application. Wait until all processes are stopped.

cd /opt/appdynamics/lab-artifacts/phantomjs
./stop_load.sh

Restart the Tomcat server:

cd /usr/local/apache/apache-tomcat-9/bin
./startup.sh

Wait for two minutes and use the following command to ensure Apache Tomcat is running on port 8080.

sudo netstat -tulpn | grep LISTEN

You should see output similar to the following image showing that port 8080 is in use by Apache Tomcat.

Restart App 1 Restart App 1

Use the following commands to start the load generation for the application.

cd /opt/appdynamics/lab-artifacts/phantomjs
./start_load.sh

You should see output similar to the following image.

Restart App 3 Restart App 3

Last Modified Oct 13, 2025

Create Browser Application

2 minutes  

In this exercise you will complete the following tasks:

  • Access your AppDynamics Controller from your web browser.
  • Create the Browser Application in the Controller.
  • Configure the Browser Application.

Login to the Controller

Log into the AppDynamics SE Lab Controller using your Cisco credentials.

Create the Browser Application in the Controller

Use the following steps to create your new browser application.

Note

It is very important that you create a unique name for your browser application in Step 5 below.

  1. Click the User Experience tab on the top menu.
  2. Click the Browser Apps option under User Experience.
  3. Click Add App.
  4. Choose the option Create an Application manually.
  5. Type in a unique name for your browser application in the format Supercar-Trader-Web-<your_initials_or_name>-<four_random_numbers>
    • Example 1: Supercar-Trader-Web-JFK-3179
    • Example 2: Supercar-Trader-Web-JohnSmith-0953
  6. Click OK.

Create App Create App

You should now see the Browser App Dashboard for the Supercar-Trader-Web-##-#### application.

  1. Click the Configuration tab on the left menu.
  2. Click the Instrumentation option.

Instrumentation Instrumentation

Change the default configuration to have the IP Address stored along with the data captured by the browser monitoring agent by following these steps.

  1. Click the Settings tab.
  2. Use the scroll bar on the right to scroll to the bottom of the screen.
  3. Check the Store IP Address check box.
  4. Click Save.

You can read more about configuring the Controller UI for Browser RUM here.

IPAddress Config IPAddress Config

Last Modified Oct 13, 2025

Configure Agent Injection

3 minutes  

In this exercise you will complete the following tasks:

  • Enable JavaScript Agent injection.
  • Select Business Transactions for injection.

Enable JavaScript Agent injection

While AppDynamics supports various methods for injecting the JavaScript Agent, you will be using the Auto-Injection method in this lab. Follow these steps to enable Auto-Injection of the JavaScript Agent.

  1. Click the Applications tab on the left menu and drill into your Supercar-Trader-## application.
  2. Click the Configuration tab on the left menu at the bottom.
  3. Click the User Experience App Integration option.

BRUM Dash 1 BRUM Dash 1

  1. Click the JavaScript Agent Injection tab.
  2. Click Enable so that it turns blue.
  3. Ensure that Supercar-Trader-Web-##-#### is the selected browser app. Choose the application that you created in the previous section
  4. Check the Enable check box under Enable JavaScript Injection
  5. Click Save.

BRUM Dash 2 BRUM Dash 2

It takes a few minutes for the Auto-Injection to discover potential Business Transactions. While this is happening, use these steps to enable the Business Transaction Correlation. For newer APM agents, this is done automatically.

  1. Click the Business Transaction Correlation tab.
  2. Click the Enable button under the Manually Enable Business Transactions section.
  3. Click Save.

BRUM Dash 3 BRUM Dash 3

Select Business Transactions for injection

Use the following steps to select the Business Transactions for Auto-Injection.

  1. Click the JavaScript Agent Injection tab.
  2. Type .do in the search box.
  3. Click the Refresh List link for the Business Transactions until all 9 BTs show up.
  4. Select all Business Transactions from the right list box.
  5. Click the arrow button to move them to the left list box.
  6. Ensure that all Business Transactions are moved into the left list box.
  7. Click Save.

You can read more about configuring Automatic Injection of the JavaScript Agent here.

BRUM Dash 5 BRUM Dash 5

Wait a few minutes for load to start showing up in your Browser Application.
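Optionally, you can spot-check from the command line that the JavaScript Agent snippet is being injected into the served pages. A minimal sketch, assuming the agent script is referenced as adrum (the usual name of the AppDynamics JavaScript Agent) and that your Application VM is reachable:

# Fetch the home page and look for the injected JavaScript Agent reference.
curl -s "http://[application-vm-ip-address]:8080/Supercar-Trader/home.do" | grep -i adrum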

Last Modified Oct 13, 2025

Monitor and Troubleshoot - Part 1

2 minutes  

In this exercise you will complete the following tasks:

  • Review the Browser Application Overview Dashboard
  • Review the Browser Application Geo Dashboard
  • Review the Browser Application Usage Stats Dashboard
  • Navigate the Supercar-Trader application web pages

Review the Browser Application Overview Dashboard

Navigate to the User Experience dashboard and drill into the browser application overview dashboard by following these steps.

  1. Click the User Experience tab on the left menu.
  2. Search for your Web Application Supercar-Trader-Web-##-####.
  3. Click Details or double click on your application name

BRUM Dash 1 BRUM Dash 1

The Overview dashboard displays a set of configurable widgets. The default widgets contain multiple graphs and lists that feature common high-level indicators of application performance, including:

  • End User Response Time Distribution
  • End User Response Time Trend
  • Total Page Requests by Geo
  • End User Response Time by Geo
  • Top 10 Browsers
  • Top 10 Devices
  • Page Requests per Minute
  • Top 5 Pages by Total Requests
  • Top 5 Countries by Total Page Requests

Explore the features of the dashboard.

  1. Click + to choose additional graphs and widgets to add to the dashboard.
  2. Click and drag the bottom right corner of any widget to resize it.
  3. Select the outlined area in any widget to move and place it on the dashboard.
  4. Click on the title of any widget to drill into the detail dashboard.
  5. Click X in the top right corner of any widget to remove it from the dashboard.

Any changes you make to the dashboard widget layout will automatically be saved.

You can read more about the Browser Application Overview dashboard here.

BRUM Dash 2 BRUM Dash 2

Review the Browser Application Geo Dashboard

The Geo Dashboard displays key performance metrics by geographic location based on page loads. The metrics displayed throughout the dashboard are for the region currently selected on the map or in the grid. The Map view displays load circles with labels for countries that are in the key timing metrics given in the right panel. Some countries and regions, however, are only displayed in the grid view.

Navigate to the Browser Application Geo dashboard and explore the features of the dashboard described below.

  1. Click the Geo Dashboard option.
  2. Click on one of the load circles to drill down to the region.
  3. Hover over one of the regions to show the region details.
  4. Use the zoom slider to adjust the zoom level.
  5. Click Configuration to explore the map options.
  6. Switch between the grid view and map view.

You can read more about the Browser Application Geo dashboard here.

BRUM Dash 3 BRUM Dash 3

Review the Browser Application Usage Stats Dashboard

The Usage Stats dashboard presents aggregated page-load usage data based on your users’ browser type and device/platform.

The Browser Application Usage Stats dashboard helps you discover:

  • The slowest browsers in terms of total end-user response time.
  • The slowest browsers to render the response page.
  • The browsers that most of your end users use.
  • The browsers that most of your end users use in a particular country or region.

Navigate to the Browser Application Usage Stats dashboard and explore the features of the dashboard described below.

  1. Click the Usage Stats option.
  2. Click the Show Versions option.
  3. Look at the different browsers and versions by load.
  4. Hover over the sections in the pie chart to see the details.

BRUM Dash 4 BRUM Dash 4

Use these steps to explore more metrics by browser and version.

  1. Use the scroll bar on the right to scroll to the bottom of the page.
  2. Explore the available metrics by browser and version.
  3. Explore the available metrics by country.

BRUM Dash 5 BRUM Dash 5

Navigate to the Devices dashboard and explore the features of the dashboard described below.

  1. Click the Devices option.
  2. Look at the load by device break out.
  3. Hover over the sections in the pie chart to see the details.
  4. Explore the available performance metrics by device.

You can read more about the Browser Application Usage Stats dashboard here.

BRUM Dash 6 BRUM Dash 6

Now that you have the Browser Real User Monitoring agent configured and explored the first series of features, let’s generate some additional load and record your unique browser session by navigating the web pages of the Supercar-Trader application.

Open the main page of the app with your web browser. In the example URL below, substitute the IP Address or fully qualified domain name of your Application VM.

http://[application-vm-ip-address]:8080/Supercar-Trader/home.do

You should see the home page of the application.

App Page 1 App Page 1

Open the listing of available Ferraris.

  1. Click on the Supercars tab on the top menu.
  2. Click on the Ferrari logo.

App Page 2 App Page 2

You should see the list of Ferraris.

App Page 3 App Page 3

Click on the image of the first Ferrari.

  1. Click View Enquiries.
  2. Click Enquire.

App Page 4 App Page 4

Submit an enquiry for the car.

  1. Complete the fields on the enquiry form with appropriate data.
  2. Click Submit.

App Page 5 App Page 5

Search for cars and continue browsing the site.

  1. Click on the Search tab on the top menu.
  2. Type the letter A into the search box and click Search.
  3. Click on the remaining tabs to explore the web site.

App Page 6 App Page 6

Last Modified Oct 13, 2025

Monitor and Troubleshoot - Part 2

2 minutes  

In this exercise you will complete the following tasks:

  • Review the Browser Session you created.
  • Review the Pages & AJAX Requests Dashboard.
  • Review the Dashboard for a specific Base Page.
  • Troubleshoot a Browser Snapshot.

Review the Browser Session you created

You can think of sessions as a time-based context to analyze a user’s experience interacting with an application. By examining browser sessions, you can understand how your applications are performing and how users are interacting with them. This enables you to better manage and improve your application, whether that means modifying the UI or optimizing performance on the server side.

Navigate to the Sessions dashboard and find the browser session that you created in the last exercise from navigating the pages of the web application. Follow these steps.

Note

You may need to wait ten minutes after you hit the last page in the web application to see your browser session show up in the sessions list. If you don’t see your session after ten minutes, this could be due to a problem with the Java Agent version in use.

  1. Click the Sessions tab on the left menu.
  2. Check the IP Address in the Session Fields list.
  3. Find the session you created by your IP Address.
  4. Click on your session, then click View Details.

BRUM Dash 1 BRUM Dash 1

Once you find and open the session you created, follow these steps to explore the different features of the session view.

Note: Your session may not have a View Snapshot link in any of the pages (as seen in step five). You will find a session that has one to explore later in this exercise.

  1. Click the Session Summary link to view the summary data.
  2. When you click on a page listed on the left, you see the details of that page on the right.
  3. You can always see the full name of the page you have selected in the left list.
  4. Click on a horizontal blue bar in the waterfall view to show the details of that item.
  5. Some pages may have a link to a correlated snapshot that was captured on the server side.
  6. Click the configuration icon to change the columns shown in the pages list.

You can read more about the Browser RUM Sessions here.

BRUM Dash 2 BRUM Dash 2

Review the Pages & AJAX Requests Dashboard

Navigate to the Pages & AJAX Requests dashboard, review the options there, and open a specific Base Page dashboard by following these steps.

  1. Click the Pages & AJAX Requests tab on the left menu.
  2. Explore the options on the toolbar.
  3. Click the localhost:8080/supercar-trader/car.do page.
  4. Click Details to open the Base Page dashboard.

BRUM Dash 3 BRUM Dash 3

Review the Dashboard for a specific Base Page

At the top of the Base Page dashboard you will see the key performance indicators (End User Response Time, Load, Cache Hits, and Page Views with JS errors) across the period selected in the timeframe dropdown at the upper-right side of the Controller UI. Cache Hits indicates a resource fetched from a cache, such as a CDN, rather than from the source.

In the Timing Breakdown section you will see a waterfall graph that displays the average times needed for each aspect of the page load process. For more information on what each of the metrics measures, hover over its name on the left. A popup appears with a definition. For more detailed information, see Browser RUM Metrics.

Review the details for the localhost:8080/supercar-trader/car.do Base Page by following these steps.

  1. Change the timeframe dropdown to last 2 hours.
  2. Explore the key performance indicators.
  3. Explore the metrics on the waterfall view.
  4. Use the vertical scroll bar to move down the page.
  5. Explore the graphs for all of the KPI Trends.

You can read more about the Base Page dashboard here.

BRUM Dash 4 BRUM Dash 4

Troubleshoot a Browser Snapshot

Note

Your application may not have any browser snapshots, in which case you will not be able to follow the entire workflow. If you would like to follow this section with a different demo application, you can switch to the AD-Ecommerce-Browser browser application.

Navigate to the Browser Snapshots list dashboard and open a specific Browser Snapshot by following these steps.

  1. Click the Browser Snapshots option.
  2. Click the End User Response Time column header twice to show the largest response times at the top.
  3. Click on a browser snapshot that has a gray or blue icon in the third column from the left.
  4. Click Details to open the browser snapshot.

BRUM Dash 6 BRUM Dash 6

Once you open the browser snapshot, review the details and find root cause for the large response time by following these steps.

  1. Review the waterfall view to understand where the response time was impacted.
  2. Notice the extended Server Time metric. Hover over the label for Server Time to understand its meaning.
  3. Click the server side transaction that was automatically captured and correlated to the browser snapshot.
  4. Click View Details to open the associated server side snapshot.

BRUM Dash 7 BRUM Dash 7

Once you open the correlated server side snapshot, use the steps below to pinpoint the root cause of the performance degradation.

  1. You can see that the percentage of transaction time spent in the browser was minimal.
  2. The timing between the browser and the Web-Portal Tier represents the initial connection from the browser until the full response was returned.
  3. You will see that the JDBC call was taking the most time.
  4. Click Drill Down to look at the code level view inside the Enquiry-Services Tier.

BRUM Dash 8 BRUM Dash 8

Once you open the snapshot segment for the Enquiry-Services Tier, you can see that there were JDBC calls to the database that caused issues with the transaction.

  1. Click the JDBC link with the largest time to open the detail panel for the JDBC calls.
  2. The detail panel for the JDBC exit calls shows the specific query that took most of the time.
  3. You can see the full SQL statement along with the SQL parameter values

You can read more about the Browser Snapshots here and here.

BRUM Dash 9 BRUM Dash 9

Last Modified Oct 13, 2025

Database Monitoring

2 minutes  

Objectives

In this Lab you learn about AppDynamics Database Visibility Monitoring.

When you have completed this lab, you will be able to:

  • Download the AppDynamics Database Visibility Agent.
  • Install the AppDynamics Database Visibility Agent.
  • Configure a Database Collector in the Controller.
  • Monitor the health of your databases.
  • Troubleshoot database performance issues.

Workshop Environment

The lab environment has two hosts:

  • The first host runs the AppDynamics Controller and will be referred to from this point on as the Controller.
  • The second host runs the Supercar Trader application used in the labs. It will be the host where you will install the AppDynamics agents and will be referred to from this point on as the Application VM.

Controller VM

You will be using the AppDynamics SE Lab Controller for this workshop.

Controller Controller

Application VM

Supercar Trader is a Java-based Web Application

The purpose of Supercar-Trader collection is to generate dynamic traffic (business transactions) for AppDynamics Controller.

Application Application

Last Modified Oct 13, 2025

Subsections of Database Monitoring

Lab Prerequisite

3 minutes  

In this exercise you will complete the following tasks:

  • Access your AppDynamics Controller from your web browser.
  • Verify transaction load to the application.
  • Restart the application and transaction load if needed.

Login to the Controller

Log into the AppDynamics SE Lab Controller using your Cisco credentials.

Verify transaction load to the application

Check the application flow map:

  1. Select the last 1 hour time frame.
  2. Verify you see the five different Tiers on the flow map.
  3. Verify there has been consistent load over the last 1 hour.

Verify Load 1 Verify Load 1

Check the list of business transactions:

  1. Click the Business Transactions option on the left menu.
  2. Verify you see the eleven business transactions seen below.
  3. Verify that they have some number of calls during the last hour.

Note: If you don’t see the Calls column, you can click the View Options toolbar button to show that column.

Verify Business transactions Verify Business transactions

Check the agent status for the Nodes:

  1. Click the Tiers & Nodes option on the left menu.
  2. Click Grid View.
  3. Verify that the App Agent Status for each Node is greater than 90% during the last hour.

Verify Agents Verify Agents

Restart the Application and Load Generation if Needed

If any of the checks you performed in the previous steps could not be verified, SSH into your Application VM and follow these steps to restart the application and transaction load.

Use the following commands to stop the running instance of Apache Tomcat.

cd /usr/local/apache/apache-tomcat-9/bin
./shutdown.sh

Use the command below to check for remaining application JVMs still running.

ps -ef | grep Supercar-Trader

If you find any remaining application JVMs still running, kill the remaining JVMs using the command below.

sudo pkill -f Supercar-Trader

Use the following commands to stop the load generation for the application. Wait until all processes are stopped.

cd /opt/appdynamics/lab-artifacts/phantomjs
./stop_load.sh

Restart the Tomcat server:

cd /usr/local/apache/apache-tomcat-9/bin
./startup.sh

Wait for two minutes and use the following command to ensure Apache Tomcat is running on port 8080.

sudo netstat -tulpn | grep LISTEN

You should see output similar to the following image showing that port 8080 is in use by Apache Tomcat.

Restart App 1 Restart App 1

Use the following commands to start the load generation for the application.

cd /opt/appdynamics/lab-artifacts/phantomjs
./start_load.sh

You should see output similar to the following image.

Restart App 3 Restart App 3

Last Modified Oct 13, 2025

Download Database Agent

2 minutes  

In this exercise you will access your AppDynamics Controller from your web browser and download the Database Visibility agent from there.

Login to the Controller

Log into the AppDynamics SE Lab Controller using your Cisco credentials.

Download the Database Agent

  1. Select the Home tab at the top left of the screen.
  2. Select the Getting Started tab.
  3. Click Getting Started Wizard.

Getting Started Getting Started

  1. Click Databases.

Select Agent Select Agent

Download the Database Agent.

  1. Select MySQL from the Select Database Type dropdown menu.
  2. Accept the defaults for the Controller connection.
  3. Click Click Here to Download.

Download Download

Save the Database Visibility Agent file to your local file system.

Your browser should prompt you to save the agent file to your local file system, similar to the following image (depending on your OS).

Save Save

Last Modified Oct 13, 2025

Install Database Agent

2 minutes  

The AppDynamics Database Agent is a standalone Java program that collects performance metrics about your database instances and database servers. You can deploy the Database Agent on any machine running Java 1.8 or higher. The machine must have network access to the AppDynamics Controller and the database instance that you want to be monitored.

A database agent running on a typical machine with 16 GB of memory can monitor about 25 databases. On larger machines, a database agent can monitor up to 200 databases.
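Before uploading the agent, you can quickly confirm that the target machine satisfies the Java requirement. A minimal check to run on the Application VM:

# The Database Agent requires Java 1.8 or higher.
java -version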

In this exercise you will perform the following tasks:

  • Upload the Database Visibility agent file to your Application VM
  • Unzip the file into a specific directory on the file system
  • Start the Database Visibility agent

Upload Database Agent to The Application VM

By this point you should have received the information regarding the EC2 instance that you will be using for this workshop. Ensure you have the IP address of your EC2 instance, along with the username and password required to SSH into the instance.

On your local machine, open a terminal window and change into the directory where the database agent file was downloaded. Upload the file to the EC2 instance using the following command. This may take some time to complete. If you are on Windows, you may have to use a program such as WinSCP.

  • Update the IP address or public DNS for your instance.
  • Update the filename to match your exact version.
cd ~/Downloads
scp -P 2222 db-agent-*.zip splunk@i-0267b13f78f891b64.splunk.show:/home/splunk
splunk@i-0267b13f78f891b64.splunk.show's password:
db-agent-25.7.0.5137.zip                                                                                                                               100%   70MB   5.6MB/s   00:12

Install the Database Agent

Create the directory structure where you will unzip the Database agent zip file.

cd /opt/appdynamics
mkdir dbagent

Use the following commands to copy the Database agent zip file to the directory and unzip the file. The name of your Database agent file may be slightly different than the example below.

cp ~/db-agent-*.zip /opt/appdynamics/dbagent/
cd /opt/appdynamics/dbagent
unzip db-agent-*.zip

Start the Database Visibility agent

Use the following commands to start the Database agent and verify that it started.

Append your initials to the database agent name; this name will be used in the following section. Example: DBMon-Lab-Agent-IO

cd /opt/appdynamics/dbagent
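# -Ddbagent.name sets the agent name shown in the Controller (append your initials); -Dappdynamics.agent.maxMetrics raises the agent's metric limit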
nohup java -Dappdynamics.agent.maxMetrics=300000 -Ddbagent.name=DBMon-Lab-Agent-YOURINITIALS -jar db-agent.jar &
ps -ef | grep db-agent

You should see output similar to the following image.

Output Output
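If the agent does not show up in the Controller, you can also review its startup output. Because the agent was started with nohup from the agent directory, nohup writes that output to nohup.out in the same directory:

# Review the most recent Database Agent startup output captured by nohup.
cd /opt/appdynamics/dbagent
tail -n 50 nohup.out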

Last Modified Oct 13, 2025

Configure Database Collector

2 minutes  

Configure Database Collector

The Database Agent Collector is the process that runs within the Database Agent to collect performance metrics about your database instances and database servers. One collector collects metrics for one database instance. Multiple collectors can run in one Database Agent. Once the Database Agent is connected to the Controller one or more collectors can be configured in the Controller.

In this exercise you will perform the following tasks:

  • Access your AppDynamics Controller from your web browser
  • Configure a Database Collector in the Controller
  • Confirm the Database Collector is collecting data

Login to the Controller

Log into the AppDynamics SE Lab Controller using your Cisco credentials.

Configure a Database Collector in the Controller

Use the following steps to change the settings for the query literals and navigate to the collectors configuration.

  1. Click the Databases tab on the left menu.
  2. Click the Configuration tab on the bottom left.
  3. Uncheck the checkbox for Remove literals from the queries.
  4. Click the Collectors option.

Configuration Configuration

Use the following steps to configure a new Database collector.

  1. Click the Add button.
  2. Select MySQL for the database type.
  3. Select the database agent you named earlier (DBMon-Lab-Agent-YOURINITIALS) and enter the following parameters.
  4. Collector Name: Supercar-MySQL-YOURINITIALS
  5. Hostname or IP Address: localhost
  6. Listener Port: 3306

Configuration1 Configuration1

  1. Username: root
  2. Password: Welcome1!

Configuration2 Configuration2

  1. Select the Monitor Operating System checkbox under the Advanced Options
  2. Select Linux as the operating system and enter the following parameters.
  3. SSH Port: 22
  4. Username: splunk
  5. Password: Password Provided by Your Instructor to SSH into the EC2 Instance
  6. Click OK to save the collector.

Advance Options Advance Options

Confirm that the Database Collector is collecting data

Wait for ten minutes to allow the collector to run and submit data, then follow these steps to verify that the database collector is connecting to the database and collecting database metrics.

  1. Click the Databases tab on the left menu
  2. Search for the Collector created in the previous section: Supercar-MySQL-YOURINITIALS
  3. Ensure the status is green and there are no errors shown.
  4. Click the Supercar-MySQL-YOURINITIALS link to drill into the database.

Note: It may take up to 18 minutes from the time you configure your collector to see the Top 10 SQL Wait States and any queries on the Queries tab.

Application Application

Application Application

You can read more about configuring Database Collectors here

Last Modified Oct 13, 2025

Monitor and Troubleshoot - Part 1

2 minutes  

Monitor and Troubleshoot - Part 1

In this exercise you will perform the following tasks:

  • Review the Overall Database and Server Performance Dashboard
  • Review the Main Database Dashboard
  • Review the Reports in the Database Activity Window

Review the Overall Database and Server Performance Dashboard

The Overall Database and Server Performance Dashboard allows you to quickly see the health of each database at a glance.

  1. Filters: Enables you to explore the options to filter by health, load, time in database or type.
  2. Actions: Exports the data on this window in a .csv formatted file.
  3. View Options: Toggles the spark charts on and off.
  4. View: Switches between the card and list view.
  5. Sort: Displays the sorting options.
  6. Supercar-MySQL: Drills into the main database dashboard.

Overall Database and Server Performance Dashboard Overall Database and Server Performance Dashboard

Review the Main Database Dashboard

The main database dashboard shows you key insights for the database including:

  • The health of the server that is running the database.
  • The total number of calls during the specified time period.
  • The number of calls for any point in time.
  • The total time spent executing SQL statements during the specified time period.
  • The top ten query wait states.
  • The average number of connections.
  • The database type or vendor.
Explore the features of the dashboard:

  1. Click the health status circle to see details of the server health:
    • Green: server is healthy.
    • Yellow: server with warning-level violations.
    • Red: server with critical-level violations.
  2. The database type or vendor will always be seen here.
  3. Observe the total time spent executing SQL statements during the specified time period.
  4. Observe the total number of executions during the specified time period.
  5. Hover over the time series on the chart to see the detail of the recorded metrics.

Click the orange circle at the top of the data point to view the time comparison report, which shows query run times and wait states 15 minutes before and 15 minutes after the selected time.

  1. Left-click and hold down your mouse button while dragging from left to right to highlight a spike seen in the chart.
  2. Click the configuration button to exclude unwanted wait states from the top ten.
  3. Hover over the labels for each wait state to see a more detailed description.
  4. Observe the average number of active connections actively running a query during the selected time period.

Main Database Dashboard Main Database Dashboard

To view the OS metrics of the DB server for the time period that you have selected:

  1. Scroll to the bottom of the dashboard using the scroll bar on the right
  2. CPU
  3. Memory
  4. Disk IO
  5. Network IO

OS Metrics OS Metrics

Review the Reports in the Database Activity Window

There are up to nine different reports available in Database Visibility on the Database Activity Window. The reports available depend on the database platform being monitored. In this exercise we will review three of the most common reports.

  • Wait State Report
  • Top Activity Report
  • Query Wait State Report

Wait State Report

This report displays time-series data on Wait Events (states) within the database. Each distinct wait is color-coded, and the Y-axis displays time in seconds. This report also displays data in a table and highlights the time spent in each wait state for each SQL statement.

The wait states consuming the most time may point to performance bottlenecks. For example, db file sequential reads may be caused by segment header contention on indexes or by disk contention.

Wait State Wait State

Top Activity Report

This report displays the SQL statements with the most time spent in the database, in a time-series view. This report also displays data in a table and highlights the time spent in the database for each of the top 10 SQL statements.

Use this report to see which SQL statements are using the most database time. This helps to determine the impact of specific SQL statements on overall system performance allowing you to focus your tuning efforts on the statements that have the most impact on database performance.

Top Activity Report Top Activity Report

Query Wait State Report

This report displays the wait times for the top (10, 50, 100, 200) queries. This report also displays data in a table and highlights the time each query is spending in different wait states. Use the columns to sort the queries by the different wait states.

You can read more about the Reports in the Database Activity Window here

Query Wait State Report Query Wait State Report

Last Modified Oct 13, 2025

Monitor and Troubleshoot - Part 2

Review the Queries Dashboard

The Queries window displays the SQL statements and stored procedures that consume the most time in the database. You can compare the query weights to other metrics such as SQL wait times to determine SQL that requires tuning.

  1. Queries tab: Displays the queries window.
  2. Top Queries dropdown: Displays the top 5, 10, 100 or 200 queries.
  3. Filter by Wait States: Enables you to choose wait states to filter the Query list.
  4. Group Similar: Groups together queries with the same syntax.
  5. Click on the query that shows the largest Weight (%) used.
  6. View Query Details: Drills into the query details.

Queries Dashboard Queries Dashboard

Review the details of an expensive query

Once you have identified the statements on the Database Queries window that are spending the most amount of time in the database, you can dig down deeper for details that can help you tune those SQL statements. The database instance Query Details window displays details about the query selected on the Database Queries window.

  1. Resource consumption over time: Displays the amount of time the query spent in the database using resources, the number of executions, and the amount of CPU time consumed.
  2. Wait states: The activities that contribute to the time it takes the database to service the selected SQL statement. The wait states consuming the most time may point to performance bottlenecks.
  3. Components Executing Similar Queries: Displays the Nodes that execute queries similar to this query.
  4. Business Transactions Executing Similar Queries: Displays the Java business transactions that execute queries similar to this query.

Expensive Query Details Expensive Query Details

  1. Use the outer scroll bar on the right to scroll down.
  2. Clients: Displays the machines that executed the selected SQL statement and the percentage of the total time required to execute the statement performed by each machine.
  3. Sessions: Displays session usage for each database instance.
  4. Query Active in Database: Displays the schemas that have been accessed by this SQL.
  5. Users: Displays the users that executed this query.
  6. Query Hashcode: Displays the unique ID for the query that allows the database server to more quickly locate this SQL statement in the cache.
  7. Query: Displays the entire syntax of the selected SQL statement. You can click the pencil icon in the top right corner of the Query card to edit the query name so that it is easy to identify.
  8. Execution Plan: Displays the query execution plan window.

Expensive Query Details2 Expensive Query Details2

Troubleshoot an expensive query

The Database Query Execution Plan window can help you to determine the most efficient execution plan for your queries. Once you’ve discovered a potentially problematic query, you can run the EXPLAIN PLAN statement to check the execution plan that the database created.

A query’s execution plan reveals whether the query is optimizing its use of indexes and executing efficiently. This information is useful for troubleshooting queries that are executing slowly.

  1. Click on the Execution Plan tab
  2. Notice that the join type in the Type column is ALL for each table.
  3. Hover over one of the join types to see the description for the join type.
  4. Examine the entries in the Extras column.
  5. Hover over each of the entries to see the description for the entry.

Troubleshoot Expensive Query Troubleshoot Expensive Query

Let’s investigate the indexes on the table using the Object Browser next.

  1. Click on the Object Browser option to view details of the schema for the tables
  2. Click the Database option.
  3. Click on the supercars schema to expand the list of tables.
  4. Click on the CARS table to see the details of the table.
  5. You can see that the CAR_ID column is defined as the primary key

Troubleshoot Expensive Query Troubleshoot Expensive Query

  1. Use the outer scroll bar to scroll down the page.
  2. Notice the primary key index defined in the table.

Troubleshoot Car Index Troubleshoot Car Index

  1. Click on the MANUFACTURER table to view its details.
  2. Notice the MANUFACTURER_ID column is not defined as a primary key.
  3. Scroll down the page to see there are no indexes defined for the table.

Troubleshoot Expensive Query Troubleshoot Expensive Query

The MANUFACTURER_ID column needs an index created for it to improve the performance of any queries on the table. If you analyzed a different query, the underlying issue may be different, but the most common issues shown in this lab arise because the queries either execute a join with the MANUFACTURER table or query that table directly.
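In a real environment, the usual remediation would be to add an index on that column. The statement below is a sketch only (not a lab step), using the schema, table, and column names seen in the Object Browser; the index name is an assumption.

# Illustrative remediation - not part of this lab. Verify the schema before running.
mysql -u root -p -e "CREATE INDEX idx_manufacturer_id ON supercars.MANUFACTURER (MANUFACTURER_ID);"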

Last Modified Oct 13, 2025

SmartAgent Deployment

2 minutes  

Introduction

AppDynamics Smart Agent is a lightweight, intelligent agent that provides comprehensive monitoring capabilities for your infrastructure. This section covers three different deployment approaches, allowing you to choose the method that best fits your organization’s needs and existing tooling.

AppDynamics AppDynamics

What is Smart Agent?

Smart Agent is AppDynamics’ next-generation monitoring agent that provides:

  • Unified Monitoring: Single agent for infrastructure, applications, and services
  • Lightweight Design: Minimal resource footprint
  • Auto-Discovery: Automatically discovers and monitors applications
  • Native Instrumentation: Deep visibility into application performance
  • Flexible Deployment: Multiple installation and management options

Deployment Approaches

This section covers three distinct approaches to deploying Smart Agent at scale:

1. Remote Installation (smartagentctl)

The most direct approach using the smartagentctl CLI tool to deploy via SSH.

Best for:

  • Quick deployments to a moderate number of hosts
  • Environments without existing CI/CD infrastructure
  • Testing and proof-of-concept scenarios
  • Direct control over deployment process

Key Features:

  • Direct SSH-based deployment
  • Simple YAML configuration
  • No additional tooling required
  • Concurrent execution support

2. Jenkins Automation

Enterprise-grade deployment using Jenkins pipelines for complete lifecycle management.

Best for:

  • Organizations already using Jenkins
  • Complex deployment workflows
  • Environments requiring approval gates
  • Integration with existing CI/CD pipelines

Key Features:

  • Parameterized pipelines
  • Batch processing for thousands of hosts
  • Complete lifecycle management
  • Centralized logging and reporting

3. GitHub Actions Automation

Modern CI/CD approach using GitHub Actions workflows with self-hosted runners.

Best for:

  • Teams using GitHub for version control
  • Cloud-native environments
  • GitOps workflows
  • Distributed teams preferring web-based management

Key Features:

  • 11 specialized workflows
  • Self-hosted runner in your VPC
  • GitHub secrets integration
  • Automatic batching for scalability

Choosing the Right Approach

Factor               Remote Installation      Jenkins               GitHub Actions
Setup Complexity     Low                      Medium                Medium
Scalability          Good (100s of hosts)     Excellent (1000s)     Excellent (1000s)
Prerequisites        SSH access only          Jenkins server        GitHub account
Learning Curve       Minimal                  Moderate              Moderate
Automation Level     Manual execution         Full automation       Full automation
Best Use Case        Quick deployments        Enterprise CI/CD      Modern DevOps

Common Features Across All Approaches

Regardless of which deployment method you choose, all approaches provide:

  • ✅ SSH-based deployment to remote hosts
  • ✅ Concurrent execution for faster deployment
  • ✅ Complete lifecycle management (install, start, stop, uninstall)
  • ✅ Configuration management for controller settings
  • ✅ Error handling and logging
  • ✅ Scalability to hundreds or thousands of hosts

Workshop Structure

Each deployment approach has its own dedicated section:

  1. Remote Installation - Direct CLI-based deployment
  2. Jenkins Automation - Pipeline-based enterprise deployment
  3. GitHub Actions - Modern workflow-based deployment

You can follow one or all approaches depending on your needs.

Tip

If you’re new to Smart Agent deployment, we recommend starting with the Remote Installation approach to understand the basics before moving to more automated solutions.

Prerequisites

Before proceeding with any deployment approach, ensure you have:

  • AppDynamics account with controller access
  • Account name and access key
  • Target hosts with SSH access
  • Network connectivity from hosts to AppDynamics Controller
  • Appropriate permissions on target hosts

Next Steps

Choose your preferred deployment approach and proceed to that section:

  • Start Simple: Begin with Remote Installation to learn the fundamentals
  • Scale with Jenkins: Move to Jenkins for enterprise-grade automation
  • Modernize with GitHub: Adopt GitHub Actions for cloud-native workflows

Each section provides complete, hands-on guidance for deploying Smart Agent at scale!

Last Modified Nov 17, 2025

Subsections of SmartAgent Deployment

Remote Installation

2 minutes  

Introduction

This workshop demonstrates how to use the smartagentctl command-line tool to install and manage AppDynamics Smart Agent on multiple remote hosts simultaneously. This approach is ideal for quickly deploying Smart Agent to a fleet of servers using SSH-based remote execution, without the need for additional automation tools like Jenkins or GitHub Actions.

AppDynamics AppDynamics

What You’ll Learn

In this workshop, you’ll learn how to:

  • Configure remote hosts using the remote.yaml file
  • Configure Smart Agent settings using config.ini
  • Deploy Smart Agent to multiple hosts simultaneously via SSH
  • Start and stop agents remotely across your infrastructure
  • Check agent status on all remote hosts
  • Troubleshoot common installation and connectivity issues

Key Features

  • πŸš€ Direct SSH Deployment - No additional automation platform required
  • πŸ”„ Complete Lifecycle Management - Install, start, stop, and uninstall agents
  • πŸ—οΈ Configuration as Code - YAML and INI-based configuration files
  • πŸ” Secure - SSH key-based authentication
  • πŸ“ˆ Concurrent Execution - Configurable concurrency for parallel deployment
  • πŸŽ›οΈ Simple CLI - Easy-to-use smartagentctl command-line interface

Architecture Overview

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                  Remote Installation Architecture                β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                                                                  β”‚
β”‚  Control Node (smartagentctl) ──▢ SSH Connection               β”‚
β”‚                                          β”‚                       β”‚
β”‚                                          β”œβ”€β”€β–Ά Host 1 (SSH)      β”‚
β”‚                                          β”œβ”€β”€β–Ά Host 2 (SSH)      β”‚
β”‚                                          β”œβ”€β”€β–Ά Host 3 (SSH)      β”‚
β”‚                                          └──▢ Host N (SSH)      β”‚
β”‚                                                                  β”‚
β”‚  All hosts send metrics ──▢ AppDynamics Controller             β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Workshop Components

This workshop includes:

  1. Prerequisites - Required access, tools, and permissions
  2. Configuration - Setting up config.ini and remote.yaml
  3. Installation & Startup - Deploying and starting Smart Agent on remote hosts
  4. Troubleshooting - Common issues and solutions

Prerequisites

  • Control node with smartagentctl installed
  • SSH access to all remote hosts
  • SSH private key configured for authentication
  • Sudo privileges on the control node
  • Remote hosts with SSH enabled
  • AppDynamics account credentials

Available Commands

The smartagentctl tool supports the following remote operations:

  • start --remote - Install and start Smart Agent on remote hosts
  • stop --remote - Stop Smart Agent on remote hosts
  • status --remote - Check Smart Agent status on remote hosts
  • install --remote - Install Smart Agent without starting
  • uninstall --remote - Remove Smart Agent from remote hosts
  • --service flag - Install as systemd service

All commands support the --verbose flag for detailed logging.

Tip

The easiest way to navigate through this workshop is by using:

  • the left/right arrows (< | >) on the top right of this page
  • the left (◀️) and right (▢️) cursor keys on your keyboard
Last Modified Nov 17, 2025

Subsections of Remote Installation

1. Prerequisites

Before you begin installing Smart Agent on remote hosts, ensure you have the following prerequisites in place:

Required Access

  • SSH Access: You must have SSH access to all remote hosts where you plan to install Smart Agent
  • SSH Private Key: A configured SSH private key for authentication
  • Sudo Privileges: The control node user needs sudo privileges to run smartagentctl
  • Remote SSH: Remote hosts must have SSH enabled and accessible

Directory Structure

The Smart Agent installation directory should be set up on your control node:

cd /home/ubuntu/appdsm

The directory contains:

  • smartagentctl - Command-line utility to manage SmartAgent
  • smartagent - The SmartAgent binary
  • config.ini - Main configuration file
  • remote.yaml - Remote hosts configuration file
  • conf/ - Additional configuration files
  • lib/ - Required libraries

AppDynamics Account Information

You’ll need the following information from your AppDynamics account:

  • Controller URL: Your AppDynamics SaaS controller endpoint (e.g., fso-tme.saas.appdynamics.com)
  • Account Name: Your AppDynamics account name
  • Account Access Key: Your AppDynamics account access key
  • Controller Port: Usually 443 for HTTPS connections

Target Host Requirements

Your remote hosts should meet these requirements:

  • Operating System: Ubuntu/Linux-based systems
  • SSH Server: SSH daemon running and accepting connections
  • User Account: User account with appropriate permissions (typically root)
  • Network Access: Ability to reach the AppDynamics Controller
  • Disk Space: Sufficient space for Smart Agent installation (typically under 100MB)

Security Considerations

Before proceeding, review these security best practices:

  1. SSH Keys: Use strong SSH keys (RSA 4096-bit or ED25519)
  2. Access Keys: Store AccountAccessKey securely
  3. Host Key Validation: For production, plan to validate host keys
  4. SSL/TLS: Always use SSL/TLS for controller communication
  5. Log Files: Restrict access to log files containing sensitive information
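
For example, a strong ED25519 key pair can be generated on the control node as follows. This is a minimal sketch; the key path and comment are assumptions, and the rest of this workshop references /home/ubuntu/.ssh/id_rsa, so use whichever key you actually configure in remote.yaml:

# Generate an ED25519 key pair for deployment (path and comment are illustrative)
ssh-keygen -t ed25519 -f /home/ubuntu/.ssh/id_ed25519 -C "smartagent-deploy"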

Verifying Prerequisites

Check SSH Connectivity

Test SSH connectivity to your remote hosts:

ssh -i /home/ubuntu/.ssh/id_rsa ubuntu@<remote-host-ip>

Verify SSH Key Permissions

Ensure proper permissions on your SSH key:

chmod 600 /home/ubuntu/.ssh/id_rsa

Check Network Connectivity

Verify that remote hosts can reach each other and the internet:

ping <remote-host-ip>

Verify Sudo Access

Ensure you have sudo privileges:

sudo -v

If all prerequisites are met, you’re ready to proceed with configuration!

Last Modified Nov 17, 2025

2. Configuration

Smart Agent remote installation requires two key configuration files: config.ini for Smart Agent settings and remote.yaml for defining remote hosts and connection parameters.

Configuration Files Overview

Both configuration files should be located in the Smart Agent installation directory:

cd /home/ubuntu/appdsm

The two files you’ll configure:

  • config.ini - Smart Agent configuration deployed to all remote hosts
  • remote.yaml - Remote hosts and SSH connection settings

config.ini - Smart Agent Configuration

The config.ini file contains the main Smart Agent configuration that will be deployed to all remote hosts.

Location: /home/ubuntu/appdsm/config.ini

Controller Configuration

Configure your AppDynamics Controller connection:

ControllerURL    = fso-tme.saas.appdynamics.com
ControllerPort   = 443
FMServicePort    = 443
AccountAccessKey = your-access-key-here
AccountName      = your-account-name
EnableSSL        = true

Key Parameters:

  • ControllerURL: Your AppDynamics SaaS controller endpoint
  • ControllerPort: HTTPS port for the controller (default: 443)
  • FMServicePort: Fleet Management (FM) service port
  • AccountAccessKey: Your AppDynamics account access key
  • AccountName: Your AppDynamics account name
  • EnableSSL: Enable SSL/TLS encryption (should be true for production)

Common Configuration

Define the agent’s identity and polling behavior:

[CommonConfig]
AgentName            = smartagent
PollingIntervalInSec = 300
Tags                 = environment:production,region:us-east
ServiceName          = my-application

Parameters:

  • AgentName: Name identifier for the agent
  • PollingIntervalInSec: How often the agent polls for data (in seconds)
  • Tags: Custom tags for categorizing agents (comma-separated)
  • ServiceName: Optional service name for logical grouping

Telemetry Settings

Configure logging and profiling:

[Telemetry]
LogLevel  = DEBUG
LogFile   = /opt/appdynamics/appdsmartagent/log.log
Profiling = false

Parameters:

  • LogLevel: Logging verbosity (DEBUG, INFO, WARN, ERROR)
  • LogFile: Path where logs will be written on remote hosts
  • Profiling: Enable performance profiling (true/false)

TLS Client Settings

Configure proxy and TLS settings:

[TLSClientSetting]
Insecure        = false
AgentHTTPProxy  = 
AgentHTTPSProxy = 
AgentNoProxy    = 

Parameters:

  • Insecure: Skip TLS certificate verification (not recommended for production)
  • AgentHTTPProxy: HTTP proxy server URL (if required)
  • AgentHTTPSProxy: HTTPS proxy server URL (if required)
  • AgentNoProxy: Comma-separated list of hosts to bypass proxy

Auto Discovery

Configure automatic application discovery:

[AutoDiscovery]
RunAutoDiscovery          = false
ExcludeLabels             = process.cpu.usage,process.memory.usage
ExcludeProcesses          = 
ExcludeUsers              = 
AutoDiscoveryTimeInterval = 4h
AutoInstall               = false

Parameters:

  • RunAutoDiscovery: Automatically discover applications (true/false)
  • ExcludeLabels: Metrics to exclude from discovery
  • ExcludeProcesses: Process names to exclude from monitoring
  • ExcludeUsers: User accounts to exclude from monitoring
  • AutoDiscoveryTimeInterval: How often to run discovery (e.g., 4h, 30m)
  • AutoInstall: Automatically install discovered applications

Task Configuration

Configure native instrumentation:

[TaskConfig]
NativeEnable        = true
UserPortalUserName  = 
UserPortalPassword  = 
UserPortalAuth      = none
AutoUpdateLdPreload = true

Parameters:

  • NativeEnable: Enable native instrumentation
  • AutoUpdateLdPreload: Automatically update LD_PRELOAD settings

remote.yaml - Remote Hosts Configuration

The remote.yaml file defines the remote hosts where Smart Agent will be installed and the connection parameters.

Location: /home/ubuntu/appdsm/remote.yaml

Example Configuration

max_concurrency: 4
remote_dir: "/opt/appdynamics/appdsmartagent"
protocol:
  type: ssh
  auth:
    username: ubuntu
    private_key_path: /home/ubuntu/.ssh/id_rsa
    privileged: true
    ignore_host_key_validation: true
    known_hosts_path: /home/ubuntu/.ssh/known_hosts
hosts:
  - host: "172.31.1.243"
    port: 22
    user: root
    group: root
  - host: "172.31.1.48"
    port: 22
    user: root
    group: root
  - host: "172.31.1.142"
    port: 22
    user: root
    group: root
  - host: "172.31.1.5"
    port: 22
    user: root
    group: root

Global Settings

max_concurrency: Maximum number of hosts to process simultaneously

  • Default: 4
  • Increase for faster deployment to many hosts
  • Decrease if experiencing network or resource constraints

remote_dir: Installation directory on remote hosts

  • Default: /opt/appdynamics/appdsmartagent
  • Must be an absolute path
  • User must have write permissions

Protocol Configuration

type: Connection protocol

  • Value: ssh

auth.username: SSH username for authentication

  • Example: ubuntu, ec2-user, centos
  • Must match the user configured on remote hosts

auth.private_key_path: Path to SSH private key

  • Must be an absolute path
  • Key must be accessible and have proper permissions (600)

auth.privileged: Run agent with elevated privileges

  • true: Install as root/systemd service
  • false: Install as a user process
  • Recommended: true for production deployments

auth.ignore_host_key_validation: Skip SSH host key verification

  • true: Skip verification (useful for testing)
  • false: Validate host keys (recommended for production)

auth.known_hosts_path: Path to SSH known_hosts file

  • Default: /home/ubuntu/.ssh/known_hosts
  • Used when host key validation is enabled

Host Definitions

Each host entry requires:

host: IP address or hostname of the remote machine

  • Can be IPv4, IPv6, or hostname
  • Must be reachable from the control node

port: SSH port

  • Default: 22
  • Change if SSH is running on a non-standard port

user: User account that will own the Smart Agent process

  • Typically root for system-wide installation
  • Can be a regular user for user-specific installation

group: Group that will own the Smart Agent process

  • Typically matches the user (e.g., root)

Adding More Hosts

To add additional remote hosts, append to the hosts list:

hosts:
  - host: "10.0.1.10"
    port: 22
    user: root
    group: root
  - host: "10.0.1.11"
    port: 22
    user: root
    group: root
Tip

You can add as many hosts as needed. The max_concurrency setting controls how many are processed in parallel.

Verifying Configuration

Before proceeding with installation, verify your configuration files:

Review remote.yaml

cat /home/ubuntu/appdsm/remote.yaml

Check that:

  • All host IP addresses are correct
  • SSH key path is valid
  • Remote directory path is appropriate

Review config.ini

cat /home/ubuntu/appdsm/config.ini

Verify that:

  • Controller URL and account information are correct
  • Log file paths are valid
  • Settings match your environment requirements

Validate YAML Syntax

Ensure your YAML file is properly formatted:

python3 -c "import yaml; yaml.safe_load(open('/home/ubuntu/appdsm/remote.yaml'))"

If the command completes without errors, your YAML syntax is valid.
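
If yamllint is installed on the control node, it gives more detailed feedback than the Python one-liner (an optional alternative):

yamllint /home/ubuntu/appdsm/remote.yaml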

Once your configuration files are ready, you can proceed with the installation!

Last Modified Nov 17, 2025

3. Installation & Startup

Now that your configuration files are ready, you can install and start Smart Agent on your remote hosts using the smartagentctl command-line tool.

Installation Process Overview

The installation process involves:

  1. Connection: Establishes SSH connections to all defined hosts
  2. Transfer: Copies Smart Agent binaries and configuration to remote hosts
  3. Installation: Installs Smart Agent in /opt/appdynamics/appdsmartagent/ on each host
  4. Startup: Starts the Smart Agent process on each remote host
  5. Logging: Outputs detailed progress to console and log file

Step 1: Navigate to Installation Directory

Change to the Smart Agent installation directory:

cd /home/ubuntu/appdsm

Step 2: Verify Configuration Files

Before starting the installation, verify your configuration files are properly set up:

Review remote hosts configuration

cat remote.yaml

Ensure all host IP addresses, ports, and SSH settings are correct.

Review agent configuration

cat config.ini

Verify that controller URL, account credentials, and other settings are accurate.

Step 3: Start Smart Agent on Remote Hosts

Run the following command to start Smart Agent on all remote hosts defined in remote.yaml:

sudo ./smartagentctl start --remote --verbose

Command Breakdown

  • sudo: Required for privileged operations
  • ./smartagentctl: The control utility
  • start: Command to start the Smart Agent
  • --remote: Deploy to remote hosts (reads from remote.yaml)
  • --verbose: Enable detailed debug logging
Note

The --verbose flag is highly recommended as it provides detailed output about the installation progress and helps identify any issues.

Step 4: Monitor the Installation

The --verbose flag provides detailed output including:

  • SSH connection status
  • File transfer progress
  • Installation steps on each host
  • Agent startup status
  • Any errors or warnings

Expected Output

You should see output similar to:

Starting Smart Agent deployment to remote hosts...
Connecting to 172.31.1.243:22...
Connection successful: 172.31.1.243
Transferring Smart Agent binaries...
Installing Smart Agent on 172.31.1.243...
Starting Smart Agent on 172.31.1.243...
Smart Agent started successfully on 172.31.1.243

Connecting to 172.31.1.48:22...
...

Step 5: Verify Installation

After the installation completes, verify that Smart Agent is running on the remote hosts.

Check Status Remotely

Use the status command to check all remote hosts:

sudo ./smartagentctl status --remote --verbose

This will query each host and report whether Smart Agent is running.

Check Logs on Control Node

View logs on the control node:

tail -f /home/ubuntu/appdsm/log.log

SSH to Remote Host and Check

You can also SSH to a remote host and check directly:

ssh ubuntu@172.31.1.243
tail -f /opt/appdynamics/appdsmartagent/log.log
ps aux | grep smartagent

Additional Commands

Install Without Starting

To install Smart Agent without starting it:

sudo ./smartagentctl install --remote --verbose

This copies the binaries and configuration but doesn’t start the agent process.

Stop Smart Agent

To stop Smart Agent on all remote hosts:

sudo ./smartagentctl stop --remote --verbose

Install as System Service

To install Smart Agent as a systemd service (recommended for production):

sudo ./smartagentctl start --remote --verbose --service

When installed as a service:

  • Smart Agent will start automatically on system boot
  • Can be managed using systemctl commands
  • Better integration with system logging
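
Once installed as a service, the agent can be managed with standard systemctl commands. The exact unit name may vary between Smart Agent versions, so look it up first; this is a sketch where <unit-name> is a placeholder:

# Find the Smart Agent service unit on a remote host
systemctl list-units --type=service | grep -i smart
# Manage it with standard systemd commands, substituting the actual unit name
sudo systemctl status <unit-name>
sudo systemctl restart <unit-name>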

Uninstall Smart Agent

To completely remove Smart Agent from remote hosts:

sudo ./smartagentctl uninstall --remote --verbose
Warning

The uninstall command will remove all Smart Agent files from the remote hosts. Make sure you have backups of any important configuration or log files.

Verifying in AppDynamics Controller

After starting Smart Agent on remote hosts:

  1. Log into AppDynamics Controller: Navigate to your controller URL
  2. Go to Servers: Check the Servers section in the Controller UI
  3. Verify Agents: You should see your Smart Agents appear in the list
  4. Check Metrics: Verify that metrics are being collected from each host

Expected Timeline

  • Agent Registration: Agents typically appear in the Controller within 1-2 minutes
  • Initial Metrics: First metrics usually arrive within 5 minutes
  • Full Data: Complete data collection starts after the first polling interval (configured in config.ini)

Log File Locations

Logs are written to both the control node and remote hosts:

| Location | Path | Description |
|----------|------|-------------|
| Control Node | /home/ubuntu/appdsm/log.log | Installation and deployment logs |
| Remote Hosts | /opt/appdynamics/appdsmartagent/log.log | Agent runtime logs |

Understanding Concurrency

The max_concurrency setting in remote.yaml controls parallel execution:

  • Lower values (1-2): Sequential processing, slower but safer
  • Default (4): Good balance for most environments
  • Higher values (8+): Faster deployment to many hosts, requires more resources

Example: With 12 hosts and max_concurrency: 4:

  • First batch: Hosts 1-4 processed simultaneously
  • Second batch: Hosts 5-8 processed simultaneously
  • Third batch: Hosts 9-12 processed simultaneously

What Happens on Each Remote Host

When you run the start command, the following occurs on each remote host:

  1. Directory Creation: Creates /opt/appdynamics/appdsmartagent/
  2. File Transfer: Copies smartagent binary, config.ini, and libraries
  3. Permission Setting: Sets appropriate file permissions
  4. Process Start: Launches the Smart Agent process
  5. Verification: Confirms the process is running

Next Steps

After successfully installing and starting Smart Agent:

  1. βœ… Verify agents appear in the AppDynamics Controller UI
  2. βœ… Check that metrics are being collected
  3. βœ… Configure application monitoring as needed
  4. βœ… Set up alerts and dashboards
  5. βœ… Monitor agent health and performance

If you encounter any issues, proceed to the Troubleshooting section.

Last Modified Nov 17, 2025

4. Troubleshooting

This section covers common issues you may encounter when deploying Smart Agent to remote hosts and how to resolve them.

SSH Connection Issues

Problem: Cannot Connect to Remote Hosts

Symptoms:

  • Connection timeout errors
  • “Permission denied” messages
  • Host key verification failures

Solutions:

1. Verify SSH Key Permissions

SSH keys must have the correct permissions:

chmod 600 /home/ubuntu/.ssh/id_rsa
chmod 644 /home/ubuntu/.ssh/id_rsa.pub
chmod 700 /home/ubuntu/.ssh

2. Test SSH Connectivity Manually

Test connection to each remote host:

ssh -i /home/ubuntu/.ssh/id_rsa ubuntu@172.31.1.243

If this fails, the issue is with SSH configuration, not smartagentctl.

3. Check Remote Host Reachability

Verify network connectivity:

ping 172.31.1.243
telnet 172.31.1.243 22

4. Verify SSH User

Ensure the username in remote.yaml matches the SSH user:

protocol:
  auth:
    username: ubuntu  # Must match your SSH user

5. Check Known Hosts

If host key validation is enabled, ensure hosts are in known_hosts:

ssh-keyscan 172.31.1.243 >> /home/ubuntu/.ssh/known_hosts

Or temporarily disable host key validation in remote.yaml:

protocol:
  auth:
    ignore_host_key_validation: true
Warning

Disabling host key validation should only be used for testing. Always enable it in production environments.

Permission Issues

Problem: Permission Denied During Installation

Symptoms:

  • “Permission denied” when creating directories
  • Cannot write to /opt/appdynamics/
  • Insufficient privileges errors

Solutions:

1. Verify Sudo Access on Control Node

sudo -v

2. Check Privileged Setting

Ensure privileged: true is set in remote.yaml:

protocol:
  auth:
    privileged: true

3. Verify Remote User Permissions

The remote user must have sudo privileges or be root. Test on remote host:

ssh ubuntu@172.31.1.243
sudo mkdir -p /opt/appdynamics/test
sudo rm -rf /opt/appdynamics/test

4. Check Remote Directory Permissions

If using a custom installation directory, ensure it’s writable:

ssh ubuntu@172.31.1.243
ls -la /opt/appdynamics/

Agent Not Starting

Problem: Agent Installation Succeeds But Agent Doesn’t Start

Symptoms:

  • Installation completes without errors
  • Agent process not running on remote hosts
  • No errors in control node logs

Solutions:

1. Check Remote Host Logs

SSH to the remote host and check the agent logs:

ssh ubuntu@172.31.1.243
tail -100 /opt/appdynamics/appdsmartagent/log.log

Look for error messages indicating:

  • Configuration errors
  • Network connectivity issues
  • Missing dependencies

2. Verify Agent Process

Check if the agent process is running:

ssh ubuntu@172.31.1.243
ps aux | grep smartagent

If not running, try starting manually:

ssh ubuntu@172.31.1.243
cd /opt/appdynamics/appdsmartagent
sudo ./smartagent

3. Check Configuration Files

Verify that config.ini was properly transferred:

ssh ubuntu@172.31.1.243
cat /opt/appdynamics/appdsmartagent/config.ini

Ensure:

  • Controller URL is correct
  • Account credentials are valid
  • All required fields are populated

4. Test Controller Connectivity

From the remote host, verify connectivity to the AppDynamics Controller:

ssh ubuntu@172.31.1.243
curl -I https://fso-tme.saas.appdynamics.com

5. Check System Resources

Ensure the remote host has adequate resources:

ssh ubuntu@172.31.1.243
df -h  # Check disk space
free -m  # Check memory

Configuration Errors

Problem: Invalid Configuration

Symptoms:

  • YAML parsing errors
  • Invalid configuration parameter errors
  • Agent fails to start with config errors

Solutions:

1. Validate YAML Syntax

Check for YAML syntax errors in remote.yaml:

python3 -c "import yaml; yaml.safe_load(open('/home/ubuntu/appdsm/remote.yaml'))"

Common YAML issues:

  • Incorrect indentation (use spaces, not tabs)
  • Missing colons
  • Unquoted special characters

2. Verify INI File Format

Check config.ini for syntax errors:

cat /home/ubuntu/appdsm/config.ini

Common INI issues:

  • Missing section headers (e.g., [CommonConfig])
  • Invalid parameter names
  • Missing equals signs

3. Validate Controller Credentials

Ensure your AppDynamics credentials are correct:

  • ControllerURL: Should not include https:// or /controller
  • AccountAccessKey: Should be the full access key
  • AccountName: Should match your account name exactly

Example correct format:

ControllerURL    = fso-tme.saas.appdynamics.com
AccountAccessKey = abc123xyz789
AccountName      = fso-tme

Agent Not Appearing in Controller

Problem: Agent Starts But Doesn’t Appear in Controller UI

Symptoms:

  • Agent process is running on remote hosts
  • No errors in agent logs
  • Agent doesn’t appear in Controller UI

Solutions:

1. Wait for Initial Registration

Agents may take 1-5 minutes to appear in the Controller after starting.

2. Verify Controller Configuration

Check that the agent can reach the controller:

ssh ubuntu@172.31.1.243
ping fso-tme.saas.appdynamics.com
curl -I https://fso-tme.saas.appdynamics.com

3. Check Agent Logs for Connection Errors

Look for controller connection errors:

ssh ubuntu@172.31.1.243
grep -i "error\|fail\|controller" /opt/appdynamics/appdsmartagent/log.log

4. Verify SSL/TLS Settings

Ensure SSL is enabled in config.ini:

EnableSSL = true

5. Check Firewall Rules

Verify that outbound HTTPS (port 443) is allowed from remote hosts to the Controller.

6. Verify Account Credentials

Double-check that your AccountAccessKey and AccountName are correct in the Controller UI:

  • Log into AppDynamics Controller
  • Go to Settings β†’ License
  • Verify your account name and access key

Performance and Scaling Issues

Problem: Slow Deployment or Timeouts

Symptoms:

  • Deployment takes too long
  • Timeout errors when deploying to many hosts
  • System resource exhaustion

Solutions:

1. Adjust Concurrency

Reduce max_concurrency in remote.yaml if experiencing issues:

max_concurrency: 2  # Lower value for slower, more stable deployment

Or increase for faster deployment if resources allow:

max_concurrency: 8  # Higher value for faster deployment

2. Deploy in Batches

For very large deployments, split hosts into multiple groups:

remote-batch1.yaml:

hosts:
  - host: "172.31.1.1"
  - host: "172.31.1.2"
  - host: "172.31.1.3"

remote-batch2.yaml:

hosts:
  - host: "172.31.1.4"
  - host: "172.31.1.5"
  - host: "172.31.1.6"

Then deploy each batch separately.
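
Assuming smartagentctl reads remote.yaml from the working directory, as in the examples above, one simple way to run the batches is to swap the file in between runs (a sketch):

cp remote-batch1.yaml remote.yaml
sudo ./smartagentctl start --remote --verbose
cp remote-batch2.yaml remote.yaml
sudo ./smartagentctl start --remote --verbose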

3. Check Network Bandwidth

Monitor network usage during deployment:

iftop

If bandwidth is saturated, reduce concurrency or deploy during off-peak hours.

Log Analysis

Checking Control Node Logs

View detailed deployment logs:

tail -f /home/ubuntu/appdsm/log.log

Look for:

  • SSH connection failures
  • File transfer errors
  • Permission denied errors
  • Timeout messages

Checking Remote Host Logs

View agent runtime logs on remote hosts:

ssh ubuntu@172.31.1.243
tail -f /opt/appdynamics/appdsmartagent/log.log

Look for:

  • Controller connection errors
  • Configuration errors
  • Agent startup failures
  • Network issues

Increasing Log Verbosity

For more detailed logging, set LogLevel to DEBUG in config.ini:

[Telemetry]
LogLevel = DEBUG

Getting Help

If you’re still experiencing issues:

  1. Check Documentation: Review the smartagentctl help:

    ./smartagentctl --help
    ./smartagentctl start --help
  2. Review AppDynamics Documentation: Visit the AppDynamics documentation portal

  3. Check Log Files: Always review both control node and remote host logs

  4. Test Components Individually:

    • Test SSH connectivity separately
    • Test agent startup on a single host manually
    • Verify controller connectivity independently
  5. Collect Diagnostic Information:

    • Control node logs
    • Remote host logs
    • Configuration files (with sensitive data redacted)
    • Error messages and stack traces

Common Error Messages

| Error Message | Cause | Solution |
|---------------|-------|----------|
| “Permission denied (publickey)” | SSH key authentication failure | Verify SSH key path and permissions |
| “Connection refused” | SSH port not accessible | Check firewall rules and SSH daemon |
| “No such file or directory” | Missing configuration file | Verify config files exist and paths are correct |
| “YAML parse error” | Invalid YAML syntax | Validate YAML syntax with a parser |
| “Controller unreachable” | Network connectivity issue | Test controller connectivity from the remote host |
| “Invalid credentials” | Wrong account key or name | Verify AppDynamics credentials |

Best Practices for Troubleshooting

  1. Always use the --verbose flag: Provides detailed output for debugging
  2. Test with a single host first: Deploy to one host before scaling
  3. Check logs immediately: Review logs right after deployment
  4. Verify prerequisites: Ensure all requirements are met before deploying
  5. Test connectivity separately: Verify SSH and network connectivity independently
  6. Use manual commands: Test manual SSH and agent startup to isolate issues
Tip

When troubleshooting, start with the simplest tests first (e.g., ping, SSH connectivity) before moving to more complex issues.

Last Modified Nov 17, 2025

Jenkins Automation

2 minutes  

Introduction

This workshop demonstrates how to use Jenkins to automate the deployment and lifecycle management of AppDynamics Smart Agent across multiple EC2 instances. Whether you’re managing 10 hosts or 10,000, this guide shows you how to leverage Jenkins pipelines for scalable, secure, and repeatable Smart Agent operations.

Jenkins and AppDynamics Jenkins and AppDynamics AppDynamics AppDynamics

What You’ll Learn

In this workshop, you’ll learn how to:

  • Deploy Smart Agent to multiple hosts simultaneously using Jenkins
  • Configure Jenkins credentials for secure SSH and AppDynamics access
  • Create parameterized pipelines for flexible deployment scenarios
  • Implement batch processing to scale to thousands of hosts
  • Manage the complete agent lifecycle - install, configure, stop, and cleanup
  • Handle failures gracefully with automatic error tracking and reporting

Key Features

  • πŸš€ Parallel Deployment - Deploy to multiple hosts simultaneously
  • πŸ”„ Complete Lifecycle Management - Install, uninstall, stop, and clean agents
  • πŸ—οΈ Infrastructure as Code - All pipelines version-controlled
  • πŸ” Secure - SSH key-based authentication via Jenkins credentials
  • πŸ“ˆ Massively Scalable - Deploy to thousands of hosts with automatic batching
  • πŸŽ›οΈ Jenkins Agent - Executes within your AWS VPC

Architecture Overview

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                    Jenkins-based Deployment                      β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                                                                  β”‚
β”‚  Developer ──▢ Jenkins Master ──▢ Jenkins Agent (AWS VPC)      β”‚
β”‚                                          β”‚                       β”‚
β”‚                                          β”œβ”€β”€β–Ά Host 1 (SSH)      β”‚
β”‚                                          β”œβ”€β”€β–Ά Host 2 (SSH)      β”‚
β”‚                                          β”œβ”€β”€β–Ά Host 3 (SSH)      β”‚
β”‚                                          └──▢ Host N (SSH)      β”‚
β”‚                                                                  β”‚
β”‚  All hosts send metrics ──▢ AppDynamics Controller             β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Workshop Components

This workshop includes:

  1. Architecture & Design - Understanding the system design and network topology
  2. Jenkins Setup - Configuring Jenkins, credentials, and agents
  3. Pipeline Creation - Creating and configuring deployment pipelines
  4. Deployment Workflow - Executing deployments and verifying installations

Prerequisites

  • Jenkins server (2.300+) with Pipeline plugin
  • Jenkins agent in the same VPC as target EC2 instances
  • SSH key pair for authentication
  • AppDynamics Smart Agent package
  • Target Ubuntu EC2 instances with SSH access

GitHub Repository

All pipeline code and configuration files are available in the GitHub repository:

https://github.com/chambear2809/sm-jenkins

The repository includes:

  • Complete Jenkinsfile pipeline definitions
  • Detailed setup documentation
  • Configuration examples
  • Troubleshooting guides
Tip

The easiest way to navigate through this workshop is by using:

  • the left/right arrows (< | >) on the top right of this page
  • the left (◀️) and right (▢️) cursor keys on your keyboard
Last Modified Nov 17, 2025

Subsections of Jenkins Automation

Architecture & Design

5 minutes  

System Architecture

The Jenkins-based Smart Agent deployment system uses a hub-and-spoke architecture where a Jenkins agent in your AWS VPC orchestrates deployments to multiple target hosts via SSH.

High-Level Architecture

graph TB
    subgraph "Jenkins Infrastructure"
        JM[Jenkins Master<br/>Web UI + Orchestration]
        JA[Jenkins Agent<br/>EC2 in VPC<br/>Label: linux]
    end
    
    subgraph "AWS VPC - Private Network"
        subgraph "Target EC2 Instances"
            H1[Host 1<br/>172.31.1.243]
            H2[Host 2<br/>172.31.1.48]
            H3[Host 3<br/>172.31.1.5]
            HN[Host N<br/>172.31.x.x]
        end
    end
    
    DEV[Developer/Operator]
    APPD[AppDynamics<br/>Controller]
    
    DEV -->|1. Triggers Pipeline| JM
    JM -->|2. Assigns Job| JA
    JA -->|3. SSH Deploy<br/>Private IPs| H1
    JA -->|3. SSH Deploy<br/>Private IPs| H2
    JA -->|3. SSH Deploy<br/>Private IPs| H3
    JA -->|3. SSH Deploy<br/>Private IPs| HN
    
    H1 -.->|Metrics| APPD
    H2 -.->|Metrics| APPD
    H3 -.->|Metrics| APPD
    HN -.->|Metrics| APPD
    
    style JM fill:#d4e6f1
    style JA fill:#a9cce3
    style H1 fill:#aed6f1
    style H2 fill:#aed6f1
    style H3 fill:#aed6f1
    style HN fill:#aed6f1

Network Architecture

All infrastructure runs in a single AWS VPC with a shared security group. The Jenkins agent communicates with target hosts via private IPs, eliminating the need for public IP addresses on target hosts.

VPC Layout

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                        AWS VPC (10.0.0.0/16)                    β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚
β”‚  β”‚              Security Group: app-agents-sg                β”‚  β”‚
β”‚  β”‚  Rules:                                                   β”‚  β”‚
β”‚  β”‚  - Inbound: SSH (22) from Jenkins Agent only             β”‚  β”‚
β”‚  β”‚  - Outbound: HTTPS (443) to AppDynamics Controller       β”‚  β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚
β”‚                                                                  β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”      β”‚
β”‚  β”‚ Jenkins Agentβ”‚    β”‚  Target EC2  β”‚    β”‚  Target EC2  β”‚      β”‚
β”‚  β”‚              β”‚    β”‚              β”‚    β”‚              β”‚      β”‚
β”‚  β”‚ Private IP:  │───▢│ Private IP:  β”‚    β”‚ Private IP:  β”‚      β”‚
β”‚  β”‚ 172.31.50.10 β”‚SSH β”‚ 172.31.1.243 β”‚    β”‚ 172.31.1.48  β”‚      β”‚
β”‚  β”‚              │───▢│              β”‚    β”‚              β”‚      β”‚
β”‚  β”‚ Label: linux β”‚    β”‚ Ubuntu 20.04 β”‚    β”‚ Ubuntu 20.04 β”‚      β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜      β”‚
β”‚         β”‚                    β”‚                    β”‚             β”‚
β”‚         β”‚                    β”‚                    β”‚             β”‚
β”‚         β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜             β”‚
β”‚                              β”‚                                  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                               β”‚
                               β–Ό
                    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                    β”‚   AppDynamics    β”‚
                    β”‚    Controller    β”‚
                    β”‚  (SaaS/On-Prem)  β”‚
                    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Deployment Flow

Complete Deployment Sequence

sequenceDiagram
    participant Dev as Developer
    participant JM as Jenkins Master
    participant JA as Jenkins Agent<br/>(VPC)
    participant TH as Target Hosts<br/>(VPC)
    participant AppD as AppDynamics<br/>Controller
    
    Dev->>JM: 1. Trigger Pipeline
    JM->>JM: 2. Load Credentials
    JM->>JA: 3. Schedule Job
    JA->>JA: 4. Prepare Batches
    
    loop For Each Batch
        JA->>TH: 5. SSH Copy Files (SCP)
        JA->>TH: 6. SSH Execute Commands
        TH->>TH: 7. Install/Config Agent
        TH-->>JA: 8. Return Status
    end
    
    JA->>JM: 9. Report Results
    JM->>Dev: 10. Show Build Status
    
    TH->>AppD: 11. Send Metrics (Post-Install)
    AppD-->>Dev: 12. View Monitoring Data

Component Details

Jenkins Master

Responsibilities:

  • Web UI for users
  • Pipeline orchestration
  • Credential management
  • Build history & logs
  • Job scheduling

Requirements:

  • Jenkins 2.300+
  • Plugins: Pipeline, SSH Agent, Credentials, Git
  • Network access to agent

Jenkins Agent

Location:

  • AWS VPC (same as targets)
  • Private network access

Responsibilities:

  • Execute pipeline stages
  • SSH to target hosts
  • File transfers (SCP)
  • Batching logic
  • Error collection

Requirements:

  • Label: linux
  • Java 11+
  • SSH client
  • Network: SSH to all targets
  • Disk: ~20GB for artifacts

Target Hosts

Pre-requisites:

  • Ubuntu 20.04+
  • SSH server running
  • User with sudo access
  • Authorized SSH key

Post-deployment:

/opt/appdynamics/
└── appdsmartagent/
    β”œβ”€β”€ smartagentctl
    β”œβ”€β”€ config.ini
    └── agents/
        β”œβ”€β”€ machine/
        β”œβ”€β”€ java/
        β”œβ”€β”€ node/
        └── db/

Security Architecture

Security Layers

  1. AWS VPC Isolation

    • Private subnet for agents
    • No direct internet access required
    • VPC flow logs enabled
  2. Security Groups

    • Whitelist Jenkins Agent IP
    • Port 22 (SSH) only
    • Stateful firewall rules
  3. SSH Key Authentication

    • No password authentication
    • Keys stored in Jenkins credentials
    • Temporary key files (600 perms)
    • Keys removed after each build
  4. Jenkins RBAC

    • Role-based access control
    • Pipeline-level permissions
    • Credential access restrictions
    • Audit logging enabled
  5. Secrets Management

    • No secrets in code/logs
    • Credentials binding only
    • Environment variable masking
    • Automatic secret rotation (optional)

Credential Flow

flowchart LR
    subgraph "Jenkins Master"
        CS[Credentials Store<br/>Encrypted at Rest]
        JM[Jenkins Master]
    end
    
    subgraph "Jenkins Agent"
        WS[Workspace<br/>Temp Files]
        KEY[SSH Key File<br/>600 permissions]
    end
    
    subgraph "Target Hosts"
        TH[EC2 Instances<br/>Authorized Keys]
    end
    
    CS -->|Binding| JM
    JM -->|Secure Copy| KEY
    KEY -->|SSH Auth| TH
    WS -.->|Cleanup| X[Deleted]
    KEY -.->|Cleanup| X
    
    style CS fill:#fdeaa8
    style KEY fill:#fadbd8
    style X fill:#e8e8e8

Batch Processing

The system uses automatic batching to support deployments at any scale. By default, hosts are processed in batches of 256, with all hosts within a batch deploying in parallel.

How Batching Works

HOST LIST (1000 hosts)              BATCH_SIZE = 256

Host 001: 172.31.1.1                β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
Host 002: 172.31.1.2      ────────▢ β”‚   BATCH 1        β”‚
    ...                              β”‚   Hosts 1-256    β”‚ ───┐
Host 256: 172.31.1.256               β”‚   Sequential     β”‚    β”‚
                                     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β”‚
Host 257: 172.31.1.257               β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”‚
Host 258: 172.31.1.258   ────────▢  β”‚   BATCH 2        β”‚    β”‚ SEQUENTIAL
    ...                              β”‚   Hosts 257-512  β”‚    β”‚ EXECUTION
Host 512: 172.31.1.512               β”‚   Sequential     β”‚    β”‚
                                     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β”‚
Host 513: 172.31.1.513               β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”‚
    ...                              β”‚   BATCH 3        β”‚    β”‚
Host 768: 172.31.1.768   ────────▢  β”‚   Hosts 513-768  β”‚ β”€β”€β”€β”˜
                                     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
Host 769: 172.31.1.769               β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    ...                              β”‚   BATCH 4        β”‚
Host 1000: 172.31.2.232  ────────▢  β”‚   Hosts 769-1000 β”‚
                                     β”‚   (232 hosts)    β”‚
                                     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

WITHIN EACH BATCH:
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  All hosts deploy in PARALLEL          β”‚
β”‚                                        β”‚
β”‚  Host 1 ──┐                           β”‚
β”‚  Host 2 ───                           β”‚
β”‚  Host 3 ──┼─▢ Background processes (&)β”‚
β”‚    ...    β”‚                           β”‚
β”‚  Host 256β”€β”˜   └─▢ wait command        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Scaling Characteristics

Deployment Speed (default BATCH_SIZE=256):

  • 10 hosts β†’ 1 batch β†’ ~2 minutes
  • 100 hosts β†’ 1 batch β†’ ~3 minutes
  • 500 hosts β†’ 2 batches β†’ ~6 minutes
  • 1,000 hosts β†’ 4 batches β†’ ~12 minutes
  • 5,000 hosts β†’ 20 batches β†’ ~60 minutes

Factors affecting speed:

  • Network bandwidth (19MB package per host)
  • SSH connection overhead (~1s per host)
  • Target host CPU/disk speed
  • Jenkins agent resources

Next Steps

Now that you understand the architecture, let’s move on to setting up Jenkins and configuring credentials.

Last Modified Nov 17, 2025

Jenkins Setup

10 minutes  

Prerequisites

Before you begin, ensure you have:

  • Jenkins server (version 2.300 or later)
  • A Jenkins agent in the same AWS VPC as your target EC2 instances
  • SSH key pair for authentication to target hosts
  • AppDynamics Smart Agent package
  • Target Ubuntu EC2 instances with SSH access

Required Jenkins Plugins

Install these plugins via Manage Jenkins β†’ Plugins β†’ Available Plugins:

  1. Pipeline (core plugin, usually pre-installed)
  2. SSH Agent Plugin
  3. Credentials Plugin (usually pre-installed)
  4. Git Plugin (if using SCM)

To install:

  1. Navigate to Manage Jenkins β†’ Plugins
  2. Click Available tab
  3. Search for each plugin
  4. Select and click Install

Configure Jenkins Agent

Your Jenkins agent must be able to reach target EC2 instances via private IPs. There are two main options:

Option A: EC2 Instance as Agent

  1. Launch EC2 instance in same VPC as your target hosts

  2. Install Java (required by Jenkins):

    sudo apt-get update
    sudo apt-get install -y openjdk-11-jdk
  3. Add agent in Jenkins:

    • Go to Manage Jenkins β†’ Nodes β†’ New Node
    • Name: aws-vpc-agent (or your preferred name)
    • Type: Permanent Agent
    • Configure:
      • Remote root directory: /home/ubuntu/jenkins
      • Labels: linux (must match pipeline agent label)
      • Launch method: Launch agent via SSH
      • Host: EC2 private IP
      • Credentials: Add SSH credentials for agent

Option B: Use Existing Linux Agent

  • Ensure agent has label linux
  • Verify network connectivity to target hosts
  • Confirm SSH client is installed

Configure Agent Labels

Warning

All pipelines in this workshop use the linux label. Make sure your agent is configured with this label.

To set or modify labels:

  1. Go to Manage Jenkins β†’ Nodes
  2. Click on your agent
  3. Click Configure
  4. Set Labels to linux
  5. Click Save

Credentials Setup

Navigate to: Manage Jenkins β†’ Credentials β†’ System β†’ Global credentials (unrestricted)

You’ll need to create three credentials for the pipelines to work.

1. SSH Private Key for Target Hosts

This credential allows Jenkins to SSH into your target EC2 instances.

Type: SSH Username with private key

  • ID: ssh-private-key (must match exactly)
  • Description: SSH key for EC2 target hosts
  • Username: ubuntu (or your SSH user)
  • Private Key: Choose one:
    • Enter directly: Paste your PEM file content
    • From file: Upload PEM file
    • From Jenkins master: Specify path

Example format:

-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA...
...
-----END RSA PRIVATE KEY-----

2. Deployment Hosts List

This credential contains the list of all target hosts where Smart Agent should be deployed.

Type: Secret text

  • ID: deployment-hosts (must match exactly)
  • Description: List of target EC2 host IPs
  • Secret: Enter newline-separated IPs

Example:

172.31.1.243
172.31.1.48
172.31.1.5
172.31.10.20
172.31.10.21
Important

Format Requirements:

  • One IP per line
  • No commas
  • No spaces
  • No extra characters
  • Use Unix line endings (LF, not CRLF)
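
If the host list was prepared on Windows, you can strip carriage returns before pasting it into the credential (hosts.txt is a hypothetical local file):

tr -d '\r' < hosts.txt > hosts-unix.txt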

3. AppDynamics Account Access Key

This credential contains your AppDynamics account access key for Smart Agent authentication.

Type: Secret text

  • ID: account-access-key (must match exactly)
  • Description: AppDynamics account access key
  • Secret: Your AppDynamics access key

Example: abcd1234-ef56-7890-gh12-ijklmnopqrst

Tip

You can find your AppDynamics access key in the Controller under Settings β†’ License β†’ Account.

Credential Security Best Practices

Follow these best practices for credential management:

  • βœ… Use Jenkins credential encryption (built-in)
  • βœ… Restrict access via Jenkins role-based authorization
  • βœ… Rotate SSH keys periodically
  • βœ… Use least-privilege IAM roles for EC2 instances
  • βœ… Enable audit logging for credential access
  • βœ… Never commit credentials to version control

Smart Agent Package Setup

The Smart Agent ZIP file should be placed in a location accessible to Jenkins. The recommended approach is to store it in the Jenkins home directory.

Download Smart Agent

# Download from AppDynamics
curl -o appdsmartagent_64_linux.zip \
  "https://download.appdynamics.com/download/prox/download-file/smart-agent/latest/appdsmartagent_64_linux.zip"

# Verify the download
ls -lh appdsmartagent_64_linux.zip

Storage Location

The pipelines reference the Smart Agent ZIP at: /var/jenkins_home/smartagent/appdsmartagent.zip

You can either:

  1. Place the ZIP at this exact location
  2. Modify the SMARTAGENT_ZIP_PATH pipeline parameter to point to your ZIP location
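
For option 1, a minimal sketch of placing the package at the default path on the Jenkins server (adjust the paths if your Jenkins home differs):

sudo mkdir -p /var/jenkins_home/smartagent
sudo cp appdsmartagent_64_linux.zip /var/jenkins_home/smartagent/appdsmartagent.zip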

Verify Configuration

Before proceeding to pipeline creation, verify your setup:

1. Check Agent Status

  1. Go to Manage Jenkins β†’ Nodes
  2. Verify your agent shows as “online”
  3. Confirm label is set to linux

2. Test SSH Connectivity

Create a simple test pipeline to verify SSH works:

pipeline {
    agent { label 'linux' }
    stages {
        stage('Test SSH') {
            steps {
                withCredentials([
                    sshUserPrivateKey(credentialsId: 'ssh-private-key', 
                                     keyFileVariable: 'SSH_KEY', 
                                     usernameVariable: 'SSH_USER'),
                    string(credentialsId: 'deployment-hosts', variable: 'HOSTS')
                ]) {
                    sh '''
                        echo "Testing SSH credentials..."
                        echo "$HOSTS" | head -1 | while read HOST; do
                            ssh -i $SSH_KEY \
                                -o StrictHostKeyChecking=no \
                                -o ConnectTimeout=10 \
                                $SSH_USER@$HOST \
                                "echo 'Connection successful'"
                        done
                    '''
                }
            }
        }
    }
}

3. Verify Credentials Exist

  1. Go to Manage Jenkins β†’ Credentials
  2. Confirm all three credentials are listed:
    • ssh-private-key
    • deployment-hosts
    • account-access-key

Troubleshooting Common Issues

Agent Not Available

Symptom: “No agent available” error when running pipelines

Solution:

  • Check: Manage Jenkins β†’ Nodes
  • Ensure agent is online
  • Verify agent has linux label
  • Test agent connectivity

SSH Connection Failures

Symptom: Cannot connect to target hosts via SSH

Solution:

# Test from Jenkins agent machine
ssh -i /path/to/key ubuntu@172.31.1.243 -o ConnectTimeout=10

# Check security group allows SSH from agent
# Verify private key matches public key on target

Credential Not Found

Symptom: “Credential not found” error

Solution:

  • Verify credential IDs exactly match:
    • ssh-private-key
    • deployment-hosts
    • account-access-key
  • Check credential scope is set to Global

Permission Denied on Target Hosts

Symptom: SSH succeeds but commands fail with permission denied

Solution:

# On target host, verify user is in sudoers
sudo visudo
# Add line:
ubuntu ALL=(ALL) NOPASSWD: ALL

Next Steps

Now that Jenkins is configured with credentials and agents, you’re ready to create the deployment pipelines!

Last Modified Nov 17, 2025

Pipeline Creation

10 minutes  

Overview

The GitHub repository contains four main pipelines for managing the Smart Agent lifecycle:

  1. Deploy Smart Agent - Installs and starts Smart Agent service
  2. Install Machine Agent - Installs Machine Agent via smartagentctl
  3. Install Database Agent - Installs Database Agent via smartagentctl
  4. Cleanup All Agents - Removes /opt/appdynamics directory

All pipeline code is available in the sm-jenkins GitHub repository.

Pipeline Files

The repository contains these Jenkinsfile pipeline definitions:

sm-jenkins/
└── pipelines/
    β”œβ”€β”€ Jenkinsfile.deploy                  # Deploy Smart Agent
    β”œβ”€β”€ Jenkinsfile.install-machine-agent  # Install Machine Agent
    β”œβ”€β”€ Jenkinsfile.install-db-agent       # Install Database Agent
    └── Jenkinsfile.cleanup                # Cleanup All Agents

Creating Pipelines in Jenkins

For each Jenkinsfile you want to use, follow these steps to create a pipeline in Jenkins.

Method 1: Pipeline Script from SCM

This method keeps your pipeline code in version control and automatically syncs changes.

Step 1: Fork or Clone the Repository

First, fork the repository to your own GitHub account or organization, or use the original repository directly.

Repository URL: https://github.com/chambear2809/sm-jenkins

Step 2: Create Pipeline in Jenkins

  1. Go to Jenkins Dashboard
  2. Click New Item
  3. Enter item name (e.g., Deploy-Smart-Agent)
  4. Select Pipeline
  5. Click OK

Step 3: Configure Pipeline

In the pipeline configuration page:

General Section:

  • Description: Deploys AppDynamics Smart Agent to multiple hosts
  • Leave Discard old builds unchecked (or configure as desired)

Build Triggers:

  • Leave unchecked for manual-only execution
  • Or configure webhook/polling if desired

Pipeline Section:

  • Definition: Select Pipeline script from SCM
  • SCM: Select Git
  • Repository URL: https://github.com/chambear2809/sm-jenkins
  • Credentials: Add if using a private repository
  • Branch Specifier: */main (or */master)
  • Script Path: pipelines/Jenkinsfile.deploy

Step 4: Save

Click Save to create the pipeline.

Step 5: Repeat for Other Pipelines

Repeat steps 2-4 for each pipeline you want to create, using the appropriate script path:

| Pipeline Name | Script Path |
|---------------|-------------|
| Deploy-Smart-Agent | pipelines/Jenkinsfile.deploy |
| Install-Machine-Agent | pipelines/Jenkinsfile.install-machine-agent |
| Install-Database-Agent | pipelines/Jenkinsfile.install-db-agent |
| Cleanup-All-Agents | pipelines/Jenkinsfile.cleanup |

Method 2: Direct Pipeline Script

Alternatively, you can copy the Jenkinsfile content directly into Jenkins.

  1. Create New Item (same as Method 1)
  2. In Pipeline section:
    • Definition: Select Pipeline script
    • Script: Copy/paste the entire Jenkinsfile content from GitHub
  3. Save
Tip

Method 1 (SCM) is recommended because it keeps your pipelines in version control and makes updates easier.

Pipeline Parameters

Each pipeline accepts configurable parameters. Here are the key parameters for the main deployment pipeline:

Deploy Smart Agent Pipeline Parameters

| Parameter | Default | Description |
|-----------|---------|-------------|
| SMARTAGENT_ZIP_PATH | /var/jenkins_home/smartagent/appdsmartagent.zip | Path to Smart Agent ZIP on Jenkins server |
| REMOTE_INSTALL_DIR | /opt/appdynamics/appdsmartagent | Installation directory on target hosts |
| APPD_USER | ubuntu | User to run Smart Agent process |
| APPD_GROUP | ubuntu | Group to run Smart Agent process |
| SSH_PORT | 22 | SSH port for remote hosts |
| AGENT_NAME | smartagent | Smart Agent name |
| LOG_LEVEL | DEBUG | Log level (DEBUG, INFO, WARN, ERROR) |

Cleanup Pipeline Parameters

| Parameter | Default | Description |
|-----------|---------|-------------|
| REMOTE_INSTALL_DIR | /opt/appdynamics/appdsmartagent | Directory to remove |
| SSH_PORT | 22 | SSH port for remote hosts |
| CONFIRM_CLEANUP | false | Must be checked to proceed with cleanup |
Warning

The cleanup pipeline includes a confirmation parameter to prevent accidental deletion. You must check CONFIRM_CLEANUP to execute the cleanup.

Understanding Pipeline Structure

Let’s examine the key components of the deployment pipeline:

1. Agent Declaration

agent { label 'linux' }

This ensures the pipeline runs on a Jenkins agent with the linux label.

2. Parameters Block

parameters {
    string(name: 'SMARTAGENT_ZIP_PATH', ...)
    string(name: 'REMOTE_INSTALL_DIR', ...)
    // ... more parameters
}

Defines configurable parameters that can be set when triggering the build.

3. Stages

The deployment pipeline has these stages:

  1. Preparation - Loads target hosts from credentials
  2. Extract Smart Agent - Extracts ZIP file to staging directory
  3. Configure Smart Agent - Creates config.ini template
  4. Deploy to Remote Hosts - Copies files and starts Smart Agent on each host
  5. Verify Installation - Checks Smart Agent status on all hosts

4. Credentials Binding

withCredentials([
    sshUserPrivateKey(credentialsId: 'ssh-private-key', ...),
    string(credentialsId: 'account-access-key', ...)
]) {
    // Pipeline code with access to credentials
}

Securely loads credentials without exposing them in logs.

5. Post Actions

post {
    success { ... }
    failure { ... }
    always { ... }
}

Defines actions to run after pipeline completion, regardless of success or failure.

Pipeline Naming Convention

For clarity and organization, use a consistent naming convention:

Recommended names:

01-Deploy-Smart-Agent
02-Install-Machine-Agent
03-Install-Database-Agent
04-Cleanup-All-Agents

The numeric prefix helps maintain logical ordering in the Jenkins dashboard.

Organizing Pipelines with Folders

For better organization, you can use Jenkins folders:

  1. Create Folder:

    • Click New Item
    • Enter name: AppDynamics Smart Agent
    • Select Folder
    • Click OK
  2. Create Pipelines in Folder:

    • Enter the folder
    • Create pipelines as described above

Example structure:

AppDynamics Smart Agent/
├── Deployment/
│   └── 01-Deploy-Smart-Agent
├── Agent Installation/
│   ├── 02-Install-Machine-Agent
│   └── 03-Install-Database-Agent
└── Cleanup/
    └── 04-Cleanup-All-Agents

Viewing Pipeline Code

You can view the complete pipeline code in the GitHub repository:

Main deployment pipeline: https://github.com/chambear2809/sm-jenkins/blob/main/pipelines/Jenkinsfile.deploy

Other pipelines: the remaining Jenkinsfiles (Jenkinsfile.install-machine-agent, Jenkinsfile.install-db-agent, and Jenkinsfile.cleanup) are in the same pipelines/ directory of the repository.

Testing Pipeline Configuration

Before running a full deployment, test your pipeline configuration:

1. Dry Run with Single Host

  1. Create a test credential deployment-hosts-test with only one IP
  2. Temporarily modify your pipeline to use this credential
  3. Run the pipeline and verify it works on a single host
  4. Once verified, update to use the full host list

2. Check Syntax

Jenkins provides a built-in syntax validator:

  1. Go to your pipeline
  2. Click Pipeline Syntax link
  3. Use the Declarative Directive Generator to validate syntax
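
You can also lint a Jenkinsfile from the command line before committing it to the repository. A minimal sketch using the Declarative Pipeline linter endpoint (URL and credentials are placeholders; depending on your security settings a CSRF crumb may also be required):

# Validate a local Jenkinsfile against the Jenkins Declarative Pipeline linter.
curl -X POST --user "admin:API_TOKEN" \
  -F "jenkinsfile=<pipelines/Jenkinsfile.deploy" \
  "https://JENKINS_URL/pipeline-model-converter/validate"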

Next Steps

With pipelines created, you’re ready to execute your first Smart Agent deployment!

Last Modified Nov 17, 2025

Deployment Workflow

15 minutes  

First Deployment

Now that your pipelines are configured, let’s walk through executing your first Smart Agent deployment.

Step 1: Navigate to Pipeline

  1. Go to Jenkins Dashboard
  2. Click on your Deploy-Smart-Agent pipeline

Step 2: Build with Parameters

  1. Click Build with Parameters in the left sidebar

  2. Review the default parameters:

    • SMARTAGENT_ZIP_PATH: Verify path is correct
    • REMOTE_INSTALL_DIR: /opt/appdynamics/appdsmartagent
    • APPD_USER: ubuntu (or your SSH user)
    • APPD_GROUP: ubuntu
    • SSH_PORT: 22
    • AGENT_NAME: smartagent
    • LOG_LEVEL: DEBUG
  3. Adjust parameters if needed

  4. Click Build

Tip

For your first deployment, consider testing on a single host by creating a separate credential with just one IP address.

Step 3: Monitor Pipeline Execution

After clicking Build, you’ll see:

  1. Build added to queue - Build number appears in Build History
  2. Click build number (e.g., #1) to view details
  3. Click Console Output to view real-time logs

Step 4: Understanding Console Output

The console output shows each stage of the deployment:

Started by user admin
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on aws-vpc-agent in /home/ubuntu/jenkins/workspace/Deploy-Smart-Agent
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Preparation)
[Pipeline] script
[Pipeline] {
Preparing Smart Agent deployment to 3 hosts: 172.31.1.243, 172.31.1.48, 172.31.1.5
...

Key stages you’ll see:

  1. ✅ Preparation - Loads and validates host list
  2. ✅ Extract Smart Agent - Extracts ZIP file
  3. ✅ Configure Smart Agent - Creates config.ini
  4. ✅ Deploy to Remote Hosts - Deploys to each host
  5. ✅ Verify Installation - Checks Smart Agent status

Step 5: Review Results

After completion, you’ll see:

Success:

Smart Agent successfully deployed to all hosts
Finished: SUCCESS

Partial Success:

Deployment completed with some failures
Failed hosts: 172.31.1.48
Finished: UNSTABLE

Failure:

Smart Agent deployment failed. Check logs for details.
Finished: FAILURE

Verifying Installation

After a successful deployment, verify Smart Agent is running on target hosts.

Check Smart Agent Status

SSH into a target host and check the status:

# SSH to target host
ssh ubuntu@172.31.1.243

# Navigate to installation directory
cd /opt/appdynamics/appdsmartagent

# Check Smart Agent status
sudo ./smartagentctl status

Expected output:

Smart Agent is running (PID: 12345)
Service: appdsmartagent.service
Status: active (running)

List Installed Agents

cd /opt/appdynamics/appdsmartagent
sudo ./smartagentctl list

Expected output:

No agents currently installed
(Use install-machine-agent or install-db-agent pipelines to add agents)

Check Logs

cd /opt/appdynamics/appdsmartagent
tail -f log.log

Look for successful connection messages to the AppDynamics controller.
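
If the log is large, filtering for controller and connection entries narrows the search; the exact message wording varies by Smart Agent version, so treat these patterns as a starting point rather than the definitive strings:

# Show recent controller/connection/error lines from the Smart Agent log (patterns are illustrative).
grep -iE "controller|connect|error" /opt/appdynamics/appdsmartagent/log.log | tail -n 50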

Verify in AppDynamics Controller

  1. Log into your AppDynamics Controller
  2. Navigate to Servers → Servers
  3. Look for your newly deployed hosts
  4. Verify Smart Agent is reporting metrics

Installing Additional Agents

Once Smart Agent is deployed, you can install specific agent types using the other pipelines.

Install Machine Agent

  1. Go to Install-Machine-Agent pipeline
  2. Click Build with Parameters
  3. Review parameters:
    • AGENT_NAME: machine-agent
    • SSH_PORT: 22
  4. Click Build

The pipeline will SSH to each host and execute:

cd /opt/appdynamics/appdsmartagent
sudo ./smartagentctl install --component machine

Install Database Agent

  1. Go to Install-Database-Agent pipeline
  2. Click Build with Parameters
  3. Configure database connection parameters
  4. Click Build

The pipeline will install and configure the Database Agent on all hosts.

Verify Agent Installation

After installing agents, verify they appear:

cd /opt/appdynamics/appdsmartagent
sudo ./smartagentctl list

Expected output:

Installed agents:
- machine-agent (running)
- db-agent (running)

Common Deployment Scenarios

Scenario 1: Initial Deployment

Workflow:

  1. Run Deploy-Smart-Agent pipeline
  2. Wait for completion and verify
  3. Run Install-Machine-Agent if needed
  4. Run Install-Database-Agent if needed

Scenario 2: Update Smart Agent

To update Smart Agent to a new version:

  1. Download new Smart Agent ZIP
  2. Place it in Jenkins at configured path
  3. Run Deploy-Smart-Agent pipeline again

The pipeline automatically:

  • Stops existing Smart Agent
  • Removes old files
  • Installs new version
  • Restarts Smart Agent

Scenario 3: Add New Hosts

To add Smart Agent to new hosts:

  1. Update the deployment-hosts credential in Jenkins
  2. Add new IP addresses (one per line)
  3. Run Deploy-Smart-Agent pipeline

The pipeline will:

  • Skip already-configured hosts (if idempotent)
  • Deploy to new hosts only

Scenario 4: Complete Removal

To completely remove Smart Agent from all hosts:

  1. Go to Cleanup-All-Agents pipeline
  2. Click Build with Parameters
  3. Check the CONFIRM_CLEANUP checkbox
  4. Click Build
Warning

This will permanently delete the /opt/appdynamics/appdsmartagent directory from all hosts. This action cannot be undone.

Troubleshooting Deployments

Build Fails at Preparation Stage

Symptom: Pipeline fails when loading host list

Cause: Missing or incorrect deployment-hosts credential

Solution:

  1. Go to Manage Jenkins → Credentials
  2. Verify deployment-hosts credential exists
  3. Check format (one IP per line, no commas)
  4. Ensure no trailing spaces

SSH Connection Failures

Symptom: “Permission denied” or “Connection refused” errors

Solutions:

Check security group:

# Verify Jenkins agent can reach target
ping 172.31.1.243
telnet 172.31.1.243 22

Test SSH manually:

# From Jenkins agent machine
ssh -i /path/to/key ubuntu@172.31.1.243

Verify SSH key:

  1. Ensure ssh-private-key credential is correct
  2. Verify public key is in ~/.ssh/authorized_keys on target hosts
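
A quick way to confirm the key pair matches is to derive the public key from the private key used by Jenkins and compare it with the entry on the target host. A sketch, assuming the key path configured in your ssh-private-key credential:

# Derive the public key from the private key on the Jenkins agent...
ssh-keygen -y -f /path/to/key

# ...and compare the output with the trusted keys on the target host.
ssh -i /path/to/key ubuntu@172.31.1.243 "cat ~/.ssh/authorized_keys"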

Smart Agent Fails to Start

Symptom: Deployment completes but Smart Agent not running

Solution:

Check logs on target host:

cd /opt/appdynamics/appdsmartagent
cat log.log

Common issues:

  • Invalid access key: Check account-access-key credential
  • Network connectivity: Verify outbound HTTPS to Controller
  • Permission issues: Ensure APPD_USER has correct permissions

Partial Deployment Success

Symptom: Some hosts succeed, others fail

Solution:

  1. Check Console Output - Identifies which hosts failed
  2. Investigate failed hosts - SSH and test manually
  3. Re-run pipeline - Jenkins tracks which hosts need retry

Pipeline Best Practices

1. Test on Single Host First

Always test new configurations on a single host before deploying to production:

1. Create deployment-hosts-test credential (1 IP)
2. Create test pipeline pointing to this credential
3. Verify success
4. Deploy to full host list

2. Use Descriptive Build Descriptions

After triggering a build, add a description:

  1. Go to build page
  2. Click Edit Build Information
  3. Add description: “Production deployment - Q4 2024”

3. Monitor Build History

Regularly check build history for patterns:

  • Failed builds
  • Duration trends
  • Error messages

4. Schedule Deployments During Maintenance Windows

For production systems:

  • Use Jenkins scheduled builds
  • Deploy during low-traffic periods
  • Have rollback plan ready

5. Keep Credentials Updated

  • Rotate SSH keys quarterly
  • Update host lists as infrastructure changes
  • Verify AppDynamics access key validity

Next Steps

Now let’s explore scaling and operational considerations for large deployments.

Last Modified Nov 17, 2025

GitHub Actions Automation

2 minutes  

Introduction

This workshop demonstrates how to use GitHub Actions with a self-hosted runner to automate the deployment and lifecycle management of AppDynamics Smart Agent across multiple EC2 instances. Whether you’re managing 10 hosts or 10,000, this guide shows you how to leverage GitHub Actions workflows for scalable, secure, and repeatable Smart Agent operations.

GitHub Actions and AppDynamics

What You’ll Learn

In this workshop, you’ll learn how to:

  • Deploy Smart Agent to multiple hosts using GitHub Actions workflows
  • Configure GitHub secrets and variables for secure credentials management
  • Set up a self-hosted runner in your AWS VPC
  • Implement automatic batching to scale to thousands of hosts
  • Manage the complete agent lifecycle - install, uninstall, stop, and cleanup
  • Monitor workflow execution and troubleshoot issues

Key Features

  • 🚀 Parallel Deployment - Deploy to multiple hosts simultaneously
  • 🔄 Complete Lifecycle Management - 11 workflows covering all agent operations
  • 🏗️ Infrastructure as Code - All workflows version-controlled in GitHub
  • 🔐 Secure - SSH keys stored as GitHub secrets, private VPC networking
  • 📈 Massively Scalable - Deploy to thousands of hosts with automatic batching
  • 🎛️ Self-hosted Runner - Executes within your AWS VPC

Architecture Overview

┌────────────────────────────────────────────────────────────────┐
│                 GitHub Actions-based Deployment                │
├────────────────────────────────────────────────────────────────┤
│                                                                │
│  Developer ──▶ GitHub.com ──▶ Self-hosted Runner (AWS VPC)     │
│                                          │                     │
│                                          ├──▶ Host 1 (SSH)     │
│                                          ├──▶ Host 2 (SSH)     │
│                                          ├──▶ Host 3 (SSH)     │
│                                          └──▶ Host N (SSH)     │
│                                                                │
│  All hosts send metrics ──▶ AppDynamics Controller             │
└────────────────────────────────────────────────────────────────┘

Workshop Components

This workshop includes:

  1. Architecture & Design - Understanding the GitHub Actions workflow architecture
  2. GitHub Setup - Configuring secrets, variables, and self-hosted runners
  3. Workflow Creation - Understanding and using the 11 available workflows
  4. Deployment Execution - Running workflows and verifying installations

Available Workflows

This solution includes 11 workflows for complete Smart Agent lifecycle management:

Category               Workflows   Description
Deployment             1           Deploy and start Smart Agent
Agent Installation     4           Install Node, Machine, DB, and Java agents
Agent Uninstallation   4           Uninstall specific agent types
Agent Management       2           Stop/clean and complete cleanup

All workflows support automatic batching for scalability!

Prerequisites

  • GitHub account with repository access
  • AWS VPC with Ubuntu EC2 instances
  • Self-hosted GitHub Actions runner in the same VPC
  • SSH key pair for authentication
  • AppDynamics Smart Agent package

GitHub Repository

All workflow code and configuration files are available in the GitHub repository:

https://github.com/chambear2809/github-actions-lab

The repository includes:

  • 11 complete workflow YAML files
  • Detailed setup documentation
  • Architecture diagrams
  • Troubleshooting guides
Tip

The easiest way to navigate through this workshop is by using:

  • the left/right arrows (< | >) on the top right of this page
  • the left (◀️) and right (▢️) cursor keys on your keyboard
Last Modified Nov 17, 2025

Subsections of GitHub Actions Automation

Architecture & Design

5 minutes  

System Architecture

The GitHub Actions-based Smart Agent deployment system uses a self-hosted runner within your AWS VPC to orchestrate deployments to multiple target hosts via SSH.

High-Level Architecture

graph TB
    subgraph Internet
        GH[GitHub.com<br/>Repository & Actions]
        User[Developer<br/>Local Machine]
    end

    subgraph AWS["AWS VPC (172.31.0.0/16)"]
        subgraph SG["Security Group: smartagent-lab"]
            Runner[Self-hosted Runner<br/>EC2 Instance<br/>172.31.1.x]
            
            subgraph Targets["Target Hosts"]
                T1[Target Host 1<br/>Ubuntu EC2<br/>172.31.1.243]
                T2[Target Host 2<br/>Ubuntu EC2<br/>172.31.1.48]
                T3[Target Host 3<br/>Ubuntu EC2<br/>172.31.1.5]
            end
        end
    end

    User -->|git push| GH
    GH <-->|HTTPS:443<br/>Poll for jobs| Runner
    Runner -->|SSH:22<br/>Private IPs| T1
    Runner -->|SSH:22<br/>Private IPs| T2
    Runner -->|SSH:22<br/>Private IPs| T3

    style GH fill:#24292e,color:#fff
    style User fill:#0366d6,color:#fff
    style Runner fill:#28a745,color:#fff
    style T1 fill:#ffd33d,color:#000
    style T2 fill:#ffd33d,color:#000
    style T3 fill:#ffd33d,color:#000

Network Architecture

All infrastructure runs in a single AWS VPC with a shared security group. The self-hosted runner communicates with target hosts via private IPs.

VPC Layout

┌──────────────────────────────────────────────────────────────────┐
│                        AWS VPC (172.31.0.0/16)                    │
│  ┌────────────────────────────────────────────────────────────┐  │
│  │          Security Group: smartagent-lab                    │  │
│  │  Rules:                                                    │  │
│  │  - Inbound: SSH (22) from same security group              │  │
│  │  - Outbound: HTTPS (443) to GitHub                         │  │
│  └────────────────────────────────────────────────────────────┘  │
│                                                                   │
│  ┌─────────────┐    ┌──────────────┐    ┌──────────────┐          │
│  │ Self-hosted │    │  Target EC2  │    │  Target EC2  │          │
│  │   Runner    │    │              │    │              │          │
│  │             │───▶│ Private IP:  │    │ Private IP:  │          │
│  │ 172.31.1.x  │SSH │ 172.31.1.243 │    │ 172.31.1.48  │          │
│  │             │───▶│              │    │              │          │
│  │ Polls GitHub│    │ Ubuntu 20.04 │    │ Ubuntu 20.04 │          │
│  └─────────────┘    └──────────────┘    └──────────────┘          │
│         │                   │                   │                 │
│         │                   │                   │                 │
│         └───────────────────┴───────────────────┘                 │
│                             │                                     │
└─────────────────────────────┼─────────────────────────────────────┘
                              │
                              ▼
                    ┌──────────────────┐
                    │   AppDynamics    │
                    │    Controller    │
                    │  (SaaS/On-Prem)  │
                    └──────────────────┘

Workflow Execution Flow

Complete Deployment Sequence

sequenceDiagram
    participant Dev as Developer
    participant GH as GitHub
    participant Runner as Self-hosted Runner
    participant Target as Target Host(s)

    Dev->>GH: 1. Push code or trigger workflow
    GH->>GH: 2. Workflow event triggered
    Runner->>GH: 3. Poll for jobs (HTTPS:443)
    GH->>Runner: 4. Assign job to runner
    Runner->>Runner: 5. Execute prepare job<br/>(load host matrix)
    
    par Parallel Execution
        Runner->>Target: 6a. SSH to Host 1<br/>(port 22)
        Runner->>Target: 6b. SSH to Host 2<br/>(port 22)
        Runner->>Target: 6c. SSH to Host 3<br/>(port 22)
    end
    
    Target->>Target: 7. Execute commands<br/>(install/uninstall/stop/clean)
    Target-->>Runner: 8. Return results
    Runner-->>GH: 9. Report job status
    GH-->>Dev: 10. Notify completion

Component Details

GitHub Repository

Stores:

  • 11 workflow YAML files
  • Smart Agent installation package
  • Configuration file (config.ini)

Secrets:

  • SSH private key

Variables:

  • Host list (DEPLOYMENT_HOSTS)
  • User/group settings (optional)

Self-hosted Runner

Location:

  • AWS VPC (same as targets)
  • Private network access

Responsibilities:

  • Poll GitHub for workflow jobs
  • Execute workflow steps
  • SSH to target hosts
  • File transfers (SCP)
  • Parallel execution
  • Error collection

Requirements:

  • Ubuntu/Amazon Linux 2
  • Outbound HTTPS (443) to GitHub
  • Outbound SSH (22) to target hosts
  • SSH key authentication

Target Hosts

Pre-requisites:

  • Ubuntu 20.04+
  • SSH server running
  • User with sudo access
  • Authorized SSH key

Post-deployment:

/opt/appdynamics/
└── appdsmartagent/
    ├── smartagentctl
    ├── config.ini
    └── agents/
        ├── machine/
        ├── java/
        ├── node/
        └── db/

Security Architecture

Security Layers

  1. AWS VPC Isolation

    • Private subnet for hosts
    • No direct internet access required
    • VPC flow logs enabled
  2. Security Groups

    • SSH (22) within same security group only
    • HTTPS (443) outbound for GitHub access
    • Stateful firewall rules
  3. SSH Key Authentication

    • No password authentication
    • Keys stored in GitHub Secrets
    • Temporary key files on runner
    • Keys removed after workflow
  4. GitHub Security

    • Repository access controls
    • Branch protection rules
    • Secrets never exposed in logs
    • Environment variable masking
  5. Network Security

    • Private IP communication only
    • No public IPs required
    • Runner in same VPC as targets

Workflow Categories

The system includes 11 workflows organized into 4 categories:

GitHub Actions Workflows (11 Total)
├── Deployment (1 workflow)
│   └── Deploy Smart Agent (Batched)
├── Agent Installation (4 workflows)
│   ├── Install Node Agent (Batched)
│   ├── Install Machine Agent (Batched)
│   ├── Install DB Agent (Batched)
│   └── Install Java Agent (Batched)
├── Agent Uninstallation (4 workflows)
│   ├── Uninstall Node Agent (Batched)
│   ├── Uninstall Machine Agent (Batched)
│   ├── Uninstall DB Agent (Batched)
│   └── Uninstall Java Agent (Batched)
└── Smart Agent Management (2 workflows)
    ├── Stop and Clean Smart Agent (Batched)
    └── Cleanup All Agents (Batched)

Batching Strategy

All workflows use automatic batching to support deployments at any scale.

How Batching Works

HOST LIST (1000 hosts)              BATCH_SIZE = 256

Host 001: 172.31.1.1                ┌──────────────────┐
Host 002: 172.31.1.2      ────────▶ │   BATCH 1        │
    ...                             │   Hosts 1-256    │ ───┐
Host 256: 172.31.1.256              │   Sequential     │    │
                                    └──────────────────┘    │
Host 257: 172.31.1.257              ┌──────────────────┐    │
Host 258: 172.31.1.258    ────────▶ │   BATCH 2        │    │ SEQUENTIAL
    ...                             │   Hosts 257-512  │    │ EXECUTION
Host 512: 172.31.1.512              │   Sequential     │    │
                                    └──────────────────┘    │
Host 513: 172.31.1.513              ┌──────────────────┐    │
    ...                             │   BATCH 3        │    │
Host 768: 172.31.1.768    ────────▶ │   Hosts 513-768  │ ───┘
                                    └──────────────────┘
Host 769: 172.31.1.769              ┌──────────────────┐
    ...                             │   BATCH 4        │
Host 1000: 172.31.2.232   ────────▶ │   Hosts 769-1000 │
                                    │   (232 hosts)    │
                                    └──────────────────┘

WITHIN EACH BATCH:
┌────────────────────────────────────────┐
│  All hosts deploy in PARALLEL          │
│                                        │
│  Host 1 ──┐                            │
│  Host 2 ──┤                            │
│  Host 3 ──┼─▶ Background processes (&) │
│    ...    │                            │
│  Host 256 ┘   └─▶ wait command         │
└────────────────────────────────────────┘

Why Sequential Batches?

Resource Management:

  • Prevents overwhelming the self-hosted runner
  • Each batch opens 256 parallel SSH connections
  • Sequential processing ensures stable performance

Configurable:

  • Default batch size: 256 (GitHub Actions matrix limit)
  • Adjustable via workflow input for smaller batches
  • Balance between speed and resource usage

Scaling Characteristics

Deployment Speed (default BATCH_SIZE=256):

  • 10 hosts → 1 batch → ~2 minutes
  • 100 hosts → 1 batch → ~3 minutes
  • 500 hosts → 2 batches → ~6 minutes
  • 1,000 hosts → 4 batches → ~12 minutes
  • 5,000 hosts → 20 batches → ~60 minutes

Factors affecting speed:

  • Network bandwidth (19MB package per host)
  • SSH connection overhead (~1s per host)
  • Target host CPU/disk speed
  • Runner resources (CPU/memory)

Next Steps

Now that you understand the architecture, let’s move on to setting up GitHub and configuring the self-hosted runner.

Last Modified Nov 17, 2025

GitHub Setup

10 minutes  

Prerequisites

Before you begin, ensure you have:

  • GitHub account with repository access
  • AWS VPC with Ubuntu EC2 instances
  • SSH key pair (PEM file) for authentication to target hosts
  • AppDynamics Smart Agent package
  • Target Ubuntu EC2 instances with SSH access

Fork or Clone the Repository

First, get access to the GitHub Actions lab repository:

Repository URL: https://github.com/chambear2809/github-actions-lab

# Option 1: Fork the repository via GitHub UI
# Go to https://github.com/chambear2809/github-actions-lab
# Click "Fork" button

# Option 2: Clone directly (for testing)
git clone https://github.com/chambear2809/github-actions-lab.git
cd github-actions-lab

Configure Self-hosted Runner

Your self-hosted runner must be deployed in the same AWS VPC as your target EC2 instances.

Install Runner on EC2 Instance

  1. Launch EC2 instance in your VPC (Ubuntu or Amazon Linux 2)

  2. Navigate to runner settings in your forked repository:

    Settings → Actions → Runners → New self-hosted runner
  3. SSH into the runner instance and execute installation commands:

# Create runner directory
mkdir actions-runner && cd actions-runner

# Download runner (check GitHub for latest version)
curl -o actions-runner-linux-x64-2.311.0.tar.gz -L \
  https://github.com/actions/runner/releases/download/v2.311.0/actions-runner-linux-x64-2.311.0.tar.gz

# Extract
tar xzf ./actions-runner-linux-x64-2.311.0.tar.gz

# Configure (use token from GitHub UI)
./config.sh --url https://github.com/YOUR_USERNAME/github-actions-lab --token YOUR_TOKEN

# Install as service
sudo ./svc.sh install

# Start service
sudo ./svc.sh start

Verify Runner Status

Check that the runner appears as “Idle” (green) in:

Settings → Actions → Runners
Tip

The runner must remain online and idle to pick up workflow jobs. If it shows offline, check the service status: sudo ./svc.sh status

Configure GitHub Secrets

Navigate to: Settings → Secrets and variables → Actions → Secrets

SSH Private Key Secret

This secret contains your SSH private key for accessing target hosts.

  1. Click “New repository secret”
  2. Name: SSH_PRIVATE_KEY
  3. Value: Paste the contents of your PEM file
# View your PEM file
cat /path/to/your-key.pem

Example format:

-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA...
...
-----END RSA PRIVATE KEY-----
  4. Click “Add secret”
Important

Never commit SSH keys to your repository! Always use GitHub Secrets for sensitive credentials.
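
If you use the GitHub CLI, you can load the secret directly from the PEM file instead of pasting it into the browser. A minimal sketch, assuming you are authenticated with gh and targeting your fork:

# Create or update the SSH_PRIVATE_KEY secret from the PEM file (the file itself is never committed).
gh secret set SSH_PRIVATE_KEY --repo YOUR_USERNAME/github-actions-lab < /path/to/your-key.pem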

Configure GitHub Variables

Navigate to: Settings → Secrets and variables → Actions → Variables

Deployment Hosts Variable (Required)

This variable contains the list of all target hosts where Smart Agent should be deployed.

  1. Click “New repository variable”
  2. Name: DEPLOYMENT_HOSTS
  3. Value: Enter your target host IPs (one per line)
172.31.1.243
172.31.1.48
172.31.1.5
172.31.10.20
172.31.10.21

Format Requirements:

  • One IP per line
  • No commas
  • No spaces
  • No extra characters
  • Use Unix line endings (LF, not CRLF)
  4. Click “Add variable”
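
The same can be done with the GitHub CLI, which is handy when the host list is generated by a script. A minimal sketch, assuming a local hosts.txt with one IP per line and LF line endings:

# Create or update the DEPLOYMENT_HOSTS variable from a local file.
gh variable set DEPLOYMENT_HOSTS --repo YOUR_USERNAME/github-actions-lab < hosts.txt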

Optional Variables

These variables are optional and used for Smart Agent service user/group configuration:

SMARTAGENT_USER

  1. Click “New repository variable”
  2. Name: SMARTAGENT_USER
  3. Value: e.g., appdynamics
  4. Click “Add variable”

SMARTAGENT_GROUP

  1. Click “New repository variable”
  2. Name: SMARTAGENT_GROUP
  3. Value: e.g., appdynamics
  4. Click “Add variable”

Network Configuration

For the lab setup with all EC2 instances in the same VPC and security group:

Security Group Rules

Inbound Rules:

  • SSH (port 22) from same security group (source: same SG)

Outbound Rules:

  • HTTPS (port 443) to 0.0.0.0/0 (for GitHub API access)
  • SSH (port 22) to same security group (for target access)
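
If you manage the security group with the AWS CLI, the inbound SSH rule can reference the group itself as the source. A sketch with a placeholder group ID (substitute your own):

# Allow SSH within the security group by using the same group as the traffic source.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 \
  --source-group sg-0123456789abcdef0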

Network Best Practices

  • ✅ Use private IP addresses (172.31.x.x) for DEPLOYMENT_HOSTS
  • ✅ Runner and targets in same security group
  • ✅ No public IPs needed on target hosts
  • ✅ Runner communicates via private network
  • ✅ Outbound HTTPS required for GitHub polling

Verify Configuration

Before running workflows, verify your setup:

1. Check Runner Status

  1. Go to Settings → Actions → Runners
  2. Verify runner shows as “Idle” (green)
  3. Check “Last seen” timestamp is recent

2. Test SSH Connectivity

SSH from your runner instance to a target host:

# On runner instance
ssh -i ~/.ssh/your-key.pem ubuntu@172.31.1.243

If successful, you should get a shell prompt on the target host.

3. Verify Secrets and Variables

  1. Go to Settings → Secrets and variables → Actions
  2. Confirm secrets tab shows: SSH_PRIVATE_KEY
  3. Confirm variables tab shows: DEPLOYMENT_HOSTS

4. Check Repository Access

Ensure the runner can access the repository:

# On runner instance, as the runner user
cd ~/actions-runner
./run.sh  # Test run (Ctrl+C to stop)

You should see: “Listening for Jobs”

Troubleshooting Common Issues

Runner Not Picking Up Jobs

Symptom: Workflows stay in “queued” state

Solution:

  • Check runner status: sudo systemctl status actions.runner.*
  • Restart runner: sudo ./svc.sh restart
  • Verify outbound HTTPS (443) connectivity to GitHub

SSH Connection Failures

Symptom: Workflows fail with “Permission denied” or “Connection refused”

Solution:

# Test from runner
ssh -i ~/.ssh/test-key.pem ubuntu@172.31.1.243 -o ConnectTimeout=10

# Check security group allows SSH from runner
# Verify private key matches public key on targets

Invalid Characters in Hostname

Symptom: Error: “hostname contains invalid characters”

Solution:

  • Edit DEPLOYMENT_HOSTS variable
  • Ensure no trailing spaces
  • Use Unix line endings (LF, not CRLF)
  • One IP per line, no extra characters
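
If the list was assembled on Windows or copied from a spreadsheet, a quick sanitization pass before pasting it into the variable avoids these errors. A minimal sketch:

# Strip carriage returns, trailing whitespace, and empty lines from a raw host list.
tr -d '\r' < hosts_raw.txt | sed 's/[[:space:]]*$//' | grep -v '^$' > hosts.txt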

Secrets Not Found

Symptom: Error: “Secret SSH_PRIVATE_KEY not found”

Solution:

  • Verify secret name exactly matches: SSH_PRIVATE_KEY
  • Check secret is in repository secrets (not environment secrets)
  • Ensure you have repository admin access

Security Best Practices

Follow these best practices for secure operations:

  • ✅ Use GitHub Secrets for all private keys
  • ✅ Rotate SSH keys regularly
  • ✅ Keep runner in private VPC subnet
  • ✅ Restrict runner security group to minimal access
  • ✅ Update runner software regularly
  • ✅ Enable branch protection rules
  • ✅ Use separate keys for different environments
  • ✅ Enable audit logging for repository access

Next Steps

With GitHub configured and the runner set up, you’re ready to explore the available workflows and execute your first deployment!

Last Modified Nov 17, 2025

Understanding Workflows

10 minutes  

Available Workflows

The GitHub Actions lab includes 11 workflows for complete Smart Agent lifecycle management. All workflow files are available in the repository at .github/workflows/.

Repository: https://github.com/chambear2809/github-actions-lab

Workflow Categories

1. Deployment (1 workflow)

Deploy Smart Agent (Batched)

  • File: deploy-agent-batched.yml
  • Purpose: Installs Smart Agent and starts the service
  • Features:
    • Automatic batching (default: 256 hosts per batch)
    • Configurable batch size
    • Parallel deployment within each batch
    • Sequential batch processing
  • Inputs:
    • batch_size: Number of hosts per batch (default: 256)
  • Trigger: Manual only (workflow_dispatch)

2. Agent Installation (4 workflows)

All installation workflows use smartagentctl to install specific agent types:

Install Node Agent (Batched)

  • File: install-node-batched.yml
  • Command: smartagentctl install node
  • Batched: Yes (configurable)

Install Machine Agent (Batched)

  • File: install-machine-batched.yml
  • Command: smartagentctl install machine
  • Batched: Yes (configurable)

Install DB Agent (Batched)

  • File: install-db-batched.yml
  • Command: smartagentctl install db
  • Batched: Yes (configurable)

Install Java Agent (Batched)

  • File: install-java-batched.yml
  • Command: smartagentctl install java
  • Batched: Yes (configurable)

3. Agent Uninstallation (4 workflows)

All uninstallation workflows use smartagentctl to remove specific agent types:

Uninstall Node Agent (Batched)

  • File: uninstall-node-batched.yml
  • Command: smartagentctl uninstall node
  • Batched: Yes (configurable)

Uninstall Machine Agent (Batched)

  • File: uninstall-machine-batched.yml
  • Command: smartagentctl uninstall machine
  • Batched: Yes (configurable)

Uninstall DB Agent (Batched)

  • File: uninstall-db-batched.yml
  • Command: smartagentctl uninstall db
  • Batched: Yes (configurable)

Uninstall Java Agent (Batched)

  • File: uninstall-java-batched.yml
  • Command: smartagentctl uninstall java
  • Batched: Yes (configurable)

4. Smart Agent Management (2 workflows)

Stop and Clean Smart Agent (Batched)

  • File: stop-clean-smartagent-batched.yml
  • Commands:
    • smartagentctl stop
    • smartagentctl clean
  • Purpose: Stops the Smart Agent service and purges all data
  • Batched: Yes (configurable)

Cleanup All Agents (Batched)

  • File: cleanup-appdynamics.yml
  • Command: sudo rm -rf /opt/appdynamics
  • Purpose: Completely removes the /opt/appdynamics directory
  • Batched: Yes (configurable)
  • Warning: This permanently deletes all AppDynamics components
Warning

The “Cleanup All Agents” workflow permanently deletes /opt/appdynamics. This action cannot be undone. Use with caution!

Workflow Structure

All batched workflows follow a consistent two-job structure:

Job 1: Prepare

prepare:
  runs-on: self-hosted
  outputs:
    batches: ${{ steps.create-batches.outputs.batches }}
  steps:
    - name: Load hosts and create batches
      run: |
        # Load DEPLOYMENT_HOSTS variable
        # Split into batches of N hosts
        # Output as JSON array

Purpose: Loads target hosts from GitHub variables and creates batch matrix
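
The exact script lives in the workflow file, but the batching logic reduces to integer arithmetic over the host list. A simplified sketch of what such a prepare step might do (hosts.txt and the variable names are illustrative, not the workflow's actual code):

# Split the host list into batches and expose the batch indices as a matrix output.
BATCH_SIZE=${BATCH_SIZE:-256}
mapfile -t HOSTS <<< "$DEPLOYMENT_HOSTS"                 # one IP per line
TOTAL=${#HOSTS[@]}
BATCHES=$(( (TOTAL + BATCH_SIZE - 1) / BATCH_SIZE ))     # ceiling division
# Emit e.g. batches=[0,1,2,3] so the next job gets one matrix entry per batch.
echo "batches=[$(seq -s, 0 $((BATCHES - 1)))]" >> "$GITHUB_OUTPUT"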

Job 2: Deploy/Install/Uninstall

deploy:
  needs: prepare
  runs-on: self-hosted
  strategy:
    matrix:
      batch: ${{ fromJson(needs.prepare.outputs.batches) }}
  steps:
    - name: Setup SSH key
    - name: Execute operation on all hosts in batch (parallel)

Purpose: Runs once per batch (batches are processed sequentially), executing the specific operation on all hosts within that batch in parallel

Batching Behavior

How It Works

  1. Prepare Job loads DEPLOYMENT_HOSTS and splits into batches
  2. Deploy Job creates one matrix entry per batch
  3. Batches process sequentially to avoid overwhelming the runner
  4. Within each batch, all hosts deploy in parallel using background processes
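
The parallel fan-out within a batch is plain shell job control. A minimal sketch of the pattern (the real workflow adds per-host logging and richer error collection):

# Run the same command on every host in the batch concurrently, then wait for all of them.
for HOST in "${BATCH_HOSTS[@]}"; do
  ssh -o StrictHostKeyChecking=no -i "$SSH_KEY" "ubuntu@$HOST" \
    "sudo /opt/appdynamics/appdsmartagent/smartagentctl status" \
    || echo "$HOST" >> failed_hosts.txt &
done
wait   # returns only after every background SSH job has finished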

Configurable Batch Size

All workflows accept a batch_size input (default: 256):

# Via GitHub CLI
gh workflow run "Deploy Smart Agent" -f batch_size=128

# Via GitHub UI
Actions → Select workflow → Run workflow → Set batch_size

Examples

  • 100 hosts, batch_size=256: 1 batch, ~3 minutes
  • 500 hosts, batch_size=256: 2 batches, ~6 minutes
  • 1,000 hosts, batch_size=128: 8 batches, ~16 minutes
  • 5,000 hosts, batch_size=256: 20 batches, ~60 minutes

Workflow Execution Order

Typical Deployment Sequence

  1. Deploy Smart Agent - Initial deployment
  2. Install Machine Agent - Install specific agents as needed
  3. Install DB Agent - Install database monitoring
  4. (Use other install workflows as needed)

Maintenance/Update Sequence

  1. Stop and Clean Smart Agent - Stop services and clean data
  2. Deploy Smart Agent - Redeploy with updated version
  3. Install agents again - Reinstall required agents

Complete Removal Sequence

  1. Stop and Clean Smart Agent - Stop services
  2. Cleanup All Agents - Remove /opt/appdynamics directory

Viewing Workflow Code

You can view the complete workflow YAML files in the repository:

Main deployment workflow: https://github.com/chambear2809/github-actions-lab/blob/main/.github/workflows/deploy-agent-batched.yml

All workflows: https://github.com/chambear2809/github-actions-lab/tree/main/.github/workflows

Workflow Features

Built-in Error Handling

  • Per-host error tracking
  • Failed host reporting
  • Batch-level failure handling
  • Always-executed summary

Parallel Execution

  • All hosts within a batch deploy simultaneously
  • Uses SSH background processes (&)
  • Wait command ensures all complete
  • Maximum parallelism within resource limits

Security

  • SSH keys never exposed in logs
  • Credentials bound as environment variables
  • Strict host key checking disabled for automation
  • Keys removed after workflow completion

Next Steps

Now that you understand the available workflows, let’s execute your first deployment!

Last Modified Nov 17, 2025

Running Workflows

15 minutes  

Triggering Workflows

All workflows are configured with workflow_dispatch, meaning they must be triggered manually. There are two main ways to run workflows:

  1. GitHub UI - Visual interface, easiest for most users
  2. GitHub CLI - Command-line interface, great for automation

Method 1: GitHub UI

Step 1: Navigate to Actions Tab

  1. Go to your forked repository on GitHub
  2. Click the Actions tab at the top

Step 2: Select Workflow

On the left sidebar, you’ll see all available workflows:

  • Deploy Smart Agent
  • Install Node Agent (Batched)
  • Install Machine Agent (Batched)
  • Install DB Agent (Batched)
  • Install Java Agent (Batched)
  • Uninstall Node Agent (Batched)
  • Uninstall Machine Agent (Batched)
  • Uninstall DB Agent (Batched)
  • Uninstall Java Agent (Batched)
  • Stop and Clean Smart Agent (Batched)
  • Cleanup All Agents

Click on the workflow you want to run.

Step 3: Run Workflow

  1. Click “Run workflow” button (top right)
  2. Select branch: main
  3. (Optional) Adjust batch_size if desired
  4. Click “Run workflow” button

Step 4: Monitor Execution

  1. The workflow will appear in the list below
  2. Click on the workflow run to view details
  3. Watch progress in real-time
  4. Click on job names to see detailed logs

Method 2: GitHub CLI

Install GitHub CLI

# macOS
brew install gh

# Linux
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null
sudo apt update
sudo apt install gh

Authenticate

gh auth login

Run Workflows

# Deploy Smart Agent (default batch size)
gh workflow run "Deploy Smart Agent" --repo YOUR_USERNAME/github-actions-lab

# Deploy with custom batch size
gh workflow run "Deploy Smart Agent" \
  --repo YOUR_USERNAME/github-actions-lab \
  -f batch_size=128

# Install agents
gh workflow run "Install Node Agent (Batched for Large Scale)" \
  --repo YOUR_USERNAME/github-actions-lab

gh workflow run "Install Machine Agent (Batched for Large Scale)" \
  --repo YOUR_USERNAME/github-actions-lab

# Uninstall agents
gh workflow run "Uninstall Node Agent (Batched for Large Scale)" \
  --repo YOUR_USERNAME/github-actions-lab

# Stop and clean
gh workflow run "Stop and Clean Smart Agent (Batched for Large Scale)" \
  --repo YOUR_USERNAME/github-actions-lab

# Complete cleanup
gh workflow run "Cleanup All Agents" \
  --repo YOUR_USERNAME/github-actions-lab

Monitor Workflows

# List recent workflow runs
gh run list --repo YOUR_USERNAME/github-actions-lab

# View specific run
gh run view RUN_ID --repo YOUR_USERNAME/github-actions-lab

# Watch run in progress
gh run watch RUN_ID --repo YOUR_USERNAME/github-actions-lab

# View failed logs
gh run view RUN_ID --log-failed --repo YOUR_USERNAME/github-actions-lab

First Deployment Walkthrough

Let’s walk through a complete first-time deployment:

Step 1: Verify Setup

Before running any workflows, ensure:

  • ✅ Self-hosted runner shows “Idle” (green)
  • ✅ SSH_PRIVATE_KEY secret is configured
  • ✅ DEPLOYMENT_HOSTS variable contains your target IPs
  • ✅ Network connectivity verified

Step 2: Deploy Smart Agent

Via GitHub UI:

  1. Go to Actions tab
  2. Select “Deploy Smart Agent”
  3. Click “Run workflow”
  4. Accept default batch_size (256)
  5. Click “Run workflow”

Via GitHub CLI:

gh workflow run "Deploy Smart Agent" --repo YOUR_USERNAME/github-actions-lab

Step 3: Monitor Execution

The workflow will show:

  1. Prepare job - Creating batch matrix
  2. Deploy job (one per batch) - Deploying to hosts

Click on each job to view detailed logs.

Step 4: Verify Installation

SSH into a target host and check:

# SSH to target
ssh ubuntu@172.31.1.243

# Check Smart Agent status
cd /opt/appdynamics/appdsmartagent
sudo ./smartagentctl status

Expected output:

Smart Agent is running (PID: 12345)
Service: appdsmartagent.service
Status: active (running)

Step 5: Install Additional Agents (Optional)

If needed, install specific agent types:

# Install Machine Agent
gh workflow run "Install Machine Agent (Batched for Large Scale)" \
  --repo YOUR_USERNAME/github-actions-lab

# Install DB Agent
gh workflow run "Install DB Agent (Batched for Large Scale)" \
  --repo YOUR_USERNAME/github-actions-lab

Understanding Workflow Output

Prepare Job Output

Loading deployment hosts...
Total hosts: 1000
Batch size: 256
Creating 4 batches...
Batch 1: Hosts 1-256
Batch 2: Hosts 257-512
Batch 3: Hosts 513-768
Batch 4: Hosts 769-1000

Deploy Job Output (per batch)

Processing batch 1 of 4
Deploying to 256 hosts in parallel...
Host 172.31.1.1: SUCCESS
Host 172.31.1.2: SUCCESS
Host 172.31.1.3: SUCCESS
...
Batch 1 complete: 256/256 succeeded

Completion Summary

Deployment Summary:
Total hosts: 1000
Successful: 998
Failed: 2
Failed hosts:
  - 172.31.1.48
  - 172.31.1.125

Common Deployment Scenarios

Scenario 1: Initial Deployment

# 1. Deploy Smart Agent
gh workflow run "Deploy Smart Agent"

# 2. Verify deployment
# SSH to a host and check status

# 3. Install agents as needed
gh workflow run "Install Machine Agent (Batched for Large Scale)"
gh workflow run "Install DB Agent (Batched for Large Scale)"

Scenario 2: Update Smart Agent

# 1. Stop and clean current installation
gh workflow run "Stop and Clean Smart Agent (Batched for Large Scale)"

# 2. Update Smart Agent ZIP in repository
# (Push new version to repository)

# 3. Deploy new version
gh workflow run "Deploy Smart Agent"

# 4. Reinstall agents
gh workflow run "Install Machine Agent (Batched for Large Scale)"

Scenario 3: Add New Hosts

# 1. Update DEPLOYMENT_HOSTS variable in GitHub
# Add new IPs

# 2. Deploy to all hosts (will skip existing ones with idempotent logic)
gh workflow run "Deploy Smart Agent"

Scenario 4: Complete Removal

# 1. Stop and clean
gh workflow run "Stop and Clean Smart Agent (Batched for Large Scale)"

# 2. Complete removal
gh workflow run "Cleanup All Agents"
Warning

“Cleanup All Agents” permanently deletes /opt/appdynamics. This cannot be undone!

Troubleshooting Workflow Failures

Workflow Stays in “Queued”

Symptom: Workflow doesn’t start

Cause: Runner not available or offline

Solution:

  1. Check runner status: Settings → Actions → Runners
  2. Verify runner shows “Idle” (green)
  3. Restart runner if needed: sudo ./svc.sh restart

SSH Connection Failures

Symptom: “Permission denied” or “Connection refused” errors

Solutions:

Test SSH manually:

# From runner instance
ssh -i ~/.ssh/test-key.pem ubuntu@172.31.1.243

Check security groups:

  • Verify SSH (22) allowed from runner
  • Confirm runner and targets in same security group

Verify SSH key:

  • Ensure SSH_PRIVATE_KEY secret matches actual key
  • Verify public key is on target hosts

Partial Batch Failures

Symptom: Some hosts succeed, others fail

Solution:

  1. View workflow logs to identify failed hosts
  2. SSH to failed hosts to investigate
  3. Re-run workflow (idempotent - skips successful hosts)

Batch Job Errors

Symptom: “Error splitting hosts into batches”

Solution:

  • Check DEPLOYMENT_HOSTS variable format
  • Ensure one IP per line
  • No trailing spaces or special characters
  • Use Unix line endings (LF, not CRLF)

Performance Tuning

Adjusting Batch Size

Smaller batches (fewer resources, slower):

gh workflow run "Deploy Smart Agent" -f batch_size=128

Larger batches (more resources, faster):

gh workflow run "Deploy Smart Agent" -f batch_size=256

Runner Resource Recommendations

Hosts       CPU        Memory   Batch Size
1-100       2 cores    4 GB     256
100-500     4 cores    8 GB     256
500-2000    8 cores    16 GB    256
2000+       16 cores   32 GB    256

Best Practices

  1. Test on single host first

    • Create a test variable with 1 IP
    • Run workflow to verify
    • Then deploy to full list
  2. Monitor workflow execution

    • Watch logs in real-time
    • Check for errors immediately
    • Verify on sample hosts
  3. Use appropriate batch sizes

    • Default (256) works for most cases
    • Reduce if runner struggles
    • Monitor runner resource usage
  4. Keep workflows up to date

    • Pull latest changes from repository
    • Test updates on non-production first
    • Document any customizations
  5. Maintain runner health

    • Keep runner online and idle
    • Update runner software regularly
    • Monitor disk space and resources

Next Steps

Congratulations! You’ve successfully learned how to automate AppDynamics Smart Agent deployment using GitHub Actions. For more information, visit the complete repository.