Developer Reference: Architecture and Internals¶
Overview¶
This document covers the internal architecture of the SAP LogServ TA’s filtering and deployment automation system. It is intended for developers who need to maintain, extend, or test the TA.
The filtering system provides index-time event filtering via TRANSFORMS-based queue routing. It includes a UI built on UCC (Splunk Universal Configuration Console), custom REST handlers, deployment server automation, and background scripted inputs for daily maintenance and upgrade detection.
Architecture Summary¶
+-------------------------------------------------------+
|           Splunk Web (UCC Configuration UI)           |
|             Configuration -> Filters tab              |
|  + filter_settings_hook.js (Deploy button, banners)   |
+---------------------------+---------------------------+
                            | Save
                            v
+-------------------------------------------------------------+
| splunk_ta_sap_logserv_rh_filter_settings.py (REST Handler)  |
|  - Validates patterns                                       |
|  - Saves settings to settings conf                          |
|  - Generates local/transforms.conf + local/props.conf       |
|  - Mirrors to deployment-apps/ (if DS)                      |
|  - Creates server class (if DS, first time)                 |
|  - Reloads confs via REST API                               |
+---------------------------+---------------------------------+
                            |
              +-------------+-------------+
              |                           |
              v                           v
        +------------+         +--------------------+
        |   Single   |         | Deployment Server  |
        |  Instance  |         |                    |
        |   (done)   |         | deployment-apps/   |
        +------------+         | + serverclass.conf |
                               | + Deploy button    |
                               +---------+----------+
                                         | Phone home
                                         v
                               +------------------+
                               | Heavy Forwarders |
                               | (receive TA +    |
                               |  filter configs) |
                               +------------------+
Source File Map¶
All source files live under the UCC package directory:
sap_logserv_package/splunk_ta_sap_logserv/
├── globalConfig.json # UCC UI definition (Filters tab)
├── additional_packaging.py # UCC build hook (web.conf expose)
└── package/
├── app.manifest # TA version and metadata
├── bin/
│ ├── splunk_ta_sap_logserv_filter_utils.py # Core library
│ ├── splunk_ta_sap_logserv_rh_filter_settings.py # REST handler: Filters tab
│ ├── splunk_ta_sap_logserv_rh_deployment_push.py # REST handler: Deploy button
│ ├── logserv_filter_time_refresh.py # Daily epoch cutoff refresh
│ └── logserv_filter_upgrade_check.py # Upgrade coverage check
├── default/
│ ├── transforms.conf # Sourcetype routing + @logserv_filter annotations
│ ├── props.conf # Sourcetype configs, TRANSFORMS chains
│ ├── inputs.conf # Scripted input schedules
│ └── macros.conf # Index name redirect macro
├── metadata/
│ └── default.meta # Default permissions and export settings
└── appserver/static/js/build/custom/
└── filter_settings_hook.js # UCC hook (DS detection, Deploy button)
Key Components¶
Core Library¶
splunk_ta_sap_logserv_filter_utils.py
This is the central module. All other components import from it.
Key Functions¶
| Function | Purpose |
|---|---|
| `get_app_path()` | Determines the app installation path (checks `SPLUNK_HOME`, falls back to a relative path) |
| `discover_supported_types(app_path)` | Scans `default/transforms.conf` for `@logserv_filter` annotations |
| `find_uncovered_types(supported, patterns)` | Compares supported types against the user's include patterns |
| `parse_comma_patterns(string)` | Splits a comma-separated pattern string into a list |
| `validate_patterns(patterns, field_name)` | Validates pattern syntax (`dir/subdir` format, valid characters) |
| `validate_single_pattern(pattern)` | Validates one pattern against the rules |
| `fnmatch_patterns_to_combined_regex(patterns)` | Converts a list of fnmatch patterns into a single combined regex using alternation |
| `epoch_cutoff_from_days(days_in_past)` | Computes the epoch timestamp for midnight N days ago (UTC) |
| `generate_epoch_less_than_regex(cutoff_epoch)` | Generates a digit-by-digit regex matching epoch values less than the cutoff |
| `generate_transforms_stanzas(include, exclude, days)` | Generates the filter transform stanzas with regexes |
| `generate_props_filter_lines(include, exclude, days, enabled)` | Generates the `TRANSFORMS-00-filter` line |
| `write_local_conf(app_path, conf_type, content)` | Writes between marker comments in local conf files |
| `get_ta_version(app_path)` | Reads the version from `default/app.conf` (falls back to `app.manifest` if not present) |
| `get_server_roles(session_key)` | Queries `/services/server/info` for roles |
| `is_deployment_server(session_key)` | Two-step DS detection (roles + client probe) |
| `get_deployment_apps_path()` | Returns the `etc/deployment-apps/` path for the TA |
| `ensure_deployment_app_synced(app_path)` | Performs a full app copy/upgrade to `deployment-apps/` |
| `mirror_to_deployment_apps(app_path)` | Copies `local/transforms.conf` and `local/props.conf` |
| `ensure_serverclass(session_key)` | Creates the server class and app mapping via REST + file |
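As an illustration of the annotation discovery that several components rely on, here is a minimal sketch. It is not the shipped code: the real `discover_supported_types()` takes an `app_path` and reads `default/transforms.conf` itself, and the exact shape of `ANNOTATION_PATTERN` shown here is an assumption.

```python
import re

# Assumed shape of the ANNOTATION_PATTERN constant: a comment line of the
# form "# @logserv_filter: dir/subdir[, dir/subdir, ...]"
ANNOTATION_PATTERN = re.compile(r"^\s*#\s*@logserv_filter:\s*(.+)$")

def discover_supported_types(transforms_text):
    """Collect every dir/subdir value declared via @logserv_filter annotations."""
    supported = []
    for line in transforms_text.splitlines():
        match = ANNOTATION_PATTERN.match(line)
        if match:
            # A single annotation may carry several comma-separated values.
            supported.extend(value.strip() for value in match.group(1).split(","))
    return supported
```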
Constants¶
| Constant | Value | Purpose |
|---|---|---|
| `APP_NAME` | `splunk_ta_sap_logserv` | App directory name |
| `SERVERCLASS_NAME` | `SAP_LogServ_HeavyForwarders` | Auto-created server class name |
| `SETTINGS_CONF` | `splunk_ta_sap_logserv_settings` | UCC settings conf file |
| `FILTER_STANZA` | `filter_settings` | Stanza name in the settings conf |
| `SYSTEM_MESSAGE_NAME` | `logserv_filter_upgrade_warning` | System message banner name for upgrade warnings |
| `ANNOTATION_PATTERN` | Compiled regex | Matches `@logserv_filter:` annotation comment lines |
| `VALID_SEGMENT_PATTERN` | Compiled regex | Validates characters in `dir/subdir` pattern segments |
| `FILTER_MARKER_START` / `FILTER_MARKER_END` | Marker comments | Delimit generated content in local conf files |
Filters Tab REST Handler¶
splunk_ta_sap_logserv_rh_filter_settings.py
UCC REST handler (extends AdminExternalHandler) registered in globalConfig.json. Handles the Save action:
- Validates patterns server-side (blocks the save on failure via `admin.ArgValidationException`)
- Saves settings to `splunk_ta_sap_logserv_settings.conf` (default UCC behavior)
- Generates `local/transforms.conf` and `local/props.conf` with filter stanzas
- Syncs and mirrors to `deployment-apps/` if on a DS (full app copy/upgrade + filter configs)
- Creates the server class if on a DS (first time only)
- Reloads confs via the REST API (`/configs/conf-transforms/_reload`, `/configs/conf-props/_reload`)
- Checks for uncovered types and manages the system message banner
Deploy Button REST Handler¶
splunk_ta_sap_logserv_rh_deployment_push.py
Persistent REST handler (PersistentServerConnectionApplication) registered in restmap.conf. Provides two endpoints:
- GET `/services/splunk_ta_sap_logserv/deployment_push` — Returns an `is_deployment_server` boolean and the server class status (used by the JS hook to render the UI)
- POST — Triggers a deployment reload via `/services/deployment/server/config/_reload`
Note
Persistent handlers do NOT get import_declare_test. They require explicit sys.path setup for the app’s bin/ and lib/ directories at the top of the file.
Note
Custom persistent endpoints require an [expose:] stanza in web.conf to be accessible through the Splunk Web proxy (port 8000). This is handled by additional_packaging.py during the UCC build.
UCC Hook¶
filter_settings_hook.js
JavaScript hook loaded by UCC on the Filters tab. Lifecycle methods:
- `onCreate` / `onRender` / `onEditLoad` — Calls `checkDeploymentServer()` to GET the deployment push endpoint. If a DS is detected, injects the deploy banner, server class guidance notices, and the Deploy button into the DOM
- `onSaveSuccess` — Triggers `window.location.reload()` after 500 ms to reflect server-side changes
Daily Time Refresh¶
logserv_filter_time_refresh.py
Scripted input (runs every 86400 seconds / once per day):
- Reads filter settings from the settings conf via REST
- Regenerates `local/transforms.conf` and `local/props.conf` with an updated epoch cutoff regex
- Mirrors to `deployment-apps/` if on a DS
- Reloads confs
Deployment client guard: If the instance is a deployment client (HF) but NOT a deployment server, the script skips execution entirely. This prevents HFs from overwriting filter configs pushed by the DS.
Upgrade Coverage Check¶
logserv_filter_upgrade_check.py
Scripted input (runs every 600 seconds / 10 minutes):
- Compares current TA version against last-checked version (persisted in state file)
- On a version change, discovers supported types from `@logserv_filter` annotations
- Compares them against the user's include patterns
- Creates a Splunk system message banner if uncovered types are found
Performance
The version check is ~2ms on unchanged runs. Full comparison only runs once per version change.
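The coverage comparison at the heart of this check can be sketched with `fnmatch` (a simplified stand-in for `find_uncovered_types()`; the real signature and matching rules may differ in detail):

```python
import fnmatch

def find_uncovered_types(supported_types, include_patterns):
    """Return supported dir/subdir types not matched by any include pattern.

    Both sides use fnmatch syntax, so a user pattern like 'linux/*' or the
    catch-all '*/*' covers a type directly.
    """
    return [
        t for t in supported_types
        if not any(fnmatch.fnmatch(t, p) for p in include_patterns)
    ]
```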
How Filtering Works Technically¶
Filter Chain¶
All filtering happens via a single TRANSFORMS-00-filter line in local/props.conf under the [sap_logserv_logs] stanza. The 00 prefix ensures filters run before any sourcetype routing transforms (01–99).
The filter chain is evaluated left to right:
1. `logserv_filter_include_drop` — Drops ALL events (sends them to `nullQueue`)
2. `logserv_filter_include_allow` — Rescues events matching the include patterns (sends them back to `indexQueue`)
3. `logserv_filter_time_drop` — Drops events with old timestamps (sends them to `nullQueue`)
4. `logserv_filter_exclude` — Drops events matching the exclude patterns (sends them to `nullQueue`)
This “deny-all, then allow” approach ensures only explicitly included events pass through.
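Putting the chain together, the generated line in `local/props.conf` would look roughly like this (illustrative only; the exact output comes from `generate_props_filter_lines()`):

```
[sap_logserv_logs]
TRANSFORMS-00-filter = logserv_filter_include_drop, logserv_filter_include_allow, logserv_filter_time_drop, logserv_filter_exclude
```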
Include/Exclude Regex Generation¶
User-facing fnmatch patterns (e.g., linux/*) are converted to Splunk-compatible regex that matches against the raw NDJSON event data. The include_allow regex uses lookahead assertions to match clz_dir and clz_subdir fields:
^(?=.*"clz_dir"\s*:\s*"linux")(?=.*"clz_subdir"\s*:\s*".+")
Multiple include patterns are OR’d together with |.
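The conversion can be sketched as follows. This is a simplified illustration of the idea, not the shipped `fnmatch_patterns_to_combined_regex()` — it only handles the `*` metacharacter and assumes every pattern has exactly one `/`:

```python
import re

def fnmatch_to_inner_regex(segment):
    # Minimal conversion: '*' becomes '.+', everything else is escaped literally.
    return ".+".join(re.escape(part) for part in segment.split("*"))

def include_patterns_to_regex(patterns):
    """Build the include_allow regex: per pattern, two lookaheads over the
    clz_dir and clz_subdir fields in the raw NDJSON; patterns are OR'd with '|'."""
    alternatives = []
    for pattern in patterns:
        dir_part, subdir_part = pattern.split("/", 1)
        alternatives.append(
            r'^(?=.*"clz_dir"\s*:\s*"%s")(?=.*"clz_subdir"\s*:\s*"%s")'
            % (fnmatch_to_inner_regex(dir_part), fnmatch_to_inner_regex(subdir_part))
        )
    return "|".join(alternatives)
```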
Time Filter Regex¶
The time filter matches epoch timestamps in the raw JSON "_time" field. Since TRANSFORMS cannot use dynamic expressions (unlike INGEST_EVAL which is unavailable on Splunk Cloud), the cutoff is a pre-computed regex that matches epoch values less than the cutoff. This regex must be refreshed daily to maintain accuracy.
Failure mode
If the daily refresh doesn’t run, the cutoff lags one day behind the intended window, so slightly more (older) data is let through rather than dropped. This is the safer failure direction: stale data gets indexed instead of wanted data being lost.
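The digit-by-digit construction can be sketched as below — a simplified stand-in for `generate_epoch_less_than_regex()`; the shipped version may handle anchoring and edge cases differently. The idea: a decimal value is smaller than the cutoff if it has fewer digits, or shares a prefix with the cutoff and then has a strictly smaller digit at the first point of difference.

```python
def epoch_less_than_regex(cutoff_epoch):
    """Build a regex that fully matches decimal epoch strings < cutoff_epoch."""
    digits = str(cutoff_epoch)
    alternatives = [r"\d{1,%d}" % (len(digits) - 1)]  # shorter numbers are smaller
    for i, ch in enumerate(digits):
        d = int(ch)
        if d == 0:
            continue  # no digit can be strictly smaller than 0 at this position
        tail = len(digits) - i - 1
        alt = "%s[0-%d]" % (digits[:i], d - 1)
        if tail:
            alt += r"\d{%d}" % tail  # remaining positions are unconstrained
        alternatives.append(alt)
    return "(?:%s)" % "|".join(alternatives)
```

Note the result must still be anchored (or full-matched) against the `"_time"` value; as a bare substring match it would fire on partial digit runs.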
Why TRANSFORMS Instead of INGEST_EVAL¶
INGEST_EVAL is the preferred modern approach for index-time filtering, but it is unavailable on Splunk Cloud (heavy forwarders managed by Splunk Cloud don’t support it). TRANSFORMS-based filtering works on all deployment architectures including Splunk Cloud with on-premises Heavy Forwarders.
Deployment Server Automation¶
DS Detection¶
Two-step detection in is_deployment_server():
1. Fast path — Check `server_roles` from `/services/server/info` for `deployment_server`
2. Fallback — If the role is absent, query `/services/deployment/server/clients` to check for connected deployment clients. Returns true only if at least one client is connected
Why the fallback exists
Splunk drops the deployment_server role when no serverclass.conf exists. The /services/deployment/server/config endpoint cannot be used as a fallback because it returns HTTP 200 on ALL Splunk Enterprise instances (including HFs), causing false positives.
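The two-step logic reduces to the sketch below, with the REST calls abstracted into injected callables for clarity. This is not the actual signature — the real `is_deployment_server()` takes a `session_key` and performs the REST queries itself.

```python
def is_deployment_server(get_roles, count_clients):
    """Two-step DS detection sketch.

    get_roles()     -> list of roles from /services/server/info
    count_clients() -> connected client count from /services/deployment/server/clients
    """
    if "deployment_server" in get_roles():
        return True  # fast path: the role is present
    # Fallback: Splunk drops the role when no serverclass.conf exists,
    # so treat the instance as a DS only if clients are actually connected.
    return count_clients() > 0
```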
Server Class Creation¶
ensure_serverclass() creates SAP_LogServ_HeavyForwarders in three steps:
1. REST API create — POST to `/services/deployment/server/serverclasses` with the `name` parameter to create the server class. The `disabled` parameter is NOT supported during creation (Splunk returns an error).
2. REST API disable — A separate POST to `/services/deployment/server/serverclasses/SAP_LogServ_HeavyForwarders` with `disabled=true` disables the server class. This ensures no deployment occurs until the admin configures client targeting and enables the server class in Forwarder Management.
3. File-based app mapping — After REST creates and disables the server class, the function locates the resulting `serverclass.conf` (which Splunk may write to `system/local/` or `apps/search/local/`) and appends the app mapping stanza:
[serverClass:SAP_LogServ_HeavyForwarders:app:splunk_ta_sap_logserv]
restartSplunkd = true
stateOnClient = enabled
Note
The app mapping cannot be created via REST API — the URL path format serverclasses/{name}/app:{appname} returns HTTP 404.
Deployment Client Guard¶
The logserv_filter_time_refresh.py scripted input checks whether the instance is a deployment client (but not a DS). If so, it skips execution to prevent overwriting filter configs that were pushed by the DS. Without this guard, the time refresh script would read the HF’s empty local settings, regenerate configs with only a time filter, and overwrite the complete filter chain.
Adding Support for New Sourcetypes¶
Step 1: Add a Routing Transform¶
Add the @logserv_filter annotation on the line immediately above the stanza header in default/transforms.conf:
# @logserv_filter: newdir/newsubdir
[set_srctype_for_new_logtype]
REGEX = "clz_subdir":"newsubdir"
FORMAT = sourcetype::sap:newlogtype
DEST_KEY = MetaData:Sourcetype
The annotation is critical. It declares which clz_dir/clz_subdir values the transform handles. Without it, the upgrade check cannot detect that a new log type is supported, and users won’t be notified.
Multiple values can be comma-separated:
# @logserv_filter: newdir/type_a, newdir/type_b, newdir/type_c
[set_srctype_for_new_dir]
REGEX = "clz_subdir":"(type_a|type_b|type_c)"
FORMAT = sourcetype::sap:newdir:logs
DEST_KEY = MetaData:Sourcetype
When the same clz_subdir value exists under multiple clz_dir paths (e.g., audit appears under both abap/ and scc/), use a compound lookahead regex to match both fields and avoid routing collisions:
# @logserv_filter: scc/audit
[set_srctype_scc_audit]
REGEX = (?=.*"clz_dir":"scc")(?=.*"clz_subdir":"audit")
FORMAT = sourcetype::sap:scc:audit
DEST_KEY = MetaData:Sourcetype
When to use compound lookahead: Check if the clz_subdir you are adding already exists in another clz_dir path. If it does, both the existing and new transforms must use compound lookahead. Current collision-prone values include audit (abap, scc), sapstartsrv (abap, sap), and tracelogs (hana, scc).
Step 2: Add to TRANSFORMS Chain¶
Add your new transform to the appropriate TRANSFORMS-* line in the [sap_logserv_logs] stanza of default/props.conf:
[sap_logserv_logs]
...
TRANSFORMS-07-srctype_for_newdir = set_srctype_for_new_logtype
Use a number between 01 and 98 for the TRANSFORMS prefix. 00 is reserved for filters and 99 is reserved for set_raw_only.
Step 3: Add Sourcetype Configuration¶
If the new sourcetype needs field extractions, calculated fields, or CIM field aliases, add a [sap:newlogtype] stanza to default/props.conf.
Step 4: Bump the Version¶
Update the version in package/app.manifest (and globalConfig.json if applicable). This triggers the upgrade check to compare the new annotations against existing user include patterns.
What Happens on Upgrade
- The `logserv_filter_upgrade_check.py` scripted input detects the version change within 10 minutes
- It scans the updated `default/transforms.conf` for `@logserv_filter` annotations
- If the user's include patterns don't cover the new log types, a system message banner appears across all Splunk Web pages
- The user updates their include patterns (or uses `*/*`) and the banner clears
Testing Environments¶
Environment 1: Single Instance (Standalone)¶
Setup: One Splunk Enterprise instance acting as Search Head, Indexer, and data receiver.
What to test
- Filter save generates correct `local/transforms.conf` and `local/props.conf`
- Conf reload works without a restart
- Include, exclude, and time filters work correctly in search results
- Disabling filtering clears the generated conf files
- Pattern validation blocks invalid input
- No deployment server UI elements appear (no banner, no deploy button)
Environment 2: Deployment Server + Heavy Forwarders¶
Setup: Three instances minimum: DS (can be combined with SH/Indexer), HF-01 (deployment client), HF-02 (deployment client).
What to test
- DS detection works (banner and deploy button appear)
- Filter save triggers a full app copy to `deployment-apps/`
- Filter configs are mirrored to the `local/` directory under `deployment-apps/`
- Server class `SAP_LogServ_HeavyForwarders` is auto-created with the app mapping
- Client targeting (IP-based) matches the HFs correctly
- Deploy button triggers reload and HFs receive the TA
- Filter update round-trip: change on DS → deploy → verify on HFs
- Time refresh script skips on HFs (deployment client guard)
- Time refresh script runs correctly on DS and mirrors updated configs
Environment 3: Splunk Cloud + On-Premises Heavy Forwarders¶
Setup: Splunk Cloud instance (Search Head / Indexer), separate on-premises DS, on-premises HFs configured as deployment clients.
What to test
- Same as Environment 2, but additionally verify:
- The TA is installed on the Splunk Cloud instance for dashboards and search-time knowledge objects, but filtering is NOT configured there
- Filter configuration and TA distribution to HFs are managed entirely from the on-premises DS
- Filtering works at the HF level before data reaches Splunk Cloud
- No INGEST_EVAL dependency (TRANSFORMS-only filtering)
Environment 4: Fresh DS with No Prior Server Classes¶
Setup: DS with deployment clients connected but no serverclass.conf exists.
What to test
- DS detection works via the fallback (connected clients check)
- `ensure_serverclass()` creates `serverclass.conf` from scratch
- The `deployment_server` role activates after server class creation
- Full deployment workflow completes successfully
Known Gotchas and Technical Notes¶
Local Imports in Utility Functions¶
Functions in splunk_ta_sap_logserv_filter_utils.py use local imports (inside the function body) for Splunk-specific modules like splunk.rest and json. This is intentional — these modules are unavailable during the UCC build step, so importing them at the module level would break the build.
The critical detail is that local imports are scoped to the function they are declared in. They do NOT carry into sibling functions. Each function that needs splunk.rest or json must import them independently. For example, both get_server_roles() and is_deployment_server() need their own local import splunk.rest as rest and import json statements — the imports inside get_server_roles() are not visible to is_deployment_server() even though one calls the other.
Forgetting a local import inside a function that has a bare except Exception: pass block is particularly dangerous because the resulting NameError is silently swallowed, causing the function to return a default value without any log output.
Persistent Handler sys.path¶
Persistent REST handlers (splunk_ta_sap_logserv_rh_deployment_push.py) do NOT get import_declare_test from UCC. They require explicit sys.path setup at the top of the file to import from the app’s bin/ and lib/ directories. Without this, the handler returns 500 errors from splunkd.
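A typical bootstrap looks like the sketch below. The helper name and exact layout are illustrative; the shipped handler may inline this at the top of the module instead.

```python
import os
import sys

def bootstrap_app_paths(handler_file):
    """Prepend the app's bin/ and lib/ directories to sys.path.

    Persistent REST handlers are not wrapped by UCC's import_declare_test,
    so without this, imports of bundled libraries fail and splunkd returns
    500 errors from the endpoint.
    """
    bin_dir = os.path.dirname(os.path.abspath(handler_file))
    lib_dir = os.path.join(os.path.dirname(bin_dir), "lib")
    for path in (lib_dir, bin_dir):  # insert lib/ first so bin/ ends up ahead of it
        if path not in sys.path:
            sys.path.insert(0, path)
    return bin_dir, lib_dir

# Typical usage at the very top of the handler module, before other imports:
# bootstrap_app_paths(__file__)
```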
web.conf Expose Stanza¶
Custom persistent endpoints are not accessible through the Splunk Web proxy (port 8000) by default. They require an [expose:] stanza in web.conf. This is injected by additional_packaging.py during the UCC build:
[expose:splunk_ta_sap_logserv_deployment_push]
pattern = splunk_ta_sap_logserv/deployment_push
methods = POST, GET
Server Class REST API Limitations¶
- The `disabled` parameter is NOT accepted during server class creation. Create first, then POST to the specific server class URL to disable
- App mappings (`:app:appname` stanzas) CANNOT be created via the REST API; the URL format returns 404. Use a file-based append instead
- Splunk may write `serverclass.conf` to different locations (`system/local/` or `apps/search/local/`). The code searches multiple locations
Deployment Client Config Overwrite¶
Without the deployment client guard in logserv_filter_time_refresh.py, HFs would regenerate filter configs from their own (empty) local settings on the daily time refresh run. This overwrites the complete filter chain pushed by the DS with only a time filter. The guard checks for the deployment_client role and skips execution if found.
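The guard reduces to a simple role check, sketched here (the function name is illustrative, not the one in the script):

```python
def should_skip_time_refresh(server_roles):
    """Deployment client guard: skip the daily refresh on instances that are
    deployment clients but not themselves a deployment server."""
    roles = set(server_roles)
    return "deployment_client" in roles and "deployment_server" not in roles
```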
TRANSFORMS-00-filter Ordering¶
The filter TRANSFORMS line MUST use prefix 00 to ensure it runs before sourcetype routing (01–99). If filters ran after routing, events would already have their sourcetype set but would then be dropped, wasting processing.
Marker Comments¶
Generated filter content in local/transforms.conf and local/props.conf is wrapped in marker comments:
### BEGIN LOGSERV FILTER CONFIG - DO NOT EDIT MANUALLY ###
...
### END LOGSERV FILTER CONFIG ###
The write_local_conf() function replaces content between these markers (or appends them if not present). Manual customizations outside the markers are preserved.
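The replace-between-markers behavior can be sketched as below. This is a simplification: the real `write_local_conf()` also resolves the target file from `app_path` and `conf_type` and performs the I/O itself.

```python
START = "### BEGIN LOGSERV FILTER CONFIG - DO NOT EDIT MANUALLY ###"
END = "### END LOGSERV FILTER CONFIG ###"

def replace_between_markers(existing, generated):
    """Replace the marker-delimited block, or append one if no markers exist.

    Everything outside the markers (manual customizations) is preserved.
    """
    block = "%s\n%s\n%s" % (START, generated, END)
    if START in existing and END in existing:
        head, _, rest = existing.partition(START)
        _, _, tail = rest.partition(END)  # drop the old generated content
        return head + block + tail
    separator = "" if not existing or existing.endswith("\n") else "\n"
    return existing + separator + block + "\n"
```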
DS Role Disappears Without Server Classes¶
Splunk removes the deployment_server role from server_roles when no server class is defined in any serverclass.conf. The is_deployment_server() fallback handles this by checking for connected deployment clients via the /services/deployment/server/clients endpoint. This endpoint returns 0 clients on HFs (no false positives) and returns connected clients on a real DS even without any server classes.
The /services/deployment/server/config endpoint CANNOT be used as a fallback because it returns HTTP 200 on ALL Splunk Enterprise instances, including HFs.