Discover the Power of SendResults: A Life-Changing Splunk Command and Alert Action

Are you tired of hardcoding email addresses into your searches and alerts? Do you want a more dynamic way to send search results to recipients based on the data within those results? Look no further than SendResults, a powerful Splunk command and alert action developed by Discovered Intelligence.

Read more

ChatGPT and SPL: A Dynamic Duo for Learning Splunk’s Query Language

If you haven’t heard of ChatGPT yet, you have likely blocked notifications on social networks like LinkedIn, Twitter or Reddit, as everyone is talking about the benefits (and concerns) of artificial intelligence. However, it’s ChatGPT that gets the lion’s share of the limelight in this story.

Read more

Wiring up the Splunk OpenTelemetry Collector for Kubernetes

Organizations of all sizes are building, migrating and refactoring their software to be cloud-native. Applications are broken down into microservices and deployed as containers. Consequently, there has been a seismic shift in the complexity of application components, thanks to the intricate network of microservices calling each other. Traditional “monitoring” no longer makes sense for them, especially because containers are ephemeral in nature and are treated as cattle rather than pets.

Read more

What to Consider When Creating Splunk Workload Management Rule Conditions


Workload management is a powerful Splunk Enterprise feature that lets users allocate CPU and memory resources to various Splunk workloads based on their preferences. As Splunk continues to develop new attributes for defining rules, the number of Splunk users enabling workload management in their environment is gradually increasing.

Read more

Save Time and Improve your Security Posture with Splunk Attack Range

The security posture of organizations remains, year after year, one of the most important factors in defining internal strategies.

“Global cyberattacks increased by 38% in 2022 compared to 2021, with an average of 1168 weekly attacks per organization”

~ Check Point Research

The quote from Check Point Research above illustrates where cybersecurity is headed and the challenges that organizations must face. However, anticipating attacks and preparing system defenses to evade and mitigate them is not an easy task. From defining incident response strategies to preparing work teams and configuring monitoring systems, it can all be a challenge.

Detecting and mitigating security attacks may not be your core business, but it is essential to achieving your objectives. Have you ever wondered how you can simulate attacks and detections within a controlled environment to validate the configuration of your detection systems without spending part of your annual security budget? Read on and discover Splunk Attack Range.

What is Splunk Attack Range?

Splunk Attack Range is a tool developed by the Splunk Threat Research Team (STRT) to simulate cyber attacks in a controlled environment for the purpose of improving an organization’s security posture. It allows security teams to test and validate their detection and response capabilities against a wide range of attack scenarios and techniques, such as phishing, malware infections, lateral movement, and data exfiltration.

Splunk Attack Range is designed to work with Splunk Enterprise Security, a security information and event management (SIEM) solution, and includes pre-built attack scenarios aligned with the MITRE ATT&CK framework. These scenarios can be customized to simulate the specific threats and vulnerabilities that are relevant to an organization’s environment.


Where can I get Attack Range?

The STRT and the Splunk community maintain the project on GitHub.

Is Splunk Attack Range Easy to Deploy?

Yes, it is really straightforward! You can deploy it locally (if you have a powerful machine), on Azure, or on AWS. Internally, we use our AWS environment: with a few simple steps, Terraform and Ansible automatically deploy a complete test lab in a matter of minutes, which we use to validate our customers’ security configurations and optimize their security posture with Splunk’s real-time monitoring. This process allows for a proactive approach to managing security posture with Splunk and saves a lot of time for your Blue Team.
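As a rough illustration, the typical AWS workflow looks something like the sketch below. The command names come from the project’s README and may differ between Attack Range versions, so treat this as a guide rather than a definitive procedure (the README documents the exact dependency setup, for example with Poetry, and the simulation options):

# clone the project and change into it
git clone https://github.com/splunk/attack_range.git && cd attack_range

# interactive wizard: choose the provider (AWS here), region and lab components
python attack_range.py configure

# Terraform and Ansible provision the lab: Splunk server, target machines, attack tooling
python attack_range.py build

# run attack simulations against the lab, then tear everything down when finished
python attack_range.py simulate
python attack_range.py destroy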

…and now?

Have fun! By combining our Splunk expertise with these kinds of automation tools, we have been able to speed up our internal testing processes, stay agile and secure with Splunk’s security posture management capabilities, and transfer this knowledge and these configurations to our customers’ cybersecurity teams.

We strongly encourage you to try this tool. Check out an overview of v1.0, v2.0 and v3.0 in the Splunk blog.


Looking to expedite your success with Splunk Attack Range? Click here to view our Splunk Professional Service offerings.


© Discovered Intelligence Inc., 2023. Unauthorised use and/or duplication of this material without express and written permission from this site’s owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Discovered Intelligence, with appropriate and specific direction (i.e. a linked URL) to this original content.

Help Getting Started with Cribl Stream


Once you have embraced and grasped the power of Cribl Stream, “Reduce! Simplify!” will become your new mantra.

Here we list some of the best Cribl Stream resources available to get you started. Most of these resources are completely free, so money is not an obstacle when beginning your Cribl Stream journey. Keep reading and start learning today!

Read more

Splunk Deployment Server: The Manager of Managers

Deploying apps to forwarders using the Deployment Server is a pretty commonplace use case and is well documented in Splunk Docs. However, it is possible to take this a step further and use it to distribute apps to the staging directories of management components, such as a cluster manager or a search head cluster deployer, from where the apps can then be pushed out to clustered indexers or search heads. A minimal sketch of the idea is shown below.
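One way this can work is to make the management component a deployment client whose repository location points at its staging directory rather than etc/apps. The stanza below is a hedged sketch of a deploymentclient.conf on a cluster manager; the deployment server address is a placeholder, and the staging path is master-apps on older versions (manager-apps on newer ones):

[deployment-client]
# land deployed apps in the cluster manager's staging directory instead of etc/apps
repositoryLocation = $SPLUNK_HOME/etc/master-apps

[target-broker:deploymentServer]
# placeholder deployment server host and management port
targetUri = ds.example.com:8089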

Read more

Get Excited About The Splunk Cloud ACS CLI

Splunk Cloud Admins rejoice! The Splunk Cloud ACS Command Line Interface is here! Originally, the Splunk Cloud Admin Config Service (ACS) was released in January 2021 to provide various self-service features for Splunk Cloud Admins. It was released as an API-based service that can be used for configuring IP allow lists, configuring outbound ports, managing HEC tokens, and much more, all of which is detailed in the Splunk ACS Documentation.

To our excitement, Splunk has recently released a CLI version of ACS. The ACS CLI is much easier to use and less error-prone compared to the complex curl commands or Postman setup one has had to deal with to date. One big advantage we see with the ACS CLI is how it can be used in a scripted approach or within a deployment CI/CD pipeline to handle application management and index management.
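For example, a pipeline step might package and push a private app non-interactively. The sketch below is illustrative only: the app name, path, and credential variables are placeholders, and the flags are the same ones used in the installation examples later in this post:

# package the private app and install it without an interactive credential prompt
tar -czf company_test_app.tgz company_test_app/
acs apps install private --acs-legal-ack Y \
    --app-package ./company_test_app.tgz \
    --username "$SPLUNK_USER" --password "$SPLUNK_PASSWORD"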

We would recommend that you first refer to the ACS Compatibility Matrix to understand what features are available to the Classic and Victoria experience Splunk Cloud platforms.

ACS CLI Setup Requirements

Before you get started with the ACS CLI there are a few requirements to be aware of:

  • You must have the sc_admin role to be able to leverage the ACS CLI.
  • You must be running a Mac or Linux operating system. However, if you are a Windows user you can use the Windows Subsystem for Linux (WSL), or any Linux VM running on Windows, to install and use the ACS CLI.
  • The Splunk Cloud stack you are interacting with must be running version 8.2.2109 or higher to use the ACS CLI. To use the application management functions, your Splunk Cloud version must be 8.2.2112 or higher.

Please refer to the Splunk ACS CLI documentation for further information regarding the requirements and the setup process.

ACS CLI Logging

At the time of authoring this blog, logging and auditing of interactions through the Splunk Cloud ACS is not readily available to customers. However, when using the ACS CLI, a local log is created on the system where it is run. It is recommended that any administrators given access to the ACS CLI have the log file listed below collected and forwarded to their Splunk Cloud stack. This log file can be collected using the Splunk Universal Forwarder, or another mechanism, to create an audit trail of activities.

  • Linux: $HOME/.acs/logs/acs.log
  • Mac: $HOME/Library/Logs/acs/acs.log

The acs.log allows an administrator to understand what operations were run, the request IDs, status codes, and much more. We will keep an eye out for Splunk adding to the logging and auditing functionality, not just in the ACS CLI but in ACS as a whole, and will provide a future blog post on the topic when available.
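As an illustration, a minimal inputs.conf monitor stanza on a Linux administrator host might look like the following. The index and sourcetype names are placeholders, and the wildcarded path assumes the default log location noted above:

[monitor:///home/*/.acs/logs/acs.log]
# placeholder index and sourcetype for the ACS CLI audit trail
index = acs_audit
sourcetype = acs:cli:log
disabled = false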

Interacting With The ACS CLI

Below are examples of common interactions an administrator might have with Splunk Cloud, now performed by leveraging the Splunk Cloud ACS CLI. There are many more self-service features supported by the ACS CLI; details of the supported features and CLI operations are available in the Splunk Cloud ACS CLI documentation.

Application Management

One of the most exciting features of the ACS CLI is the ability to control all aspects of application management. That means, using the ACS CLI you can install both private applications and Splunkbase applications.

The commands are straightforward and easy to understand. For both private and Splunkbase applications, the ACS CLI supports commands to install, uninstall, and describe applications within your environment, as well as a list command to return a complete list of all installed applications with their configurations. Specific to Splunkbase applications, there is an update command which allows you to, you guessed it, update the application to the latest published version.

For both private and Splunkbase apps, running a command will prompt you to enter your splunk.com credentials. You can pass the --username and --password parameters along with the command to avoid the prompt. For private apps, these credentials are used to authenticate to AppInspect for application vetting.

Application Management: Installing a Private App

Let’s look at how we use the ACS CLI to install a private application. The following command will install a private app named company_test_app:

acs apps install private --acs-legal-ack Y --app-package /tmp/company_test_app.tgz

When a private app is installed using the ACS CLI, it is automatically submitted to AppInspect for vetting. A successful execution of the command will result in the following response, which you will note includes the AppInspect summary:

Submitted app for inspection (requestId='*******-****-****-****-************')
Waiting for inspection to finish...
processing..
success
Vetting completed, summary:
{
    "error": 0,
    "failure": 0,
    "skipped": 0,
    "manual_check": 0,
    "not_applicable": 56,
    "warning": 1,
    "success": 161
}
Vetting successful
Installing the app...
{
    "appID": "company_test_app",
    "label": "Company Test App",
    "name": "company_test_app",
    "status": "installed",
    "version": "1.0.0"
}
Application Management: Installing a Splunkbase Application

Let’s now look at an example of installing a Splunkbase application by running a command to install the Config Quest application:

acs apps install splunkbase --splunkbase-id 3696 --acs-licensing-ack http://creativecommons.org/licenses/by/3.0/

The licensing URL passed as a parameter in the command above can be found in the application details on Splunkbase. Alternatively, the licensing URL can be retrieved from the Splunkbase API with a curl command:

curl -s --location --request GET 'https://splunkbase.splunk.com/api/v1/app/3696' --header 'Content-Type: text/plain' | jq .license_url

Finally, a successful execution of the command will result in the following response:

Installing the app...
{
    "appID": "config_quest",
    "label": "Config Quest",
    "name": "config_quest",
    "splunkbaseID": "3696",
    "status": "installed",
    "version": "3.0.2"
}
Index Management

Index management using the ACS CLI supports a wide range of functionality. The supported commands allow you to create, update, delete and describe an index within your environment as well as a list command to return a list of all of the existing indexes, with their configurations.

Let’s now look at how to run one of these commands by creating a metrics index with a 90-day searchable retention period. Note that ACS supports creating either event or metrics indexes; however, it does not yet support configuring DDAA or DDSS.

acs indexes create --name scratch_01 --data-type metric --searchable-days 90

Finally, a successful execution of the command will return the following JSON response:

{
    "name": "scratch_01",

    "datatype": "metric",
    "searchableDays": 90,
    "maxDataSizeMB": 0,
    "totalEventCount": "0",
    "totalRawSizeMB": "0"
}
HEC Token Management

Managing HTTP Event Collector (HEC) tokens just got really easy. The ACS CLI supports commands to create, update, delete, and describe a HEC token within your environment, as well as a list command to return a list of all of the existing HEC tokens with their configurations.

Let’s now run one of these commands by creating a HEC token in Splunk Cloud quickly and easily:

acs hec-token create --name test_token --default-index main --default-source-type test

A successful execution of the command provides the token value in the JSON response:

{
    "http-event-collector": {
        "spec": {
            "allowedIndexes": null,
            "defaultHost": "************.splunkcloud.com",
            "defaultIndex": "main",
            "defaultSource": "",
            "defaultSourcetype": "test",
            "disabled": false,
            "name": "test_token",
            "useAck": false        },
        "token": "**********************"
    }
}

Looking to expedite your success with Splunk? Click here to view our Splunk Professional Service offerings.

© Discovered Intelligence Inc., 2022. Unauthorized use and/or duplication of this material without express and written permission from this site’s owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Discovered Intelligence, with appropriate and specific direction (i.e. a linked URL) to this original content.

Moving bits around: Automate Deployment Server Administration with GitHub Actions

Planning a sequel to the blog “Moving bits around: Deploying Splunk Apps with Github Actions” led me to an interesting experiment: what if we could manage and automate the deployment server the same way, without having to log on to the server at all? After all, the deployment server is just a bunch of app directories and a serverclass.conf file.

Read more

Reducing Outlier Noise in Splunk

This blog is a continuation of the blog “Using Density Function for Advanced Outlier Detection“. Given the unique but complementary topics of the previous blog and the present one, we decided to separate them. This blog describes a single approach to dealing with excess noise in outlier detection use cases. While multiple methods of reducing noise exist, this is one that has worked (at least in my experience) on multiple projects throughout the Splunk-verse to reduce outlier noise.

Multi-Tier Approach to Reducing Noise

Adding to the plethora of existing noise reduction techniques in the alert management space, we’ve used a multi-tiered approach to find outliers at the entity, system, and organization level. Once implemented, we can correlate outliers at each tier to answer one of the biggest questions in outlier detection: ‘Was this timeframe a true outlier?’ In this section we will discuss the theory of reducing outliers, with some visual aids to explain the concept.

There are three tiers we can generally look at when investigating an outlier use case. In my opinion, these tiers can be classified as entity-level, system-level, and aggregate-level. In each of these tiers, we can use the density function, or other methods such as LocalOutlierFactor, moving averages, and quartile ranges, to find timeframes that stood out. Once the timeframes have been detected, we correlate across the tiers to determine when the outlier occurred.

For clarity, the visual below shows what a three-tier approach might look like. From the ground up, we start by looking at outliers at the entity level; at the second stage, we look at a group that identifies a collection of entities. These collections of entities could be AD groups, business units, network zones, and much more.

This shape does not have to be a pyramid, but represents the general number of outliers at each tier
Hierarchy of the Multi-Tier Approach
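As an illustrative sketch of the entity tier, a search like the one below could flag 30-minute windows where a user’s failed-login count looks unusual. The index, field names, and model name are placeholders, and it assumes the MLTK DensityFunction algorithm discussed in the previous blog; your own tier searches will differ.

index=auth action=failure
| bin _time span=30m
| stats count as failed_logins by _time, user
| fit DensityFunction failed_logins by "user" threshold=0.01 into entity_login_outliers
| rename "IsOutlier(failed_logins)" as is_outlier
| where is_outlier=1

The equivalent system-level and aggregate-level searches would follow the same shape, only with different split-by fields.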

Combining Outliers in a Multi-Tier Approach

After determining the outlier method at each tier, our next step is to correlate and combine the outliers. It’s important in the planning phase to find a common field across all tiers; I would recommend using “_time” in 15 or 30 minute buckets as the common field. Our outlier detection process will end up looking similar to the visual below, where each tier has its own search running and outputs a list of outliers based on ‘_time’ as the common field. The split-by fields can be different at each tier, which will allow us to find out which entity, as part of a system or aggregate group, was marked as an outlier at a certain time.

If any user, group or count is an outlier, we will assign that time bucket a score of 1.
Multi-tier Outlier Process

After running the outlier detection searches, we can prioritize outliers based on a tally or ranking system. Observe the tables on the right side of the picture above. Each timeframe is marked as 1 or 0 depending on whether it was detected as an outlier. ML algorithms automatically assign is_outlier a value of 1; for other methods, we may have to set the value manually. Let’s add up the outlier counts for each timeframe.

Timeframe              outlier_count
11-02-2022 02:00:00    3 (high priority)
11-02-2022 17:40:00    2 (mid or low priority)
01-02-2022 13:30:00    0 (not an outlier)

Total Count of Outliers

Adding up the outlier counts for each timeframe across the tiers gives us an idea of where to focus. Timeframes with the maximum 3 out of 3 outliers should take precedence in our investigations over timeframes with only 2 out of the possible 3.
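A hedged sketch of this scoring step: assuming each tier’s search writes its outliers to a lookup keyed by a 30-minute _time bucket with an is_outlier field (the lookup names below are placeholders), the tallies can be combined and prioritized like this:

| inputlookup entity_outliers.csv
| append [| inputlookup system_outliers.csv]
| append [| inputlookup aggregate_outliers.csv]
| bin _time span=30m
| stats sum(is_outlier) as outlier_count by _time
| eval priority=case(outlier_count=3, "high", outlier_count=2, "medium", outlier_count=1, "low", true(), "not an outlier")
| sort - outlier_count

The output mirrors the table above: one row per time bucket with its total outlier count and a priority label for investigation.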

Conclusion

In the field, I’ve encountered many areas where we have needed to adjust the thresholds and also find a way to reduce or analyze the outlier results. In doing so, a multi-tier approach has worked in some of the following specific scenarios:

  • Multi-tier data is available
  • Adjusting a single outlier function (such as the density function) captures too much or too little
  • Investigating an outlier leads to correlating whether another feature or data source had outliers at a specific time

This can be complex to set up; however, once set up, it is a repeatable process that can be applied to many use cases that rely on outlier or anomaly detection.