Introducing the benefits and features of Cribl Lake

April marked the beginning of a new era for Cribl with the introduction of Cribl Lake, which brings Cribl’s suite of products full circle in the realm of data management. In this post we dive a bit deeper into some of the benefits and features of Cribl Lake.

Read more

Deploying Cribl Workers in AWS ECS for Data Replay

Cribl Stream provides a flexible way of storing full-fidelity raw data into low-cost storage solutions like AWS S3 while sending a reduced/filtered/summarized version into Analytical Platforms for cost-effectiveness. In this blog post, I’ll walk you through setting up Cribl workers on AWS ECS and implementing dynamic auto scaling for seamless scale-out and scale-in as the demand fluctuates.

Read more

Dynamically Validate Splunk XML Dashboard Inputs

Ever felt the need to restrict Splunk searches on your XML dashboards based on certain criteria? While input types such as a dropdown or multiselect provide a controlled means of presenting a list of values to choose from, a text input is often necessary to allow the user to manually enter a series of characters. For example, entering a specific src and/or dest IP address to review communication from and/or to that IP address over a period of time.

Read more

Deployment Server Clustering – An Easier Way to Manage Splunk Forwarders

Are you having trouble managing thousands and thousands of Splunk forwarders? Until now, many organizations have typically used numerous deployment servers in a scaling configuration, to manage a massive number of forwarders.

However, to make life easier, Splunk has introduced a new feature called ‘Deployment Server Clustering’, available as of Splunk version 9.2.

Read more

Simplifying SPL: A Beginner’s Guide to the Splunk AI Assistant

In today’s data-driven world, mastering the Splunk Search Processing Language (SPL) is essential for effective data analysis. However, for beginners, SPL can seem like a daunting language to learn. Enter the Splunk AI Assistant – a revolutionary tool designed to make SPL accessible to users of all levels of expertise.

Read more

Building a Unified View: Integrating Google Cloud Platform Events with Splunk

By: Carlos Moreno Buitrago and Anoop Ramachandran

In this blog we will walk through the processes and options available for collecting GCP events and show how to bring them into Splunk. As an optional step, we will also add an integration with Cribl to facilitate and optimize the ingestion process. By the end, you will have a solid understanding of the available options, depending on the conditions of the project or team you work in.

Read more

Wiring up the Splunk OpenTelemetry Collector for Kubernetes

Organizations of all sizes are building / migrating / refactoring their software to be cloud-native. Applications are broken down into microservices and deployed as containers. Consequently there has been a seismic shift in the complexity of application components thanks to the intricate network of microservices calling each other. The traditional sense of “monitoring” them no longer makes sense, especially because containers are ephemeral in nature and are treated as cattle, instead of as pets.

Read more

Interesting Splunk MLTK Features for Machine Learning (ML) Development

The Splunk Machine Learning Toolkit is packed with machine learning algorithms, new visualizations, a web assistant and much more. This blog sheds light on some lesser-known features and commands in the Splunk Machine Learning Toolkit (MLTK) and core Splunk Enterprise that will assist you in various steps of your model creation and development. With each new release of Splunk or the Splunk MLTK, a catalog of new commands becomes available. In this blog I highlight commands that have helped in various data science and analytical use cases.

Read more

Quick Guide to Outlier Detection in Splunk

There are many methods of outlier detection. In this blog I will highlight a few common and simple methods that do not require the Splunk MLTK (Machine Learning Toolkit), and discuss visuals (which do require the MLTK) that will complement the presentation of outliers in any scenario. This blog covers the widely accepted approach of using averages and standard deviation for outlier detection. The visual side of that approach is then elevated by comparing the timeline visual against the custom Outliers Chart and the custom PunchCard visual.

Some Key Concepts

Understanding some key concepts is essential to any outlier detection framework. Before we jump into Splunk SPL (Search Processing Language), there are a few need-to-know math terms and definitions to highlight:

  • Outlier Detection Definition:  Outlier detection is a method of finding events or data that are different from the norm.
  • Average: The central value in a set of data.
  • Standard Deviation: A measure of the spread of data. The higher the standard deviation, the larger the difference between data points. We will use the concept of standard deviation substantially in today’s blog. To view the manual method of standard deviation calculation click here.
  • Time Series: Data ingested at regular intervals of time. Data ingested into Splunk with a timestamp, and with the correct ‘props.conf’, can be considered time series data.

Additionally, we will leverage aggregation and statistical Splunk commands in this blog. The four important commands to remember are listed below, followed by a short example that combines them:

  • Bin: The ‘bin’ command puts numeric values (including time) into buckets. The ‘timechart’ and ‘chart’ commands use the bin command under the hood.
  • Eventstats: Generates statistics (such as avg, max, etc.) and adds them to events in a new field. It is great for generating statistics across ALL events.
  • Streamstats: Similar to ‘stats’, streamstats calculates statistics at the time each event is seen (as the name implies). This is undoubtedly useful for calculating a moving average, in addition to ordering events.
  • Stats: Calculates aggregate statistics, such as count, distinct count, sum and avg, over all the data points in a particular field (or fields).
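As a quick, hedged sketch of how these commands fit together (the index, sourcetype and field names below are hypothetical, not taken from the dataset used later in this blog):

index=network sourcetype=netflow
| bin _time span=1h
| stats sum(mb_out) as mb_out by _time
| eventstats avg(mb_out) as overall_avg
| streamstats window=5 avg(mb_out) as moving_avg

Here bin and stats build an hourly series, eventstats adds the overall average to every row, and streamstats adds a five-bucket moving average.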

Data Requirements

The data used in this blog is Splunk’s open-sourced “BOTS 2.0” dataset from 2017. To gain access to this data please click here. Downloading this dataset is not essential; any sample time series data that we would like to measure for outliers is valid for the purposes of this blog. For instance, we could measure outliers in megabytes going out of a network, or the number of logins in an application, using the same type of Splunk query. The logic used to determine outliers is highly reusable.

Using SPL

There are four methods commonly applied in the industry for basic outlier detection. They are covered in the sections below:

1. Using Static Values

The first commonly used method of determining an outlier is to construct a flat threshold line. This is achieved by creating a static value and then using logic to determine whether the data point is above or below that threshold. The Splunk query to create this threshold is below:

<your spl base search> … | timechart span=6h sum(mb_out) as mb_out
| eval threshold=100 
| eval isOutlier=if('mb_out' > threshold, 1, 0)
Static threshold timeline visual

2. Average with Static Multiplier

In addition to using an arbitrary static value, another commonly used method of determining outliers is a multiplier of the average. We calculate this by first calculating the average of the data, followed by selecting a multiplier. This creates an upper boundary for your data. The Splunk query to create this threshold is below:

<your spl base search> …  
| timechart span=12h sum(mb_out) as mb_out 
| eventstats avg("mb_out") as average 
| eval threshold=average*2 
| eval isOutlier=if('mb_out' > threshold, 1, 0)
Average + Static threshold timeline visual

3. Average with Standard Deviation

Similar to the previous methods, we now use a multiplier of the standard deviation to calculate outliers. This results in fixed upper and lower boundaries for the duration of the timespan selected. The Splunk query to create this threshold is below:

<your spl base search> ... | timechart span=12h sum(mb_out) as mb_out 
 | eventstats avg("mb_out") as avg stdev("mb_out") as stdev 
 | eval lowerBound=(avg-stdev*exact(2)), upperBound=(avg+stdev*exact(2))
 | eval isOutlier=if('mb_out' < lowerBound OR 'mb_out' > upperBound, 1, 0) 
2*Standard Deviation timeline visual

Notice that with the addition of the lower and upper boundary lines the timeline chart becomes cluttered.

4. Moving Averages with Standard Deviation

In contrast to the previous methods, the fourth common method is to calculate a moving average. In short, we calculate the average of data points in groups and move in increments to calculate an average for the next group. As a result, the boundaries will be dynamic. The Splunk search to calculate this is below:

<your spl base search> ... | timechart span=12h sum(mb_out) as mb_out 
 | streamstats window=5 current=true avg("mb_out") as avg stdev("mb_out") as stdev
 | eval lowerBound=(avg-stdev*exact(2)), upperBound=(avg+stdev*exact(2))
 | eval isOutlier=if('mb_out' < lowerBound OR 'mb_out' > upperBound, 1, 0) 
Moving Average with Standard Deviation timeline chart

Tip: Notice the “isOutlier” line in the timeline chart. To make smaller values more visible, format the visual by changing the scale from linear to log.

Using the MLTK Outlier Visualization

Splunk’s Machine Learning Toolkit (MLTK) contains many custom visualizations that we can use to represent data in a meaningful way. Information on all MLTK visuals is detailed in Splunk Docs. We will look specifically at the ‘Outliers Chart’. At a minimum, the Outliers Chart requires three additional fields on top of your ‘_time’ and field value. First, we need to create a binary field ‘isOutlier’, which carries a value of 1 or 0 indicating whether the data point is an outlier or not. The second and third fields are ‘lowerBound’ and ‘upperBound’, indicating the lower and upper thresholds of your data. Because the Outliers Chart trims down your data by displaying only the value of each data point and your thresholds, it presents outliers in a clearer and easier-to-understand manner. As a recommendation, it should be incorporated into your outlier detection analytics and visuals whenever available.
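To make the required field layout concrete, here is a hedged sketch that reuses the standard deviation search from earlier and trims the output down to just the fields the chart expects:

<your spl base search> ... | timechart span=12h sum(mb_out) as mb_out
| eventstats avg("mb_out") as avg stdev("mb_out") as stdev
| eval lowerBound=(avg-stdev*exact(2)), upperBound=(avg+stdev*exact(2))
| eval isOutlier=if('mb_out' < lowerBound OR 'mb_out' > upperBound, 1, 0)
| table _time mb_out lowerBound upperBound isOutlier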

Continuing from the previous paragraph, take a look at the snippets below to see the impact the Outliers Chart has in comparison to the timeline chart. We re-created the same SPL but, instead of the timeline visual, applied the ‘Outliers Chart’, in the same order:

Static threshold outliers chart
Average + Static threshold outliers chart
2*Standard Deviation outliers chart
Moving Average with Standard Deviation outliers chart
Advantages:
  • Cleaner presentation and less clutter
  • Easier to understand, as determining the boundaries becomes intuitive versus figuring out which line is the upper or lower threshold

Disadvantages:
  • You need to install Splunk MLTK (and its pre-requisites) to take advantage of the outliers chart
  • Unable to append additional fields in the Outliers chart

Adding Depth to your Outlier Detection

Determining the best technique for outlier detection can become a cumbersome task, so having the right tools and knowledge will free up time for a Splunk engineer to focus on other activities. Creating static thresholds over the past 24 hours, 7 days or 30 days may not be the best approach to finding outliers. A different way to measure outliers could be to look at the trend every Monday for the past month, or at 12 noon every day for the past 30 days. We accomplish this by using two simple and useful eval functions:

| eval HourOfDay=strftime(_time, "%H") 
| eval DayOfWeek=strftime(_time, "%A") 

Using Eval Functions in SPL

Continuing from the previous section, we incorporate the two highlighted eval functions in our SPL to calculate the average ‘mb_out’. However, this time the average is based on the day of the week and the hour of the day (a sketch of the search follows the lists below). There are a handful of advantages to this method:

  • Extra depth of analysis by adding 2 additional fields you can split the data by
  • Intuitive method of understanding trends

Some use cases of using the eval functions are as follows:

  • Network activity analysis
  • User behaviour analysis
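A minimal sketch of that search, assuming the same ‘mb_out’ field used throughout this blog:

<your spl base search> ...
| eval HourOfDay=strftime(_time, "%H")
| eval DayOfWeek=strftime(_time, "%A")
| stats avg(mb_out) as avg_mb_out by DayOfWeek HourOfDay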
Tables representing averages by DayOfWeek & HourOfDay

Visualizing the Data!

We will focus on two visualizations to complement our analysis when utilizing these eval functions. The first visual, discussed before, is the ‘Outliers Chart’, a custom visualization in the Splunk MLTK. The second visual is another custom visualization, ‘PunchCard’, which can be downloaded from Splunkbase here (https://splunkbase.splunk.com/app/3129/).

The Outliers Chart has a feature which results in a ‘swim lane’ view of a selected field/dimension and your data points, while highlighting the points that are outliers. To take advantage of this feature, we will use a macro, “splitby”, which creates hidden field(s) named “_<Field(s) you want the data split by>”. The rest of the SPL is shown below:

< your base SPL search >  ...  | eventstats avg("mb_out") as avg stdev("mb_out") as stdev  by "HourOfDay" 
| eval avg=round(avg,2) 
| eval stdev=round(stdev,2)
| eval lowerBound=(avg-stdev*exact(2)), upperBound=(avg+stdev*exact(2)) 
| eval isOutlier=if('mb_out' < lowerBound OR 'mb_out' > upperBound, 1, 0) 
| `splitby("HourOfDay")` 
| fields _time, "mb_out", lowerBound, upperBound, isOutlier, * 
| fields - _raw source kb* byt* 
| table _time "mb_out" lowerBound upperBound isOutlier *
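For reference, and purely as a hedged sketch of what the macro is described as doing here (the actual macro definition may differ in your environment), the splitby("HourOfDay") call conceptually expands to an eval that copies the field into a hidden, underscore-prefixed field:

| eval "_HourOfDay"='HourOfDay'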

This search results in an Outlier Chart that looks like this:

Outliers Chart split by hour of day

The Outliers Chart has the capability to split by multiple fields; however, in our example, splitting it by a single dimension, “HourOfDay”, is sufficient to show its usefulness.

The PunchCard visual is the second feature we will use to visualize outliers. It displays cyclical trends in our data by representing aggregated values of your data points over two dimensions or fields. In our example, I’ve calculated the sum of outliers over a month based on “DayOfWeek” as my first dimension and “HourOfDay” as my second dimension. I’ve added up the outliers across these two fields and displayed the result using the PunchCard visual. The SPL and image for this visual are shown below:

< your base SPL search > ... | streamstats window=10 current=true avg("mb_out") as avg stdev("mb_out") as stdev by "DayOfWeek" "HourOfDay"
| eval avg=round(avg,2)
| eval stdev=round(stdev,4)
| eval lowerBound=(avg-stdev*exact(2)), upperBound=(avg+stdev*exact(2))
| eval isOutlier=if('mb_out' < lowerBound OR 'mb_out' > upperBound, 1, 0)
| `splitby("DayOfWeek","HourOfDay")`
| stats sum(isOutlier) as mb_out by DayOfWeek HourOfDay
| table HourOfDay DayOfWeek mb_out
PunchCard Visual

Summary and Wrap Up

Trying to find outliers using machine learning techniques can be a daunting task. However, I hope this blog gives an introduction to how you can accomplish that without using advanced algorithms. Using basic SPL and built-in statistical functions can result in visuals and analysis that are easier for stakeholders to understand and for the analyst to explain. To summarize what we have learnt so far:

  1. One solution does not fit all. There are multiple methods of visualizing your analysis, and exploring your results through different visual features should be encouraged.
  2. Use eval functions to calculate “DayOfWeek” and “HourOfDay” wherever and whenever possible. Adding these two fields provides a simple yet powerful way for the analyst to explore the data with additional depth.
  3. Trim or minimize the noise in your outliers visual by using the Outliers Chart. The chart is beneficial in displaying only your boundaries and the outliers in your data, while removing all other unnecessary lines.
  4. Use a “log” scale over a “linear” scale when displaying data with extremely large ranges.


Looking to expedite your success with Splunk? Click here to view our Splunk Professional Service offerings.

© Discovered Intelligence Inc., 2020. Unauthorised use and/or duplication of this material without express and written permission from this site’s owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Discovered Intelligence, with appropriate and specific direction (i.e. a linked URL) to this original content.

Harnessing Ingest-Time Eval Fields

Anyone who is familiar with writing search queries in Splunk would admit that eval is one of the most regularly used commands in their SPL toolkit. It’s up there in the league of stats, timechart, and table.

For the uninitiated, eval, just like in any other programming context, evaluates an expression and returns the result. In Splunk, especially when searching, it holds the same meaning. It is arguably the Swiss Army knife among SPL commands, as it lets you use an array of operations: mathematical, statistical, conditional, cryptographic, and text formatting, to name a few.
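As a quick illustrative sketch (the field names here are hypothetical), a single search-time eval can mix several of those operation types:

... | eval mb=round(bytes/1024/1024, 2), status_class=if(tonumber(status)>=500, "error", "ok"), user_hash=md5(username)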

Read more about eval here and eval functions here.

What is an Ingest-time Eval?

Until Splunk v7.1, the eval command was only limited to search time operations. Since the release of 7.2, eval has also been made available at index time. What this means is that all the eval functions can now be used to create fields when the data is being indexed – otherwise known as indexed fields. Indexed fields have always been around in Splunk but didn’t have the breadth of capabilities for populating them until now.

Ingest-time eval doesn’t overlap with other common index-time configurations, such as data filtering and routing, but rather complements them. It lets you enrich the event with fields that can be derived by applying the eval functions to existing data/fields in the event.

One key thing to note is that it doesn’t let you apply any transformation to the raw event data, like masking.

When to use Ingest-time eval

Ingest-time eval can be used in many different ways, such as:

  • Adding data enrichment, such as a data center field based on a host naming convention (see the sketch after this list)
  • Normalizing fields, such as adding a field with an FQDN when the data only contains a hostname
  • Adding fields that can be used for filtering data before indexing
  • Performing common calculations, such as adding a GB field when there is only an MB field, or the length of a string field
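As a hedged sketch of the first idea above (the transform name, host naming convention and field name are all hypothetical), the data center enrichment could look something like this in transforms.conf:

[add_datacenter]
INGEST_EVAL = data_center=case(match(host, "^nyc"), "NYC", match(host, "^lon"), "LON", 1=1, "OTHER")

It would then be referenced from the relevant sourcetype stanza in props.conf (TRANSFORMS = add_datacenter) and declared with INDEXED=true in fields.conf, following the same pattern as the full example later in this post.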

Ingest-time eval can also be used with metrics. Read more here.

When not to use Ingest-time eval

Ingest-time eval, like index-time field extractions, adds a performance overhead on the indexers or heavy forwarders (whichever is handling the parsing of data, based on your architecture), as the expressions will be evaluated on every event of the specific sourcetypes you define them for. Since the new fields are permanently added to the data as it is indexed, the increase in disk space utilization needs to be accounted for as well. Also, there is no reverting these new fields, as they are persisted in the index. To remove the data, the ingest-time eval configurations would need to be disabled or deleted and the affected data allowed to age out.

When using Ingest-time eval also consider the following:

  • Validate if the requirement is something that can be met by having an eval function at search time – usually this should be yes!
  • Always use a new field name that’s not part of the event data. There should be no conflict with the field name that Splunk automatically extracts with the `KV_MODE=auto` extraction.
  • Always ensure you are applying eval on _raw data unless you have some index time field extraction that’s configured ahead of it in the transforms.conf.

Always ensure that your indexers or heavy forwarders have adequate hardware provisioned to handle the extra load. If they are already performing at full throttle, adding an extra step of processing might be that final straw. Evaluate and upgrade your indexing tier specs first if needed.

Now, let’s see it in action!

Here is an Example…

Let’s assume for a brief moment you are working in Hollywood, with the tiny exception that you don’t get to have coffee with the stars but just work with their “PCI data”. Here’s a sample of the data we are working with. It’s a sample of purchase details that some of my favorite stars made overseas (Disclaimer: The PCI data is fake in case you get any ideas 😉):

2019-12-09 23:46:44,283 - name=Tom Hardy, amount=2620.08063223, currency=USD, dest_country=Tanzania, cc=8888192373782645, cvc=151
2019-12-09 23:46:45,284 - name=Ryan Reynolds, amount=4229.66241228, currency=USD, dest_country=Canada, cc=9999047123456789, cvc=101
2019-12-09 23:46:48,288 - name=Frances McDormund, amount=6033.83328530, currency=USD, dest_country=Budapest, cc=9999513562353615, cvc=856
2019-12-09 23:47:11,320 - name=Daniel Day-Lewis, amount=5603.00466255, currency=USD, dest_country=Iceland, cc=9999463984323578, cvc=029
2019-12-09 23:47:21,333 - name=Clint Eastwood, amount=8321.50139290, currency=USD, dest_country=Sri Lanka, cc=8888847290573791, cvc=347
2019-12-09 23:47:22,335 - name=Tom Hardy, amount=3773.86328145, currency=USD, dest_country=Tanzania, cc=8888192373782645, cvc=151
2019-12-09 23:47:23,336 - name=Jeff Goldblum, amount=9475.63602049, currency=USD, dest_country=Sri Lanka, cc=8888485176493782, cvc=730

Now we are going to create some ingest-time fields:

  1. Converting the name to all upper case (just for the sake of it)
  2. Rounding off the amount to two decimal places
  3. Adding a bank field based on the first four digits of the card number
  4. Applying md5 hashing on the card number
  5. Applying a mask to the card number

First things first, let’s set up our props.conf for the data with all the recommended attributes defined. What really matters in our case here is the TRANSFORMS attribute.

[finlog]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
TRUNCATE=10000
TIME_FORMAT=%Y-%m-%d %H:%M:%S,%f
MAX_TIMESTAMP_LOOKAHEAD=25
TIME_PREFIX=^
# the order of values for TRANSFORMS matters
TRANSFORMS = fineval1, fldext1, fineval2

Now let’s define what the transforms.conf should look like. This is essentially the place where we define all our eval expressions. Each eval expression is comma-separated.

[fineval1]
INGEST_EVAL= uname=upper(replace(_raw, ".+name=([\w\s'-]+),\samount.*","\1")), purchase_amount=round(tonumber(replace(_raw, ".+amount=([\d\.]+),\scurrency.*","\1")),2)
# notice how in each case we have to operate on _raw as name and amount fields are not index-time extracted.

[fldext1]
REGEX = .+cc=(\d{15,16})
FORMAT = cc::"$1"
WRITE_META = true

[fineval2]
# INGEST_EVAL= cc=md5(replace(_raw, ".+cc=(\d{15,16})","\1"))
# the above is commented out as we need not apply the eval to the _raw data; fldext1 does an index-time field extraction, so we can operate directly on the extracted cc field as below...
INGEST_EVAL= cc1=md5(cc), bank=case(substr(cc,1,4)=="9999","BNC",substr(cc,1,4)=="8888","XBS",1=1,"Others"), cc2=replace(cc, "(\d{4})\d{11,12}","\1xxxxxxxxxxxx")

All the above settings should be deployed to the indexer tier, or to the heavy forwarders if that is where the data is being parsed.

A couple of things to note – you can define your ingest-time evals in separate stanzas if you choose to reference them separately in the props.conf. The configuration above is a use case for that: I defined an index-time field extraction to extract the value of the card number, and then, in a separate stanza, used another ingest-time eval to process that extracted field. This is a good example of the reusability of a regex (instead of applying it to _raw repeatedly) when you need to perform more than one operation on a specific set of fields.

Now we need to do a little extra work that’s not common with a search time transforms setting. We have to add all the new fields created above to fields.conf with the attribute INDEXED=true denoting these are index time fields. This should be done in the Search Head tier.

[cc1]
INDEXED=true

[cc2]
INDEXED=true

[uname]
INDEXED=true

[purchase_amount]
INDEXED=true

[bank]
INDEXED=true

The result looks like this:

One important note about implementing Ingest-time eval configurations, is that they require manual edits to .conf files as there is no Splunk web option for it. If you are a Splunk Cloud customer, you will need to work with Splunk support to deploy them to the correct locations depending on your architecture.

OK so that’s a quick overview of ingest-time eval. Hope you now have a pretty fair understanding of how to use it.
