Introducing the Update Cribl Lookup App for Splunk

We’re excited to announce the public availability of our Update Cribl Lookup app for Splunk, a new integration that sends results from Splunk searches directly to lookups in Cribl Cloud. 

In Cribl Stream, lookups are often a key part of enrichment, filtering, and routing decisions, which means keeping them current can have a direct impact on how data is processed downstream.

Traditionally, maintaining Cribl Stream lookups has been a separate operational task: export the data, reformat it, upload it, validate it, and then deploy it to the right worker group. The Update Cribl Lookup app removes that friction by letting teams use the searches they already run in Splunk to update Cribl lookups directly, either interactively in SPL or automatically through alerting workflows.

Why we built it

Many of our customers already use Splunk as the place where useful operational and security context comes together. That context might include threat indicators, suspicious IPs, user access patterns, asset inventories, allow or deny lists, or dynamically generated reference data that would be even more valuable if it could immediately influence processing in Cribl.

This app was built to close that gap. Instead of treating Splunk as the system where data is only analyzed after the fact, the app makes it possible to take the result of a search and push it back into the data pipeline by updating a Cribl lookup that Stream can use right away.

What the app does

The app gives you two ways to update Cribl lookups from Splunk search results.

  • A custom streaming command, | updatecribllookup, for on-demand and scheduled SPL-driven workflows.
  • A modular alert action, “Update Cribl Lookup,” for automatically updating a lookup when an alert triggers.

Both paths use the same back end, so you can test a workflow interactively in search first and then operationalize the same logic as an alert action.

How it works

The app takes search results from Splunk, validates the selected Cribl configuration, authenticates to Cribl Cloud using OAuth credentials, converts the results into CSV, uploads that data to the target lookup, and then deploys the change to the selected worker group.
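
Under the hood, that is a small sequence of REST calls. Here is a minimal sketch of the flow in Python. It is illustrative only, not the app’s source: the endpoint paths, payload fields, and helper names are assumptions about the general shape of the Cribl Cloud API, and the real app reads its OAuth credentials from Splunk’s credential store.

import csv
import io

import requests

# Illustrative sketch only -- endpoint paths and payload fields are assumptions.
AUTH_URL = "https://login.cribl.cloud/oauth/token"

def get_token(client_id, client_secret):
    # Authenticate to Cribl Cloud with OAuth 2.0 client credentials.
    resp = requests.post(AUTH_URL, json={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "audience": "https://api.cribl.cloud",
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def results_to_csv(results):
    # Convert Splunk search results (a list of dicts) into CSV text.
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=sorted(results[0]))
    writer.writeheader()
    writer.writerows(results)
    return buf.getvalue()

def update_lookup(base_url, token, group, lookup, csv_text, message):
    headers = {"Authorization": f"Bearer {token}"}
    # 1. Upload the CSV content to the target lookup (hypothetical path).
    requests.put(
        f"{base_url}/m/{group}/system/lookups/{lookup}",
        headers=headers,
        files={"file": (lookup, csv_text, "text/csv")},
    ).raise_for_status()
    # 2. Commit the change with the supplied message (hypothetical path).
    requests.post(
        f"{base_url}/m/{group}/version/commit",
        headers=headers,
        json={"message": message},
    ).raise_for_status()
    # 3. Deploy the committed config to the worker group (hypothetical path).
    requests.patch(
        f"{base_url}/master/groups/{group}/deploy",
        headers=headers,
        json={"version": "HEAD"},
    ).raise_for_status()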

The app supports multiple Cribl environments, worker groups, and lookup definitions, all managed through the configuration page.

Key Capabilities

  • Supports both a search command and an alert action, giving flexibility for ad hoc, scheduled, and event-driven workflows.
  • Tested with both on-prem and Cribl Cloud environments.
  • Works with both memory- and disk-based Cribl lookups.
  • Tested with disk-based lookups as large as 500 MB.
  • Validates configuration before execution, reducing failed runs caused by missing worker groups, disabled lookups, or invalid parameters.
  • Uses OAuth 2.0 authentication for Cribl Cloud and stores secrets securely in Splunk’s credential store.
  • Provides detailed logging for operations, errors, and debugging through updatecribllookup.log and Splunk internal logging workflows.

Example Use Cases

This app is useful anywhere Splunk can produce a dataset that should become operational reference data in Cribl.

  • Security teams can update a lookup of active threat indicators from high-severity detections, allowing Cribl to enrich or route matching events immediately.
  • Access monitoring teams can maintain lists of suspicious users or source IPs based on failed login activity detected in Splunk.
  • Operations teams can sync dynamic inventories, ownership mappings, or application reference data into Cribl to improve downstream enrichment and routing.

For example, a search identifying recently observed threat indicators can feed an activethreats.csv lookup in a security worker group, while a failed-login detection can maintain a suspicioususers.csv for downstream handling. 
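
As a concrete sketch, a scheduled search along these lines could keep that threat list current (the index, field names, and commit message are hypothetical):

index=threat_intel severity=high earliest=-24h
| stats latest(_time) AS last_seen by indicator
| updatecribllookup workergroup=security lookup=activethreats.csv commitmessage="Scheduled threat indicator refresh"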

Using the streaming command

Syntax

... | updatecribllookup workergroup=<string> lookup=<string> [lookupmode=<string>] [commitmessage=<string>]

Arguments

workergroup=<string>
Specifies the Cribl worker group configuration to use for the lookup update. This value must match a worker group defined and enabled in the app’s configuration. Use cribl_default to target Cribl’s default worker group. If the worker group is missing, disabled, or misspelled, validation fails before the update runs.

lookup=<string>
Specifies the name of the lookup file to update in Cribl. This should match a lookup definition that has been configured and enabled in the app. The value should be a CSV filename, such as activethreats.csv, and the target lookup must exist in the intended Cribl environment.

lookupmode=<string>
Controls how the lookup is handled in Cribl. Supported values are auto (the default), memory, and disk.

commitmessage=<string>
Specifies an optional custom Git commit message for the Cribl deployment triggered by the update. This can be useful for tracking why a lookup was updated or for associating a deployment with a search or workflow. If no commit message is provided, the app uses a default message identifying the search name, SID, and lookup name.

Examples

Basic update

… | updatecribllookup workergroup=cribl_default lookup=activethreats.csv

This sends the search results to the activethreats.csv lookup in the specified worker group.

Explicit lookup mode

… | updatecribllookup workergroup=security lookup=activethreats.csv lookupmode=memory

This updates the target lookup and explicitly sets the lookup mode to memory.

With commit message

… | updatecribllookup workergroup=syslog_prod lookup=cmdb_enrichment.csv commitmessage="Refreshing CMDB enrichment data"

This updates the lookup and includes a custom deployment message.

Using the alert action

The alert action makes the same capability available with convenient drop-down selection of the worker group and lookup name. It also lets you apply Splunk’s alert trigger conditions (for instance, only trigger when the event count is greater than 0).

When you create the alert, a form is presented for setting the parameters.

The back-end workflow that connects, updates, and commits the lookup is the same as for the search command.

Download today from Splunkbase

UpdateCriblLookup is available right now on GitHub.

Documentation can be found at: https://github.com/DiscoveredIntelligence/update_cribl_lookup_app_for_splunk/blob/main/README.md

For support, feature requests, or general feedback, contact us at support@discoveredintelligence.ca.

Introducing the Cribl Search App for Splunk

We’re excited to announce the public availability of our Cribl Search App for Splunk, an integration that lets you query data via Cribl Search—directly from the Splunk search interface.

Whether you’re hunting for threats in long-term archives or reporting on a high-volume API that may not be indexed, this app allows you to bring the results back into Splunk as standard events without the requirement to index. No more switching tabs; no need for “rehydration” of data from Cribl to be able to use it in Splunk searches. 

The Cribl Search App for Splunk introduces a custom generating command, | criblsearch, to your Splunk environment. It sends your query to Cribl Search and streams the results back into your Splunk search pipeline.

Once the data hits Splunk, you can treat it just like any other event in SPL. You can pipe it into stats, eval, or outputlookup, use it in your favourite dashboards, or write it to an index with collect.
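
For example, a quick aggregation over an archived dataset reads like any other SPL pipeline (the dataset and field names here are placeholders):

| criblsearch query="dataset:'web_archive' status==404 | limit 1000"
| stats count by uri_path
| sort - count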

Core Features

  • Multi-Endpoint Control: Search multiple Cribl Search environments.
  • Enterprise Auth: Authenticates to Cribl Cloud using OAuth and securely stores credentials in Splunk’s secure credential storage.
  • Compatible with Any Splunk: Built to meet Splunk Cloud app vetting standards for seamless installation in both on-prem and cloud Splunk environments.

Cribl Search: A Primer

Cribl Search allows you to search data where it lives. It can search data from many sources, including Cribl Lake, Cribl Edge, Amazon Security Lake, Amazon S3, Azure Blob Storage, Azure Data Explorer, Google Cloud Storage, Elasticsearch, OpenSearch, Prometheus, Snowflake, ClickHouse, and data from quite a few APIs (AWS, Azure, GCP, Google Workspace, Microsoft Graph, Okta, Tailscale, Zoom, and a generic HTTP API data source provider that lets you search ones not already covered).

The benefits of Cribl Search are:

  • Slash Costs: Access “low-value” logs in cheap object storage (S3). Search them only when you need them.
  • Instant Visibility: Access logs where they reside, no requirement to move or store them elsewhere.
  • Zero Infrastructure Bloat: Scale your search capabilities without adding more hardware.

Cribl Search documentation can be found here.

Example Use Cases

1. Incident Response: Finding the initial compromise from long-term storage

The Challenge: An alert triggers today, but the compromise started 45 days ago. Data in Splunk is set to age out at 30 days, so those logs were moved to cold storage.
The Solution: Pivot instantly to your S3 archive using Cribl Search directly in Splunk:


| criblsearch query="dataset:'firewall_archive' earliest=-45d latest=-30d src_ip=='192.0.2.50' dest_ip=='27.133.154.218'"
| stats count by action, dest_port
| where action!="Blocked"

Impact: Get your full forensic timeline in minutes rather than hours of manual data recovery, with no need to go into Cribl to set up a rehydration job to make these events available.

2. High-Volume, Low-Value Logs

The Challenge: Your API generates 5TB of “200 OK” logs daily. Indexing them may not be valuable, but you need them for monthly compliance reports.
The Solution: Run the audit search across your data lake using Cribl Search and bring only the summary data needed for the report back to Splunk:


| criblsearch query="dataset:'api_logs' | where response_time > 5000 | summarize avg(response_time) AS avg_latency by endpoint" 
| table avg_latency endpoint
| outputlookup monthly_api_report.csv

Impact: 100% visibility for 0% additional indexing cost.

3. Cross-Cloud Correlation (The “Power Join”)

The Challenge: You suspect a credential spray attack hitting both AWS and Azure, but the logs live elsewhere.
The Solution: Use Splunk to join results from the two datasets accessible via Cribl Search:


| criblsearch query="dataset:'aws_cloudtrail' event=='ConsoleLogin'"
| rename sourceIPAddress AS src_ip, userIdentity.principalId AS user
| append [ 
    | criblsearch query="dataset:'azure_audit' event=='SignInActivity'"
    | rename ipAddress AS src_ip, userPrincipalName AS user
  ]
| stats count values(user) AS users by src_ip
| where count > 5

Impact: Multi-cloud threat hunting from a single search bar.

Get Started

  1. Install: Download the Cribl Search App for Splunk from GitHub. Install the app on your Splunk Search Head or Search Head Cluster.
  2. Connect: Enter your Cribl Cloud credentials on the configuration page.
  3. Search: Start your first query with | criblsearch query="..." and see your data lake come to life.

Are you ready to unlock your data?

Download the App on GitHub

View the Documentation

Ditch the Deployment Server: Why We Used Ansible for Splunk in a Secure OT Environment

Have you ever tried to manage a net-new Splunk deployment across dozens of isolated gas plants while staring down an aggressive six-week deadline?

We recently partnered with a major gas extraction company to do exactly that. In their highly secure Industrial Control Systems (ICS) and Operational Technology (OT) environments, you can’t just “hope” your configurations stick; you need a process that is repeatable, version-controlled, and bulletproof.

When the network is locked down tighter than a bank vault, standard Splunk config-management doesn’t just not work — it becomes a security risk. Here is why we moved away from a traditional Splunk Deployment Server setup and leaned into Ansible to get the job done.

OT Challenge: Navigating the Purdue Model

Managing data in a standard IT environment is (mostly) straightforward. But our customer’s environment follows the Purdue Model—a network architecture of increasingly secured rings designed to protect critical assets like pumps, manufacturing tools, and sensors.

While the Purdue Model is great for security, it’s a bit of a nightmare for traditional Splunk management. Levels 1 and 2 are incredibly locked down, and using a Splunk Deployment Server (DS) would require punching holes in firewalls to allow forwarders to “phone home” for updates. This is forbidden.

We faced a choice: introduce a new management technology that might trigger security red flags, or leverage the tool already in place. Since the customer already had Ansible “plumbed” into those secure OT layers for other tasks, it became our tool of choice for orchestration.

Why Infrastructure as Code (IaC)

When you’re onboarding hundreds of gigabytes of data per day across network devices, servers, and appliances, manual configuration is a recipe for disaster. We’ve all seen “configuration drift”—that slow, silent divergence where systems move away from standard configurations over time.

By using Ansible, we gained three critical advantages:

  1. Idempotency: We can run the same playbook ten times, and it will only make changes if the target state isn’t met. No accidental overwrites.
  2. Cross-Platform Consistency: We used the same playbook logic for both Linux and Windows hosts; the automation handled the heavy lifting.
  3. Tag-Based Flexibility: We utilized Ansible tags (like site14 or windows_uf) to handle different physical locations and server roles without needing separate “Server Classes” for every tiny variation.

Mapping Splunk Concepts to Ansible

If you’re comfortable with Splunk, the jump to Ansible is shorter than you think. We essentially re-mapped familiar Splunk architecture to Ansible equivalents:

Splunk Concept      | Ansible Implementation   | Description
--------------------|--------------------------|-------------------------------------------------------------
Deployment Server   | Ansible Control Node     | The central “source of truth” running our playbooks.
Deployment Client   | Inventory Host           | Each forwarder (UF/IF) is defined in a YAML inventory file.
Server Classes      | Host Tags and Groups     | We use tags like linux or uf to target specific systems.
Deployment Apps     | Roles & Files Structure  | Apps are managed in Git and pushed to targets via playbooks.
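
To make the mapping concrete, here is a minimal, hypothetical play that pushes a Git-managed app to tagged Linux forwarders. The group names, tags, paths, and app name are illustrative, not the customer’s actual playbooks:

# Hypothetical playbook: deploy a Splunk app to Linux universal forwarders
- name: Deploy Splunk app to universal forwarders
  hosts: linux_uf                # inventory group standing in for a server class
  become: true
  tags: [linux, uf, site14]      # tag-based targeting instead of server classes
  tasks:
    - name: Copy the app from the Git-managed source of truth
      ansible.builtin.copy:
        src: files/apps/org_all_uf_base/
        dest: /opt/splunkforwarder/etc/apps/org_all_uf_base/
        owner: splunk
        group: splunk
      notify: Restart splunkd
  handlers:
    - name: Restart splunkd
      ansible.builtin.service:
        name: SplunkForwarder
        state: restarted

Because copy only transfers files whose content differs, re-running the play is idempotent: unchanged hosts report ok rather than changed.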

What We Learned

Even with the best automation, a tight six-week turnaround like this had its “gotchas.” Here are two lessons that could save you time on your next project:

1. Splunk ARI and GDI Dependency

We were tasked with setting up Splunk Asset and Risk Intelligence (ARI). A key lesson: don’t start the ARI “polish” until the Getting Data In (GDI) is 100% finished. ARI relies entirely on the quality and consistency of your data inputs. If you’re still tweaking data inputs a week before the project ends, ARI dashboards can break. Finish the data onboarding first; the intelligence layer comes second.

2. Permissions and Ownership

Automation is only as good as its permissions. For Linux targets, we had to ensure a Splunk user was consistently defined across all sites to avoid ownership errors upon file delivery. On the Windows side, we found that using the local administrator account for the Ansible connection was the most reliable way to ensure the Splunk service could be restarted remotely after a configuration change.
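
In inventory terms, that looked something like the following (the account names and settings are hypothetical, shown only to illustrate the per-platform split):

# group_vars/linux_uf.yml -- hypothetical Linux connection settings
ansible_user: ansible_svc
ansible_become: true            # escalate so delivered files can be owned by the splunk user

# group_vars/windows_uf.yml -- hypothetical Windows connection settings
ansible_user: Administrator     # local admin so the Splunk service can be restarted
ansible_connection: winrm
ansible_winrm_transport: ntlm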

Conclusion: Focus on the Plumbing

Building a massive Splunk environment in six weeks reaffirmed to us that agility requires automation. By replacing the traditional Deployment Server with an Ansible-driven process, we created a system that is secure enough for the Purdue Model and repeatable for future expansions.

Whether you’re dealing with isolated gas plants or a complex cloud-hybrid stack, having a version-controlled “source of truth” for your configurations is what can save the project.


Ready to modernize your Splunk environment? Contact Us to learn how our experts can help you automate your secure Splunk environment.


Splunk Universal Forwarder Upgrades: From Manual Pain to Automated Gain

When was the last time you actually looked forward to upgrading your Splunk Universal Forwarders (UFs)? If you’re like most of the engineers we talk to, UFs are the last things to get touched. They’re usually stuck on the back burner because the sheer effort of touching hundreds—or thousands—of endpoints is incredibly tedious. While we focus our energy on keeping the core Splunk instances shiny and updated, the UF fleet often lingers several versions behind, creating a maintenance debt that only gets heavier over time. But what if we told you there’s finally a native way to solve this headache? 

The “Back Burner” Dilemma: Why UFs Are So Hard

In the past, we’ve really only had three ways to handle these upgrades: manual, scripted, or through external automation platforms like Ansible or SCCM. If you’re a smaller shop, you’re likely doing manual installs, which means an engineer has to remotely access or physically touch every single box. Even if you’re a bit more mature and use scripts, it’s still a fragmented process.

The largest, most “mature” customers have already moved to heavy-duty automation platforms to manage their fleet, and they’ve built their own processes for this. But for everyone else—the folks relying on manual or basic scripted processes—Splunk didn’t have a native solution. Until now.

The Splunk Remote Upgrader

The Splunk Remote Upgrader is a free, Splunk-supported tool available as two separate apps on Splunkbase – one for Linux and one for Windows. It’s designed to run right alongside your existing UF on the endpoint.

Essentially, it acts as a separate application that monitors a predetermined directory (usually under temp) for new installation packages. As soon as it sees a new package land in that directory, it takes over the installation process for you.

What Can It Actually Upgrade?

  • Target Versions: It can upgrade UFs to any version 9.0 or higher.
  • Starting Point: You can use this process if your current forwarder is at version 8.0 or higher.
  • Security First: It only supports signed UF packages. This is why the target must be 9+, as these versions include the necessary signature files for verification.
  • OS Support: Currently available for Linux and Windows platforms.

The Deployment Process

The biggest point of confusion we see is the relationship between the Upgrader and the Forwarder package. Think of them as two distinct pieces of the same puzzle.

1. Initial Setup

You still have to do the “first mile” yourself. You need to get the Remote Upgrader installed on the endpoint machine manually or through your existing external tools first. Once that Remote Upgrader daemon is running, it starts its “watch” on the /tmp/SPLUNK_UPDATER_MONITORED_DIR/ folder.

2. Preparing the Package

On your Deployment Server, you’ll prepare a package that contains the new UF version you want to deploy, along with its signature (.sig) file.

3. Execution and Monitoring

When you push this application via the Deployment Server, the UF pulls it down. The package contains a script that copies the new files over to the temp directory the Upgrader is monitoring.

Once the Upgrader detects those files, the real work begins:

  • Three Strikes Rule: The Upgrader will try the installation up to three times if it fails.
  • Timeout Safety: If an attempt gets stuck for more than five minutes, it gives up on that attempt.
  • The Safety Net: If all attempts fail, it triggers an automatic rollback to your previous version. It even keeps a backup of your old configuration for 30 days by default, just in case.

Ready to finally tackle that fleet of 500 forwarders? It’s not just about the convenience; it’s about the peace of mind knowing you have a centralized, logged, and recoverable way to stay current.

Real-World Considerations and Constraints

While we’re big fans of this new tool, we have to stay grounded in reality. It’s not a “set it and forget it” magic wand for every scenario.

  • Initial Effort: As we mentioned, the very first install of the Upgrader must be manual. However, once it’s there, the Upgrader can actually upgrade itself automatically in the future.
  • Storage Requirements: You need at least 1GB of free space on the endpoint to handle the packages and the backups.
  • Deployment Server Strategy: If you have a massive environment, you probably don’t want to hit 1,000 servers at once. You’ll need to be creative with your Server Classes to roll out the upgrades in waves (see the sketch after this list).
  • Windows Requirements: For those of you on Windows, make sure PowerShell scripting is enabled, as the process relies on it to function.
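
Wave-based targeting can be expressed directly in serverclass.conf on the Deployment Server. The class name, app name, and host patterns below are hypothetical:

# serverclass.conf -- roll the upgrade package out one wave at a time
[serverClass:uf_upgrade_wave1]
whitelist.0 = site01-*
whitelist.1 = site02-*

[serverClass:uf_upgrade_wave1:app:splunk_uf_upgrade_pkg]
stateOnClient = enabled
restartSplunkd = false

Once wave 1 reports healthy, clone the server class with the next set of host patterns and repeat.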

Conclusion

By adopting the Splunk Remote Upgrader, we’re moving away from the era of “neglected forwarders” and into a world of centralized, secure lifecycle management. It reduces maintenance overhead, ensures your fleet is consistent with the latest security patches, and lets you adopt new features faster than ever before. It might take a bit of initial legwork to get the Upgrader daemon onto your hosts, but the long-term payoff for your operations and security posture is massive.


Need help? If you need help architecting a massive UF rollout, contact us today – we’d love to help you streamline your data pipeline.


Finding Asset and Identity Risk with Splunk Asset and Risk Intelligence

Splunk Asset and Risk Intelligence (Splunk ARI) discovers and reports on risks affecting assets and identities. This risk discovery is performed in real time, ensuring that risks can be quickly addressed, helping to limit exposure and increase overall security posture. In this post, we highlight three use cases related to asset risk using Splunk ARI.


Reveal Asset and Identity Activity with Splunk Asset and Risk Intelligence

Splunk Asset and Risk Intelligence (Splunk ARI) keeps track of asset and identity discovery activity over time. This activity supports investigations into who had what asset and when, in addition to providing insights about asset changes over time and when they were first or last discovered. In this post, we highlight three use cases related to asset activity using Splunk ARI.


Investigating Assets and Identities with Splunk Asset and Risk Intelligence

Splunk Asset and Risk Intelligence (Splunk ARI) has powerful asset and identity investigative capabilities. Investigations help to reveal the full asset record, cybersecurity control gaps and any associated activity. In this post, we highlight three use cases related to asset investigations using Splunk ARI.


Discovering Assets and Identities with Splunk Asset and Risk Intelligence

Splunk Asset and Risk Intelligence (Splunk ARI) continually discovers assets and identities. It does this using a patented approach that correlates data across multiple sources in real time. In this post, we highlight three use cases related to asset discovery using Splunk ARI.


Field Filters 101: The Basics You Need to Know

Hello, Field Filters!

Data protection is a critical priority for any organization, especially when dealing with sensitive information like personally identifiable information (PII) and protected health information (PHI). Implementing robust protection mechanisms not only ensures compliance with regulations like the General Data Protection Regulation (GDPR) but also mitigates the risk of data breaches.


Help Getting Started with Splunk Asset and Risk Intelligence (ARI)

With the recent release of Splunk Asset and Risk Intelligence (ARI), you may be looking for a better understanding of this great new solution and how to get started. We have compiled a list of materials and resources to help you achieve this goal.

Read and Learn

Product overviews and briefs

If this is your first time reading up on Splunk Asset and Risk Intelligence, check these out first:

> Our Splunk Asset and Risk Intelligence overview
> Splunk Asset and Risk Intelligence web page
> Splunk Asset and Risk Intelligence Product Brief
> Splunk Asset and Risk Intelligence Technical Brief

Splunk Asset and Identity Intelligence E-book

Splunk has published an essential guide, which outlines several use cases to explore.

> Essential Guide to Continuous Asset and Identity Intelligence

Blog posts

Get a quick look at the Splunk ARI interface with screenshots of the platform, along with information about its features and capabilities, through the following blog posts:

> Introducing Splunk Asset and Risk Intelligence
> Asset Discovery with Splunk Asset and Risk Intelligence
> Asset Investigations with Splunk Asset and Risk Intelligence
> Asset Activity with Splunk Asset and Risk Intelligence
> Asset Risk with Splunk Asset and Risk Intelligence
> Continuous, and Compliant: Obtain Proactive Insights with Splunk Asset and Risk Intelligence

Watch and Interact

Videos

> Splunk Asset and Risk Intelligence Intro video

Tours

> Take the Splunk Asset and Risk Intelligence Guided Tour

Demos

> Book a demo with Discovered Intelligence

Help and Support

Splunk Answers

Get answers from the community:

> Splunk Answers – ARI

Splunk Documentation

Get specific instructions for tasks within the Splunk ARI platform by reviewing the documentation:

> Splunk Asset and Risk Intelligence Documentation

Splunk ARI Professional Services

It is often quicker, easier, and more cost-effective to get the Splunk ARI experts in. Our award-winning consultants are highly trained on Splunk ARI and will ensure your continued success.

> Splunk ARI Quick Start Program
> Splunk ARI Professional Services

Contact Us

Contact us today to find out more about Splunk Asset and Risk Intelligence and how we can help you be successful.


Looking to expedite your success with Splunk ARI? Contact us today to discuss and get started.

© Discovered Intelligence Inc., 2024. Unauthorized use and/or duplication of this material without express and written permission from this site’s owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Discovered Intelligence, with appropriate and specific direction (i.e. a linked URL) to this original content.