Ditch the Deployment Server: Why We Used Ansible for Splunk in a Secure OT Environment

Have you ever tried to manage a net-new Splunk deployment across dozens of isolated gas plants while staring down an aggressive six-week deadline?

We recently partnered with a major gas extraction company to do exactly that. In their highly secure Industrial Control Systems (ICS) and Operational Technology (OT) environments, you can’t just “hope” your configurations stick; you need a process that is repeatable, version-controlled, and bulletproof.

When the network is locked down tighter than a bank vault, standard Splunk config management doesn’t just fall short; it becomes a security risk. Here is why we moved away from a traditional Splunk Deployment Server setup and leaned into Ansible to get the job done.

OT Challenge: Navigating the Purdue Model

Managing data in a standard IT environment is (mostly) straightforward. But our customer’s environment follows the Purdue Model—a network architecture of increasingly secured rings designed to protect critical assets like pumps, manufacturing tools, and sensors.

While the Purdue Model is great for security, it’s a bit of a nightmare for traditional Splunk management. Levels 1 and 2 are incredibly locked down. Using a Splunk Deployment Server (DS) would require punching holes in firewalls so forwarders could “phone home” for updates, and that is forbidden in these environments.

We faced a choice: introduce a new management technology that might trigger security red flags, or leverage the tool already in place. Since the customer already had Ansible “plumbed” into those secure OT layers for other tasks, it became our tool of choice for orchestration.

Why Infrastructure as Code (IaC)?

When you’re onboarding hundreds of gigabytes of data per day across network devices, servers, and appliances, manual configuration is a recipe for disaster. We’ve all seen “configuration drift”—that slow, silent divergence where systems move away from their standard configurations over time.

By using Ansible, we gained three critical advantages:

  1. Idempotency: We can run the same playbook ten times, and it will only make changes if the target state isn’t met. No accidental overwrites.
  2. Cross-Platform Consistency: We used the same playbook logic for both Linux and Windows hosts; the automation handled the heavy lifting.
  3. Tag-Based Flexibility: We used Ansible tags (like site14 or windows_uf) to handle different physical locations and server roles without needing a separate “Server Class” for every tiny variation.
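To make the first and third points concrete, here is a minimal playbook sketch. The group name, app name, paths, and tags are illustrative placeholders, not the customer’s actual configuration; the idempotency comes from `ansible.builtin.copy` only changing files that differ, and the handler only firing when a change occurred.

```yaml
# Hypothetical sketch: push a forwarder app to Linux UFs, restart only on change.
- name: Deploy Splunk UF outputs app
  hosts: splunk_forwarders        # illustrative inventory group
  become: true
  tasks:
    - name: Push outputs app from the Git-managed repo
      ansible.builtin.copy:
        src: apps/org_all_forwarder_outputs/   # illustrative app name
        dest: /opt/splunkforwarder/etc/apps/org_all_forwarder_outputs/
        owner: splunk
        group: splunk
      notify: Restart splunk forwarder
      tags: [linux, uf, site14]

  handlers:
    - name: Restart splunk forwarder
      ansible.builtin.service:
        name: SplunkForwarder
        state: restarted
```

Running this playbook repeatedly is safe: if the deployed app already matches the Git source, `copy` reports no change and the restart handler never runs.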

Mapping Splunk Concepts to Ansible

If you’re comfortable with Splunk, the jump to Ansible is shorter than you think. We essentially re-mapped familiar Splunk architecture to Ansible equivalents:

| Splunk Concept | Ansible Implementation | Description |
| --- | --- | --- |
| Deployment Server | Ansible Control Node | The central “source of truth” running our playbooks. |
| Deployment Client | Inventory Host | Each forwarder (UF/IF) is defined in a YAML inventory file. |
| Server Classes | Host Tags and Groups | We use tags like linux or uf to target specific systems. |
| Deployment Apps | Roles & Files Structure | Apps are managed in Git and pushed to targets via playbooks. |
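The “Server Classes become groups” mapping looks roughly like the inventory sketch below. Hostnames, group names, and variables are hypothetical stand-ins; the point is that group membership plays the role a server class would in a Deployment Server.

```yaml
# Hypothetical YAML inventory: groups replace Splunk server classes.
all:
  children:
    linux_uf:
      hosts:
        plant-hist-01.site14.example:
        plant-hist-02.site14.example:
      vars:
        splunk_home: /opt/splunkforwarder
    windows_uf:
      hosts:
        scada-eng-01.site14.example:
      vars:
        ansible_connection: winrm   # Windows hosts managed over WinRM
```

Targeting `linux_uf` in a playbook then behaves like scoping a deployment app to a server class, with the added benefit that the whole file lives in version control.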

What We Learned

Even with the best automation, a tight six-week turnaround like this had its “gotchas.” Here are two lessons that could save you time on your next project:

1. Splunk ARI and GDI Dependency

We were tasked with setting up Splunk Asset and Risk Intelligence (ARI). A key lesson: don’t start the ARI “polish” until the Getting Data In (GDI) is 100% finished. ARI relies entirely on the quality and consistency of your data inputs. If you’re still tweaking data inputs a week before the project ends, ARI dashboards can break. Finish the data onboarding first; the intelligence layer comes second.

2. Permissions and Ownership

Automation is only as good as its permissions. For Linux targets, we had to ensure a Splunk user was consistently defined across all sites to avoid ownership errors upon file delivery. On the Windows side, we found that using the local administrator account for the Ansible connection was the most reliable way to ensure the Splunk service could be restarted remotely after a configuration change.
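The ownership lesson above reduces to a couple of idempotent tasks. This is a sketch, not our exact playbook: the paths, service name, and the assumption that a `splunk` user already exists on every Linux target are all illustrative.

```yaml
# Hypothetical tasks enforcing consistent ownership and service control.
- name: Ensure the splunk user owns the forwarder install (Linux)
  ansible.builtin.file:
    path: /opt/splunkforwarder
    owner: splunk
    group: splunk
    state: directory
    recurse: true
  when: ansible_os_family != "Windows"

- name: Restart the forwarder service after config changes (Windows)
  ansible.windows.win_service:
    name: SplunkForwarder
    state: restarted
  when: ansible_os_family == "Windows"
```

Running the ownership task before delivering app files prevents the “files landed as root” class of errors that otherwise stops the forwarder from reading its own configuration.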

Conclusion: Focus on the Plumbing

Building a massive Splunk environment in six weeks reaffirmed for us that agility requires automation. By replacing the traditional Deployment Server with an Ansible-driven process, we created a system that is secure enough for the Purdue Model and repeatable for future expansions.

Whether you’re dealing with isolated gas plants or a complex cloud-hybrid stack, having a version-controlled “source of truth” for your configurations is what can save the project.


Ready to modernize your Splunk environment? Contact Us to learn how our experts can help you automate your secure Splunk environment.

Discovered Intelligence Inc., 2026. Unauthorized use and/or duplication of this material without express and written permission from this site’s owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Discovered Intelligence, with appropriate and specific direction (i.e. a linked URL) to this original content.
