# Industrial Cyber Security Best Practices

![](https://www.agilicus.com/www/1641c6ff-security-best-practices-guide.webp)

At Agilicus, we recognise that the convergence of business networks and industrial control systems creates unprecedented hybrid environments. This document is a blueprint for anyone who operates both Information Technology and Operational Technology in a hybrid capacity. It is designed not as a static checklist, but as a strategic guide to govern your ongoing security investments.

This best practices guide is meant to be paired directly with our [Cyber Security Assessment Scorecard](/l/industrial-cyber-security-best-practices-scorecard/) as well as industry standards such as [NIST SP 800-82r3](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-82r3.pdf). While the scorecard measures your current posture and highlights the gaps, this blueprint provides the architectural roadmap to address them. These are not binary tasks to complete once and forget; they are a set of foundational practices that must be continuously improved across the board.

Below we present five separate, orthogonal dimensions. Each is a unique vector for reducing your cyber risk and increasing your controls. For each, we present the related best practice concepts.

## Executive Summary

Table of Contents

- [Executive Summary](#executive-summary)
- [Boundary Defence](#boundary-defence)
- [Identity & Credentials](#identity-credentials)
- [Lateral Movement](#lateral-movement)
- [System Hardening](#system-hardening)
- [Visibility & Detection](#visibility-detection)
- [Appendix: Understanding the Purdue Model](#appendix-understanding-the-purdue-model)
- [Appendix: Defence in Depth and the Swiss Cheese Model](#appendix-defence-in-depth-and-the-swiss-cheese-model)

The core philosophy underpinning this guide is [Defence in Depth](#appendix-defence-in-depth-and-the-swiss-cheese-model). We do not believe any single control point is absolute; every perimeter will eventually face a threat it was not designed to handle. Instead, we architect fallback defences based on the [Swiss cheese model](https://en.wikipedia.org/wiki/Swiss_cheese_model). Every defensive layer has inherent flaws (the holes in the cheese), but by stacking multiple, distinct layers, you ensure that a single point of failure does not cascade into a total systemic breach.

![](https://www.agilicus.com/www/5092f6cc-generated-image-april-14-2026-9_41pm-scaled.jpg)

Crucially, the practices outlined here are completely orthogonal. They control different risks using entirely different mechanisms. An adversary bypassing a network boundary does not automatically bypass an identity control, meaning an escape of one layer is not an escape of another.

## ***Boundary Defence***

Traditional perimeter security relied on the illusion of an air gap, but in modern operational technology environments, the air gap is no longer feasible. This dimension focuses on securing the multiple boundaries defined by the Purdue model (see [ISA99](https://www.isa.org/standards-and-publications/isa-standards/isa-standards-committees/isa99)) (e.g., from the outside world to the corporate information technology firewall, and from the information technology network into the operational technology network). It enforces the principle that external remote services and cloud connections must be fundamentally re-architected. Legacy virtual private networks and open inbound ports are obsolete; they must be replaced with outbound-only reverse tunnels and modern secure remote access.

![](https://www.agilicus.com/www/237ee549-image-1-2d99e6a4-c36d-411c-9100-a6e818d62fba-scaled.jpg)

### *Multi-Layer Boundary Enforcement*

The proliferation of interconnected devices means that a flat network is an open invitation for total compromise. When corporate systems bleed into industrial control environments, a single compromised workstation on the business side easily cascades into a physical disruption on the plant floor. We must construct hard boundaries between Information Technology, Operational Technology, and external vendors. This structural separation forms the bedrock of defence in depth; an attacker who breaches the corporate perimeter still faces an entirely isolated, heavily inspected environment before they can touch a safety instrumented system.

A common model describing internal security boundaries is the Purdue model [(see Appendix: Understanding the Purdue Model)](#appendix-understanding-the-purdue-model).

- Define strict physical and logical separation between the corporate network and operational technology network.
- Mandate firewall inspection at every boundary crossing.
- Isolate safety instrumented systems from all other networks.
- Implement a demilitarised zone for shared services.
- Deny all direct routing between the internet and operational technology assets in both inbound and outbound direction.

### *Elimination of Inbound Ports*

Publicly exposed inbound ports are a liability that search engines and automated scanners actively exploit. Relying on an open listener on your perimeter firewall guarantees that adversaries will repeatedly test your defences until they find a weakness. The architectural shift requires eliminating inbound ports completely. By forcing all external integrations to rely on polled, outbound-only connections, you render the operational technology environment invisible from the public internet. This removes the attack surface at the edge, drastically reducing the friction of constant patching and external monitoring.

- Audit all perimeter firewalls for open inbound ports.
- Close direct inbound access for legacy remote desktop and virtual private networks.
- Replace inbound listeners with outbound-only reverse tunnels.
- Ensure external integrations rely on polled, outbound connections.
- Block search engines from mapping perimeter edge devices.
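
To make the outbound-only pattern concrete, the following is a minimal Python sketch of a polled integration: an agent inside the operational technology network initiates every connection to an external broker, so no inbound listener ever exists. The broker URL, bearer-token handling, and JSON job format are illustrative assumptions, not a specific product interface.

```python
import time
import requests  # assumed available; any HTTPS client works

BROKER_URL = "https://broker.example.com/api/pending-work"  # illustrative endpoint
AGENT_TOKEN = "replace-with-short-lived-credential"          # never a static secret

def handle_job(job: dict) -> None:
    # Apply local validation and policy before acting on anything received.
    print(f"received job {job.get('id')} of type {job.get('type')}")

def poll_for_work() -> None:
    """Poll an external broker over outbound HTTPS on a fixed interval.

    No inbound port is ever opened: the OT-side agent initiates every
    connection, so the environment stays invisible to internet scanners.
    """
    while True:
        try:
            resp = requests.get(
                BROKER_URL,
                headers={"Authorization": f"Bearer {AGENT_TOKEN}"},
                timeout=10,
            )
            resp.raise_for_status()
            for job in resp.json().get("jobs", []):
                handle_job(job)
        except requests.RequestException as exc:
            print(f"poll failed, will retry: {exc}")
        time.sleep(30)  # outbound poll interval

if __name__ == "__main__":
    poll_for_work()
```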

### *Modern Remote Access Architecture*

Granting full network routing via legacy virtual private networks is equivalent to handing a contractor the master keys to the building just so they can fix a single sink. Traditional jump boxes and shared remote access tools fail to provide the granular control required in modern industrial environments. Instead, access must be brokered through an identity-aware proxy that limits the user strictly to the application they need. Terminating the session at the application layer breaks the direct routing path, meaning an infected remote machine cannot spread malware laterally into the operational technology network.

- Deprecate legacy VPN appliances.
- Remove shared remote access software (such as TeamViewer or AnyDesk).
- Implement an identity-aware proxy for all external access.
- Require continuous authorisation for every remote session.
- Terminate remote sessions at the application layer rather than granting network routing.

### *Alternate Perimeter Control*

Vendors frequently install rogue cellular modems or undocumented wireless access points to bypass strict corporate firewalls, prioritising maintenance convenience over security. These alternate perimeters act as hidden backdoors, entirely subverting the primary boundary defences you have meticulously designed. Identifying and eliminating these undocumented connections is critical. By enforcing strict network access control and physically auditing the environment, you ensure that all traffic is forced through the designated, heavily inspected chokepoints.

- Conduct physical audits for unauthorised cellular modems.
- Disable or strictly isolate guest wireless networks near operational technology environments.
- Block unauthorised vendor backdoors installed during commissioning.
- Implement network access control for physical switch ports.
- Monitor for dual-homed devices bridging corporate and operational technology networks.

### *Outbound Traffic Restriction*

A compromised internal device must communicate with an external command and control server to receive instructions or exfiltrate data. If outbound traffic from the operational technology network is permitted by default, attackers face zero resistance once inside. Implementing a strict default-deny posture for outbound traffic turns this vulnerability into a containment trap. By explicitly defining which internal devices are allowed to communicate externally and filtering that traffic through deep inspection, you ensure that malicious beacons fail to connect, buying time for the security team to respond.

- Implement a default-deny posture for all outbound operational technology traffic.
- Restrict necessary outbound connections to explicit Internet Protocol addresses and specific domain names.
- Inspect outbound traffic using next-generation firewall application filters.
- Block outbound access to known malicious infrastructure and generic cloud storage.
- Alert on newly established outbound connection patterns.
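
The default-deny posture can also be audited continuously. The sketch below, with assumed CSV column names and example allowlist entries, reviews exported firewall or flow records and reports any outbound operational technology connection that is not explicitly permitted.

```python
import csv

# Explicit allowlist: (source host, destination, port) tuples permitted to
# leave the operational technology network. Everything else is a violation.
ALLOWED_OUTBOUND = {
    ("historian-01", "203.0.113.10", 443),          # licensed cloud historian (example)
    ("patch-proxy", "update.vendor.example", 443),  # vendor update service (example)
}

def audit_outbound_flows(flow_csv_path: str) -> list[dict]:
    """Return outbound flow records not covered by the allowlist.

    Assumes a CSV export with 'src_host', 'dst', and 'dst_port' columns;
    adapt the field names to whatever your firewall or flow collector emits.
    """
    violations = []
    with open(flow_csv_path, newline="") as handle:
        for row in csv.DictReader(handle):
            key = (row["src_host"], row["dst"], int(row["dst_port"]))
            if key not in ALLOWED_OUTBOUND:
                violations.append(row)
    return violations

if __name__ == "__main__":
    for flow in audit_outbound_flows("outbound_flows.csv"):
        print(f"unexpected outbound flow: {flow}")
```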

## ***Identity & Credentials***

Identity is the new air gap. In an environment where network boundaries are increasingly porous, robust authentication is the primary line of defence. This dimension mandates the shift to a zero-trust architecture where identity is unified and heavily verified. It covers the eradication of shared accounts, default vendor passwords, and static credentials. A modern approach dictates that third-party vendors bring their own identity provider, allowing operators to enforce granular, resource-specific authorisation (e.g., read-only access to a specific human-machine interface) without creating new local accounts.

![](https://www.agilicus.com/www/5cb82e67-image-1-3efc907c-0580-4c9d-9298-7f569ec4b2d3-scaled.jpg)

### *Phishing-Resistant Multi-Factor Authentication*

Relying on passwords alone is a proven failure, and legacy authentication methods like short message service codes are easily intercepted by modern threat actors. When administrative access to a critically important role relies on weak authentication, the entire perimeter is compromised the moment a credential is stolen. We believe phishing-resistant multi-factor authentication, such as hardware security keys, is an absolute necessity. This ties the digital identity to a physical token, ensuring that stolen passwords cannot be leveraged by a remote attacker, adding a crucial layer of defence in depth.

- Mandate hardware security keys for highly privileged administrative access.
- Disable short message service and email-based authentication methods.
- Require multi-factor authentication for both local and remote access attempts.
- Protect authentication portals from brute-force and credential stuffing attacks.
- Implement session timeouts that require re-authentication for idle sessions.

### *Unified Identity and Single Sign-On*

When operators manage hundreds of separate local accounts across disparate human-machine interfaces, password reuse and credential sprawl become inevitable. This operational friction results in dormant accounts remaining active long after an employee departs, providing a silent vector for exploitation. Unifying identity through a single corporate directory eliminates this risk. Centralised authentication allows for immediate, global revocation of access when personnel change, ensuring that the operational technology environment remains strictly governed by a single source of truth.

- Centralise all human identities within a single corporate directory.
- Disable local accounts on individual programmable logic controllers and human-machine interfaces.
- Federate authentication using modern secure protocols.
- Ensure immediate session revocation when an identity is disabled centrally.
- Enforce unique, centrally managed passwords where single sign-on is technically impossible.

### *Third-Party and Vendor Access*

Creating static, local accounts for third-party vendors typically results in shared passwords and non-expiring access, making it impossible to confidently verify who is actually connecting to your systems. As supply chains become more integrated, managing the lifecycle of these external identities creates immense administrative overhead. Permitting vendors to authenticate using their own native corporate identity provider shifts the burden of credential management back to them. You no longer store their passwords; you simply authorise their verified identity to access specific resources for a limited time.

- Eliminate shared generic accounts.
- Allow contractors to use their own corporate identity provider.
- Implement time-bound access that automatically expires.
- Require vendors to meet device posture checks before authentication.
- Audit vendor access logs against active support tickets or maintenance windows.

### *Granular Role and Resource Authorisation*

Granting broad administrative access simply because an engineer needs to monitor a single process violates the core tenet of least privilege. When users possess more access than required, any compromised account becomes a highly destructive tool in the hands of an attacker. Granular authorisation ensures that roles are mapped tightly to specific tasks and individual devices. By decoupling network access from application access, you ensure that a contractor viewing read-only telemetry cannot inadvertently or maliciously issue a stop command to a programmable logic controller.

- Restrict access on a strict least-privilege basis.
- Map user roles directly to specific applications or individual assets.
- Separate read-only monitoring access from read-write engineering access.
- Prevent users from accessing unrelated resources within the same physical site.
- Require secondary approval for high-risk configuration changes.
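
A minimal illustration of least-privilege, resource-specific authorisation follows. The role names, resources, and policy table are hypothetical; the point is that every (resource, action) pair must be explicitly granted, so read-only telemetry access never implies write access to a controller.

```python
# Minimal policy table mapping a role to the specific resources and verbs it
# may use. A real deployment would load this from a central policy service
# rather than hard-coding it.
POLICY = {
    "vendor-telemetry": {("hmi-line-3", "read")},
    "controls-engineer": {("hmi-line-3", "read"), ("plc-line-3", "write")},
}

def is_authorised(role: str, resource: str, action: str) -> bool:
    """Allow only explicitly granted (resource, action) pairs: default deny."""
    return (resource, action) in POLICY.get(role, set())

# A read-only contractor can view telemetry but cannot issue a write command.
assert is_authorised("vendor-telemetry", "hmi-line-3", "read")
assert not is_authorised("vendor-telemetry", "plc-line-3", "write")
```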

### *Machine-to-Machine Authentication*

Static application programming interface keys are often embedded directly into scripts and rarely rotated, acting as immortal passwords for automated systems. If an attacker extracts one of these keys from a compromised server, they gain unfettered, persistent access to internal interfaces. The modern approach requires abandoning static secrets in favour of short-lived tokens and continuous authorisation. By cryptographically binding machine identities to specific hardware and requiring frequent token renewals, the window of opportunity for an attacker to misuse a stolen credential is fundamentally eliminated.

- Rotate service account passwords automatically.
- Replace static application programming interface keys with short-lived tokens.
- Bind machine identities to specific hardware using certificates.
- Enforce mutual transport layer security for system-to-system communication.
- Alert on anomalous service account usage outside of scheduled automation.
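
The sketch below shows one way to establish mutually authenticated transport layer security between two systems using Python's standard ssl module. The certificate paths and host name are assumptions; the essential idea is that each machine presents a certificate issued by your internal authority instead of a static shared secret.

```python
import socket
import ssl

# Paths are illustrative: a private certificate authority issues short-lived,
# per-machine certificates rather than long-lived shared secrets.
CA_BUNDLE = "/etc/pki/internal-ca.pem"
CLIENT_CERT = "/etc/pki/this-machine.crt"
CLIENT_KEY = "/etc/pki/this-machine.key"

def open_mutual_tls_connection(host: str, port: int) -> ssl.SSLSocket:
    """Open a client connection that verifies the server certificate and
    presents a client certificate, so each side proves its machine identity."""
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=CA_BUNDLE)
    context.load_cert_chain(certfile=CLIENT_CERT, keyfile=CLIENT_KEY)
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    raw = socket.create_connection((host, port), timeout=10)
    return context.wrap_socket(raw, server_hostname=host)

if __name__ == "__main__":
    with open_mutual_tls_connection("historian.ot.internal", 8443) as conn:
        print("negotiated", conn.version(), "with", conn.getpeercert()["subject"])
```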

## ***Lateral Movement***

If an attacker breaches the perimeter, their next step is to navigate the internal network. This dimension addresses how easily a threat actor can move once inside. It requires moving away from flat networks and permissive routing. By applying micro-segmentation and eliminating the use of inherently vulnerable protocols like remote desktop or legacy Microsoft authentication, organisations can contain breaches. Defence in depth here means ensuring that even authenticated sessions are brokered at the application layer, preventing broad network discovery.

![](https://www.agilicus.com/www/cdbb83f7-image-1-af105fe4-8116-4e96-8bed-4ab5164771c9-scaled.jpg)

### *Network Micro-Segmentation*

A flat internal network operates under the flawed assumption that everything inside the perimeter is trusted. Once an adversary bypasses the external firewall, this lack of internal friction allows them to move laterally with absolute impunity, discovering and compromising critical assets. Micro-segmentation divides the operational technology environment into tightly controlled functional zones. By enforcing strict rules that govern which devices can communicate with one another, you ensure that a compromised engineering workstation cannot freely connect to unrelated process controllers, containing the blast radius of a breach.

- Divide the operational technology network by functional process areas.
- Restrict communication between devices in different process zones.
- Implement host-based firewalls on all engineering workstations.
- Prevent programmable logic controllers from initiating connections to engineering workstations.
- Use software-defined perimeters to enforce access policies.
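
Micro-segmentation ultimately reduces to an explicit, directional flow policy between zones. The following sketch, with invented host and zone names, shows the default-deny logic such a policy enforces, including the rule that controllers never initiate connections back to engineering workstations.

```python
# Zones and the directed flows permitted between them. Anything not listed is
# denied, so a compromised workstation cannot reach unrelated process zones.
ZONE_OF = {
    "eng-ws-07": "engineering",
    "plc-packaging-01": "packaging",
    "plc-mixing-02": "mixing",
}

ALLOWED_FLOWS = {
    ("engineering", "packaging"),  # engineering may push programs to packaging
    # no ("packaging", "engineering") entry: controllers never initiate back
}

def flow_permitted(src_host: str, dst_host: str) -> bool:
    src_zone = ZONE_OF.get(src_host)
    dst_zone = ZONE_OF.get(dst_host)
    if src_zone is None or dst_zone is None:
        return False  # unknown devices are denied by default
    return (src_zone, dst_zone) in ALLOWED_FLOWS

assert flow_permitted("eng-ws-07", "plc-packaging-01")
assert not flow_permitted("eng-ws-07", "plc-mixing-02")      # different process zone
assert not flow_permitted("plc-packaging-01", "eng-ws-07")   # PLC cannot initiate
```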

### *Legacy Protocol Eradication*

Legacy administrative protocols like Remote Desktop Protocol and Windows Admin Shares were designed for convenience, not security, and they are the primary mechanisms attackers use to navigate internal networks. Leaving these protocols enabled by default provides built-in highways for adversaries to deploy ransomware across the entire industrial control system. Eradicating or heavily brokering these protocols removes the native tools attackers rely on to live off the land. This significantly elevates the difficulty of lateral movement, forcing the attacker to generate noisy, detectable traffic.

- Disable Remote Desktop Protocol internally unless explicitly required and brokered.
- Block Windows Admin Shares and Server Message Block traffic between workstations.
- Prevent the use of cleartext protocols like Telnet and File Transfer Protocol.
- Restrict the execution of remote procedure calls.
- Monitor for the use of legacy industrial protocols lacking native authentication.
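
Detection of lingering legacy protocols can be automated from flow records. This sketch assumes simple flow dictionaries with a destination port field and flags traffic on well-known legacy administrative ports; a real deployment would feed it from a flow collector.

```python
# Well-known ports for protocols this guide recommends removing or brokering.
LEGACY_PORTS = {
    23: "Telnet",
    21: "File Transfer Protocol",
    445: "Server Message Block",
    3389: "Remote Desktop Protocol",
    135: "Remote procedure call endpoint mapper",
}

def flag_legacy_protocols(flows: list[dict]) -> list[str]:
    """Flag flow records (each with 'src', 'dst', 'dst_port') that use legacy
    administrative protocols inside the network."""
    findings = []
    for flow in flows:
        name = LEGACY_PORTS.get(int(flow["dst_port"]))
        if name:
            findings.append(f"{name}: {flow['src']} -> {flow['dst']}")
    return findings

sample = [
    {"src": "eng-ws-07", "dst": "hmi-line-3", "dst_port": 3389},
    {"src": "historian-01", "dst": "plc-mixing-02", "dst_port": 44818},
]
print(flag_legacy_protocols(sample))  # flags only the internal RDP session
```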

### *Application-Layer Brokering*

When users authenticate to a traditional gateway, they are typically granted access to a full subnet, allowing their machine to ping and map the surrounding network architecture. This broad visibility is an intelligence goldmine for threat actors conducting reconnaissance. Application-layer brokering fundamentally alters this dynamic by establishing connections only to the specific service requested, never dropping the user directly onto the network itself. Because the network topology remains hidden from the endpoint, lateral discovery becomes impossible, deeply reinforcing the overall defence in depth strategy.

- Prevent remote users from pinging or discovering unrelated network assets.
- Broker all administrative connections through a secure bastion or proxy.
- Inspect session traffic for malicious payloads before it reaches the asset.
- Enforce clipboard and file transfer restrictions on brokered sessions.
- Terminate and re-establish sessions at the gateway to break direct routing.

### *Credential Protection*

Microsoft Windows environments often retain legacy authentication protocols in memory, allowing attackers who compromise a single machine to extract hashed passwords and impersonate administrators without ever cracking the plaintext password. This pass-the-hash technique turns a minor endpoint compromise into a full domain takeover. Restricting legacy protocols and enforcing modern memory protections, such as Windows Defender Credential Guard, isolates these secrets from malicious processes. By defending the credentials themselves, you break the attacker’s ability to escalate privileges as they move laterally.

- Disable NT LAN Manager authentication globally.
- Enforce Windows Defender Credential Guard to protect secrets in memory.
- Restrict administrative accounts from logging into lower-tier workstations.
- Monitor for credential dumping tools and suspicious memory access.
- Implement restricted admin mode for necessary remote desktop connections.

### *Internal Traffic Encryption*

Unencrypted internal traffic permits an adversary who has gained a foothold to silently observe administrative sessions and capture sensitive commands or credentials in plaintext. While encryption at the boundary is common, failing to encrypt east-west traffic leaves the interior of the network completely exposed to eavesdropping. Enforcing end-to-end encryption using a managed internal certificate authority ensures that all system-to-system communication is secure. Combined with active certificate revocation checking, this prevents attackers from using stolen or expired certificates to forge trusted internal connections.

- Encrypt all administrative sessions (like secure shell and hyper-text transfer protocol secure).
- Deploy certificates from a managed internal certificate authority.
- Enforce active certificate revocation checking within the environment.
- Replace self-signed certificates on critical operational technology infrastructure.
- Isolate unencrypted legacy traffic on dedicated, heavily monitored network segments.

## ***System Hardening***

Threat actors often "live off the land" by exploiting native administrative utilities to avoid detection. This dimension focuses on the restriction of tools and the rigorous enforcement of configuration baselines on all equipment, from standard servers to programmable logic controllers. It encompasses the application of standardised security frameworks (like National Institute of Standards and Technology guidelines or Centre for Internet Security Benchmarks), the lockdown of scripting environments, and the strict management of firmware updates to ensure unauthorised changes cannot persist.

![](https://www.agilicus.com/www/eeafec07-image-1-9d9e071e-c2df-4667-9ca6-dedca0a8628c-scaled.jpg)

### *Administrative Tool Restriction*

Modern threat actors prefer to 'live off the land,' leveraging native administrative utilities like PowerShell and the command-line interface to execute malicious payloads without triggering traditional antivirus alarms. If standard operators have unfettered access to these powerful tools, the system is primed for exploitation. Locking down scripting environments and enforcing application whitelisting ensures that only explicitly approved executables can run. This defensive layer strips the adversary of the built-in weapons they rely on, forcing them to import external tools that are much easier for security teams to detect.

- Restrict the use of PowerShell to designated administrative service accounts.
- Disable the command-line interface for standard operators.
- Implement application whitelisting to block unauthorised executables.
- Monitor the execution of native Windows Management Instrumentation tools.
- Remove unnecessary software and utilities from industrial control devices.

### *Configuration Management*

Operational technology environments are often manually configured, leading to a slow degradation of security posture known as configuration drift. When systems deviate from their secure baselines, it creates silent vulnerabilities and allows attackers to establish persistence through hidden scheduled tasks or altered startup services. Robust configuration management automates the enforcement of secure settings across all endpoints. By continuously monitoring for drift and immediately alerting on unauthorised changes, you guarantee that the intended defence in depth architecture remains actively enforced over the lifecycle of the equipment.

- Establish secure baselines for all operational technology endpoints.
- Monitor critical assets continuously for configuration drift.
- Track all configuration changes in a centralised change management system.
- Automate the deployment of secure configurations where possible.
- Alert on unauthorised creation of scheduled tasks or background services.
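
Configuration drift detection can be as simple as comparing cryptographic hashes of monitored files against an approved baseline. The file paths below are illustrative; the same approach extends to exported controller configurations and firewall rule sets.

```python
import hashlib
import json
import pathlib

def fingerprint(paths: list[str]) -> dict[str, str]:
    """Hash each monitored configuration file so drift can be detected later."""
    return {
        p: hashlib.sha256(pathlib.Path(p).read_bytes()).hexdigest()
        for p in paths
    }

def detect_drift(baseline_file: str, paths: list[str]) -> list[str]:
    """Compare current file hashes against the approved baseline and report
    any file whose contents changed since the baseline was recorded."""
    baseline = json.loads(pathlib.Path(baseline_file).read_text())
    current = fingerprint(paths)
    return [p for p, digest in current.items() if baseline.get(p) != digest]

if __name__ == "__main__":
    monitored = ["/etc/fw/rules.conf", "/opt/scada/server.ini"]  # illustrative paths
    # First run: record the approved baseline, e.g.
    # pathlib.Path("baseline.json").write_text(json.dumps(fingerprint(monitored)))
    for changed in detect_drift("baseline.json", monitored):
        print(f"configuration drift detected in {changed}")
```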

### *Standardised Security Baselines*

Deploying industrial equipment with factory-default configurations is equivalent to installing a high-security door and leaving the factory default combination on the lock. These unhardened systems contain unnecessary services, open ports, and default passwords that are universally known to attackers. Applying rigorous, industry-standard baselines, such as Centre for Internet Security benchmarks, systematically eliminates these vulnerabilities before the equipment enters production. This hardening process establishes a consistent, defensible foundation that significantly narrows the available attack surface for any adversary.

- Apply [Centre for Internet Security benchmarks](https://www.cisecurity.org/cis-benchmarks) or equivalent to all standard operating systems.
- Implement vendor-provided hardening guidelines for specific industrial control systems.
- Remove default factory configurations and default passwords before deployment.
- Disable unnecessary network services and daemons.
- Conduct periodic reviews of applied baselines against current threat intelligence.

### *Firmware and Software Integrity*

Supply chain attacks have demonstrated that even legitimate software updates can be weaponised if their integrity is not rigorously verified. Installing unverified firmware on a programmable logic controller can introduce deep, persistent backdoors that bypass all external network defences. A rigorous integrity management process ensures that every piece of software is cryptographically signed by a trusted authority before execution. By centrally managing and verifying these updates, organisations ensure that only authentic, unmodified code runs on their critical infrastructure.

- Centralise the management of firmware updates for all network devices.
- Verify cryptographic signatures of all software updates before installation.
- Maintain offline backups of known-good firmware versions.
- Test updates in a staging environment prior to production deployment.
- Enforce a strict change control process for all software modifications.
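
As a sketch of verification before deployment, the following assumes the vendor publishes an RSA public key and a detached SHA-256 signature for each firmware image, and uses the third-party cryptography library; adapt it to whatever signing scheme your vendor actually provides.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_firmware(image_path: str, signature_path: str, pubkey_path: str) -> bool:
    """Verify a detached RSA/SHA-256 signature over a firmware image before it
    is staged for deployment. Returns False if the image was modified."""
    with open(pubkey_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(image_path, "rb") as f:
        image = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    ok = verify_firmware("plc_fw_2.4.1.bin", "plc_fw_2.4.1.sig", "vendor_pub.pem")
    print("signature valid, safe to stage" if ok else "REJECT: signature mismatch")
```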

### *Immutable Infrastructure Principles*

Traditional systems treat data and operating environments as a single, changeable entity, meaning a successful compromise permanently alters the state of the machine. This allows rootkits and malware to embed themselves deeply within the system files. Immutable infrastructure principles decouple the operating system from persistent data, utilising read-only file systems wherever possible. If an unauthorised modification is detected, the system can be instantly reverted or rebuilt from a known-good image, effectively erasing the attacker's foothold and guaranteeing the absolute integrity of the endpoint.

- Utilise read-only file systems for critical operational technology devices where supported.
- Implement automated mechanisms to revert unauthorised changes instantly.
- Rebuild compromised systems from trusted images rather than attempting manual repair.
- Separate persistent data storage from the underlying operating system.
- Treat configuration as code to ensure consistent, reproducible deployments.

## ***Visibility & Detection***

You cannot defend what you cannot see. As autonomous agents and sophisticated threats evolve, deep network visibility is mandatory. This dimension scores the ability to detect anomalous behaviour and respond swiftly. It shifts the focus from basic boundary logging to comprehensive, identity-aware audit trails that capture granular, layer-7 requests. It also mandates full traffic capture across all levels of the Purdue model and real-time, automated inventory management.

![](https://www.agilicus.com/www/551a1b4f-image-1-cb8f9785-5361-46d1-bf5a-97e2c96b2513-scaled.jpg)

### *Centralised and Identity-Aware Logging*

Network-level logging that only records Internet Protocol addresses provides a fragmented, often useless picture during a forensic investigation. When multiple contractors share the same jump box, determining who actually initiated a malicious command becomes impossible. Centralised, identity-aware logging solves this by capturing highly granular, application-layer requests and tying every single action back to a specific, verified human identity. These cryptographically verifiable audit trails ensure non-repudiation, giving security teams the precise visibility required to understand exactly who did what, and when.

- Forward all security logs to a central security information and event management system.
- Ensure logs capture layer-7 details, including the exact file or uniform resource locator accessed.
- Tie every administrative action back to a specific, verified human identity.
- Protect log archives from tampering or unauthorised deletion.
- Retain audit logs in accordance with regulatory and incident response requirements.
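
An identity-aware audit record is simply a structured event that names the verified person, the exact resource, and the layer-7 action. The sketch below emits JSON records to a standard logger; in practice the logger would forward to your security information and event management system, and the field names shown are assumptions.

```python
import json
import logging
from datetime import datetime, timezone

# Ship this logger to your security information and event management system
# (syslog, HTTPS collector, etc.); stdout is used here only for illustration.
audit = logging.getLogger("ot-audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_action(identity: str, resource: str, method: str, target: str, allowed: bool) -> None:
    """Emit one layer-7, identity-aware audit record per administrative action."""
    audit.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "identity": identity,   # verified human identity, never an IP address alone
        "resource": resource,   # the specific asset that was touched
        "method": method,       # layer-7 detail: verb plus path or command
        "target": target,
        "allowed": allowed,
    }))

log_action("jane.doe@vendor.example", "hmi-line-3", "GET", "/trends/temperature", True)
log_action("jane.doe@vendor.example", "plc-line-3", "WRITE", "coil 40001 stop", False)
```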

### *Automated Anomaly Detection*

Static, signature-based antivirus software is routinely bypassed by novel malware and sophisticated attackers who operate entirely within the bounds of legitimate system tools. To detect these modern threats, defence mechanisms must understand what normal operation looks like in a highly deterministic industrial environment. Automated anomaly detection leverages behavioural analytics to continuously monitor protocol usage and data flows. When a programmable logic controller suddenly attempts to initiate an outbound connection to the internet (a stark deviation from its baseline), the system flags the anomaly immediately, catching the adversary before the attack escalates.

- Establish a baseline of normal network behaviour and protocol usage.
- Deploy behavioural analytics to detect deviations, such as unexpected outbound connections.
- Alert on abnormal login times or physical locations for operational technology personnel.
- Detect unusual data transfer volumes indicating potential exfiltration.
- Monitor for the introduction of new, unrecognised devices on the network.
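
Because industrial traffic is highly deterministic, even a naive baseline of observed conversations catches striking deviations. The sketch below uses invented hosts and ports to show the idea: anything outside the learned set of (source, destination, port) tuples raises an alert, such as a controller suddenly reaching out to the internet.

```python
# Baseline of (source, destination, port) conversations observed during normal
# operation. Industrial traffic is highly deterministic, so anything new is
# worth an alert even if it is not yet provably malicious.
BASELINE = {
    ("plc-mixing-02", "historian-01", 44818),
    ("hmi-line-3", "plc-line-3", 502),
}

def detect_new_conversations(observed: list[tuple[str, str, int]]) -> list[tuple[str, str, int]]:
    """Return conversations never seen during the baselining period."""
    return [flow for flow in observed if flow not in BASELINE]

observed_today = [
    ("hmi-line-3", "plc-line-3", 502),        # normal supervisory traffic
    ("plc-mixing-02", "198.51.100.7", 443),   # a controller calling the internet
]
for anomaly in detect_new_conversations(observed_today):
    print(f"ALERT: unexpected conversation {anomaly}")
```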

### *Deep Packet Inspection and Traffic Capture*

Attackers who successfully breach the perimeter will attempt to blend their malicious commands into the ambient noise of standard industrial protocols. Basic firewall rules simply verify the port and Internet Protocol address, completely ignoring the payload inside the packet. Deep packet inspection examines the actual content of the network traffic, identifying malformed packets or unauthorised read-write commands hidden within legitimate protocols. Combined with continuous traffic capture across all layers of the Purdue model, this provides defenders with the ultimate truth of what is occurring on the wire, ensuring no malicious activity goes unnoticed.

- Deploy network intrusion detection systems at all critical Purdue model boundaries.
- Utilise deep packet inspection to identify malicious payloads within industrial protocols.
- Maintain rolling packet captures to support post-incident forensics.
- Monitor internal east-west traffic, not just perimeter north-south traffic.
- Correlate network traffic alerts with host-based security events.

### *Real-Time Asset Inventory*

You cannot defend what you do not know exists. Relying on static, manually updated spreadsheets to track operational technology assets guarantees that security teams are operating with an outdated and inaccurate map of the environment. A real-time asset inventory uses automated discovery to continuously track hardware models, operating systems, and specific software libraries. This dynamic visibility ensures that when a new critical vulnerability is announced, the organisation can immediately identify exactly which devices are affected, eliminating blind spots in the defence strategy.

- Implement automated tools to continuously discover and classify network assets.
- Track hardware models, operating system versions, and installed software packages.
- Identify and flag devices running unsupported or end-of-life software.
- Map dependencies between operational technology assets and required corporate services.
- Reconcile automated discovery data against physical inventory records periodically.
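
Reconciling automated discovery against the inventory of record and flagging end-of-life software can be expressed in a few lines. The discovered records and dates below are illustrative placeholders for scanner output.

```python
from datetime import date

# Discovered assets would normally come from an automated scanner; the records
# and end-of-life dates below are illustrative.
DISCOVERED = [
    {"host": "hmi-line-3", "os": "Windows 10 LTSC", "eol": date(2027, 1, 12)},
    {"host": "eng-ws-02", "os": "Windows 7", "eol": date(2020, 1, 14)},
]
REGISTERED = {"hmi-line-3"}  # hosts already present in the inventory of record

def reconcile(today: date) -> None:
    for asset in DISCOVERED:
        if asset["host"] not in REGISTERED:
            print(f"unregistered device discovered: {asset['host']}")
        if asset["eol"] < today:
            print(f"end-of-life software on {asset['host']}: {asset['os']}")

reconcile(date.today())
```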

### *Automated Incident Response*

Human reaction time is insufficient when facing automated ransomware that can encrypt an entire facility in minutes. If the security operations centre must manually review logs, convene a meeting, and then issue firewall commands to stop an attack, the battle is already lost. Automated incident response translates visibility into immediate action. By programmatically isolating compromised assets or terminating suspicious user sessions the moment a critical anomaly is detected, the system contains the threat at machine speed, preserving the operational integrity of the larger environment.

- Develop automated playbooks for common threat scenarios.
- Isolate compromised assets dynamically using network access control or software-defined perimeters.
- Terminate active user sessions upon detecting suspicious behaviour.
- Block known malicious Internet Protocol addresses and domains automatically.
- Trigger immediate alerts to the security operations centre for critical operational technology events.
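
An automated playbook is simply a mapping from alert type to a containment action that runs without waiting for a human. In the sketch below the isolate and revoke functions only print; in a real deployment they would call your network access control or identity-aware proxy APIs, and the alert types are hypothetical.

```python
def isolate_asset(host: str) -> None:
    # In production this would call the network access control API.
    print(f"quarantining {host} via network access control")

def terminate_sessions(identity: str) -> None:
    # In production this would revoke sessions at the identity-aware proxy.
    print(f"revoking all active sessions for {identity}")

PLAYBOOKS = {
    "ransomware-behaviour": lambda alert: isolate_asset(alert["host"]),
    "credential-misuse": lambda alert: terminate_sessions(alert["identity"]),
}

def handle_alert(alert: dict) -> None:
    action = PLAYBOOKS.get(alert["type"])
    if action:
        action(alert)  # contain first, investigate second
    print(f"notifying security operations centre: {alert}")

handle_alert({"type": "ransomware-behaviour", "host": "eng-ws-02"})
handle_alert({"type": "credential-misuse", "identity": "svc-backup"})
```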

Incident response and reporting may be mandated in your jurisdiction and industry. Common standards include:

- [Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA)](https://www.cisa.gov/topics/cyber-threats-and-advisories/information-sharing/cyber-incident-reporting-critical-infrastructure-act-2022-circia)
- [SEC Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure](https://www.sec.gov/rules-regulations/2023/07/s7-09-22)

## Appendix: Understanding the Purdue Model

If you are tasked with securing an industrial environment, the Purdue Model is the architectural blueprint you must understand before placing a single firewall. You cannot defend what you do not understand structurally.

![](https://www.agilicus.com/www/bc7dd644-purdue-with-agilicus-connector-overlay-purdue-only.png)

### ***Origins and Purpose***

Developed in the 1990s by Theodore Williams at Purdue University, the framework was originally known as the Purdue Enterprise Reference Architecture. At its inception, it was not a cyber-security framework; it was an organisational model designed to map the flow of data and control in manufacturing. It defined how physical processes, basic control devices, and corporate business systems should logically interact to improve efficiency.

Over time, as Information Technology and Operational Technology began to converge, security practitioners adopted the Purdue Model. It provided the perfect structural map to define where network boundaries should exist. By segmenting the environment into distinct hierarchical levels, engineers could enforce strict access controls, ensuring that a compromised corporate email account could not directly issue commands to a physical pump or valve.

### ***The Hierarchical Levels***

The model divides industrial operations into separate zones, typically ranging from the physical process up to the corporate enterprise:

- **Level 0 (The Physical Process):** The actual physical equipment doing the work, such as motors, pumps, valves, and sensors.
- **Level 1 (Basic Control):** The devices that sense and manipulate the physical process, such as programmable logic controllers and remote terminal units.
- **Level 2 (Supervisory Control):** The systems used by operators to monitor the plant floor, including human-machine interfaces and Supervisory Control and Data Acquisition software.
- **Level 3 (Site Operations):** Systems that manage production across the facility, such as manufacturing execution systems or historian databases.
- **Demilitarised Zone:** The critical buffer zone inserted between the industrial network and the corporate network to broker shared services and prevent direct network routing.
- **Levels 4 and 5 (Corporate Information Technology):** The enterprise network where business functions, email, and corporate servers reside, eventually connecting to the external internet.

### ***Foundation for Industry Standards***

Because it elegantly defines the boundaries between different types of systems, the Purdue Model serves as the structural baseline for the world's most critical industrial cyber-security standards.

The International Society of Automation ISA-99 committee and the International Electrotechnical Commission 62443 series of standards use the Purdue Model concepts to define "zones" and "conduits." These standards mandate that systems with similar security requirements be grouped into a secure zone, and that communication between zones only occurs through tightly controlled, heavily inspected conduits.

Similarly, the National Institute of Standards and Technology relies heavily on the Purdue Model. In their guidelines for securing industrial control systems, they use the framework to illustrate how defence in depth should be practically applied. When auditors or regulators assess an industrial facility, they invariably look for adherence to this layered, segmented architecture.

### ***Authoritative References***

For a deeper understanding of how the Purdue Model is applied to secure critical infrastructure, consult the following authoritative sources:

- [**United States Department of Energy**: Infrastructure Topic Paper on Industrial Control Systems Security. Provides a strategic overview of securing energy infrastructure based on segmented architectures.](https://www.energy.gov/sites/default/files/2022-10/Infra_Topic_Paper_4-14_FINAL.pdf)
- [**National Institute of Standards and Technology**: Special Publication 800-82, Revision 3. The definitive guide to securing industrial control systems, which relies heavily on the zone and conduit models to restrict lateral movement.](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-82r3.pdf)
- [**SANS Institute**: Introduction to Industrial Control Systems Security. A practical breakdown of how security professionals map defensive strategies to the hierarchical Purdue levels.](https://www.sans.org/blog/introduction-to-ics-security-part-2)
- [NERC CIP (Critical Infrastructure Protection)](https://www.nerc.com/standards/reliability-standards/cip)

## Appendix: Defence in Depth and the Swiss Cheese Model

### ***The Philosophy of Inevitable Failure***

Defence in depth is not merely a technical architecture; it is a philosophy built on the acknowledgement of inevitable failure. Historically, organisations relied on a single, hardened perimeter: a supposedly impenetrable air gap or a massive perimeter firewall. This approach is fundamentally brittle. A single layer of defence with a single strategy means that once a threat actor finds a vulnerability, they achieve total compromise. True defence in depth requires deploying multiple, overlapping defensive measures so that if one mechanism fails, another stands in its way.

![](https://www.agilicus.com/www/4d823d83-image-1-2b1ca48c-c6a2-4214-aaf9-3d125317cf89-scaled.jpg)

### ***Uncorrelated Tactics and the Swiss Cheese Model***

The core mechanism of effective defence in depth is best illustrated by the [Swiss Cheese model of risk and security](https://en.wikipedia.org/wiki/Swiss_cheese_model). Imagine each layer of your security architecture as a slice of Swiss cheese. Every layer has inherent flaws, misconfigurations, or operational blind spots: these are the holes in the cheese.

If your defensive layers rely on the exact same strategy (for example, stacking three firewalls from the same vendor), the holes in the cheese perfectly align. A single exploit will pierce straight through. Therefore, it is critical that defensive tactics are uncorrelated and orthogonal. By combining entirely different control mechanisms, such as physical separation, application-layer identity verification, and deep packet inspection, you ensure the holes do not align. An adversary who bypasses a network control using a zero-day exploit will still be halted by an uncorrelated identity control requiring a hardware token.

### ***Zero Trust as a Key Component***

Zero Trust architecture is the modern catalyst for effective defence in depth. While legacy models assumed that anything inside the corporate network was trustworthy, Zero Trust operates on the assumption that the network is already compromised. It acts as a continuous, ubiquitous layer of verification that does not rely on physical network boundaries. By requiring explicit, continuous authorisation for every user, device, and application, regardless of location, Zero Trust provides an entirely uncorrelated layer of defence that complements traditional boundary controls perfectly.

### ***Key Activities and Posture Assessment***

Implementing and assessing defence in depth requires continuous operational discipline rather than a one-time installation. Key activities include:

- **Comprehensive Asset Discovery**: You cannot layer defences around assets you do not know exist.
- **Threat Modelling**: Identifying specific threat vectors to ensure diverse controls are applied where they are most needed.
- **Layered Control Implementation**: Deploying a mix of administrative policies, physical security controls, and technical safeguards.

To assess your defence in depth posture, organisations must abandon self-attestation in favour of rigorous, objective measurement. This involves conducting continuous vulnerability assessments, executing regular penetration tests that specifically attempt to chain exploits across different layers, and using structured frameworks such as Agilicus’ Cyber Security Assessment Scorecard to identify where defensive layers are dangerously correlated or missing entirely.

### ***Authoritative References***

For further reading on the foundational principles of layered security, consult these authoritative sources:

- **National Institute of Standards and Technology**: Measuring and Improving the Effectiveness of Defense-in-Depth Postures. A comprehensive analysis of how to objectively assess overlapping security controls. (<https://www.nist.gov/publications/measuring-and-improving-effectiveness-defense-depth-postures>)
- **National Institute of Standards and Technology Glossary**: Defense in Depth. The definitive technical definition of layered security. ([https://csrc.nist.gov/glossary/term/defense\_in\_depth](https://csrc.nist.gov/glossary/term/defense_in_depth))
- **Canadian Nuclear Safety Commission**: Defence in Depth. A practical look at how the philosophy of layered safety and security is applied in the most critical of physical environments, nuclear power generation. (<https://www.cnsc-ccsn.gc.ca/eng/reactors/power-plants/defence-in-depth/>)
- **Wikipedia**: Defence in Depth. An overview of how the military strategy was adapted into information systems security. ([https://en.wikipedia.org/wiki/Defence\_in\_depth](https://en.wikipedia.org/wiki/Defence_in_depth))
- **National Institute of Standards and Technology**: Special Publication 800-82r3, Guide to Operational Technology (OT) Security. (<https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-82r3.pdf>)
- [TSA Security Directive Pipeline-2021-02C (SD02C)](https://www.tsa.gov/sites/default/files/tsa_sd_pipeline-2021-02-july-21_2022.pdf)


Ready To Learn More?

Agilicus AnyX Zero Trust gives any user, on any device, secure connectivity to any resource they need, without a client or VPN. Whether that resource is a web application, a programmable logic controller, or a building management system, Agilicus can secure it with multi-factor authentication while keeping the user experience simple with single sign-on.

[BOOK A MEETING](/book-calendar-meeting/)
