In the past few years, the Zero Trust Security Model, or Zero Trust Architecture, has gained popularity from both technical and non-technical audiences.
It’s not hard to understand the reasons behind this success: the trust-by-design approach is outdated, and we’ve seen how badly things can go when a device or application is trusted a priori, without any dynamic consideration of context. Simply put, assuming devices and applications can be trusted just because they sit on the same network is an old, unsafe, and incorrect assumption.
There’s No Such Thing as a Free Lunch
Properly applying zero trust (ZT) requires studying how systems interact, what they need from each other and how to minimize information access. This might seem simple, but it’s not: especially if the interacting systems already exist and will need to be evolved or adapted to implement ZT.
As cybersecurity professionals, we don’t want to do a check-the-box job for the new thing that will make our boss happy. We want to implement what the theory says in the most rigorous manner, or we’ll end up dealing with the same issues as the outdated trust-by-design approach (with some extra complexity). Think about how many times that self-signed local TLS certificate is accepted despite being unsafe and is never replaced with a properly deployed, CA-issued one. There are endless examples.
As the popular adage suggests, there’s a lot to do to truly accomplish this goal.
In principle, the desired endgame of a ZT implementation requires all components (not some, nor the most important, nor just known, but all) to interact with each other with all verifications of access in place. But to accomplish this, we need all devices, applications and communication protocols to be designed with ZT in mind. Is this possible in all cases today? No. But we can draw a plan to get there.
To achieve a state of zero trust, I see two major milestones: the first where the old habits of trust-by-design are removed, but issues still remain because of technology limitations, and the second wherein every piece of the system supports the ZT approach.
While the first milestone should be achievable with enough effort, the second milestone will take time and will likely happen incrementally. More importantly, it will require the best technology talent to cooperate across the various standardization communities.
What’s the path to zero trust? The first milestone is moving away from trust-by-design; the second is fixing the technology limitations that prevent full implementation of zero trust.
Removing the Bias: A First Milestone Towards Zero Trust for OT and IoT
The mantra of zero trust is to never trust by design, but to always verify whether a user who has authenticated on an application/device/system has the rights to access a given resource, in a given context (where part of that context is the health status of the device).
This is generic enough to work, but the devil is in the details! How granular is the system requesting access to a resource? Is it a full device, an application, or a specific session? Or even more granular than that? And, how big is the resource being requested for access? Is it a full system, a file, a record, or an attribute?
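To make the granularity question concrete, here is a minimal, hypothetical sketch of a zero trust policy decision in Python. All names (`Subject`, `Resource`, the context keys) are illustrative assumptions, not any specific product’s API; the point is that how finely you model the subject and the resource is a design choice.

```python
from dataclasses import dataclass

# Hypothetical sketch: the subject could be a whole device, an application,
# or a single session; the resource could be a whole system or one attribute.

@dataclass
class Subject:
    device_id: str
    application: str
    session_id: str

@dataclass
class Resource:
    system: str
    object_path: str  # e.g. a whole system, a file, a record, an attribute

def decide(subject: Subject, resource: Resource, context: dict) -> bool:
    """Allow access only if identity, entitlement and context all check out."""
    authenticated = context.get("authenticated", False)
    device_healthy = context.get("device_healthy", False)
    entitled = (subject.application, resource.object_path) in context.get(
        "entitlements", set()
    )
    return authenticated and device_healthy and entitled
```

Note that the decision depends on the device’s health as part of the context, not just on who is asking, which is exactly the dynamic element that trust-by-design lacks.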
Remember: We Don’t Want to Check a Box
Our goal for this first milestone is to build the skeleton of ZT—without assuming it’s the endgame. In the NIST.SP.800-207 document, three main approaches are described for adopting ZT: ZTA Using Enhanced Identity Governance, ZTA Using Micro-Segmentation and ZTA Using Network Infrastructure and Software Defined Perimeters. None of these is easy to achieve, and none guarantees full compatibility with existing systems.
We can deploy some form of Policy Decision and Enforcement Points, like agents and/or properly capable and configured firewalls. This is definitely part of the first milestone: remove the bias towards “same LAN = more secure” and start to think differently. But if we look more closely, this alone won’t completely solve the problem that ZT sets out to solve.
Let’s See Why
Consider a minimal OT setup with one HMI or SCADA system connecting to a PLC using Modbus, DNP3, or IEC 104. And let’s suppose we manage to deploy a ZT-enabled agent on the HMI and microsegment the network so that the two actors have a dedicated network to communicate on. We could still have a malicious update (say, a supply chain attack where the automation vendor has been compromised) installed on the HMI during maintenance via USB key. The decision to still allow the HMI to access the PLC will be binary: yes or no.
Given the limitations of these protocols, it will be complex (or practically impossible) to grant granular access to the PLC based on application-level semantics, such as the specific sensors and actuators being controlled. In ZT terminology, it won’t be straightforward to narrow the concept of a resource to a small enough scope—it will likely be the whole PLC. Everything or nothing.
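The all-or-nothing problem can be sketched as the difference between a network-level enforcement point and a hypothetical protocol-aware one. The function codes below are real Modbus codes (3 = Read Holding Registers, 6/16 = writes), but the policy, register window, and function names are illustrative assumptions, not an existing product’s behavior.

```python
# Hypothetical sketch: a network-level PEP can only make a binary call
# (can the HMI reach the PLC at all?), while a Modbus-aware PEP could,
# in principle, filter on function code and register range -- granularity
# that most legacy deployments lack.

def network_pep(src: str, dst: str, allowed_pairs: set) -> bool:
    # All-or-nothing: either the HMI reaches the PLC, or it doesn't.
    return (src, dst) in allowed_pairs

def modbus_aware_pep(function_code: int, register: int) -> bool:
    # Example policy: permit only reads (FC 3 = Read Holding Registers)
    # within a small register window; block writes (FC 6, FC 16) entirely.
    read_only = function_code == 3
    in_window = 40001 <= register <= 40016
    return read_only and in_window
```

With only the first kind of enforcement available, a compromised HMI keeps full write access to the PLC as long as the pair is allowed at all, which is exactly the gap described above.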
A related challenge again comes from communication protocols: the example above used open, standard protocols, but OT and IoT networks are characterized by a huge variety of proprietary protocols, which makes these aspects even harder to tackle. And of course, security through obscurity is not a solution.
Another challenge is related to session authentication without changing the applications: many OT and IoT protocols either don’t have authentication and authorization embedded, or they have a very basic model for that. And still, operators often log in with shared accounts, but that’s another story.
What About Asset Health?
Moving to another topic: asset health. An asset should be granted access to resources also based on its health. Meaning, if patches are not applied, an anti-virus is not installed on the endpoint, or its signatures are outdated, that asset should only have access to the services that allow it to be updated, and then try again. For example, in OT environments it is quite common for automation vendors to validate Windows updates before allowing them to be installed on HMIs. A pragmatic ZT deployment should take this into account and allow a certain gap in patches. Again, the theoretical ZT model won’t be fully applied, leaving space for unmanaged risk that ultimately stems from a false sense of complete and absolute security.
It should now be clear that protocol-related challenges allow ZT to be implemented only with limitations and tradeoffs, ultimately imposed by legacy specifications and contexts.
Fixing Old Limitations: A Second Milestone Towards Zero Trust for OT and IoT
The first milestone is about removing the old attitude that two assets living on the same network have more trust than others. But as we have seen in the previous section, it is not possible to do a perfect deployment right now. With this second step, we want to fix those shortcomings.
And it won’t be just on us. This can’t be a project milestone for Customer X, Y or Z. It will need more time and cooperation, and it will be more like an asymptote to aim for—which means that in certain situations this second milestone will never be “done” or “completed.” But it will become more complete as each piece of the puzzle falls into place.
Regarding the protocol limitations, there are two possible approaches: either standard protocols evolve to incorporate ZT principles and are adopted by the industry, or the various vendors adapt and evolve their protocols and provide a solution (an agent, perhaps?) to properly implement ZT.
To allow standard protocols to embed authentication and authorization into their flows, and to offer more granular access to resources, efforts are underway on several fronts. For example, in the power systems domain, the IEC 62351 family of standards is adding useful building blocks to make ZT deployments more advanced. The ODVA standards development organization has improved the EtherNet/IP OT communication protocol with a comprehensive update, the CIP Security extension.
Allowing protocols to evolve and to offer proper security features for all verticals and contexts will take time. We suggest joining the working groups that define what matters to you, staying updated, and even contributing to the necessary evolutions.
There are certainly other aspects that need revision; we mentioned, for example, the tension between the timeliness of patches and the guarantee of proper quality. It will take time to evolve the software infrastructure to allow faster, more automatic validation and, at the same time, good integration with asset health services—so that the PDP/PEP can make the right decision given the context.
But What About the Purdue Model?
The world of technology is evolving and converging—and nowadays it’s increasingly difficult to look at a computer network and tell what’s IT, what’s OT, and what’s IoT (or IoMT, IIoT, etc.). Edges are blurring, and sometimes it’s better to forget about labels and remember that at the end of the day, it’s all just technology. One single system to be secured in the best way possible.
With that, in the OT world the Purdue Model (or better, the Purdue Enterprise Reference Architecture) has existed for decades and is used as a reference to design new systems and evolve old ones.
The gist of the Purdue Model is to divide the network into levels, where each level can communicate only with its close neighbors, and where security for the underlying process becomes stricter as the levels increase (and thus sit further from the automation equipment).
At each level of the Purdue Model, assets can only communicate with close neighbors, and security increases at each level.
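The neighbors-only constraint can be expressed in a few lines. This is a deliberately simplified sketch—the level assignments below are illustrative examples, and real Purdue deployments include sublevels and DMZs that this toy model ignores.

```python
# Hypothetical sketch of the Purdue constraint: an asset at level N may
# communicate only with assets at levels N-1, N, or N+1.

PURDUE_LEVELS = {
    "actuator": 0,   # physical process
    "plc": 1,        # basic control
    "scada": 2,      # supervisory control
    "mes": 3,        # site operations
    "erp": 4,        # enterprise
}

def purdue_allows(asset_a: str, asset_b: str) -> bool:
    return abs(PURDUE_LEVELS[asset_a] - PURDUE_LEVELS[asset_b]) <= 1
```

Contrast this with the zero trust view: Purdue answers “may these two levels talk at all?”, while ZT asks, per request, “may this subject access this resource in this context?”—which is why layering alone is not enough.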
What do zero trust and the Purdue Model have in common? They are both solutions for interconnecting assets in an organized manner, trying to minimize undesired network access to certain resources. However, ZT incorporates additional concepts that potentially make the Purdue Model outdated. We don’t necessarily want to open up that conversation here, but one possible way to see ZT from a Purdue Model perspective is that more controls need to be deployed, because separation into layers alone is not enough; on the other hand, ZT may suggest a more flexible and likely equally secure architecture for OT.
Wrapping Things Up
As we have seen, ZT for OT and IoT is as much about verification as it is about (a lack of) trust.
But it’s good to start adopting it, at least as a mindset. Zero trust will evolve for the next 5-10 years—maybe with the same name, maybe with other names, but the concepts are here to stay and evolve for sure.
Nozomi Networks can help with this transition, because visibility, continuous monitoring and health knowledge for each asset are core pieces of ZT. And the good news is that such a transition can start in a very non-intrusive manner.
At the same time, it’s important to understand that deploying Nozomi Networks and similar solutions is just the start. A full transition to zero trust takes extensive planning, time and effort.
Co-Founder and Chief Technical Officer
Armed with a Ph.D. in Artificial Intelligence and an extensive background in systems engineering and software development, Moreno Carullo has led the way in redefining the ICS cybersecurity product category. A long-time member of the IEC TC57 WG15 subcommittee, he is also actively working to shape cybersecurity standards for power system communication protocols. As Founder and Chief Technical Officer at Nozomi Networks, Moreno leads an exceptionally talented software development team that uses agile development to quickly address the cybersecurity requirements of enterprise customers and partners.