With the increasing number of connected devices and CVEs released each year, organizations must prioritize assessing and mitigating cyber risks. This is where vulnerability management becomes a necessity.
As part of the vulnerability management process, Vulnerability Assessment (VA) involves identifying and prioritizing vulnerabilities in an organization's devices, networks, and software. By scanning and analyzing these assets, risk managers can quickly determine which newly released CVEs (Common Vulnerabilities and Exposures) affect them.
Once vulnerabilities are identified and prioritized, organizations can take steps to address them, such as patching or mitigating through security controls like encryption or access controls. Vulnerability management also includes periodic network scans to detect new threats or changes in existing vulnerabilities, ensuring a proper security posture is maintained over time.
In this blog, we describe the ICS vulnerability assessment process, explain how we identify CVEs, and show how organizations can better protect their digital assets and reduce the chances of malicious activity occurring.
The ICS Vulnerability Assessment Process
The VA process consists of three major steps:
- Identify the assets and assign them Common Platform Enumerations (CPEs)
- Identify the Common Vulnerabilities and Exposures (CVEs) that are being published
- Bind CVEs to the correct CPEs
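Conceptually, the three steps reduce to building an inventory of CPEs, collecting published CVEs, and joining the two. The sketch below is a minimal illustration of that pipeline, not Nozomi Networks' actual implementation; all names and data are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class CPE:
    # Simplified subset of CPE 2.3 attributes
    part: str      # "a" (application), "o" (OS), "h" (hardware)
    vendor: str
    product: str
    version: str

@dataclass
class CVE:
    cve_id: str
    affected: set = field(default_factory=set)  # set of affected CPEs

def vulnerabilities_for(assets, cves):
    """Step 3: bind published CVEs to the CPEs of identified assets."""
    findings = {}
    for asset_cpe in assets:
        hits = [c.cve_id for c in cves if asset_cpe in c.affected]
        if hits:
            findings[asset_cpe] = hits
    return findings

plc = CPE("h", "examplevendor", "exampleplc", "2.1")   # hypothetical asset
cve = CVE("CVE-2023-0001", {plc})                      # hypothetical CVE
print(vulnerabilities_for([plc], [cve]))
```

In practice each step hides most of the complexity: the inventory comes from asset identification, the CVE feed from databases and advisories, and the join requires fuzzy matching rather than exact set membership.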
To facilitate these steps, there should ideally be a single, accurate database in which an authority publishes discovered CVEs, following a strict standard structure and a trustworthy review process. Unfortunately, reality falls far short of that ideal.
The National Vulnerability Database (NVD), maintained by the National Institute of Standards and Technology (NIST), is currently the closest alternative available. An example of a CVE on the NVD website can be seen in Figure 1.
Despite its existence, the NVD has inherent characteristics that limit its reliability. One major issue is that it is not a primary source of information, resulting in delayed content releases.
The following sections describe the steps of the ICS vulnerability assessment process, what issues we run into, and potential solutions.
Step 1: Identifying the Assets and Assigning CPEs
The objective here is a standardized naming convention for every product or product family. MITRE created one, called CPE, which provides a standard for sharing information about products and their vulnerabilities. Without a common naming convention, sharing information about a vulnerability would be very difficult, as everyone would be speaking a different language.
There are usually three ways to identify an asset:
- Manual identification: conducting an exhaustive examination of each device by reading its datasheet or accessing the relevant project files within the software used for device configuration.
- Passive Network-based identification: listening to the device traffic to fingerprint it.
- Active Network-based identification: querying a device to read all the values needed to fingerprint it.
Relying only on manual identification is not advisable, as it lacks scalability and can be unreliable: it handles large amounts of data inefficiently and carries a higher likelihood of error.
The passive Asset Identification Strategy (AIS) is more efficient, scales better, and has little impact on network performance; this is the main approach used by Nozomi Networks. Its limitation is that not all devices communicate all the information needed to passively generate an accurate CPE. For example, a vulnerability may affect only a specific CPU configuration used by a PLC, yet the PLC may never transmit the model of the CPU it uses. During passive AIS, organizations must therefore take into account both the product model and its CPU configuration when assessing cyber risks related to this type of vulnerability. Another issue is that some devices encrypt the traffic they produce, making direct identification harder; in that situation, VA may need to rely on side-channel detection.
The active AIS is the most powerful strategy, because you can run both unauthenticated and authenticated device queries to extract all the required information. This comes at a price: it can add load to the network and to the queried device itself, and if not implemented properly, authenticated queries may create a security risk or even break the device.
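Whatever the strategy, identification ultimately reduces to fingerprinting whatever data the device exposes. As a toy illustration (the banner format and regex below are assumptions for the sketch, not a real protocol parser), one might map a device's self-description string to a CPE 2.3 formatted string:

```python
import re

# Illustrative self-description format: "<Vendor> <Product> v<Version>".
# Real device traffic is protocol-specific; this regex is an assumption.
BANNER_RE = re.compile(r"(?P<vendor>\w+) (?P<product>[\w-]+) v(?P<version>[\d.]+)")

def fingerprint(banner: str):
    """Return a CPE 2.3 formatted string from a banner, or None if unknown."""
    m = BANNER_RE.match(banner)
    if not m:
        return None  # not enough data to build an accurate CPE
    return "cpe:2.3:h:{vendor}:{product}:{version}:*:*:*:*:*:*:*".format(
        vendor=m["vendor"].lower(),
        product=m["product"].lower(),
        version=m["version"],
    )

print(fingerprint("Examplevendor Example-PLC v2.1"))
# Encrypted or terse traffic yields no match, forcing a fallback strategy:
print(fingerprint("\x16\x03\x01 ciphertext"))  # None
```

The `None` branch is exactly where the passive-AIS limitations above bite: if the device never transmits a field (such as its CPU model), no parser can recover it.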
There are two widely used CPE standards: versions 2.2 and 2.3.
- Version 2.2 may look simpler, but it has fewer attributes and is less versatile.
- Version 2.3 supersedes version 2.2, introducing the Well-Formed CPE Name (WFN) and adding new features suggested by the CPE user community. It remains backward compatible with version 2.2 through the URI binding.
According to NIST, “CPE Version 2.3 departs significantly from past CPE practice by breaking the CPE standard into a suite of separate specifications organized in a stack: Naming, Name Matching, Dictionary, and Language. Naming is the foundation of the stack, with each specification building on those that precede it.”
URI binding is the reversible process of converting a WFN into a machine-readable encoding; it is defined in the CPE 2.3 specification.
WFN example, CPE 2.3 (the canonical example from the CPE specification):

wfn:[part="a", vendor="microsoft", product="internet_explorer", version="8\.0\.6001", update="beta"]

URI binding of that WFN:

cpe:/a:microsoft:internet_explorer:8.0.6001:beta

As shown above, the URI binding exposes only a limited number of attributes to remain compatible with version 2.2, which has the following structure:

cpe:/{part}:{vendor}:{product}:{version}:{update}:{edition}:{language}

This means that the extra 2.3 attributes (sw_edition, target_sw, target_hw, other) must be packed into a single attribute, the edition component. The CPE 2.3 format is therefore more versatile than its predecessor.
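The two bindings can be illustrated with a short sketch. This is a simplified subset of the binding algorithms in the CPE 2.3 specification: real bindings also handle character escaping, the ANY/NA logical values, and 2.2 edition packing.

```python
# The 11 WFN attributes defined by CPE 2.3, in binding order.
WFN_ATTRS = ["part", "vendor", "product", "version", "update", "edition",
             "language", "sw_edition", "target_sw", "target_hw", "other"]

def bind_to_fs(wfn: dict) -> str:
    """CPE 2.3 formatted string: all 11 attributes, '*' for unspecified."""
    return "cpe:2.3:" + ":".join(wfn.get(a, "*") for a in WFN_ATTRS)

def bind_to_uri(wfn: dict) -> str:
    """CPE 2.2 URI binding: only the first 7 attributes survive."""
    parts = [wfn.get(a, "") for a in WFN_ATTRS[:7]]
    return "cpe:/" + ":".join(parts).rstrip(":")

wfn = {"part": "a", "vendor": "microsoft", "product": "internet_explorer",
       "version": "8.0.6001", "update": "beta"}
print(bind_to_fs(wfn))   # cpe:2.3:a:microsoft:internet_explorer:8.0.6001:beta:*:*:*:*:*:*
print(bind_to_uri(wfn))  # cpe:/a:microsoft:internet_explorer:8.0.6001:beta
```

Note how the URI binding simply drops the trailing attributes, which is why richer 2.3 data has to be packed into the edition component when 2.2 compatibility is required.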
Using proper CPEs alone is not enough to solve all the problems of this step; data quality and consistency matter too. Products are often sold through alternate vendors, which can change the CPE vendor field, lead to product renaming, and produce inconsistent versions. Additionally, CPE granularity may vary between past and present products.
To address these challenges, we implement customized solutions for individual products and put significant extra effort into improving and standardizing the data from NVD, using a combination of manual and automated techniques.
One example of this kind of problem is Hikvision sub-vendor naming. Hikvision manufactures cameras, DVRs, and other devices for many vendors that label them with their own brand. These devices transmit their information over the network using the SADP protocol, which is a very efficient way to identify them. The issue is that the SADP protocol always reports the vendor as Hikvision, so the rebranding vendor's name cannot be extracted from it to create the proper CPE. To solve this and create proper CPEs, we built an automation tool capable of extracting the vendor's name in other ways.
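One way such an enrichment tool could work is to infer the true vendor from other discovered attributes, such as the model string. The sketch below is purely illustrative: the XML field names and prefix table are assumptions, not real SADP fields or Nozomi Networks' actual logic.

```python
import xml.etree.ElementTree as ET

# Illustrative discovery payload; field names are assumptions for this sketch.
DISCOVERY_XML = """<ProbeMatch>
  <DeviceDescription>DS-2CD2043G0-I</DeviceDescription>
  <SoftwareVersion>V5.6.3</SoftwareVersion>
</ProbeMatch>"""

# Hypothetical mapping from model prefixes to the actual (re)branding vendor.
MODEL_PREFIX_TO_VENDOR = {"DS-": "hikvision", "XY-": "some_rebrand_vendor"}

def resolve_vendor(xml_payload: str) -> str:
    """Infer the true vendor, since the protocol itself always says Hikvision."""
    model = ET.fromstring(xml_payload).findtext("DeviceDescription", "")
    for prefix, vendor in MODEL_PREFIX_TO_VENDOR.items():
        if model.startswith(prefix):
            return vendor
    return "hikvision"  # fall back to the OEM

print(resolve_vendor(DISCOVERY_XML))  # hikvision
```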
Step 2: Identifying Published CVEs
This step is all about timing. When a CVE is released by a primary source of information, such as a vendor advisory, it may take the NVD weeks or months to review and distribute the information about that CVE.
For this reason, you would ideally parse every primary source of information promptly. Scaling this approach is difficult because advisories are distributed in a multitude of formats, so it would require a great deal of manual effort.
To overcome this issue, some larger vendors have begun distributing their advisories in the Common Security Advisory Framework (CSAF) format, a JSON file with standard fields, making things easier for everyone. See Figure 2 below:
Although this standard provides decent data consistency, larger vendors use it in slightly different ways, and smaller vendors do not use it at all. This means some data normalization and enrichment is still needed to provide a high-quality product to customers.
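Because CSAF is structured JSON, extracting the essentials is straightforward once the data is normalized. The sketch below walks a trimmed, CSAF-2.0-style advisory (the document content and product IDs are made up) and lists the still-affected products per CVE:

```python
import json

# Minimal CSAF 2.0-style advisory, trimmed to the fields used below.
# The advisory content and product IDs are illustrative.
ADVISORY = json.loads("""{
  "document": {"title": "Example advisory", "tracking": {"id": "VENDOR-SA-2023-001"}},
  "vulnerabilities": [
    {"cve": "CVE-2023-0001",
     "product_status": {"known_affected": ["PROD-1"], "fixed": ["PROD-2"]}}
  ]
}""")

def affected_products(advisory: dict):
    """Yield (cve, product_id) pairs for products still known to be affected."""
    for vuln in advisory.get("vulnerabilities", []):
        for pid in vuln.get("product_status", {}).get("known_affected", []):
            yield vuln["cve"], pid

print(list(affected_products(ADVISORY)))  # [('CVE-2023-0001', 'PROD-1')]
```

In a full CSAF document the product IDs would additionally be resolved against the `product_tree` section to obtain concrete product names and CPEs.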
Step 3: Binding CVEs and CPEs
Finally, it is crucial to associate CVEs with the correct CPEs to determine which CVEs affect the organization. The critical success factors for this step are maintaining accuracy and ensuring that the information remains up to date.
When marking a specific product as vulnerable to a CVE, you can be as specific as you want. Let's take a look at some examples:
- Product advisory example 1: “Software version below 13.3 with vulnerable extension enabled.”
- You are not always able to identify whether the vulnerable extension is installed, much less whether it is enabled, so you must make a trade-off.
- Product advisory example 2: “PLC XYZ when configured to use Modbus protocol on port 3.”
- According to the CPE 2.3 naming specification, representing user-defined configurations of installed products is out of scope.
Given the examples above, we can improve the accuracy of our results and make the process more effective by expanding our data collection beyond the CPE. Where that is not possible, we must find the best balance between false positives and false negatives. It is preferable to prioritize sensitivity (recall) over specificity (true negative rate); however, it is important to understand the incremental cost of raising sensitivity, as an excessive number of false positives can make the entire process ineffective.
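The trade-off can be made concrete with the standard definitions of recall and precision (the counts below are made-up numbers for illustration):

```python
def recall(tp, fn):
    """Sensitivity: fraction of truly vulnerable assets that were flagged."""
    return tp / (tp + fn)

def precision(tp, fp):
    """Fraction of flagged assets that are truly vulnerable."""
    return tp / (tp + fp)

# A permissive matcher: misses little (recall 0.95) but floods analysts
# with false positives (precision ~0.19).
print(recall(tp=95, fn=5), precision(tp=95, fp=400))

# A strict matcher: few false alarms (precision ~0.86), but 40% of real
# vulnerabilities slip through (recall 0.60).
print(recall(tp=60, fn=40), precision(tp=60, fp=10))
```

Tuning the matcher means moving between these two regimes; the text above argues for leaning toward the first while keeping the alert volume workable.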
For optimal data quality, keep in mind that once a CVE is published, the list of impacted products may change as patches are released and further investigation takes place. Typically, CSAF advisories and the NVD are efficient at providing notifications about such CVE updates.
ICS vulnerability assessment is a crucial process for many organizations, but these steps are often not as straightforward as they seem. Despite the inefficiencies in the identification and reporting ecosystem, it is important to respond quickly when CVEs are released. To accomplish this, organizations can rely on both the NVD and vendor advisories, combining automation and manual effort to overcome the obstacles with high accuracy.