Likely Exploited Vulnerabilities (LEV): Breaking Down the New Metric from NIST

Shortly after posting our recent blog on CISA’s KEV Catalog, the National Institute of Standards and Technology (NIST) proposed a new metric for Vulnerability Exploitation Probability: Likely Exploited Vulnerabilities (LEV).

NIST’s work on Likely Exploited Vulnerabilities (LEV) stems from the need to address a critical gap in vulnerability management. Tens of thousands of vulnerabilities are published each year, yet only a small percentage are actively exploited in the wild, so organizations face the challenge of allocating limited remediation resources effectively. Existing approaches, such as the Exploit Prediction Scoring System (EPSS) and Known Exploited Vulnerability (KEV) lists, have limitations: EPSS tends to underestimate risk for vulnerabilities that have already been exploited, and KEV lists may not cover the full scope of exploited issues. In response, NIST has proposed the LEV metric as a probabilistic measure that uses historical EPSS data to quantify the likelihood that a vulnerability has been observed being exploited, filling an important metrological gap without presupposing its comparative benefits.

Let’s dig into how NIST arrives at LEV, without getting lost in the complex math.

The LEV Equation

Imagine you have a giant list of vulnerabilities (CVEs), where each vulnerability comes with a chance, or probability, that it has been exploited. However, we know from real-world data that only a few of these vulnerabilities are taken advantage of by attackers. The “Expected Proportion of Exploited CVEs” is a way to calculate, in simple terms, what fraction or percentage of all these vulnerabilities have likely been exploited over time.
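To make that per-vulnerability probability concrete, here is a minimal sketch in Python. It is an illustration of the idea described above, not NIST’s published equation: it assumes each historical EPSS score can be treated as an independent probability of exploitation during its scoring window and combines them with a complement product, whereas the actual LEV equation also weights windows by their length.

```python
def lev_probability(historical_epss_scores):
    """Illustrative LEV-style probability for one CVE.

    historical_epss_scores: EPSS scores observed over past scoring
    windows, each treated here as an independent probability that the
    CVE was exploited during that window (a simplifying assumption).
    """
    prob_never_exploited = 1.0
    for score in historical_epss_scores:
        prob_never_exploited *= (1.0 - score)
    # Probability the CVE was exploited in at least one window.
    return 1.0 - prob_never_exploited

# Example: a CVE with modest EPSS scores over four windows.
print(lev_probability([0.02, 0.05, 0.03, 0.04]))  # ~0.13
```

Even modest window-by-window scores accumulate over time, which is exactly why LEV can flag vulnerabilities whose current EPSS score looks unremarkable.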

That proportion is calculated with what NIST calls the Expected_Exploited() equation. In everyday language, think of it like this: for each vulnerability, you have a small score (based on past EPSS data) that tells you how likely it is that someone has exploited it at some point. By adding up all these scores and comparing the total to the size of the whole list, you get a rough (and conservative) estimate of the proportion of vulnerabilities that have been exploited. Even though we can’t observe every attack, this method gives a baseline idea of how many of the vulnerabilities might have been used by attackers.
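As a rough sketch of that summation (an illustration of the idea, not NIST’s exact Expected_Exploited() equation), assume we already have a LEV-style probability for each CVE:

```python
def expected_exploited_proportion(lev_probabilities):
    """Expected proportion of CVEs that have been exploited.

    Summing the per-CVE probabilities gives the expected *number* of
    exploited CVEs; dividing by the list size gives the proportion.
    """
    if not lev_probabilities:
        return 0.0
    expected_count = sum(lev_probabilities)
    return expected_count / len(lev_probabilities)

# Example: five CVEs, most of them unlikely to have been exploited.
print(expected_exploited_proportion([0.9, 0.05, 0.02, 0.1, 0.01]))  # ~0.22
```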

How Comprehensive Are KEV Lists?

KEV lists are collections of vulnerabilities that are known to have been exploited in real-world scenarios. However, until now, there was no easy way to tell whether these lists were capturing all the important vulnerabilities. LEV probabilities provide a way to check: a formula called the KEV_Exploited() equation estimates a lower bound on the number of vulnerabilities that should appear on a KEV list. In simple terms, this equation adds up the “chance scores” (LEV probabilities) for each vulnerability that falls within the relevant scope of the list. The total gives an estimate of how many vulnerabilities have been exploited, which you can then compare with the actual KEV list to see if anything might be missing.
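In code, that comparison could look roughly like the sketch below. The function name, data structures, and CVE identifiers are hypothetical, not taken from NIST; the point is simply that summing LEV probabilities over the in-scope CVEs yields an expected count to hold up against the actual KEV list size.

```python
def expected_kev_size(lev_by_cve, in_scope_cves):
    """Estimate of how many in-scope CVEs should appear on a KEV list,
    obtained by summing their LEV probabilities."""
    return sum(lev_by_cve.get(cve, 0.0) for cve in in_scope_cves)

# Hypothetical data: LEV probabilities and the CVEs a KEV list covers.
lev_by_cve = {"CVE-A": 0.95, "CVE-B": 0.40, "CVE-C": 0.05, "CVE-D": 0.80}
in_scope = ["CVE-A", "CVE-B", "CVE-C", "CVE-D"]
actual_kev = {"CVE-A", "CVE-D"}

estimate = expected_kev_size(lev_by_cve, in_scope)
print(f"Expected exploited CVEs in scope: {estimate:.2f}")  # 2.20
print(f"CVEs on the KEV list: {len(actual_kev)}")           # 2
# An estimate noticeably larger than the list suggests missing entries.
```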

Improving Remediation Prioritization with EPSS and KEV

Imagine you have two tools to help decide which vulnerabilities to fix first. One tool is a KEV list, a catalog of vulnerabilities that are known to have been exploited, and the other is the EPSS score, which predicts the chance a vulnerability might be exploited soon. However, both tools have their limits. A KEV list might miss some vulnerabilities that have been exploited, and EPSS scores can underestimate the risk for vulnerabilities that have already been exploited.

To improve this, LEV scores are introduced as a supplement. For KEV-based prioritization, you can create an LEV list by selecting vulnerabilities that have a high LEV probability. This extra list helps spot potential vulnerabilities that might have been left off the KEV list, allowing security teams to investigate and remediate them as needed. On the EPSS side, instead of relying solely on the sometimes inaccurate predictive scores, you can adjust the numbers: if a vulnerability appears on the KEV list, its risk is bumped up to the maximum value (1.0). Then, by combining this adjusted figure with the original EPSS score and the LEV probability (essentially taking the highest of the three), you get a composite score that better reflects the true risk. This combined approach helps ensure that vulnerabilities are prioritized not just based on prediction, but with an informed view of past exploitation as well.
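A minimal sketch of that composite score, based on the description above (the parameter names and structure are assumptions for illustration, not a published reference implementation):

```python
def composite_priority(epss, lev, on_kev):
    """Composite exploitation likelihood for prioritization:
    the highest of EPSS, LEV, and 1.0 when the CVE is on a KEV list."""
    kev_score = 1.0 if on_kev else 0.0
    return max(epss, lev, kev_score)

# Examples: a KEV-listed CVE, and one whose history outweighs its EPSS.
print(composite_priority(epss=0.10, lev=0.30, on_kev=True))   # 1.0
print(composite_priority(epss=0.05, lev=0.65, on_kev=False))  # 0.65
```

Taking the maximum is deliberately conservative: a vulnerability is treated as high risk if any one of the three signals says it is.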

Conclusion

While the accuracy of prediction scoring systems remains a subject of debate, it is encouraging to see security agencies working to bring more predictability to cybersecurity, and to see NIST acknowledge in the publication that the metric carries a margin of error that is not yet known. Initiatives like LEV demonstrate a commitment to refining vulnerability assessment frameworks, offering a structured approach to understanding exploitation trends.

As the metric evolves, the Revenera Software Composition Analysis (SCA) team will look to incorporate emerging metrics like LEV into prioritization workflows, so you can focus remediation where it matters most. Learn more about Revenera SCA.