The future of NVD

What is the National Vulnerability Database (NVD)?
The National Vulnerability Database (NVD) is the U.S. government’s central repository for information about publicly disclosed cybersecurity vulnerabilities. For years, it has served as a critical resource for security teams, software vendors, and researchers to assess risk and prioritize remediation efforts.
Recently, however, the NVD has been making headlines for a very different reason. A significant slowdown in its vulnerability enrichment process has disrupted downstream tools and workflows, raising broader questions about the long-term sustainability of the system that so many have come to depend on.
Where it all began
The path to the NVD began in 1999, when MITRE – one of the most influential entities in cybersecurity, defense, and public systems – launched the Common Vulnerabilities and Exposures (CVE) program with U.S. government sponsorship (today provided through the Department of Homeland Security). The goal was simple but transformative: establish a CVE list that would create a unified standard for vulnerability identifiers. This gave birth to what we now know as CVE IDs – unique identifiers for publicly disclosed security flaws. While the CVE list brought much-needed structure to the landscape, it didn’t include details like severity scores, impact analysis, or mappings to affected products. This left security teams with a common reference point, but limited actionable context.
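The identifier format itself is deliberately simple, which is part of why it stuck. As a quick illustration, the pattern below reflects the documented format – a year followed by a sequence number of four or more digits (the cap of exactly four digits was lifted in 2014 as CVE volume grew):

```python
import re

# CVE IDs follow the form CVE-<year>-<sequence>, where the sequence
# is four or more digits.
CVE_ID = re.compile(r"^CVE-\d{4}-\d{4,}$")

for candidate in ["CVE-2021-44228", "CVE-1999-0001", "CVE-2024-12"]:
    print(candidate, "valid" if CVE_ID.match(candidate) else "invalid")
```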
Following 9/11, in 2002 Congress passed the Cybersecurity Research and Development Act alongside the Federal Information Security Management Act (FISMA), requiring federal agencies to maintain secure IT systems, continuously assess risk, and track and remediate vulnerabilities – all of which demanded a more robust, enriched vulnerability resource than what the CVE system could provide on its own.
The rise of the NVD
In 2005, NIST officially launched the NVD to build on CVE identification with standardized, structured, and machine-consumable vulnerability metadata. It introduced CVSS scores to reflect the severity of each vulnerability, CPE (Common Platform Enumeration) strings to identify affected software, XML feeds (later superseded by JSON feeds and a REST API) for vulnerability scanning and management tools, and search/filter capabilities that made it usable across public and private sectors.
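To make that concrete, here’s a minimal sketch of pulling an enriched record from the NVD’s current REST API (version 2.0). The endpoint and field names follow NVD’s public API documentation at the time of writing; verify them against the docs before building on this, and note that unauthenticated requests are heavily rate-limited:

```python
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve(cve_id: str) -> dict:
    """Fetch a single CVE record, including CVSS and CPE enrichment."""
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    vulns = resp.json().get("vulnerabilities", [])
    return vulns[0]["cve"] if vulns else {}

cve = fetch_cve("CVE-2021-44228")  # Log4Shell
for metric in cve.get("metrics", {}).get("cvssMetricV31", []):
    print("CVSS v3.1 base score:", metric["cvssData"]["baseScore"])
```

When the enrichment pipeline stalls – as we’ll see below – the `metrics` object (and the CPE applicability data alongside it) simply never shows up, which is exactly what breaks downstream tooling.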
It quickly became the single source of truth for vulnerability intelligence – something every scanner, SIEM, GRC platform, and risk model would come to rely on heavily. The NVD became “too central to fail.” Nearly every security workflow – from SLAs to patch processes – now depends on its data.
Cracks in the foundation
But even in its early days, cracks started to show.
- There were long delays between CVE publication by MITRE and NVD enrichment.
- CPEs were often vague or overly broad, leading to false positives and false negatives in scan results (see the sketch after this list).
- CVSS scores often failed to reflect a vulnerability’s real-world exploitability – and severity alone doesn’t always correlate with risk.
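Here’s a deliberately naive sketch of why a wildcarded CPE string produces false positives. The matching logic below is a toy stand-in for the real CPE 2.3 match specification, but the strings follow the documented `cpe:2.3:part:vendor:product:version:...` format:

```python
def cpe_matches(pattern: str, target: str) -> bool:
    """Toy CPE 2.3 matcher: '*' in the pattern matches any field value."""
    return all(p in ("*", t)
               for p, t in zip(pattern.split(":"), target.split(":")))

installed = "cpe:2.3:a:apache:log4j:2.17.1:*:*:*:*:*:*:*"  # patched version
broad     = "cpe:2.3:a:apache:log4j:*:*:*:*:*:*:*:*"       # version wildcarded

# A vulnerability record carrying the broad CPE flags the patched
# install too – a false positive the analyst now has to triage away.
print(cpe_matches(broad, installed))  # True
```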
And in recent years, this tension has only intensified. On one side, the industry was advancing with more sophisticated approaches to supply chain security, including SBOMs (software bills of materials) and VEX (Vulnerability Exploitability eXchange) documents. On the other, the NVD remained stuck in an enrichment model that couldn’t keep pace with the flood of new CVEs – hundreds, sometimes thousands, per week.
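For readers who haven’t met VEX: it lets a supplier state whether a product is actually affected by a CVE, not just whether it contains the vulnerable component. A minimal sketch, loosely following the CycloneDX vulnerability/analysis shape (field names should be checked against the spec version you target):

```python
import json

# Illustrative VEX-style statement: the supplier asserts that, although
# the vulnerable component is present, the product is not exploitable.
vex = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "vulnerabilities": [{
        "id": "CVE-2021-44228",
        "analysis": {
            "state": "not_affected",
            "justification": "code_not_reachable",
        },
    }],
}
print(json.dumps(vex, indent=2))
```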
Security teams grew increasingly overwhelmed by the volume of CVEs and by the vague or incomplete metadata that made fixing every vulnerability impossible. This fueled the rise of CNAPPs (cloud-native application protection platforms) and similar solutions which, instead of remediating everything, focused on distinguishing the critical, exploitable, and reachable vulnerabilities that must be prioritized from those that are “okay” to live with (a rough sketch of that prioritization logic follows below). This in and of itself is not ideal; it means there are usually thousands (if not millions) of vulnerabilities that companies expose in production because there’s no straightforward way to resolve them, and they’re not considered important enough (yet) to invest in the hard way.
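A rough sketch of what exploitability-aware prioritization can look like in practice: cross-referencing findings against CISA’s Known Exploited Vulnerabilities (KEV) catalog. The feed URL and JSON shape reflect CISA’s public feed at the time of writing, and real CNAPPs combine far richer signals (reachability, runtime context, EPSS), so treat this as a minimal illustration:

```python
import requests

# CISA's KEV catalog: CVEs with confirmed in-the-wild exploitation.
KEV_FEED = ("https://www.cisa.gov/sites/default/files/feeds/"
            "known_exploited_vulnerabilities.json")

def known_exploited_ids() -> set:
    data = requests.get(KEV_FEED, timeout=30).json()
    return {v["cveID"] for v in data.get("vulnerabilities", [])}

def prioritize(findings: list) -> list:
    """Sort CVE IDs so confirmed-exploited ones come first."""
    kev = known_exploited_ids()
    return sorted(findings, key=lambda cve: cve not in kev)

# "CVE-2023-9999" is an arbitrary placeholder finding.
print(prioritize(["CVE-2023-9999", "CVE-2021-44228"]))  # Log4Shell first
```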
The 2024 slowdown
In early 2024, things went seriously wrong. Budget cuts and staffing constraints led to a major backlog in NVD enrichment. Thousands of CVEs were published without accompanying CVSS scores or CPE data, leaving records with no useful metadata – and downstream consumers with risky blind spots. Vendors began issuing public warnings, and some pushed for community-driven alternatives, like OSV, to fill the gap.
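OSV is worth a closer look because it inverts the NVD model: vulnerability data is keyed to package ecosystems and versions rather than to CPE strings. A minimal sketch of its query API (endpoint and payload follow OSV’s public documentation; double-check field names before relying on them):

```python
import requests

def query_osv(name: str, ecosystem: str, version: str) -> list:
    """Return known OSV vulnerability records for a package version."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": name, "ecosystem": ecosystem},
              "version": version},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("vulns", [])

# Maven names in OSV use the groupId:artifactId convention.
for v in query_osv("org.apache.logging.log4j:log4j-core", "Maven", "2.14.1"):
    print(v["id"], v.get("summary", ""))
```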
The 2025 close call
Things came to a head in early 2025, when MITRE issued a stark warning: unless new funding was secured, CVE assignments would stop on April 16. The next morning, CISA stepped in with emergency funding to extend MITRE’s operation for another year, narrowly avoiding a shutdown. But while the crisis was averted, the underlying fragility of the system was undoubtedly exposed.
This wasn’t just a budget scare. It was a wake-up call for an industry that has built critical infrastructure on top of a resource that is underfunded, understaffed, and under immense pressure. To be blunt, the NVD is at real risk of further degradation, and the ripple effects could be severe.
So, what now?
It seems that at this point, we’re left with more questions than answers.
- If the NVD can’t scale, who fills the vacuum?
- Could the open source community take on creating, maintaining, and enriching vulnerability data itself?
- Will security vendors that rely on this data step in to maintain it?
- Could we see a splintering of standards, with each enterprise maintaining its own view of the world?
The list goes on. And so do the deeper questions about where vulnerability management is headed. The industry’s long-standing dream of “shift left” depends on timely, reliable data early in the development lifecycle. If foundational systems like the NVD can’t modernize, does the burden shift right – toward runtime signals, exploit telemetry, and in-the-wild confirmation of risk?
And what happens to compliance-driven environments like FedRAMP, which currently mandate vulnerability scanning and risk assessment – both of which lean heavily on NVD data? Is there a next-generation NVD on the horizon, or will we see entirely new models for vulnerability intelligence emerge?
Regardless of how the future plays out, one thing is clear: reducing the attack surface will always be the surest defense. Fewer packages mean fewer vulnerabilities to worry about, no matter what.