The fragile triangle: Uncovering blindspots in vulnerability detection

When it comes to vulnerability detection, there's a foundational triangle of three components, each of which depends on the others to effectively determine whether a piece of software is vulnerable. These are:
1. The container image
This is the actual filesystem used in production. Its layered artifacts contain:
- System packages: OS-level components installed via package managers (apt, apk, yum)
- Runtime environments: Programming language interpreters and compilers (Python, Node.js, Java)
- Application dependencies: Libraries and frameworks from language ecosystems (npm, pip, gem)
- Configuration files: Settings that affect security posture, permissions, and behavior
- Application code: Your custom software and binaries
- Metadata: Labels, environment variables, and other runtime instructions
The layered nature means vulnerabilities and sensitive information can be introduced, or masked, at multiple points in the build process. And the fact that images can be customized makes it harder for scanners to detect and map packages to CVEs.
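To make the layering concrete, here's a minimal Python sketch that lists what each layer of an exported image contributes. It assumes the image was exported with `docker save` to a hypothetical `image.tar`, and it targets the legacy layout where each layer is a nested tarball; newer OCI-style exports name layer blobs differently.

```python
import tarfile

# A minimal sketch: list the files each layer of a saved image contributes.
# Assumes `docker save myimage -o image.tar` and the legacy layout where
# layers appear as nested "<id>/layer.tar" entries (newer layouts differ).
with tarfile.open("image.tar") as image:
    layers = [m for m in image.getmembers() if m.name.endswith(".tar")]
    for layer in layers:
        print(f"--- layer: {layer.name}")
        with tarfile.open(fileobj=image.extractfile(layer)) as layer_tar:
            for member in layer_tar.getmembers():
                # Whiteout entries (".wh." prefix) mark deletions: a file
                # "removed" here still exists in an earlier layer, which is
                # exactly how content gets masked from naive inspection.
                print(member.name)
```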
2. Security advisories
These come from sources like the National Vulnerability Database (NVD), GitHub Security Advisories, and osv.dev. The job of these advisories is to collect CVEs, analyze and assess security properties, determine which software components are affected, and publish that information. This way, the public can query whether a specific piece of software is vulnerable and understand the context and meaning of that vulnerability.
Each advisory has specific focus areas. For example:
- GitHub Security Advisories focus primarily on the open source ecosystem and language packages.
- Distro advisories like Debian's mainly focus on operating system components, to understand how their own distros are affected.
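As an illustration of querying one of these sources, the sketch below asks osv.dev whether a specific package version is affected, using its public query API. The package name and version are just examples.

```python
import json
import urllib.request

# A minimal sketch of querying one advisory source, osv.dev, for a
# specific package version. The package and version are illustrative.
query = {
    "package": {"name": "lodash", "ecosystem": "npm"},
    "version": "4.17.15",
}
req = urllib.request.Request(
    "https://api.osv.dev/v1/query",
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# Each entry describes one advisory that affects this exact version.
for vuln in result.get("vulns", []):
    print(vuln["id"], vuln.get("summary", ""))
```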
3. CVE scanners
These are tools that analyze container images through a systematic process:
- Layer examination: Unpacking the image to inspect its complete filesystem
- Package identification: Detecting installed software through package databases, binary analysis, and manifest files
- Vulnerability matching: Correlating discovered components with advisory databases using version comparisons and CPE mappings
- Risk assessment: Evaluating severity and exploitability in the specific container context
While scanners employ sophisticated techniques to minimize false positives, they can only detect vulnerabilities they're programmed to recognize and only in components they can successfully identify.
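The matching step is conceptually simple, and a toy version shows why identification has to be exact. This is a deliberately simplified sketch: real scanners implement distro-specific version semantics (epochs, revision suffixes) rather than the plain numeric comparison used here.

```python
# A simplified sketch of the "vulnerability matching" step: compare an
# installed package version against an advisory's affected range. Real
# scanners handle epochs, revisions, and distro version schemes; plain
# tuple comparison is for illustration only.

def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def is_affected(installed: str, introduced: str, fixed: str) -> bool:
    return parse(introduced) <= parse(installed) < parse(fixed)

# Hypothetical advisory: affects 1.2.0 up to (not including) 1.2.5.
print(is_affected("1.2.3", "1.2.0", "1.2.5"))  # True  -> flag the CVE
print(is_affected("1.2.5", "1.2.0", "1.2.5"))  # False -> fixed version
```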
Each part of the triangle relies heavily on the others. The challenge? If one component misses something, the others typically don't compensate for it. This creates blindspots, and a fragile triangle of trust.
Blindspot #1: The Distro advisory trap
Each Linux distribution maintains its own security advisory, which is responsible for receiving vulnerability information, either directly from researchers or from upstream sources like the NVD, and for determining whether those vulnerabilities affect the distro's software.
This process requires the distro to triage each CVE, which involves analyzing whether the vulnerability is relevant to their specific builds. It’s common for multiple distros to include the same OS library, but compile and use it differently. As a result, the security impact of a CVE can vary significantly depending on how the library is packaged, configured, or integrated.
Once a CVE is determined to be relevant, the distro must also assess its properties—like severity, exploitability, and whether any existing mitigations apply—so the advisory reflects an accurate risk level for that environment. This information is critical because CVE scanners rely heavily on distro advisories to determine what’s vulnerable and how serious it is. There’s no better expert than the distro that ships the software, and scanners trust their assessments by default.
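For example, Debian publishes its tracker assessments as machine-readable JSON, which is roughly what a scanner ingests. Below is a minimal sketch, assuming the current schema of the Debian Security Tracker feed; the full file is large, and the field layout may change.

```python
import json
import urllib.request

# A hedged sketch of consuming one distro advisory feed directly. The
# Debian Security Tracker exposes its assessments as JSON; "openssl" and
# the "bookworm" release below are illustrative choices.
URL = "https://security-tracker.debian.org/tracker/data/json"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

for cve_id, details in data.get("openssl", {}).items():
    status = details.get("releases", {}).get("bookworm", {}).get("status")
    print(cve_id, "->", status)  # e.g. "resolved" or "open"
```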
But this process is far from perfect. A large part of it is still manual and prone to delays, inconsistencies, and oversights. In the wild, we’ve seen:
- CVEs that take too long to appear in advisories
- Severities that are incorrectly lowered
- Vulnerabilities wrongly marked as "not affected"
- Vulnerabilities with incorrect affected or fixed versions
These blindspots leave companies exposed to risks they don’t even know exist. And because every distro has its own standards, timelines, and SLAs for security updates, what’s acceptable to the advisory might fall short in commercial or production-grade environments. After all, when a critical CVE is published, you don’t want to be the one manually patching your stack while waiting for an advisory to catch up.
Blindspot #2: Missing or incomplete CPEs
CVE scanners rely on Common Platform Enumeration (CPE) data to determine whether a vulnerability applies to a given piece of software. But when a security advisory—especially from the NVD—misses or mislabels CPE information, the scanner fails to detect it. The CVE exists. It's public. It might be exploitable in your environment—but it's invisible to your tools.
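A toy example makes the failure mode obvious. The advisory dictionary below stands in for the NVD's CPE data, and the lookup is a drastic simplification of real CPE matching:

```python
# A simplified illustration of why a missing CPE hides a CVE. A scanner
# builds a CPE for each detected component and looks it up against
# advisory data; this dictionary stands in for the NVD.

advisory_cpes = {
    # cpe:2.3:part:vendor:product:version:...
    "cpe:2.3:a:openssl:openssl:1.1.1k:*:*:*:*:*:*:*": ["CVE-2021-3711"],
}

def lookup(cpe: str) -> list[str]:
    return advisory_cpes.get(cpe, [])

detected = "cpe:2.3:a:openssl:openssl:1.1.1k:*:*:*:*:*:*:*"
print(lookup(detected))  # ["CVE-2021-3711"] -- match found

# If the advisory were published without this CPE (an enrichment gap),
# the same lookup silently returns nothing:
advisory_cpes.clear()
print(lookup(detected))  # [] -- the CVE exists, but the scanner is blind
```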
This isn't rare or accidental—it’s systemic.
As of early 2025, NIST reported a backlog of over 20,000 CVEs pending enrichment, with CPE population being one of the most time-consuming steps. According to platforms like VulnCheck and Securin, these gaps mean that critical vulnerabilities are routinely published without the metadata that scanners need to catch them.
The problem is worsened by the fact that filling CPEs is hard. It requires nuanced judgment about what software is affected, under which versions, and in what context. Even seasoned analysts get it wrong—and the NVD doesn’t scale well to handle the growing volume.
So, while you’re waiting for the database to catch up, attackers may already be exploiting the very vulnerabilities your scanner can’t see.
Blindspot #3: Scanner evasion via tampering
Whether intentional or not, it’s surprisingly easy to break how scanners detect installed packages.
For context, each distro maintains a file listing all of the operating system packages installed through the distro's package manager. For example, Alpine uses /lib/apk/db/installed, while Debian and Ubuntu use /var/lib/dpkg/status. CVE scanners read these files to identify what's present in the image and match those packages against known vulnerabilities.
The problem? If someone changes, deletes, or encrypts these files, even unintentionally, the scanner can no longer read them. And if the scanner can't read them, it can't detect the associated vulnerabilities. This means a vulnerable package can sit in the image without being flagged, simply because the scanner can no longer see it.
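Here's a minimal sketch of that enumeration step for Debian/Ubuntu images, assuming the image filesystem has been unpacked to a hypothetical `rootfs/` directory. Note what happens when the status file is missing: not an error, just zero findings.

```python
from pathlib import Path

# A minimal sketch of how a scanner enumerates Debian/Ubuntu packages
# from an unpacked image rooted at "rootfs/" (an assumed path). If the
# status file was deleted or corrupted, the scanner simply sees zero
# packages, and therefore reports zero vulnerabilities.
status = Path("rootfs/var/lib/dpkg/status")

packages = {}
if status.exists():
    for stanza in status.read_text().split("\n\n"):
        fields = dict(
            line.split(": ", 1) for line in stanza.splitlines()
            if ": " in line and not line.startswith(" ")
        )
        if "Package" in fields and "Version" in fields:
            packages[fields["Package"]] = fields["Version"]

print(f"{len(packages)} packages found")  # 0 if the file is gone
```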
Blindspot #4: Software installed outside the package manager
CVE scanners rely heavily on the system package manager to determine which software components are installed, but this only works if the software was actually installed through the package manager in the first place.
In reality, it’s common to see Dockerfiles that download source code directly, build it manually, and install it outside of the package manager. These components won’t show up in the usual package metadata, which means scanners have no reliable way to detect them.
If the scanner can’t find the software, it can’t flag the vulnerabilities that come with it. So even though the vulnerable code and/or binary is present in the image, it may go completely undetected.
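One hedged way to surface such software is to compare the files actually present in the image against what the package manager claims to own. The sketch below does this for dpkg-based images using its per-package `.list` manifests; `rootfs/` is an assumed unpack location, and the directory list is illustrative.

```python
from pathlib import Path

# A hedged sketch of spotting software installed outside dpkg: diff the
# binaries present in an unpacked image ("rootfs/" is an assumed path)
# against the files dpkg claims ownership of via its .list manifests.
root = Path("rootfs")

owned = set()
for manifest in (root / "var/lib/dpkg/info").glob("*.list"):
    owned.update(manifest.read_text().splitlines())

for bin_dir in ("usr/local/bin", "usr/bin", "opt"):
    for path in (root / bin_dir).rglob("*"):
        rel = "/" + str(path.relative_to(root))
        if path.is_file() and rel not in owned:
            # Present in the image, but invisible to package-based scanning.
            print("untracked:", rel)
```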
The real-world impact
In an upcoming post, we’re going to unpack a real-world case study that touches nearly every organization relying on third-party vulnerability intelligence. Without spoiling the details, it exemplifies how misplaced trust in “official” data sources can quietly erode your security posture over time, and how what seems like a closed case at first glance may, in fact, still be wide open.
How to detect and eliminate blindspots
Based on our internal process, here are the best ways to close the visibility gaps:
- Don’t rely on a single security advisory source.
Collect information and context from multiple sources to evaluate CVE accuracy. When possible, build a vulnerability intelligence pipeline that:
- Aggregates data from multiple sources like the NVD, GitHub Security Advisories, distro-specific feeds, and independent security research.
- Cross-references vulnerability data to catch inconsistencies in severity or affected versions, and to fill gaps in CPEs.
- Validates each CVE before acting on it, confirming it actually applies to your environment.
- Tracks zero-day vulnerabilities before they receive an official CVE assignment.
- Don’t rely on a single CVE scanner.
Aggregate scan results from multiple tools like Trivy and Grype to improve coverage and reduce false negatives (see the sketch after this list).
- Use minimal, purpose-built base images.
Use container images that include only the components you actually need:
- Fewer components mean fewer vulnerabilities to manage.
- Hardening and reducing your base image lowers your overall attack surface.
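As a starting point for the scanner-aggregation idea, here's a hedged sketch that diffs Trivy and Grype findings by vulnerability ID, assuming both tools are installed locally. Both can emit JSON, but their schemas evolve, so treat the field paths below as assumptions to verify against the versions you actually run.

```python
import json
import subprocess

# A hedged sketch of aggregating two scanners' findings by CVE ID.
# The image name is illustrative, and the JSON field paths are
# best-effort assumptions about each tool's current output schema.
IMAGE = "alpine:3.18"

trivy = json.loads(subprocess.run(
    ["trivy", "image", "--format", "json", IMAGE],
    capture_output=True, text=True, check=True).stdout)
grype = json.loads(subprocess.run(
    ["grype", IMAGE, "-o", "json"],
    capture_output=True, text=True, check=True).stdout)

trivy_ids = {
    v["VulnerabilityID"]
    for result in trivy.get("Results", [])
    for v in result.get("Vulnerabilities") or []
}
grype_ids = {m["vulnerability"]["id"] for m in grype.get("matches", [])}

# Findings seen by only one tool are exactly the false negatives you
# would have missed with a single scanner.
print("both:", len(trivy_ids & grype_ids))
print("trivy only:", sorted(trivy_ids - grype_ids))
print("grype only:", sorted(grype_ids - trivy_ids))
```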
This layered approach gives you better visibility into what’s really in your containers – and which vulnerabilities actually matter.