Vulnerability scanners find CVEs accurately. That’s not the problem. The problem is that finding CVEs and finding what matters are different things, and most vulnerability scanners conflate them.

A scanner that reports 800 CVEs in a container image has done its job. What it hasn’t done is tell you which of those 800 CVEs represent actual exploitation risk in your specific application running in your specific environment. That determination—which is the one that drives real security decisions—requires information the scanner doesn’t have.


What Scanners Actually Report

Standard container image vulnerability scanners work by:

  1. Extracting the package inventory from the container image (OS packages, language runtime packages, application packages)
  2. Comparing that inventory against CVE databases (NVD, vendor advisories, distribution-specific databases)
  3. Reporting every CVE that matches any installed package version

This is correct and complete as a static inventory matching exercise. The output accurately reflects: “every CVE that exists in every installed package in this image.”
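The three steps above amount to a set lookup. A minimal sketch, using a hypothetical in-memory CVE database (real scanners query NVD and vendor feeds, and do proper version-range comparison rather than exact-version matching):

```python
# Hypothetical CVE database: package name -> list of (cve_id, vulnerable_versions).
# Real databases use version ranges; exact-match sets keep the sketch short.
CVE_DB = {
    "openssl": [("CVE-2023-0001", {"3.0.1", "3.0.2"})],
    "libxml2": [("CVE-2023-0002", {"2.9.10"})],
}

def match_inventory(inventory):
    """Report every CVE that matches any installed package version."""
    findings = []
    for pkg, version in inventory:
        for cve_id, vulnerable in CVE_DB.get(pkg, []):
            if version in vulnerable:
                findings.append((cve_id, pkg, version))
    return findings

# Inventory extracted from the image: (package, installed version) pairs.
image_inventory = [("openssl", "3.0.2"), ("libxml2", "2.9.14"), ("zlib", "1.2.13")]
print(match_inventory(image_inventory))  # [('CVE-2023-0001', 'openssl', '3.0.2')]
```

Note what the function takes as input: the package list alone. Nothing about runtime behavior, reachability, or deployment context enters the computation, which is exactly the limitation the rest of this piece describes.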

What the output doesn’t reflect: which CVEs are in packages that execute when the container runs, which vulnerable functions are called by application code, whether exploit conditions apply in this deployment context, or whether compensating controls mitigate specific CVE categories.

The scanner produces an inventory match. Security decisions require contextual analysis.


The Three Things Scanners Miss

1. Runtime execution context

A container image may have 400 installed packages. A specific application running in that container may actively load and use 40 of them. The other 360 are installed—as OS dependencies, as framework compatibility packages, as build artifacts—but never loaded during application execution.

The scanner’s output for this container lists CVEs from all 400 packages. The CVEs in the 360 non-executing packages look identical in the report to CVEs in the 40 that execute. Both get CVSS scores, severity ratings, and findings entries.

But the CVEs in the 360 non-executing packages are not exploitable through the application. The vulnerable functions are in libraries that never load; there’s no code path from any application entry point to the vulnerable code.

A scanner that doesn’t incorporate runtime execution data cannot distinguish between these two populations. The finding list contains both, with no indication of which is which.
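If runtime data is available, the partition is straightforward. A minimal sketch, where `loaded` stands in for the package set a hypothetical runtime profiler observed during application execution:

```python
def split_by_execution(findings, loaded_packages):
    """Partition findings into those in packages observed loading at runtime
    and those in packages that were installed but never loaded."""
    executing, dormant = [], []
    for finding in findings:
        target = executing if finding["package"] in loaded_packages else dormant
        target.append(finding)
    return executing, dormant

findings = [
    {"cve": "CVE-2023-1111", "package": "openssl"},
    {"cve": "CVE-2023-2222", "package": "imagemagick"},
]
loaded = {"openssl", "libc"}  # hypothetical runtime-profiler observation

executing, dormant = split_by_execution(findings, loaded)
# executing: the openssl finding; dormant: the imagemagick finding
```

The hard part is not this filter; it is producing a trustworthy `loaded` set, which requires profiling representative traffic rather than a single cold start.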

2. Application-specific reachability

Even within the 40 packages that do execute, specific CVEs may be in functions that the application never calls. A package that’s loaded (and therefore appears in runtime execution data) may have a CVE in a feature the application doesn’t use.

Static call graph analysis—tracing which functions the application calls and whether they reach the vulnerable function—provides this finer-grained reachability assessment. Most scanners don’t perform this analysis; they report CVEs at the package level, not the function level.
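The reachability question reduces to graph search: starting from the application’s entry points, is there any call path that reaches the vulnerable function? A sketch over a hypothetical pre-built call graph (real tools must first construct this graph from source or bytecode, which is the expensive part):

```python
from collections import deque

# Hypothetical static call graph: caller -> set of callees.
CALL_GRAPH = {
    "app.handle_request": {"json.loads", "db.query"},
    "db.query": {"driver.execute"},
}

def is_reachable(entry_points, vulnerable_fn, graph):
    """Breadth-first search from the application's entry points to decide
    whether the vulnerable function lies on any call path."""
    queue, seen = deque(entry_points), set(entry_points)
    while queue:
        fn = queue.popleft()
        if fn == vulnerable_fn:
            return True
        for callee in graph.get(fn, ()):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

# driver.execute is reachable from the request handler; xml.parse is not.
is_reachable(["app.handle_request"], "driver.execute", CALL_GRAPH)  # True
is_reachable(["app.handle_request"], "xml.parse", CALL_GRAPH)       # False
```

Dynamic dispatch, reflection, and plugins all punch holes in static call graphs, which is why reachability analysis is best treated as a refinement of runtime data rather than a replacement for it.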

3. Exploit condition applicability

Many CVE descriptions include conditions under which the vulnerability is exploitable: specific input formats, specific configuration states, specific authentication states. These conditions may not apply to your deployment.

A CVE that requires an attacker to supply a malformed XML document is not exploitable in a service that doesn’t accept XML input. A CVE that requires authentication bypass is not exploitable in a service that doesn’t have authentication in the relevant code path. Scanners report these CVEs because the package is present; they can’t assess whether the exploit conditions apply.
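One way a team can operationalize this is to tag findings with preconditions and check them against a deployment profile. The condition names below are hypothetical labels, not a standard vocabulary:

```python
# Hypothetical deployment profile maintained by the team.
DEPLOYMENT = {
    "accepts_xml_input": False,
    "has_authentication": True,
}

def conditions_apply(finding, deployment):
    """A CVE is actionable only if every known precondition holds here.
    Unknown conditions default to True (assume exploitable) to stay safe."""
    return all(deployment.get(cond, True) for cond in finding.get("preconditions", []))

xml_cve = {"cve": "CVE-2023-3333", "preconditions": ["accepts_xml_input"]}
plain_cve = {"cve": "CVE-2023-4444"}  # no documented preconditions

conditions_apply(xml_cve, DEPLOYMENT)    # False: the service takes no XML input
conditions_apply(plain_cve, DEPLOYMENT)  # True: nothing rules it out
```

The defaulting direction matters: a finding with no documented preconditions stays in scope, so missing data never silently suppresses a real risk.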


The Criteria for a More Useful Scanner

A vulnerability tool that provides actionable output for container environments should be evaluated against these criteria:

Does it distinguish between installed and executing packages? Runtime profiling data that identifies which packages load during application execution is the primary filter for reducing noise. Tools that provide this distinction dramatically shrink the finding list to the findings that are actually relevant.

Does it provide reachability analysis? Call graph analysis that identifies whether the vulnerable function is in the application’s execution path is the next level of filtering. Not all tools provide this; it’s worth knowing whether your tool does.

Does it integrate with runtime data in addition to static inventory? Container security software that combines static SBOM generation with runtime execution profiling produces a more accurate picture than static-only analysis.

Does it support automated remediation, not just finding generation? A tool that identifies that unused packages can be removed and generates a hardened image without those packages converts findings into remediation automatically. Tools that only generate findings require manual follow-through on every finding.

Does it provide EPSS data alongside CVSS? EPSS indicates exploitation probability in the real world. Tools that provide both severity (CVSS) and exploitation probability (EPSS) give teams better prioritization data than severity alone.
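One reasonable way to combine the two scores is to sort by exploitation probability first and use severity as the tiebreaker. A sketch with invented scores:

```python
# Hypothetical findings with both scores attached.
findings = [
    {"cve": "CVE-A", "cvss": 9.8, "epss": 0.01},  # severe, rarely exploited
    {"cve": "CVE-B", "cvss": 6.5, "epss": 0.72},  # moderate, actively exploited
    {"cve": "CVE-C", "cvss": 7.5, "epss": 0.72},  # higher severity, same EPSS
]

def prioritize(findings):
    """Sort by EPSS (exploitation probability) first, CVSS as tiebreaker."""
    return sorted(findings, key=lambda f: (f["epss"], f["cvss"]), reverse=True)

order = [f["cve"] for f in prioritize(findings)]
# order == ["CVE-C", "CVE-B", "CVE-A"]: both actively exploited CVEs
# outrank the theoretically severe but rarely exploited one
```

Teams may weight the scores differently (e.g., EPSS above a threshold always escalates), but any combined policy beats sorting by CVSS alone.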

Does it support VEX document generation? VEX assertions—documenting that a CVE is “not affected” with technical justification—allow downstream consumers of your SBOMs to filter non-applicable findings. Tools that support VEX generation make your security documentation more useful to partners and auditors.
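The shape of such an assertion is small. A sketch of an OpenVEX-style “not_affected” statement, with a hypothetical product identifier; consult the OpenVEX specification for the full set of required fields:

```python
import json

def make_vex_statement(cve_id, product, justification):
    """Build a minimal OpenVEX-style 'not_affected' statement (a sketch of
    the document shape, not a spec-complete implementation)."""
    return {
        "vulnerability": {"name": cve_id},
        "products": [{"@id": product}],
        "status": "not_affected",
        "justification": justification,
    }

doc = {
    "@context": "https://openvex.dev/ns/v0.2.0",
    "statements": [
        make_vex_statement(
            "CVE-2023-1111",
            "pkg:oci/myapp",  # hypothetical product identifier
            "vulnerable_code_not_in_execute_path",
        )
    ],
}
print(json.dumps(doc, indent=2))
```

The justification string is the payload: it tells downstream SBOM consumers not merely that you dismissed the CVE, but the technical basis on which they can dismiss it too.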


The Impact of the Noise

Security teams working with scanners that miss runtime context spend significant time on the wrong problems. Triage cycles consumed by CVEs in non-executing packages are cycles not spent on CVEs in the executing packages that matter. Engineers assigned tickets for non-exploitable CVEs lose confidence in the vulnerability management program over time.

The noise cost is not just efficiency—it’s organizational credibility. A vulnerability program that produces findings engineers know to be irrelevant loses the cooperation of the engineering team. The program continues running; the remediation stops happening.


Practical Steps for Better Scanner Evaluation

Run a pilot with execution profiling enabled. If your current scanner supports runtime profiling, enable it for a representative set of containers and compare the finding list before and after filtering by execution context. The reduction percentage is the noise level your current program is operating at.

Ask vendors specifically about execution context. In scanner evaluations, ask: “How does your tool differentiate between CVEs in packages that execute at runtime versus packages that are installed but never loaded?” A vendor who can’t answer this question clearly is selling a static-only scanner.

Measure the false positive rate empirically. Take a sample of your current scanner findings for one container, investigate each finding to determine whether the package executes at runtime, and calculate the percentage that are in non-executing packages. This is your empirical false positive rate—the fraction of scanner output that requires triage with no security benefit.
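The arithmetic from the step above, as a sketch: mark each sampled finding by whether its package executes, then compute the noise fraction. The sample data is invented for illustration:

```python
def false_positive_rate(findings, executing_packages):
    """Fraction of findings located in packages that never execute."""
    if not findings:
        return 0.0
    non_executing = sum(1 for f in findings if f["package"] not in executing_packages)
    return non_executing / len(findings)

# Hypothetical sample of scanner findings for one container.
sample = [
    {"cve": "CVE-1", "package": "openssl"},
    {"cve": "CVE-2", "package": "imagemagick"},
    {"cve": "CVE-3", "package": "ghostscript"},
    {"cve": "CVE-4", "package": "zlib"},
]

rate = false_positive_rate(sample, executing_packages={"openssl", "zlib"})
# rate == 0.5: half the sampled findings are in non-executing packages
```

Even a sample of a few dozen findings from one representative container usually gives a stable enough estimate to justify (or rule out) investing in runtime-aware tooling.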

Evaluate both detection and remediation capability. A scanner that detects accurately but requires manual remediation on every finding is less valuable than a scanner with equivalent detection plus automated hardening capability. Evaluate the complete workflow, not just the detection step.


Frequently Asked Questions

Why does a vulnerability scanner miss what actually matters?

A vulnerability scanner accurately reports every CVE in every installed package, but it cannot determine which of those packages execute when the container runs, which vulnerable functions are reachable from application code, or whether exploit conditions apply to your specific deployment. The result is a finding list that treats CVEs in never-loaded packages identically to CVEs in actively used ones—producing noise that consumes triage capacity without improving security.

What is the difference between installed packages and executing packages in container vulnerability scanning?

A container image may have 400 installed packages while a specific application loads only 40 of them at runtime. The remaining 360 are installed as OS dependencies or build artifacts but never execute during application operation. A vulnerability scanner that doesn’t incorporate runtime execution data reports CVEs from all 400 packages with equal severity, making it impossible to distinguish exploitable findings from unreachable ones without manual investigation of each.

How does runtime execution context improve vulnerability scanner output?

When a vulnerability scanner incorporates runtime execution profiling—data about which packages actually load during application operation—it can filter the finding list to CVEs in packages that the application genuinely uses. This typically reduces the reported finding list by 70–90% for general-purpose container images, leaving only the CVEs that represent actual exploitable exposure. The remaining findings are both more accurate and more actionable for engineering teams.

What is EPSS and how does it improve on CVSS for vulnerability prioritization?

CVSS (Common Vulnerability Scoring System) measures the theoretical severity of a vulnerability based on its characteristics. EPSS (Exploit Prediction Scoring System) measures the probability that a vulnerability will be exploited in the real world within the next 30 days. Using both together gives security teams better prioritization data: a high-CVSS vulnerability with low EPSS is theoretically severe but rarely exploited in practice, while a moderate-CVSS vulnerability with high EPSS represents a more urgent remediation priority.


Scanners as a Starting Point

A good vulnerability scanner is necessary but not sufficient. It provides the raw inventory input that security programs need to function. The program becomes effective when that inventory is filtered by execution context, prioritized by exploitation probability and reachability, and connected to automated remediation for the findings that can be addressed without manual involvement.

The scanner that tells you everything that’s installed in a container is the beginning of the story. The tool that tells you which of those installations represent actual risk in your running application—and removes the ones that don’t—is the complete story.

By Admin