NVD Backlog Crisis: NIST Rations CVE Enrichment After 263% Surge
Just saw the update from NIST regarding the National Vulnerability Database (NVD). With a reported 263% spike in vulnerability submissions, they are officially throttling the enrichment process for CVEs that don't meet specific high-priority criteria.
Effectively, this means we’re going to see a lot more 'bare-bones' entries—just the ID and a brief description—without the critical CVSS v3.1 scores or detailed CPE mappings we rely on for automated triage.
This is a potential nightmare for automated scanners and SIEM rules that trigger based on severity scores. If your ingestion pipeline expects a metrics object in every API response, you might be in for some broken parsers. I whipped up a quick Python snippet to test against the current API to see how we might handle missing enrichment data:
import requests

def check_cve_status(cve_id):
    """Query the NVD 2.0 API and report whether a CVE has CVSS v3.1 enrichment."""
    url = f"https://services.nvd.nist.gov/rest/json/cves/2.0?cveId={cve_id}"
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        data = response.json()
        if not data.get('vulnerabilities'):
            return f"Not found: {cve_id}"
        vuln = data['vulnerabilities'][0]['cve']
        # Check for CVSS v3.1 metrics (absent on unenriched entries)
        metrics = vuln.get('metrics', {})
        if 'cvssMetricV31' in metrics:
            score = metrics['cvssMetricV31'][0]['cvssData']['baseScore']
            return f"Enriched - Score: {score}"
        return "WARNING: Unenriched - No CVSS Score Found"
    except Exception as e:
        return f"Error: {e}"

print(check_cve_status("CVE-2026-1234"))
We are likely going to need to lean more heavily on vendor advisories or third-party feeds (like VulnDB or GitHub Advisory Database) to fill these gaps.
How are you planning to adjust your vulnerability management workflow to handle these 'flat' CVE entries? Are you going to default to a conservative 'High' score for unenriched entries?
This is going to break a couple of my SOAR playbooks that specifically trigger on CVSS severity thresholds. If the enrichment is missing, the playbook just stalls. I'm rewriting the logic now to catch KeyError on the metrics object and flag the CVE for manual review, defaulting to a 'Critical' status until we have more info. Better safe than sorry, even if it increases the alert volume temporarily.
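A minimal sketch of that fallback, assuming the NVD 2.0 response shape used in the snippet above (the helper name `resolve_severity` and the 10.0 default are my own; the default encodes the "Critical until manually reviewed" stance, and using `.get` avoids the KeyError entirely):

```python
def resolve_severity(cve):
    """Return (score, source) for a CVE record from the NVD 2.0 API.

    Unenriched entries fall through to a worst-case default so the
    playbook keeps moving instead of stalling on a missing key.
    """
    metrics = cve.get("metrics", {})
    if "cvssMetricV31" in metrics:
        return metrics["cvssMetricV31"][0]["cvssData"]["baseScore"], "nvd"
    # No enrichment: assume worst case and flag for manual review
    return 10.0, "default-critical"
```

Anything tagged `default-critical` gets routed to a manual-review queue rather than triggering the normal severity-based actions, which keeps the temporary alert inflation contained to one place.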
I've noticed the NVD API latency getting worse over the last few months anyway. We started shifting our internal scanner correlation to use the GitHub Advisory Database as a primary source a while ago—it's usually faster for open-source libs. For enterprise software, we are having to rely more directly on vendor bulletins. It seems the 'single source of truth' era for the NVD is officially over.
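For anyone wanting to try the same shift, here is a rough sketch of looking up a CVE in GitHub's global advisory endpoint as a fallback source. This assumes the public `GET /advisories` REST endpoint with a `cve_id` filter and a `cvss` object in the response; check the current GitHub REST docs before relying on these exact field names:

```python
import requests

def github_advisory_for(cve_id):
    """Fetch the first matching record from GitHub's global advisories, or None."""
    resp = requests.get(
        "https://api.github.com/advisories",
        params={"cve_id": cve_id},
        headers={"Accept": "application/vnd.github+json"},
        timeout=10,
    )
    resp.raise_for_status()
    advisories = resp.json()
    return advisories[0] if advisories else None

def advisory_score(advisory):
    """Pull the CVSS base score out of an advisory record, if present."""
    cvss = advisory.get("cvss") or {}
    return cvss.get("score")
```

Unauthenticated requests work but are rate-limited aggressively, so for pipeline use you would want to send a token. Coverage is strongest for open-source ecosystems (npm, PyPI, Maven, etc.), which matches the point above about it being a better fit for library CVEs than enterprise software.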
From a pentesting perspective, this makes confirming exploitability harder without the CPE data attached. Often I use NVD data to quickly identify if a specific service version is vulnerable without digging through 10 different vendor PDFs. I guess we're going back to manual verification. I'm curious what the specific 'criteria' are for enrichment—they weren't very clear in the announcement.