The Cost of 'Info' Severity: Are We Ignoring the Real Killers?
Just read the analysis of the 25M+ security-alert dataset, and the findings are gut-wrenching. The report suggests that enterprises are effectively missing roughly one actual threat per week because we've collectively decided to stop looking at low-severity and informational alerts.
I can’t say I’m surprised: between SIEM ingestion costs and massive alert fatigue, "tuning" often defaults to "suppressing." But this "institutionalized not looking" is a dangerous habit. Indicators of compromise (IOCs) frequently start as low-volume chatter that gets dismissed as noise. A single scheduled-task creation might be informational on its own, but paired with a specific user-agent string, it's game over.
We need to move away from static severity scores and start using correlation logic to surface these "boring" events. Here is a basic KQL query I've been using to hunt for recon binaries spawned by PowerShell, a pattern of seemingly benign activity that often gets ignored:
DeviceProcessEvents
| where Timestamp > ago(7d)
// Recon utilities launched by PowerShell rather than typed at an interactive prompt
| where InitiatingProcessFileName =~ "powershell.exe"
| where FileName in~ ("whoami.exe", "ipconfig.exe", "nslookup.exe")
| summarize count(), make_set(ProcessCommandLine) by DeviceName, AccountName
| where count_ > 5
If we rely solely on vendor severity, we'll keep missing the low-and-slow attacks that dwell in our environments.
How is your SOC handling this balance? Are you automating the triage of low-severity alerts with SOAR, or are you still manually digging through the noise?
This resonates deeply. As a SOC lead, I struggle to justify to management why we need to investigate 'Informational' alerts when they see our MTTR (Mean Time to Respond) looking good on high-priority stuff. We've started using Sentinel's Fusion rules to automatically correlate low-fidelity signals. For example, a failed logon (low) + a successful logon from a new country (medium) = Critical incident. It's the only way to catch the stuff that slips through the cracks without hiring 50 more analysts.
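For anyone who wants to prototype that correlation before leaning on Fusion, here's a minimal scheduled-rule sketch against Sentinel's SigninLogs. To be clear, this is not Fusion's ML; the one-hour window and the 30-day country baseline are assumptions you'd tune:
// Hedged sketch: failed sign-in followed within an hour by a success
// from a country absent from the user's 30-day baseline
let KnownCountries = SigninLogs
    | where TimeGenerated between (ago(30d) .. ago(1d)) and ResultType == "0"
    | summarize Countries = make_set(tostring(LocationDetails.countryOrRegion)) by UserPrincipalName;
SigninLogs
| where TimeGenerated > ago(1d) and ResultType != "0"  // failed sign-ins
| project UserPrincipalName, FailTime = TimeGenerated
| join kind=inner (
    SigninLogs
    | where TimeGenerated > ago(1d) and ResultType == "0"  // successful sign-ins
    | extend Country = tostring(LocationDetails.countryOrRegion)
  ) on UserPrincipalName
| where TimeGenerated between (FailTime .. (FailTime + 1h))
| join kind=inner (KnownCountries) on UserPrincipalName
| where not(set_has_element(Countries, Country))  // country is new for this user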
From a pentester's perspective, we love it when you ignore low-severity alerts. I've maintained persistence on engagements for months just by using LOLBins (Living Off The Land Binaries) that trigger generic informational events. If you aren't correlating specific command-line arguments in your EDR, you're blind. Great post on highlighting this gap.
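If it helps the blue teamers reading this, here's the flavor of command-line correlation that actually gets me caught. A minimal sketch assuming a Defender-style DeviceProcessEvents table; certutil and these arguments are one illustrative LOLBin, not an exhaustive list:
// Hedged sketch: certutil execution is usually logged as informational,
// but these arguments (download, decode) are rarely legitimate
DeviceProcessEvents
| where Timestamp > ago(30d)
| where FileName =~ "certutil.exe"
| where ProcessCommandLine has_any ("-urlcache", "-decode", "-encode", "-split")
| project Timestamp, DeviceName, AccountName, InitiatingProcessFileName, ProcessCommandLine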
I've given up on trying to catch everything via alerts. Instead, I focus heavily on reducing the attack surface so that 'low severity' events can't hurt us as much. If a user gets phished but they don't have local admin rights (and we have strict app whitelisting via AppLocker), the 'low severity' process execution is a non-event. Detection is good, but prevention is better.
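One caveat from our own environment: prevention still deserves telemetry, or you never learn what it's quietly absorbing. A rough sketch, assuming AppLocker events are forwarded into the Log Analytics Event table:
// Hedged sketch: count AppLocker blocks (8004) and audit-mode would-be
// blocks (8003) so the "non-events" are at least measured
Event
| where TimeGenerated > ago(7d)
| where EventLog == "Microsoft-Windows-AppLocker/EXE and DLL"
| where EventID in (8003, 8004)
| summarize Attempts = count() by Computer, EventID
| order by Attempts desc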
The danger is real, especially in K8s where 'Info' logs often hide privilege escalation attempts. We don't ignore them; we enrich them automatically. By correlating Info-level exec events with service account anomalies, we catch lateral movement without drowning analysts. Here's a basic KQL query we use to flag suspicious interactive shells in quiet namespaces:
KubeAuditLogs
| where Verb == "create" and tostring(ObjectRef.kind) == "Pod"
// First container's entrypoint is an interactive shell
| extend ContainerCommand = tostring(RequestObject.spec.containers[0].command)
| where ContainerCommand has_any ("sh", "bash")
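For the 'quiet namespaces' half, we follow up with a frequency baseline over the same table (KubeAuditLogs is specific to our pipeline, and the threshold below is an assumption to tune):
// Hedged sketch: namespaces where interactive shells are statistically rare
KubeAuditLogs
| where TimeGenerated > ago(30d)
| where Verb == "create" and tostring(ObjectRef.kind) == "Pod"
| where tostring(RequestObject.spec.containers[0].command) has_any ("sh", "bash")
| summarize ShellPods = count() by Namespace = tostring(ObjectRef.namespace)
| where ShellPods <= 3  // "quiet" threshold; tune to your baseline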
Spot on. We exploit the fatigue around Info alerts constantly. Instead of trying to triage everything, run a 'low and slow' hunt: look for Informational events that are statistically rare in your environment. The good stuff is usually hiding in the frequency anomalies.
SecurityEvent
| where TimeGenerated > ago(7d)
| where EventLevelName == "Informational"
// Rare (EventID, host, account) combinations are where the odd stuff hides
| summarize Count = count() by EventID, Computer, Account
| where Count <= 2
| project Computer, Account, EventID, Count
It’s amazing what you find when you look for what *isn't* happening rather than what is.
From an IAM perspective, the danger is hiding in identity lifecycle events. 'New Device Registration' or 'MFA Reset' are often just Info-level logs, yet they provide the keys to the kingdom for a patient attacker.
We've started correlating these benign-sounding events with specific risk indicators. For example, flagging when a new device is registered immediately following a failed login attempt:
AuditLogs
| where OperationName == "Add device"
| extend UserId = tostring(InitiatedBy.user.id)
// Device registered within an hour of a failed sign-in by the same user
| join kind=inner (SigninLogs | where ResultType != "0" | project UserId, FailTime = TimeGenerated) on UserId
| where TimeGenerated between (FailTime .. (FailTime + 1h))
Identity hygiene shouldn't be a casualty of alert tuning.