Scaling Security with AI: Moving Beyond the Buzzwords in Risk Management
I just read the THN article on how MSPs are leveraging AI for risk-based cybersecurity, and it really hits home regarding the struggle to scale services without drowning in alert fatigue.
The article focuses on the business model—driving recurring revenue through risk assessments—but the technical execution is where I see most MSPs stumble. To actually deliver "measurable value at scale," we can't just throw generic dashboards at clients. We need automated, contextual prioritization of vulnerabilities.
For example, standard CVSS scoring often paints a skewed picture. We've been experimenting with internal scripts that weigh CVSS against asset criticality and external exploit intelligence (like CISA KEV) to create a dynamic "Risk Score" for our ticketing system. This helps us decide what to patch immediately versus what can wait until the next window.
Here is a basic Python snippet we use to simulate this weighted scoring before feeding it into our PSA:
def calculate_risk_score(cvss_base, asset_criticality, exploit_available):
    """Weight a CVSS base score by asset criticality and exploit intelligence."""
    # Asset criticality: 1-5 (Low to Critical)
    # Exploit available: Boolean (e.g., listed in CISA KEV)
    weighted_score = cvss_base * (1 + (asset_criticality * 0.1))
    if exploit_available:
        weighted_score *= 1.5
    return min(10.0, round(weighted_score, 1))

# Example: a medium-severity vuln on a Domain Controller with an active exploit
print(calculate_risk_score(6.5, 5, True))  # 10.0
The article suggests this builds trust, but I’m curious: is anyone else successfully automating this "human-in-the-loop" prioritization without overwhelming junior analysts? How are you handling the data quality issues required for accurate AI risk modeling?
We tried a similar approach with a custom script last year, but we hit a wall with data quality. Our CMDB was often outdated regarding which servers were actually 'critical' versus test boxes. The AI model was prioritizing patching on decommissioned assets. We had to pause the automation until we enforced stricter asset onboarding policies. Solid logic in that Python script though—adding the multiplier for active exploits is a must-have these days.
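One lightweight way to guard against that failure mode is to gate scoring on basic asset hygiene checks before the risk formula ever runs. Here is a minimal sketch; the field names (`status`, `last_seen_days`) and the 30-day freshness threshold are hypothetical placeholders, not a real CMDB schema:

```python
def is_scoreable_asset(asset):
    """Only score assets that the CMDB says are active and recently seen."""
    # Hypothetical record fields; adjust to whatever your CMDB export provides
    return (
        asset.get("status") == "active"
        and asset.get("last_seen_days", 999) <= 30
    )

assets = [
    {"name": "DC01", "status": "active", "last_seen_days": 1},
    {"name": "OLD-TEST", "status": "decommissioned", "last_seen_days": 400},
]
print([a["name"] for a in assets if is_scoreable_asset(a)])  # ['DC01']
```

Filtering like this keeps decommissioned or stale records out of the prioritization queue entirely, which is often easier than retroactively cleaning the model's output.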
From a SOC perspective, the 'Context' is the only thing that matters. We use a commercial tool that ingests our vulnerability scan data and correlates it with threat intelligence feeds. It’s not full generative AI, but it does the scoring automatically. The biggest win for us wasn't the tech, but convincing the client that 'Medium' severity on a DC is actually 'Critical'. Your script is a great way to demonstrate that math visually to them.
Interesting read. I'm skeptical of pure 'AI' solutions for risk because they often lack transparency. However, using logic scripts like the one you posted is deterministic and auditable, which is huge for compliance. Have you considered adding EPSS (Exploit Prediction Scoring System) data into your formula? It’s usually more accurate than just checking for a binary 'exploit available' flag.
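Building on that EPSS suggestion, one way to fold it in is to replace the binary exploit flag with a continuous multiplier driven by the EPSS probability (0.0-1.0, published by FIRST). This is a sketch of the idea only; the 1.0x-1.5x scaling range is illustrative, not the original poster's formula:

```python
def calculate_risk_score_epss(cvss_base, asset_criticality, epss_probability):
    """Weight CVSS by asset criticality and an EPSS-driven exploit multiplier."""
    # Asset criticality: 1-5; EPSS probability: 0.0-1.0 from FIRST's EPSS feed
    weighted_score = cvss_base * (1 + (asset_criticality * 0.1))
    # Scale continuously: 1.0x at EPSS 0.0, up to 1.5x at EPSS 1.0
    weighted_score *= 1 + (0.5 * epss_probability)
    return min(10.0, round(weighted_score, 1))

# Example: a CVSS 5.0 vuln on a moderately critical asset with EPSS 0.9
print(calculate_risk_score_epss(5.0, 3, 0.9))  # 9.4
```

The advantage over a boolean flag is that a vuln with EPSS 0.02 and one with EPSS 0.95 no longer get the same bump, which keeps the queue from flooding with low-probability "exploit exists" findings.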