Introduction
The healthcare sector is witnessing a paradigm shift as physicians increasingly adopt agentic AI tools like Anthropic's Claude Code to build custom clinical applications. While this "doctor-led software development" accelerates innovation, it creates a critical blind spot for security teams. By bypassing traditional SDLC controls—code reviews, SAST/DAST scans, and infrastructure governance—clinicians are inadvertently introducing potential vulnerabilities directly into production environments. Defenders must act immediately to detect and govern this shadow AI development before it leads to compliance failures or exploitable vulnerabilities in patient care systems.
Technical Analysis
Affected Products & Platforms:
- Tools: Anthropic Claude Code, other agentic AI coding assistants (e.g., Cursor, Copilot Workspace).
- Platforms: Clinician workstations (Windows/macOS), development servers accessed via SSH/RDP, and cloud IDEs.
The Threat Vector (Shadow AI):
Unlike traditional malware, this threat stems from authorized personnel (physicians) using unauthorized tools (AI agents) to generate and deploy code without oversight.
Attack Chain:
- Initialization: A physician installs an AI coding agent (e.g., via npm install -g @anthropic-ai/claude-code or a direct binary download) on a managed endpoint.
- Generation: The agent generates Python, JavaScript, or Bash scripts from natural-language prompts. This code often lacks input sanitization and ships with hardcoded secrets.
- Deployment: The generated scripts are executed locally or moved to internal servers, where they interact with EHR APIs (Epic, Cerner) or patient databases.
- Exploitation: An attacker exploits an AI-generated vulnerability (e.g., SQL injection in a clinician-built dashboard) and gains access to Protected Health Information (PHI).
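The exploitation step is often trivial because AI-generated dashboard code tends to interpolate user input directly into SQL strings. A minimal Python sketch of the vulnerable pattern next to its parameterized fix (the table, columns, and sample records are hypothetical, for illustration only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (mrn TEXT, name TEXT)")
conn.execute("INSERT INTO patients VALUES ('1001', 'Jane Doe')")
conn.execute("INSERT INTO patients VALUES ('1002', 'John Roe')")

def lookup_vulnerable(mrn: str):
    # Typical AI-generated pattern: user input interpolated into the query.
    # mrn = "' OR '1'='1" turns the WHERE clause into a tautology and
    # returns every patient record (PHI exposure).
    return conn.execute(f"SELECT * FROM patients WHERE mrn = '{mrn}'").fetchall()

def lookup_safe(mrn: str):
    # Parameterized query: the driver treats the input as data, never as SQL.
    return conn.execute("SELECT * FROM patients WHERE mrn = ?", (mrn,)).fetchall()
```

With the classic injection payload, the vulnerable function leaks both rows while the parameterized version correctly returns none.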
Exploitation Status:
- Current Stage: Active adoption (insider threat/negligence vector).
- Vulnerabilities: No CVE exists for Claude Code itself; the risk lies in its output, which frequently reproduces well-known high-severity vulnerability classes (e.g., SQL injection, hardcoded credentials, or insecure deserialization when the agent suggests legacy libraries).
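As an illustration of the deserialization class: a minimal sketch contrasting pickle, which executes attacker-controlled code at load time, with JSON, which only carries data (the payload class and record fields are hypothetical):

```python
import json
import pickle

class Exploit:
    # Unpickling invokes __reduce__; a real payload would call os.system or
    # similar. This demonstration returns a harmless print call instead.
    def __reduce__(self):
        return (print, ("code executed during unpickling",))

payload = pickle.dumps(Exploit())
pickle.loads(payload)  # the side effect fires here, before any data is used

# Safe alternative for simple records: JSON deserializes data, never code.
record = json.loads('{"mrn": "1001", "name": "Jane Doe"}')
```

If an AI agent suggests pickle (or a legacy library wrapping it) for anything touching untrusted input, that suggestion alone is a finding.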
Detection & Response
━━━ DETECTION CONTENT ━━━
Sigma Rules
---
title: Potential Agentic AI Tool Execution - Claude Code
id: 8a4b2c91-1f3e-4a5d-9b6c-3d4e5f6a7b8c
status: experimental
description: Detects the execution of known agentic AI coding assistants like Claude Code or similar binaries on endpoints. This indicates potential shadow development activity.
references:
- https://www.healthcareitnews.com/news/ai-may-be-approaching-new-phase-healthcare-two-fronts
author: Security Arsenal
date: 2025-04-08
tags:
- attack.execution
- attack.t1059
logsource:
category: process_creation
product: windows
detection:
selection:
Image|contains:
- 'claude'
- 'cursor'
- 'aider'
OriginalFileName|contains:
- 'claude'
- 'cursor'
condition: selection
falsepositives:
- Authorized developer usage (verify against user group)
level: medium
---
title: Scripting Engine Execution by Non-Engineering Users
id: 9c5d3e02-2b4f-4b6e-8c7d-4e5f6a7b8c9d
status: experimental
description: Detects execution of Python or Node.js scripts by users outside of Engineering/IT groups, a common indicator of physician-led AI development.
references:
- https://www.healthcareitnews.com/news/ai-may-be-approaching-new-phase-healthcare-two-fronts
author: Security Arsenal
date: 2025-04-08
tags:
- attack.execution
- attack.t1059.006
logsource:
category: process_creation
product: windows
detection:
selection_script:
Image|endswith:
- '\python.exe'
- '\node.exe'
- '\python3.exe'
filter_engineering:
Sid|contains:
- 'S-1-5-21-...' # Replace with the Engineering group SID for your domain
condition: selection_script and not filter_engineering
falsepositives:
- Data science teams using Python for legitimate analysis
level: low
**KQL (Microsoft Sentinel / Defender)**
// Hunt for suspicious AI agent command line usage
DeviceProcessEvents
| where Timestamp > ago(7d)
| where ProcessCommandLine has "claude"
or ProcessCommandLine has "anthropic"
or ProcessCommandLine has "agentic"
| where InitiatingProcessAccountName !in ("Admin", "svc_devops")
| project Timestamp, DeviceName, AccountName, ProcessCommandLine, FolderPath
| order by Timestamp desc
**Velociraptor VQL**
-- Hunt for AI coding agent binaries and recent script execution
SELECT
Pid,
Name,
Exe,
CommandLine,
Username
FROM pslist()
WHERE Name =~ "claude"
OR CommandLine =~ "claude"
OR Name =~ "Cursor"
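Before deploying either hunt, the matching logic can be prototyped offline against an exported process list. A minimal Python sketch mirroring the KQL/VQL conditions (the record fields and account allowlist are assumptions based on a generic EDR CSV export):

```python
RISK_NAMES = ("claude", "cursor", "aider")
ALLOWED_ACCOUNTS = {"Admin", "svc_devops"}  # authorized dev accounts (example values)

def is_suspect(event: dict) -> bool:
    """Mirror the hunt logic: risky tool name in image/command line,
    run by an account outside the developer allowlist."""
    if event.get("AccountName") in ALLOWED_ACCOUNTS:
        return False
    haystack = (event.get("Image", "") + " " + event.get("ProcessCommandLine", "")).lower()
    return any(name in haystack for name in RISK_NAMES)

# Sample exported events (paths and accounts are illustrative)
events = [
    {"Image": r"C:\Users\drsmith\AppData\Roaming\npm\claude.cmd",
     "ProcessCommandLine": "claude", "AccountName": "drsmith"},
    {"Image": r"C:\Windows\System32\notepad.exe",
     "ProcessCommandLine": "notepad.exe", "AccountName": "drsmith"},
    {"Image": r"C:\tools\cursor\cursor.exe",
     "ProcessCommandLine": "cursor .", "AccountName": "svc_devops"},
]
hits = [e for e in events if is_suspect(e)]
```

Only the first event fires: the tool name matches and the account is not allowlisted, which is exactly the tuning loop you want to run before the rule hits production telemetry.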
**Remediation Script (PowerShell)**
# Audit and Restrict Unauthorized AI Development Tools
Write-Host "[+] Auditing for Agentic AI Tools..." -ForegroundColor Cyan
# Define risky tool names (add based on environment)
$riskTools = @("claude", "cursor", "aider", "copilot-cli")
# Check running processes
$foundProcesses = Get-Process | Where-Object { $_.ProcessName -match ($riskTools -join '|') }
if ($foundProcesses) {
Write-Host "[!] ALERT: Agentic AI tools detected running:" -ForegroundColor Red
$foundProcesses | Format-Table Id, ProcessName, Path -AutoSize
# Optional: Kill process if policy dictates strict enforcement
# $foundProcesses | Stop-Process -Force -WhatIf
} else {
Write-Host "[-] No known AI agent processes running." -ForegroundColor Green
}
# Check common installation paths
$pathsToCheck = @("C:\Users\*\AppData\Local", "C:\Program Files")
foreach ($tool in $riskTools) {
$installs = Get-ChildItem -Path $pathsToCheck -Filter "*$tool*" -Recurse -ErrorAction SilentlyContinue
if ($installs) {
Write-Host "[!] Found installation artifacts for $tool" -ForegroundColor Yellow
$installs.FullName
}
}
Remediation
To mitigate the risks associated with doctor-led AI development while enabling innovation, implement the following controls:
Governance & Policy:
- Update Acceptable Use Policy (AUP) to explicitly require approval for AI code generation tools handling PHI or connecting to production networks.
- Establish an "AI Review Board" within the Security/Architecture team to vet requests for agentic AI tools.
Network Segmentation & Egress Control:
- Restrict direct API access from clinical workstations to public AI model providers (e.g., api.anthropic.com) unless routed through a secure corporate proxy.
- Force all AI-generated code traffic through an authenticated SSL inspection proxy to log prompts and outputs.
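The egress rule can be expressed as a simple policy function; a sketch assuming a deny-by-default posture for AI providers (the host list and proxy flag semantics are illustrative, not a vendor configuration):

```python
AI_PROVIDER_HOSTS = {"api.anthropic.com", "api.openai.com"}  # destinations to gate

def egress_decision(dest_host: str, via_inspection_proxy: bool) -> str:
    """Allow AI-provider traffic only when it traverses the authenticated
    SSL inspection proxy, so prompts and outputs are logged."""
    if dest_host in AI_PROVIDER_HOSTS:
        return "allow-and-log" if via_inspection_proxy else "block"
    return "allow"  # non-AI destinations follow existing policy
```

Direct workstation-to-provider traffic is blocked; the same destination through the proxy is permitted and logged.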
Technical Enforcement:
- Application Whitelisting: Use AppLocker or Windows Defender Application Control (WDAC) to block the execution of unsigned AI agent binaries (like claude.exe) on non-dev endpoints.
- Sandboxing: If clinicians must use these tools, provision a VDI environment isolated from the production EHR network.
Audit & Compliance:
- Conduct immediate scans for unauthorized Python/Node.js environments on clinical workstations.
- Integrate SAST (Static Application Security Testing) into any workflows where clinicians submit code for deployment.
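A full SAST product belongs in that submission workflow, but even a naive pre-screen catches the most common AI-generated issues before human review. A minimal illustrative check using Python's ast module (the pattern list is an assumption, not a complete policy, and this is not a SAST replacement):

```python
import ast

RISKY_CALLS = {"eval", "exec", "system"}  # os.system is caught via the attribute name

def prescreen(source: str) -> list:
    """Flag obviously dangerous calls in submitted Python code."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Handles both bare names (eval) and attributes (os.system)
            name = getattr(func, "id", getattr(func, "attr", ""))
            if name in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {name}()")
    return findings

sample = "import os\nos.system('rm -rf /tmp/x')\nresult = eval(user_input)\n"
findings = prescreen(sample)
```

The sample trips two findings (os.system and eval); anything flagged goes back to the submitter before the code gets near an EHR integration.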