
Zealot PoC: Detecting Autonomous AI-Driven Cloud Attack Chains

Security Arsenal Team
April 23, 2026
6 min read

The cybersecurity landscape has shifted from automated scripts to autonomous agents. The recent "Zealot" proof-of-concept (PoC) demonstrates that Large Language Model (LLM)-powered agents are no longer theoretical tools for productivity but viable offensive weapons capable of executing full-scope cloud attacks. In controlled testing, Zealot identified vulnerabilities, formulated exploit strategies, and executed them—moving from reconnaissance to persistence at a speed that outpaces human Incident Response (IR) teams.

For defenders, the critical takeaway is the compression of the OODA (Observe, Orient, Decide, Act) loop. An AI agent does not suffer from fatigue or analysis paralysis. If your defense relies solely on human analysts triaging alerts in real-time, you are already vulnerable to this class of automated threat. We must move from reactive alerting to behavioral anomaly detection and automated containment.

Technical Analysis

Affected Products/Platforms:

  • Cloud Service Providers (CSPs): The PoC demonstrated capabilities within cloud environments (e.g., AWS, Azure, or GCP), leveraging their control-plane API structures.
  • Management Consoles/CLI: Attacks are executed via control-plane APIs (e.g., the AWS IAM, S3, and EC2 APIs).

CVE Identifiers:

  • N/A (This is a threat capability/PoC framework, not a specific software vulnerability).

Attack Mechanics (Defender Perspective): Zealot functions as an autonomous agent that utilizes an LLM to interpret the security state of a cloud environment. Unlike static scripts, it dynamically adjusts its attack path based on the data it retrieves.

  1. Reconnaissance (The "Look"): The agent queries cloud metadata services (GetUserPolicy, DescribeInstances, ListBuckets) to build a real-time understanding of the attack surface.
  2. Decision (The "Think"): It passes this data to an AI model to identify misconfigurations (e.g., overly permissive IAM roles, public S3 buckets).
  3. Action (The "Act"): It generates and executes API calls to exploit these findings (e.g., AttachUserPolicy, PutBucketPolicy) to establish persistence or exfiltrate data.

Exploitation Status:

  • Proof of Concept (PoC): Currently demonstrated in research environments ("staged cloud attacks").
  • Active Threat: While not observed in the wild as a specific named malware campaign yet, the techniques (API chaining and misconfig exploitation) are standard TTPs for attackers, now accelerated by AI reasoning.

Detection & Response

Detecting autonomous agents like Zealot requires identifying the "intent" and "velocity" of actions rather than just specific malicious signatures. A human attacker clicks slowly; a script repeats a pattern; an AI agent behaves intelligently but at super-human speeds.

Sigma Rules

The following rules focus on the behavioral indicators of an autonomous agent: high-volume API reconnaissance immediately followed by privilege modification or resource creation.

YAML
---
title: AI Agent Behavior - High Velocity Recon followed by Privilege Escalation
id: 9c8d7e6a-5b4c-4a3d-8e2f-1a2b3c4d5e6f
status: experimental
description: Detects potential autonomous agent activity by identifying a burst of read-only reconnaissance actions (Describe/List) followed immediately by write actions (Attach/Create/Put) within a very short time window, characteristic of AI-driven decision loops.
references:
  - https://www.darkreading.com/cyber-risk/zealot-shows-ai-execute-full-cloud-attacks
author: Security Arsenal
date: 2024/05/23
tags:
  - attack.discovery
  - attack.t1580
  - attack.privilege_escalation
  - attack.t1098
logsource:
  product: aws
  service: cloudtrail
detection:
  selection_read:
    eventName|contains:
      - 'Describe'
      - 'List'
      - 'Get'
  selection_write:
    eventName|contains:
      - 'Attach'
      - 'Create'
      - 'Put'
      - 'Modify'
  timeframe: 5m
  # Classic Sigma cannot express "a burst of reads then a write" in a single
  # condition; the legacy 'near' aggregation (or a Sigma v2 correlation rule)
  # is the closest approximation and requires backend support.
  condition: selection_read | near selection_write
falsepositives:
  - Legitimate infrastructure-as-code (IaC) deployment tools like Terraform or CloudFormation
level: high
---
title: Unusual IAM Policy Creation via Console/CLI
id: 1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d
status: experimental
description: Detects the creation of a new IAM policy or version, a common persistence mechanism, where the user agent indicates a script or tool rather than a standard browser session.
references:
  - https://attack.mitre.org/techniques/T1098/
author: Security Arsenal
date: 2024/05/23
tags:
  - attack.persistence
  - attack.t1098.003
logsource:
  product: aws
  service: cloudtrail
detection:
  selection:
    eventName|contains:
      - 'CreatePolicy'
      - 'PutUserPolicy'
      - 'PutRolePolicy'
  filter:
    userAgent|contains:
      - 'console.amazonaws.com'
  condition: selection and not filter
falsepositives:
  - Admin automation scripts
level: medium
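For hunting over CloudTrail logs exported offline, the read-burst-then-write heuristic that the first Sigma rule expresses can be implemented directly. The sketch below assumes the standard CloudTrail JSON schema (`eventName`, `eventTime`, `userIdentity.arn`); the thresholds (10 reads, 1 write, 5-minute window) mirror the rule and are illustrative, not tuned values.

```python
# Offline sketch of the "burst of reads followed by a write" heuristic over
# CloudTrail events. Thresholds are illustrative, not tuned detection values.
from collections import defaultdict
from datetime import datetime, timedelta

READ_MARKERS = ("Describe", "List", "Get")
WRITE_MARKERS = ("Attach", "Create", "Put", "Modify")

def classify(event_name: str) -> str:
    """Bucket an API call as read-path, write-path, or other."""
    if any(m in event_name for m in READ_MARKERS):
        return "read"
    if any(m in event_name for m in WRITE_MARKERS):
        return "write"
    return "other"

def suspicious_identities(events, window=timedelta(minutes=5),
                          min_reads=10, min_writes=1):
    """Return identity ARNs with >= min_reads reads and >= min_writes writes
    inside any single window -- the AI-agent chaining pattern."""
    by_arn = defaultdict(list)
    for e in events:
        ts = datetime.fromisoformat(e["eventTime"].replace("Z", "+00:00"))
        by_arn[e["userIdentity"]["arn"]].append((ts, classify(e["eventName"])))
    flagged = set()
    for arn, recs in by_arn.items():
        recs.sort()
        # Slide the window start across each event for this identity.
        for i, (start, _) in enumerate(recs):
            in_window = [cat for ts, cat in recs[i:] if ts - start <= window]
            if (in_window.count("read") >= min_reads
                    and in_window.count("write") >= min_writes):
                flagged.add(arn)
                break
    return flagged
```

Feeding this the `Records` array of a CloudTrail export gives the same population the Sigma rule targets, which is useful for back-testing the thresholds before deploying the rule.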

KQL (Microsoft Sentinel)

This query hunts for identities that exhibit "chaining" behavior—querying a resource and then modifying it faster than a typical human operator.

KQL — Microsoft Sentinel / Defender
// Hunt for autonomous agent-like API chaining
let TimeWindow = 10m;
let LookBack = 1h;
AWSCloudTrail
| where TimeGenerated > ago(LookBack)
| extend EventCategory = case(
  EventName contains "Describe" or EventName contains "List" or EventName contains "Get", "Read",
  EventName contains "Create" or EventName contains "Attach" or EventName contains "Modify" or EventName contains "Put", "Write",
  "Other"
)
| where EventCategory in ("Read", "Write")
| summarize ReadCount = countif(EventCategory == "Read"), WriteCount = countif(EventCategory == "Write"), EventList = make_list(EventName, 100) by UserIdentityArn, bin(TimeGenerated, TimeWindow)
| where ReadCount > 5 and WriteCount > 0
| project UserIdentityArn, TimeGenerated, ReadCount, WriteCount, EventList
| order by WriteCount desc

Velociraptor VQL

If the AI agent is operating from a compromised endpoint within your network (e.g., a developer workstation running a malicious script), we can hunt for the parent processes invoking the cloud CLI tools.

VQL — Velociraptor
// Hunt for processes invoking AWS/Azure CLI tools, potentially scripted by an AI agent
SELECT Pid, Ppid, Name, CommandLine, Exe, Username, CreateTime
FROM pslist()
WHERE (Name =~ 'aws' OR Name =~ 'az' OR Name =~ 'python')
  AND (CommandLine =~ 's3 cp' OR CommandLine =~ 'iam attach' OR CommandLine =~ 'boto3')
  AND NOT Exe IN ('C:\\Program Files\\Amazon\\AWSCLIV2\\aws.exe', '/usr/local/bin/aws', '/usr/bin/az')
// Standard installation paths are excluded to surface portable or script-dropped copies
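The same triage logic is sometimes needed outside Velociraptor, e.g., against a process listing exported to JSON from another EDR. A minimal Python mirror of the filter is below; field names follow pslist()-style records, and the trusted-path allowlist is illustrative. Grouping the name alternatives explicitly matters: AND binds tighter than OR, so ungrouped alternatives silently change the match semantics.

```python
# Pure-Python mirror of the Velociraptor hunt's filter, for triaging a
# process listing exported to JSON. Field names follow pslist()-style
# records; the trusted-path allowlist is an illustrative assumption.
TRUSTED_EXES = {
    r"C:\Program Files\Amazon\AWSCLIV2\aws.exe",
    "/usr/local/bin/aws",
    "/usr/bin/az",
}
CLI_NAMES = ("aws", "az", "python")
CLOUD_MARKERS = ("s3 cp", "iam attach", "boto3")

def is_suspect(proc: dict) -> bool:
    """Flag cloud-CLI invocations launched from non-standard binaries.
    Name alternatives are grouped explicitly (AND binds tighter than OR)."""
    name_hit = any(n in proc.get("Name", "").lower() for n in CLI_NAMES)
    cmd_hit = any(m in proc.get("CommandLine", "") for m in CLOUD_MARKERS)
    return name_hit and cmd_hit and proc.get("Exe") not in TRUSTED_EXES

if __name__ == "__main__":
    procs = [
        {"Name": "python3", "CommandLine": "python3 agent.py boto3", "Exe": "/tmp/python3"},
        {"Name": "aws", "CommandLine": "aws s3 cp data s3://b", "Exe": "/usr/local/bin/aws"},
    ]
    print([p["Name"] for p in procs if is_suspect(p)])  # only the /tmp copy
```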

Remediation Script

Use this Bash script to audit common misconfigurations that autonomous agents like Zealot will immediately target.

Bash / Shell
#!/bin/bash
# Remediation/Hardening Script: Mitigate AI-Autonomous Attack Vectors
# Focus: AWS IAM and S3 Common Misconfigurations

echo "[+] Starting Cloud Security Audit for AI-Agent Threat Vectors..."

# 1. Check for S3 buckets with public read/write access (Block Public Access status)
echo "[+] Checking S3 Bucket Public Access Settings..."
for bucket in $(aws s3api list-buckets --query 'Buckets[].Name' --output text); do
  if ! public_access_block=$(aws s3api get-public-access-block --bucket "$bucket" 2>/dev/null); then
    echo "[!] WARNING: Bucket $bucket has no Public Access Block configuration at all."
    continue
  fi

  if echo "$public_access_block" | grep -q '"IgnorePublicAcls": false'; then
    echo "[!] WARNING: Bucket $bucket has public access potentially enabled."
    # Auto-remediation (uncomment to enforce all four public-access blocks):
    # aws s3api put-public-access-block --bucket "$bucket" --public-access-block-configuration \
    #   "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
  fi
done

# 2. Audit for IAM users with inline policies (often used for quick privilege escalation)
echo "[+] Auditing IAM Users for Inline Policies..."
aws iam list-users --query 'Users[*].UserName' --output text | tr '\t' '\n' | while read user; do
  policies=$(aws iam list-user-policies --user-name "$user" --query 'PolicyNames' --output text)
  if [ -n "$policies" ]; then
    echo "[!] User '$user' has inline policies. Review for least privilege."
  fi
done

echo "[+] Audit Complete."

Remediation

Defending against autonomous AI threats requires removing the opportunities for "low-hanging fruit" exploitation that AI agents prioritize.

  1. Implement Strict IAM Conditions: Enforce Attribute-Based Access Control (ABAC) and strict conditions on IAM roles. Require Multi-Factor Authentication (MFA) for sensitive API calls, even for service accounts, where possible.

  2. API Rate Limiting and Anomaly Detection: Configure AWS GuardDuty or Microsoft Sentinel to trigger automated blocks (e.g., via Lambda functions) on API rates that exceed human capabilities (e.g., >50 distinct read/write API calls per minute from a single identity).

  3. Infrastructure as Code (IaC) Drift Detection: Since AI agents modify infrastructure rapidly, use policy-as-code tools like HashiCorp Sentinel or Open Policy Agent (OPA) to detect and auto-revert any configuration change that was not made via your approved IaC pipeline.

  4. Network Egress Controls: Ensure the endpoints orchestrating these attacks cannot reach command-and-control servers. Strictly limit outbound internet access for management workstations.
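As a concrete shape for item 2, a containment handler can pre-build the freeze policy and keep the AWS call at the edge. The threshold, policy name, and function names below are assumptions for illustration; in a real deployment you would wire `apply_containment` to a GuardDuty/EventBridge trigger and uncomment the `iam.put_user_policy` call (a real boto3 API) inside the Lambda runtime.

```python
# Sketch of automated containment for hyperactive identities (remediation
# item 2). Threshold and names are illustrative assumptions; the actual
# enforcement call (boto3 iam.put_user_policy) is left commented out.
import json

API_RATE_THRESHOLD = 50  # distinct read/write calls per minute, per identity

def build_deny_all_policy() -> str:
    """Inline policy that freezes an identity pending human review."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Deny", "Action": "*", "Resource": "*"}],
    })

def should_contain(distinct_calls_last_minute: int) -> bool:
    """Trip on API velocity beyond plausible human operation."""
    return distinct_calls_last_minute > API_RATE_THRESHOLD

def apply_containment(user_name: str, distinct_calls_last_minute: int) -> bool:
    """Return True if containment was (or would be) applied."""
    if not should_contain(distinct_calls_last_minute):
        return False
    policy = build_deny_all_policy()
    # In the Lambda runtime (boto3 is preinstalled there):
    # boto3.client("iam").put_user_policy(
    #     UserName=user_name, PolicyName="SA-AutoContain", PolicyDocument=policy)
    print(f"[containment] would freeze {user_name} with: {policy}")
    return True
```

Keeping the decision (`should_contain`) and the policy construction pure makes the containment logic unit-testable without any cloud credentials, which matters when the response path itself must move at machine speed.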

Related Resources

  • Security Arsenal Red Team Services
  • AlertMonitor Platform
  • Book a SOC Assessment
  • Pen-testing Intel Hub

Tags: penetration-testing, red-team, offensive-security, exploit, vulnerability-research, ai-threats, cloud-security, aws

Are your security operations ready?

Get a free SOC assessment or see how AlertMonitor cuts through alert noise with automated triage.