ForumsExploitsAWS Bedrock & Agentic Risks: Breakdown of the 8 New Attack Vectors

AWS Bedrock & Agentic Risks: Breakdown of the 8 New Attack Vectors

Vuln_Hunter_Nina 3/23/2026 USER

Just saw the report on The Hacker News regarding eight new attack vectors identified in AWS Bedrock. It highlights a scary reality of "Agentic AI"—when we give LLMs the keys to the castle (Salesforce, Lambda, SharePoint), we are basically creating a new, unpredictable interface for our data.

The research focuses on how attackers can manipulate the "tools" connected to the AI agent. If the agent has permission to query a database or trigger a workflow via an API, a malicious prompt could trigger unintended actions. This isn't just about leaking the system prompt anymore; it's about leveraging the model's agency to pivot.

Two of the highlighted CVEs stood out to me:

  • CVE-2026-04551: Knowledge Base poisoning leading to remote code execution (RCE) via connected Lambda functions.
  • CVE-2026-04554: Privilege escalation through ambiguous tool definitions.

The core issue is that the IAM roles attached to these Bedrock Agents are often overly permissive because devs want the AI to 'just work.' If an agent is compromised, it inherits those permissions.

I wrote a quick Python snippet to audit attached roles for wildcard permissions. It's a starting point, but we need to be granular:

import boto3

def audit_bedrock_agent_roles():
    client = boto3.client('bedrock-agent')
    iam = boto3.client('iam')
    
    # List all agents
    agents = client.list_agents()
    
    for agent_summary in agents['agentSummaries']:
        agent_id = agent_summary['agentId']
        details = client.get_agent(agentId=agent_id)
        role_arn = details['agent']['agentResourceRoleArn']
        role_name = role_arn.split('/')[-1]
        
        # Check for inline policies (simplified)
        try:
            policies = iam.list_role_policies(RoleName=role_name)
            for p in policies['PolicyNames']:
                policy_doc = iam.get_role_policy(RoleName=role_name, PolicyName=p)['PolicyDocument']
                if 'Action' in str(policy_doc) and '*' in str(policy_doc['Action']):
                    print(f"[!] Wildcard actions found in role: {role_name} (Agent: {agent_id})")
        except Exception as e:
            print(f"Error checking {role_name}: {e}

audit_bedrock_agent_roles()

Has anyone started implementing strict input validation or specific Guardrails for these tool invocations? I'm curious how others are handling the 'Agent' problem without breaking functionality.

MS
MSP_Tech_Dylan3/23/2026

Great breakdown. We’ve been treating Bedrock Agents just like any other service principal—if the role has s3:* or lambda:InvokeFunction on *, it fails our compliance checks.

However, detection is the harder part. We are seeing InvokeAgent calls in CloudTrail that look legitimate. We've started correlating the sessionId in CloudTrail with the downstream service calls (e.g., InvokeFunction). If a single session triggers multiple unrelated tools within seconds, we flag it as potential prompt injection.

PR
Proxy_Admin_Nate3/23/2026

From a pentester's perspective, the 'tool use' feature is a gold mine. Standard prompt injection focuses on getting the model to say something it shouldn't. Here, you get the model to do something it shouldn't.

The 'Knowledge Base' vector is particularly nasty. If you can upload a malicious PDF to an S3 bucket that the agent indexes, you can essentially hide instructions in the metadata that the agent executes every time it retrieves that context. It's XSS, but for backend workflows.

Verified Access Required

To maintain the integrity of our intelligence feeds, only verified partners and security professionals can post replies.

Request Access

Thread Stats

Created3/23/2026
Last Active3/23/2026
Replies2
Views160