Back to Intelligence

How to Defend Against AI DNS Exfiltration Flaws in Amazon Bedrock and LangSmith

SA
Security Arsenal Team
March 22, 2026
4 min read

How to Defend Against AI DNS Exfiltration Flaws in Amazon Bedrock and LangSmith

The rapid adoption of Generative AI has introduced a new attack surface for security teams. While organizations focus on prompt injection and model poisoning, a more fundamental vulnerability has emerged in the infrastructure hosting these models.

Recent research by BeyondTrust has disclosed critical flaws in major AI platforms, including Amazon Bedrock, LangSmith, and SGLang. These vulnerabilities allow attackers to bypass sandbox restrictions using Domain Name System (DNS) queries. For defenders, this means that even "isolated" code execution environments within AI services can be leveraged for interactive shell access and data theft.

This post details the mechanics of this threat and provides the necessary detection queries and remediation steps to secure your AI stack.

Technical Analysis

The core issue lies in the network permissions granted to the code execution sandboxes within these AI platforms. Specifically, the AgentCore Code Interpreter in Amazon Bedrock, along with environments in LangSmith and SGLang, permits outbound DNS queries.

While these environments are designed to restrict direct internet access (e.g., blocking HTTP/HTTPS), DNS is often allowed for resolution purposes. Attackers can exploit this trust by encoding sensitive data into subdomains (DNS Tunneling) and sending queries to a malicious name server they control.

  • Attack Vector: Outbound DNS queries used for data exfiltration and command-and-control (C2).
  • Impact: Unauthorized data transfer from AI environments and potential execution of arbitrary code via interactive shells.
  • Affected Systems: Amazon Bedrock (AgentCore), LangSmith, SGLang.
  • Severity: High. This bypasses standard egress filtering and compromises the confidentiality of data processed by AI agents.

Defensive Monitoring

To detect this activity, security teams must monitor for characteristics of DNS tunneling. Standard firewall logs may not flag this if DNS is generally allowed.

Use the following KQL query for Microsoft Sentinel to identify suspicious DNS patterns—specifically high-entropy domain names and long query lengths, which are strong indicators of data exfiltration via DNS.

Script / Code
let HighEntropyThreshold = 3.5;
let TimeLookback = 24h;
DnsEvents
| where TimeGenerated > ago(TimeLookback)
| where QueryTypeName in ("A", "TXT", "CNAME", "MX")
| extend QueryLength = strlen(QueryName)
| where QueryLength > 45 // Filter for longer queries often used in tunneling
| parse QueryName with * "." Hostname "." TLD "*"
| extend EntropyScore = entropy(tostring(split(QueryName, ".")[0]))
| where EntropyScore > HighEntropyThreshold
| summarize Count = count(), ListOfQueries = make_set(QueryName) by SourceIP, Subnet, Bin(TimeGenerated, 10m)
| where Count > 15 // Threshold for frequency of suspicious queries
| project TimeGenerated, SourceIP, Subnet, Count, EntropyScore, ListOfQueries
| order by Count desc


**Note:** If your organization utilizes these specific AI platforms, correlate the `SourceIP` ranges associated with your AWS VPC endpoints or the specific egress IPs of your SaaS AI tools to prioritize alerts.

Remediation

Effective defense requires reducing the ability of sandboxes to communicate arbitrarily via DNS. Security Arsenal recommends the following immediate actions:

  1. Restrict Outbound DNS Access: Configure your cloud security groups and firewalls (e.g., AWS VPC Security Groups) to block outbound UDP/TCP port 53 traffic from AI execution environments. Force these environments to use a specific, internal forwarder if resolution is absolutely required.

  2. Implement DNS Filtering: Use a DNS security solution (e.g., AWS Route 53 Resolver DNS Firewall) to inspect and block queries to known-malicious domains or domains that do not conform to your organization's namespace policy.

  3. Disable Non-Essential Code Interpreters: Until patches are fully verified, review the usage of "Code Interpreter" or "Agent" features in Amazon Bedrock, LangSmith, and SGLang. If these features are not critical for business operations, disable them via the respective admin consoles.

  4. Network Segmentation: Ensure AI workloads are isolated in dedicated subnets with strict egress rules. Apply a "Zero Trust" approach where the AI environment can only connect to specific, necessary API endpoints, not the general internet.

Related Resources

Security Arsenal Incident Response Services AlertMonitor Platform Book a SOC Assessment incident-response Intel Hub

incident-responseransomwareforensicsai-securitydns-exfiltrationamazon-bedrockcloud-securitythreat-intel

Is your security operations ready?

Get a free SOC assessment or see how AlertMonitor cuts through alert noise with automated triage.