ForumsGeneralAgentic AI Phishing: Perplexity's Comet Bypassed in Under 4 Minutes

Agentic AI Phishing: Perplexity's Comet Bypassed in Under 4 Minutes

BackupBoss_Greg 3/11/2026 USER

Just caught the Guardio research on Perplexity's Comet AI browser, and it’s a stark reminder of the risks we face with agentic AI. The researchers managed to trick the AI into a phishing scam in less than four minutes by exploiting its reasoning capabilities. Essentially, they used indirect prompt injection to make the AI lower its own guardrails and execute the attack.

This isn't just a script kiddie XSS; this is the AI "thinking" it's doing the right thing while walking right into a trap. For us on the defensive side, this complicates the trust model significantly. We aren't just validating user input anymore; we have to worry about the "autonomous" decisions made by an agent acting on behalf of the user.

Detection is going to be tricky because the traffic looks legitimate. I've started looking into KQL queries to flag anomalies associated with AI user-agents performing rapid, multi-step actions.

DeviceProcessEvents
| where ProcessVersionInfoProductName contains "Perplexity" or ProcessVersionInfoCompanyName contains "Perplexity"
| where InitiatingProcessFileName in ("chrome.exe", "msedge.exe", "brave.exe")
| project Timestamp, DeviceName, FileName, ProcessCommandLine
| where ProcessCommandLine contains "download" or ProcessCommandLine contains "submit-form"
| summarize count() by bin(Timestamp, 5m), DeviceName
| where count_ > 10 // Threshold for rapid automated actions

Has anyone else started incorporating AI-agents into their incident response playbooks, or are we treating them as Shadow IT for now?

FO
Forensics_Dana3/11/2026

From a pentester's perspective, this is the new frontier of social engineering. We don't need to trick the human if we can trick the tool they trust. The 'reasoning' loop is the vulnerability. I've been testing similar agentic workflows, and if you can poison the context window (even via a compromised ad on a legitimate site), the agent often executes the 'helpful' action without verifying the source integrity. It bypasses traditional phishing training entirely because the human never sees the link.

SY
SysAdmin_Dave3/11/2026

We saw a precursor to this with automated trading bots, but this is different because it targets general user credentials. In our SOC, we're struggling because standard SIEM rules assume a human cadence. These agents operate at machine speed. We're currently discussing strict egress policies—basically treating AI browsers like unmanaged devices until vendors add 'human-in-the-loop' confirmations for high-risk actions like credential entry.

Verified Access Required

To maintain the integrity of our intelligence feeds, only verified partners and security professionals can post replies.

Request Access

Thread Stats

Created3/11/2026
Last Active3/11/2026
Replies2
Views129