Trail of Bits researchers have demonstrated that inadequate isolation mechanisms in agentic browsers enable attacks functionally similar to cross-site scripting and cross-site request forgery, resurfacing vulnerability patterns the web security community spent decades building defenses against. The research reveals fundamental architectural weaknesses in how AI agents interact with web content.

The research identifies four primary trust zones in agentic browsers: the chat context containing the agent loop and conversation history, third-party LLM servers where user data leaves user control, individual website origins with independent user data, and the external network including attacker-controlled sites. Current implementations fail to maintain adequate boundaries between these zones.

Trail of Bits demonstrated attacks ranging from subtle misinformation dissemination to complete data exfiltration and session compromise through relatively straightforward prompt injection techniques. The fundamental problem stems from LLM inability to reliably distinguish between data and instructions, combined with powerful tools that cross trust boundaries without adequate isolation.

The research shows that many users implicitly trust browsers with their most sensitive data including banking credentials, healthcare portals, and social media accounts. Rapid integration of AI agents into browser environments grants these agents the same access to user data and credentials, creating ideal conditions for exploitation when isolation fails.

Trail of Bits recommends that developers of agentic browsers extend the Same-Origin Policy to AI agents, building on proven principles that successfully secured the traditional web. Specific affected products were not named as vendors declined coordinated disclosure, but researchers note these architectural flaws affect agentic browsers broadly across the industry.