Security researchers at Lares Labs have published a comprehensive analysis of the OWASP Agentic AI Top 10, documenting real-world incidents that demonstrate these threats are no longer theoretical. The report reveals that autonomous AI agents pursuing complex goals with minimal human intervention introduce unprecedented security vulnerabilities that traditional frameworks cannot address.

The top threats include Agent Goal Hijack, where external content redirects agent objectives through natural language attacks. Notable incidents include EchoLeak targeting Microsoft 365 Copilot, GitHub Copilot YOLO Mode enabling arbitrary shell commands, and AGENTS.MD Hijacking allowing data exfiltration during routine coding sessions. Tool Misuse ranks second, with attackers weaponizing agents access to email, databases, and code execution capabilities.

Identity and Privilege Abuse emerges as a critical concern, with compromised agents inheriting all access permissions including database credentials and cloud resources. Supply Chain Vulnerabilities affect dynamically loaded tools like MCP servers, with incidents such as the malicious postmark-mcp server secretly forwarding emails and the Shai-Hulud Worm compromising over 500 npm packages.

Memory and Context Poisoning represents a persistent threat unique to stateful AI agents. Google Gemini has experienced multiple incidents where hidden prompts implanted fake information permanently into user memory. The report also documents Cascading Failures in multi-agent systems, where a single compromised agent can poison downstream decision-making across entire workflows.

Security experts recommend treating agents as first-class identities with explicit permissions, implementing kill switches as non-negotiable mechanisms, and deploying continuous behavioral monitoring. Organizations should inventory all running agents, define explicit trust boundaries, and assume all external input is hostile.