Microsoft has published comprehensive guidance through the Cloud Adoption Framework for organizations building secure processes for AI agents across the enterprise. The framework addresses the governance and structural decisions required to support development teams deploying AI agents while maintaining security and compliance requirements.

The guidance establishes that all agents must meet baseline security requirements before deployment. AI agents process natural language, interact with external sources, and make autonomous decisions, introducing security risks including data leakage, data poisoning, jailbreak attempts, and credential theft. Organizations must integrate agent security into existing enterprise security frameworks.

Microsoft Azure AI Foundry introduces industry-first agent controls with comprehensive built-in security capabilities. Every agent created in the platform receives a unique Entra Agent ID, providing visibility into all active agents across a tenant and reducing shadow agent proliferation. The platform also includes cross-prompt injection classifiers that scan prompts, tool responses, email triggers, and other untrusted sources.
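The idea behind cross-prompt injection scanning can be illustrated with a minimal sketch. This is not Azure AI Foundry's actual classifier or API; the pattern list and function names are hypothetical stand-ins showing how untrusted content might be gated before it reaches the agent's context.

```python
import re

# Hypothetical stand-in for a cross-prompt injection classifier:
# flag instruction-like patterns inside untrusted content (tool
# responses, email bodies, fetched web pages) before the agent
# appends that content to its context. A production classifier
# would be model-based, not a regex list.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard the system prompt",
    r"reveal your (hidden )?instructions",
]

def scan_untrusted_content(text: str) -> bool:
    """Return True if the content looks like a prompt-injection attempt."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

def gate_tool_response(response: str) -> str:
    # Block flagged content from reaching the model; a real deployment
    # would also log the event and route it for security review.
    if scan_untrusted_content(response):
        raise ValueError("potential cross-prompt injection detected")
    return response
```

The key design point is that the check runs on every untrusted channel, not just the user's prompt, which is what distinguishes cross-prompt injection defenses from simple input filtering.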

Security controls embedded throughout the agent lifecycle prevent harmful outputs, protect against attacks, and ensure compliance. Guardrails operate at multiple intervention points rather than a single checkpoint, creating defense in depth. The framework also recommends a dedicated AI red team that tests for prompt injection vulnerabilities and validates guardrails against adversarial inputs.
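The multiple-intervention-point pattern can be sketched as a small guardrail registry. This is an illustrative design, not Azure AI Foundry's actual implementation; the stage names and checks are assumptions chosen to show the same check reused at different lifecycle stages.

```python
from typing import Callable

# A guardrail is any callable that raises on disallowed text.
Check = Callable[[str], None]

def deny_secrets(text: str) -> None:
    # Crude credential-leak check, standing in for real DLP scanning.
    if "BEGIN PRIVATE KEY" in text:
        raise PermissionError("credential material blocked")

def deny_override(text: str) -> None:
    # Blocks instruction-override attempts in any scanned channel.
    if "ignore previous instructions" in text.lower():
        raise PermissionError("instruction override blocked")

# Defense in depth: the same guardrails are registered at several
# intervention points instead of one input checkpoint.
GUARDRAILS: dict[str, list[Check]] = {
    "user_input":    [deny_override],
    "tool_response": [deny_override, deny_secrets],
    "model_output":  [deny_secrets],
}

def enforce(stage: str, text: str) -> str:
    """Run every check registered for this lifecycle stage."""
    for check in GUARDRAILS[stage]:
        check(text)
    return text
```

Because each stage enforces its own checks, content that slips past the input filter (for example, an injection embedded in a tool response) is still inspected before it influences the model or leaves the system.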

The guidance outlines standardized processes teams must follow to build agents consistently. Infrastructure changes ensure new Azure services no longer default to direct internet connectivity, requiring explicit routing through secure services like Azure Firewall. Organizations can leverage GitHub Advanced Security and Microsoft Defender integration to improve collaboration between security and development teams across the full application lifecycle.
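The no-default-internet rule lends itself to automated policy checks. The sketch below is a hypothetical compliance scan over plain-dict service configurations, not a real Azure SDK or Azure Policy call; the field names `public_network_access` and `route_next_hop` are assumptions for illustration.

```python
# Hypothetical policy check mirroring the rule that new services must
# not default to direct internet connectivity and must route egress
# through a secure service such as Azure Firewall.
def violates_egress_policy(service: dict) -> bool:
    """Flag services with public network access or no firewall route."""
    public = service.get("public_network_access", "Enabled") == "Enabled"
    routed = service.get("route_next_hop") == "AzureFirewall"
    return public or not routed

services = [
    {"name": "ai-foundry-dev",
     "public_network_access": "Disabled",
     "route_next_hop": "AzureFirewall"},
    {"name": "legacy-api",
     "public_network_access": "Enabled"},
]

flagged = [s["name"] for s in services if violates_egress_policy(s)]
```

Note the default of `"Enabled"` when the field is absent: an unconfigured service is treated as non-compliant, matching the framework's stance that secure routing must be explicit rather than assumed.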