The Cloud Security Alliance has published guidance identifying the fundamental question that security teams must address before any AI project proceeds: What data will this system access and what actions can it take? According to CSA researchers, answering this seemingly simple question exposes the system's true attack surface and the risk profile the organization must manage.

The guidance emphasizes that AI systems differ fundamentally from traditional applications because they combine data access with autonomous decision-making capabilities. A customer service chatbot that only retrieves information presents a very different risk profile from an AI agent that can modify records, send emails, or execute transactions. Security teams must map these capabilities before deployment.
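
As a rough illustration of what such a capability map might look like, the sketch below tags each capability with whether it merely reads data, mutates state, or reaches outside the system. All names here are hypothetical, not drawn from the CSA guidance; it is one minimal way to make the chatbot-versus-agent distinction explicit in code.

```python
from dataclasses import dataclass
from enum import Enum

class Impact(Enum):
    READ_ONLY = "read_only"   # retrieves information, no side effects
    MUTATING = "mutating"     # modifies records or internal state
    EXTERNAL = "external"     # acts outside the system (email, payments)

@dataclass(frozen=True)
class Capability:
    name: str
    impact: Impact
    description: str

# Hypothetical capabilities for the chatbot-vs-agent contrast above.
CAPABILITIES = [
    Capability("lookup_order_status", Impact.READ_ONLY, "Retrieve order info"),
    Capability("update_shipping_address", Impact.MUTATING, "Modify a customer record"),
    Capability("send_confirmation_email", Impact.EXTERNAL, "Email the customer"),
]

def highest_impact(caps: list[Capability]) -> Impact:
    """The risk profile is driven by the most consequential capability granted."""
    order = [Impact.READ_ONLY, Impact.MUTATING, Impact.EXTERNAL]
    return max(caps, key=lambda c: order.index(c.impact)).impact
```

A system whose map contains only READ_ONLY entries can be reviewed like a retrieval application; a single EXTERNAL entry changes the entire review.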

CSA recommends creating a comprehensive inventory of all data sources the AI system will access, including databases, APIs, file systems, and external services. Teams should document whether access is read-only or includes write capabilities, and identify the sensitivity classification of each data source. This inventory becomes the foundation for access control policies and monitoring strategies.
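
One minimal way to capture that inventory is sketched below, assuming a simple four-level sensitivity scale and hypothetical source names; the same structure could just as well live in YAML or a configuration database.

```python
from dataclasses import dataclass
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

@dataclass(frozen=True)
class DataSource:
    name: str
    kind: str            # database, API, file system, or external service
    write_access: bool   # False means read-only
    sensitivity: Sensitivity

# Hypothetical entries; real ones come from the assessment itself.
INVENTORY = [
    DataSource("orders_db", "database", write_access=True,
               sensitivity=Sensitivity.CONFIDENTIAL),
    DataSource("product_catalog_api", "API", write_access=False,
               sensitivity=Sensitivity.PUBLIC),
    DataSource("support_tickets", "file system", write_access=False,
               sensitivity=Sensitivity.INTERNAL),
]

# Writable, highly sensitive sources warrant the tightest controls and monitoring.
hot_spots = [s for s in INVENTORY
             if s.write_access and s.sensitivity.value >= Sensitivity.CONFIDENTIAL.value]
```

Keeping the inventory in a machine-readable form like this means access control policies and monitoring rules can be derived from it rather than drifting out of sync with it.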

The action capability assessment should enumerate all operations the AI can perform, from benign activities like generating responses to consequential actions like processing payments or modifying configurations. Each capability requires explicit authorization controls and audit logging. Organizations should implement the principle of least privilege, granting only the minimum permissions necessary for intended functionality.
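
A sketch of how explicit authorization and audit logging might gate each capability, using Python's standard logging module and a hypothetical deny-by-default allow-list (none of these names come from the CSA guidance):

```python
import logging
from functools import wraps

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai.audit")

# Hypothetical allow-list: the agent gets only the permissions it needs.
GRANTED = {"lookup_order_status", "update_shipping_address"}

def authorized(capability: str):
    """Deny by default; log every attempt, whether allowed or refused."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if capability not in GRANTED:
                audit_log.warning("DENIED %s args=%r", capability, args)
                raise PermissionError(f"{capability} is not granted to this agent")
            audit_log.info("ALLOWED %s args=%r", capability, args)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@authorized("process_payment")  # not in GRANTED, so every call is refused
def process_payment(order_id: str, amount: float) -> None:
    ...
```

Denying by default keeps the system aligned with least privilege: adding a consequential action requires an explicit grant, and the audit trail comes with it automatically.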

Security leaders note that many AI deployments fail to conduct this basic assessment, resulting in over-privileged systems that present unnecessary risk. The CSA guidance concludes that organizations answering this question thoroughly before deployment will identify potential security issues early when remediation costs are lowest and design choices remain flexible.