GitGuardian researchers have published an analysis examining how the rapid proliferation of AI agents exposes fundamental weaknesses in how organizations manage non-human identities (NHIs). The research argues that the challenges of securing AI agents mirror longstanding problems with service accounts, API keys, and machine identities that enterprises have struggled to address for decades.

AI agents require credentials to access resources, make API calls, and interact with external services, creating the same identity lifecycle challenges that plague traditional non-human identities. Organizations deploying AI agents must track which credentials each agent possesses, what resources those credentials can access, and how to revoke access when agents are deprecated or compromised.
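To make the lifecycle problem concrete, here is a minimal sketch of an agent credential inventory. All names (`AgentIdentity`, `AgentCredential`, the scope strings) are hypothetical illustrations, not GitGuardian's tooling; the point is that each agent's credentials, scopes, and revocation state are tracked in one place so access can be cut off when the agent is deprecated or compromised.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentCredential:
    """One credential held by an AI agent, with its scope and lifecycle state."""
    credential_id: str
    scopes: set[str]              # resources this credential can reach
    issued_at: datetime
    revoked: bool = False

@dataclass
class AgentIdentity:
    """Tracks every credential an agent possesses so access can be revoked on decommission."""
    agent_id: str
    credentials: dict[str, AgentCredential] = field(default_factory=dict)

    def grant(self, cred: AgentCredential) -> None:
        self.credentials[cred.credential_id] = cred

    def revoke_all(self) -> list[str]:
        """Revoke every credential, e.g. when the agent is deprecated or compromised."""
        revoked = []
        for cred in self.credentials.values():
            if not cred.revoked:
                cred.revoked = True
                revoked.append(cred.credential_id)
        return revoked

agent = AgentIdentity("billing-agent")
agent.grant(AgentCredential("key-1", {"invoices:read"}, datetime.now(timezone.utc)))
agent.grant(AgentCredential("key-2", {"invoices:write"}, datetime.now(timezone.utc)))
print(agent.revoke_all())  # → ['key-1', 'key-2']
```

In practice this inventory would live in a secrets manager or identity platform rather than in-process, but the data model (agent → credentials → scopes, with explicit revocation) is the same.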

GitGuardian's analysis reveals that many organizations lack visibility into AI agent credential usage. Agents may be provisioned with overly permissive credentials that grant access far beyond operational requirements; credentials may persist in configuration files and environment variables long after agents are decommissioned; and secret rotation often fails to account for AI agent dependencies.
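The "credentials persist after decommissioning" failure mode can be caught with a simple inventory check. This sketch assumes a hypothetical naming convention (`AGENT_<NAME>_TOKEN` environment variables) and compares deployed credentials against the list of agents still in service; any real deployment would use its own conventions and secret store.

```python
import re

# Hypothetical convention: agent credentials are injected as AGENT_<NAME>_TOKEN.
AGENT_VAR = re.compile(r"^AGENT_(?P<name>[A-Z0-9_]+)_TOKEN$")

def stale_agent_credentials(environ: dict, active_agents: set[str]) -> list[str]:
    """Flag env vars holding credentials for agents no longer in the active inventory."""
    stale = []
    for key in environ:
        m = AGENT_VAR.match(key)
        if m and m.group("name").lower() not in active_agents:
            stale.append(key)
    return sorted(stale)

env = {
    "AGENT_BILLING_TOKEN": "sk-...",
    "AGENT_LEGACY_TOKEN": "sk-...",   # agent decommissioned months ago
    "PATH": "/usr/bin",
}
print(stale_agent_credentials(env, active_agents={"billing"}))  # → ['AGENT_LEGACY_TOKEN']
```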

The research identifies lessons that AI agent security can teach broader NHI governance: just-in-time credential provisioning, mandatory credential rotation policies, comprehensive audit logging of credential usage, and automated detection of credential sprawl. AI agents make these requirements more urgent because their autonomous operation can rapidly multiply the impact of a credential compromise.
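Just-in-time provisioning is the key idea here: instead of handing an agent a long-lived static key, a broker mints a short-lived, scoped token on demand, so a leaked token expires on its own. The following is a minimal sketch of that pattern using HMAC-signed tokens; the broker, function names, and token format are illustrative assumptions, not a specific product's API.

```python
import hashlib
import hmac
import secrets
import time

SIGNING_KEY = secrets.token_bytes(32)  # in practice, held only by the credential broker

def issue_jit_token(agent_id: str, scope: str, ttl_seconds: int = 300) -> str:
    """Mint a short-lived, scoped token so long-lived static keys never reach the agent."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{agent_id}:{scope}:{expires}"
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def validate(token: str) -> bool:
    """Reject expired or tampered tokens; each validation is also an audit-log event."""
    payload, _, sig = token.rpartition(":")
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    expires = int(payload.rsplit(":", 1)[1])
    return time.time() < expires

token = issue_jit_token("billing-agent", "invoices:read", ttl_seconds=300)
print(validate(token))  # → True while within the TTL
```

Because every token is scoped and expires within minutes, rotation becomes automatic and the blast radius of a compromised agent is bounded by the TTL rather than by how long a static key went unnoticed.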

Organizations should treat AI agents as first-class non-human identities requiring the same governance rigor as service accounts and API integrations. GitGuardian recommends implementing centralized secret management, deploying credential scanning to identify exposed secrets, and establishing clear ownership and lifecycle policies for all AI agent credentials.
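Credential scanning, the last recommendation, typically means pattern-matching files and configs for exposed secrets. The sketch below uses two deliberately simple illustrative patterns; production scanners such as GitGuardian's use far richer detectors, entropy checks, and validity probing, so treat this only as a shape of the technique.

```python
import re

# Illustrative patterns only; real scanners ship hundreds of tuned detectors.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]", re.I),
}

def scan_text(text: str) -> list[tuple[int, str]]:
    """Return (line number, detector name) for each suspected exposed secret."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

sample = 'region = "us-east-1"\napi_key = "abcd1234abcd1234abcd1234"\n'
print(scan_text(sample))  # → [(2, 'generic_api_key')]
```

Wiring a scanner like this into CI and pre-commit hooks closes the loop with the lifecycle policies above: exposed agent credentials are caught before they land in a repository, and findings feed back into ownership and revocation workflows.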