
The Identity Crisis Threatening Your AI Agents: Why Security Experts Are Sounding the Alarm

AI agents are operating with increasing autonomy across enterprise systems, but organizations lack reliable ways to verify their identities and control their actions. The Coalition for Secure AI (CoSAI), a global multi-stakeholder initiative, has released new research addressing a critical gap: how to assign, verify, and govern the identities of autonomous AI agents before they become a widespread security liability.

The urgency is real. At the RSA Conference 2026, standing-room-only sessions made clear that the enterprise security perimeter has fundamentally shifted. Traditional defenses designed for human users no longer work when autonomous agents can act, spend money, and share data on a company's behalf. This isn't a distant threat; it's happening now, and most organizations aren't prepared.

Why Identity Is No Longer a Solved Problem for AI Agents

For decades, identity and access management has been a cornerstone of enterprise security. Organizations know how to verify that a human employee is who they claim to be, and they can limit what that person can access. But AI agents operate differently. They make decisions at machine speed, can spawn additional agents, and operate across systems in ways that traditional identity frameworks were never designed to handle.

The problem runs deeper than simply assigning credentials. As agents become more autonomous and context-aware, the question becomes: how do you verify not just that an agent is legitimate, but that it's actually trying to do what it claims? This gap between valid identity and trustworthy intent is what security experts are now calling the "intent-based authorization" problem.

"Organizations are rapidly deploying AI agents, and identity and access control models need to keep pace. At the same time, valid identity alone is insufficient; credentials can be correct while outcomes are still harmful," said Ian Molloy, Workstream co-lead at IBM.


This distinction matters enormously. An agent might have legitimate credentials to access a database, but if it's been compromised or is operating under a malicious instruction, those correct credentials become a liability rather than a safeguard.
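To make that distinction concrete, here is a minimal Python sketch of intent-aware authorization, where a request is denied despite a valid credential because the action doesn't match the task the agent declared. All names here (AgentRequest, POLICY, authorize) are hypothetical illustrations, not part of CoSAI's framework:

```python
# Hypothetical sketch: authorization that checks declared intent,
# not just credentials. All names are illustrative.
from dataclasses import dataclass

@dataclass
class AgentRequest:
    agent_id: str
    credential_valid: bool   # verified upstream (e.g., mTLS or OIDC)
    action: str              # what the agent is about to do
    declared_intent: str     # the task the agent claims to be performing

# Per-agent policy: which actions are consistent with which declared tasks.
POLICY = {
    "billing-agent-01": {
        "reconcile-invoices": {"read:invoices", "write:ledger"},
    },
}

def authorize(req: AgentRequest) -> bool:
    """Allow only if the credential is valid AND the action fits the intent."""
    if not req.credential_valid:
        return False
    allowed = POLICY.get(req.agent_id, {}).get(req.declared_intent, set())
    return req.action in allowed

# A valid credential alone is not enough: this request is denied because
# deleting customer records is inconsistent with the declared task.
print(authorize(AgentRequest("billing-agent-01", True,
                             "delete:customers", "reconcile-invoices")))  # False
```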

How to Secure AI Agents in Your Enterprise

CoSAI's new Agentic Identity and Access Management framework provides practical guidance that organizations can implement using existing security infrastructure. The approach centers on three core principles (a brief code sketch follows the list):

  • Unique Credentials: Assign distinct, machine-readable identities to every autonomous agent operating in your environment, similar to how you would provision access for human employees.
  • Least-Privilege Access: Limit each agent's access to only the specific resources and actions required for its assigned tasks, reducing the blast radius if an agent is compromised.
  • Continuous Visibility and Delegation Tracking: Maintain clear logs of who or what is taking action across systems, including how permissions were delegated and by whom, enabling rapid detection of unauthorized behavior.
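As a rough illustration of how the three principles interact, the following Python sketch provisions agents with distinct identities and narrow scopes and records every delegation. The functions and log structure are assumptions made for illustration, not drawn verbatim from CoSAI's guidance:

```python
# Minimal sketch, assuming a simple in-memory audit log. Real deployments
# would plug into the organization's existing IAM and logging systems.
import uuid
from datetime import datetime, timezone

AUDIT_LOG = []  # in practice: an append-only, tamper-evident store

def provision_agent(owner: str, scopes: set[str]) -> dict:
    """Issue a distinct, machine-readable identity carrying only the
    scopes the agent's task requires (least privilege)."""
    agent = {
        "agent_id": f"agent-{uuid.uuid4()}",
        "owner": owner,  # the human or service accountable for this agent
        "scopes": frozenset(scopes),
    }
    AUDIT_LOG.append(("provision", agent["agent_id"], owner,
                      sorted(scopes), datetime.now(timezone.utc).isoformat()))
    return agent

def delegate(parent: dict, requested: set[str]) -> dict:
    """When an agent spawns another, the child gets its own identity,
    scopes can only narrow, and the delegation chain is logged."""
    child = provision_agent(owner=parent["agent_id"],
                            scopes=requested & parent["scopes"])
    AUDIT_LOG.append(("delegate", parent["agent_id"], child["agent_id"],
                      sorted(child["scopes"]),
                      datetime.now(timezone.utc).isoformat()))
    return child
```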

The framework is deliberately designed to work with identity systems organizations already use. Rather than requiring a complete security overhaul, CoSAI's guidance shows how to extend existing identity and access management solutions to safely support autonomous AI.

What Happens When Agents Act Without Proper Identity Controls?

The risks extend beyond traditional data breaches. CoSAI's research identifies a phenomenon called the "semantic mosaic effect," where agents can synthesize and expose sensitive insights from seemingly innocuous sources without ever triggering conventional leak protection systems. An agent might access multiple databases that individually contain harmless information, but when combined, reveal proprietary trade secrets or personal data.
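One way to picture a defense against this effect: tag each data source with the kinds of information it exposes, and flag a session when the union of everything an agent has touched completes a known-sensitive combination. The labels and combinations below are hypothetical, a sketch of the idea rather than CoSAI's method:

```python
# Hypothetical sketch of flagging "mosaic" risk: each source is individually
# benign, but certain combinations are sensitive. Labels are illustrative.
SOURCE_LABELS = {
    "hr_directory": {"employee_names"},
    "payroll_bands": {"salary_ranges"},
    "org_chart": {"reporting_lines"},
}

# Combinations that together reveal more than any single source does.
SENSITIVE_COMBINATIONS = [
    {"employee_names", "salary_ranges"},  # re-identifies individual pay
]

def mosaic_risk(accessed_sources: list[str]) -> bool:
    """Return True if the union of labels across everything this agent has
    touched in a session completes a sensitive combination."""
    seen = set()
    for source in accessed_sources:
        seen |= SOURCE_LABELS.get(source, set())
    return any(combo <= seen for combo in SENSITIVE_COMBINATIONS)

print(mosaic_risk(["hr_directory"]))                   # False: benign alone
print(mosaic_risk(["hr_directory", "payroll_bands"]))  # True: mosaic risk
```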

Additionally, threats such as backdoored coding assistants and malicious model artifacts are eroding traditional security boundaries. These aren't hypothetical scenarios; they're emerging attack vectors that organizations are already encountering in production environments.

CoSAI's research also highlights the emerging risks within the Model Context Protocol (MCP), a standard that allows AI agents to interact with external tools and data sources. The protocol layer itself has become a critical attack surface, with threats ranging from identity misuse and context tampering to supply chain compromise.
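One commonly discussed mitigation at this layer is pinning tool definitions: record a cryptographic fingerprint of each tool's manifest when it is vetted, and refuse to register anything that no longer matches. The sketch below is a generic illustration of that idea, not the actual MCP SDK or CoSAI's specific recommendation:

```python
# Hypothetical sketch: reject tampered or swapped tool manifests by
# comparing against a fingerprint recorded at vetting time.
import hashlib
import json

PINNED_MANIFESTS: dict[str, str] = {}

def manifest_digest(manifest: dict) -> str:
    """Hash a canonical JSON encoding of the tool manifest."""
    canonical = json.dumps(manifest, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def pin_tool(name: str, manifest: dict) -> None:
    """Record the manifest's fingerprint when the tool is first vetted."""
    PINNED_MANIFESTS[name] = manifest_digest(manifest)

def register_tool(name: str, manifest: dict) -> bool:
    """Allow registration only if the manifest still matches its pin."""
    expected = PINNED_MANIFESTS.get(name)
    return expected is not None and manifest_digest(manifest) == expected

pin_tool("crm-search", {"name": "crm-search", "endpoint": "https://example.com"})
print(register_tool("crm-search", {"name": "crm-search",
                                   "endpoint": "https://evil.example"}))  # False
```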

"Forty-plus organizations, including direct competitors, are collaborating inside CoSAI because we understand that the threat landscape doesn't respect company boundaries. Neither can our defenses," stated J.R. Rao, IBM Fellow and CTO of Security Research at IBM.


This collaboration reflects a broader recognition: agentic AI security is not a competitive advantage to hoard. It's a shared infrastructure challenge that requires industry-wide standards and transparency.

The Window to Act Is Narrowing

CoSAI's latest research, "The Future of Agentic Security: From Chatbots to Autonomous Swarms," emphasizes that the time to build proper security infrastructure is running out. As organizations move beyond simple AI assistants toward fully autonomous, multi-agent systems capable of independent action across sensitive infrastructure, traditional security controls are struggling to keep pace.

The research identifies two unsolved problems that incremental improvements to existing security tools cannot address. First, there is no reliable way to evaluate and govern what an AI agent is actually trying to accomplish when its instructions arrive in natural language. Second, there is the semantic mosaic effect described earlier, where agents piece together sensitive information without triggering alarms.

To address these gaps, CoSAI outlines a framework for secure agentic architecture that includes ephemeral environments (temporary, isolated spaces where agents operate), dynamic credentialing (credentials that change based on context and time), and a new category of defense called Agent Detection and Response (ADR), analogous to the endpoint detection and response tooling organizations already run on user devices.
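Dynamic credentialing in particular lends itself to a short sketch: mint a token bound to one agent, one task, and a brief lifetime, so a stolen credential expires quickly and grants little. The names and the five-minute TTL below are assumptions for illustration, not CoSAI's specification:

```python
# Hypothetical sketch of dynamic credentialing: short-lived, task-scoped
# tokens. All names and the default TTL are illustrative.
import secrets
import time

def issue_task_token(agent_id: str, task: str, scopes: set[str],
                     ttl_seconds: int = 300) -> dict:
    """Mint a credential bound to one agent, one task, and a short lifetime."""
    return {
        "token": secrets.token_urlsafe(32),
        "agent_id": agent_id,
        "task": task,
        "scopes": frozenset(scopes),
        "expires_at": time.time() + ttl_seconds,
    }

def token_allows(token: dict, agent_id: str, scope: str) -> bool:
    """Reject expired tokens, mismatched agents, or out-of-scope actions."""
    return (time.time() < token["expires_at"]
            and token["agent_id"] == agent_id
            and scope in token["scopes"])
```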

The core message for executives is straightforward: the window to implement the right security infrastructure before widespread agentic deployment is closing. Organizations that wait until agents are ubiquitous will find themselves retrofitting security onto systems that were never designed with it in mind.

For security leaders and CISOs, the path forward is clear. Start with identity. Extend existing frameworks. Implement zero-trust authentication approaches. And recognize that as agents become an operating layer of the enterprise, security principles designed for humans must evolve to govern machines operating at machine speed.