
Conversations at RSA 2026 circled back to the same topic: identity is the foundation of AI agent security. While it’s understandable, it’s the wrong way to look at things. Identity tells you who showed up. It says nothing about whether what they did made sense.
The actual core of AI agent security is intent: understanding what an agent was supposed to do, and whether its runtime behavior was consistent with that purpose, not just within a single prompt and action but across multi-step workflows and even long-horizon tasks.
Here's the uncomfortable truth that the identity-focused solutions gloss over: authorization tells you what was allowed. It says nothing about whether what happened was appropriate.
That distinction isn't semantic. It's the difference between a security program and a false sense of one. Look at identity as a piece of the puzzle, not the answer.
The Authorization Trap
Consider a scenario that plays out more often than most security teams realize. An HR compensation agent receives the same prompt twice, "pull compensation data for annual review", from the same user, through the same access chain. Authorization clears both without hesitation, because technically, both are allowed.
The first run behaves exactly as designed, accessing a narrow slice of records and routing output to internal HR infrastructure in a way that is proportionate to the task and consistent with the agent's declared purpose.
The second run is a different story. Same permissions, but this time the agent pulls more records, including SSN and address fields it had no reason to touch, and routes the output to an external endpoint that has never appeared in this agent's traffic before.
Authorization said yes to both. The actual risk profile of those two runs couldn't be more different. This is the authorization trap. Your controls confirm that access was permitted, while the question that actually matters goes unanswered: was the activity appropriate or malicious? Those are not the same question, and conflating them is how security teams end up with clean audit logs and a serious incident.
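The distinction between the two runs can be sketched as a runtime check. This is a minimal illustration, not Zenity's implementation: the `DeclaredPurpose` schema, the field names, and the endpoints are all hypothetical, and a real system would draw its baseline from observed history rather than a hand-written allowlist.

```python
from dataclasses import dataclass

@dataclass
class DeclaredPurpose:
    """Hypothetical baseline: what this agent is supposed to do."""
    allowed_fields: set
    max_records: int
    known_destinations: set

@dataclass
class RunObservation:
    """What the agent actually did in one run."""
    fields_touched: set
    records_accessed: int
    destination: str

def is_appropriate(purpose: DeclaredPurpose, run: RunObservation) -> list:
    """Return findings; empty means the run matched its declared purpose.
    Authorization is assumed to have already passed for every run."""
    findings = []
    extra = run.fields_touched - purpose.allowed_fields
    if extra:
        findings.append(f"touched fields outside declared scope: {sorted(extra)}")
    if run.records_accessed > purpose.max_records:
        findings.append(f"accessed {run.records_accessed} records "
                        f"(baseline max {purpose.max_records})")
    if run.destination not in purpose.known_destinations:
        findings.append(f"routed output to previously unseen endpoint: {run.destination}")
    return findings

purpose = DeclaredPurpose(
    allowed_fields={"salary", "bonus", "level"},
    max_records=50,
    known_destinations={"hr-internal.corp"},
)
# Same permissions cleared both runs; only the behavior differs.
run1 = RunObservation({"salary", "bonus"}, 42, "hr-internal.corp")
run2 = RunObservation({"salary", "ssn", "address"}, 5000, "files.attacker.example")
print(is_appropriate(purpose, run1))  # []
print(len(is_appropriate(purpose, run2)))  # 3
```

Both runs would pass an authorization check; only the second produces findings, which is exactly the gap the prose describes.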
The real question isn't who was allowed in. It's whether the actual activity, in that moment, made any sense given everything we know about this agent's purpose, environment, intent, and behavior. That's a runtime context problem, and it requires a fundamentally different approach.
What "Identity" Actually Means for Agents (It's More Complex Than You Think)
The identity conversation at RSA largely revolved around a framing that's too narrow. Does this agent operate under a human identity, a service identity, or an application identity? The answer, practically speaking, is yes. And also, that's not the right question.
Agents don't have a single identity. They operate across a layered identity surface that most security tools weren't designed to reason about.
- Static identities are the service accounts, API keys, and OAuth credentials baked into the agent at build time.
- Dynamic identities in session are tokens minted at runtime, scopes that expand or contract mid-execution, credentials passed through orchestration chains.
- Identities in tools mean the agent doesn't just carry its own identity; it inherits the permissions of every MCP server, every API integration, every plugin it calls.
- Implicit identities through Agent-to-Agent (A2A) interaction raise a harder question. When your agent delegates to a sub-agent or invokes a third-party agent, whose identity governs what happens next? Whose permissions apply? Who's accountable?
Securing AI agents by locking down one identity layer while the others remain opaque is like putting a deadbolt on the front door and leaving the rest of the building open. It gives you a control point. It doesn't give you coverage.
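The layered surface above can be made concrete with a small sketch. The layer names follow the list; the specific identities are invented for illustration, and the point is simply that an agent's effective reach is the union across layers, not the one service account you provisioned.

```python
# Hypothetical example: one agent's identity surface across the four layers.
agent_identity = {
    "static":    {"service_account:hr-agent", "api_key:payroll-readonly"},
    "dynamic":   {"oauth_token:session-scoped"},          # minted at runtime
    "tools":     {"mcp:sharepoint-connector", "plugin:email-sender"},
    "delegated": {"a2a:external-summarizer-agent"},       # sub-agent identities
}

def effective_surface(identity: dict) -> set:
    """Everything that can act when this agent runs. Locking down one
    layer (say, 'static') leaves every other set in play."""
    return set().union(*identity.values())

print(len(effective_surface(agent_identity)))  # 6
```

Six distinct identities are in play here even though the agent was "given" one service account, which is why a single-layer lockdown is the deadbolt-on-one-door problem.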
Context Is Key
What actually enables appropriate decisions, not just permitted ones, is assembling the full picture of what an agent is, what it's doing, and what's around it at the moment of action. The problem is that identity alone, without the surrounding context, can’t tell you whether what happened was appropriate. That means bringing together signals that have historically lived in separate security domains:
- Identity (NHI/Identity Security): Which identity is this agent operating under, across all the layers described above? Is that consistent with its declared purpose? Has the identity surface changed since the last run?
- Data (DSPM): What data did the agent actually touch? Was that consistent with its scope? Did it access PII, financial records, or sensitive classifications it had no business reason to reach?
- Model behavior (AI Firewall / Model Security): Was there evidence of prompt injection or jailbreak activity in this session? Did the model's outputs suggest it had been manipulated mid-execution?
- Agent posture (Security Posture Management): What did the agent's code and configuration look like at runtime? Had it drifted from its last known-good state? Was it running a dependency with a known vulnerability?
- Environment (Cloud/Endpoint Security): What did the surrounding infrastructure look like? Were there anomalies in the environment the agent was operating in that would change the risk calculus?
None of these signals, in isolation, is sufficient. An agent can have clean identity hygiene and still exfiltrate data. It can pass posture checks and still be jailbroken. It can operate in a healthy environment and still exceed its declared purpose by an order of magnitude. The policy decision, whether this activity is appropriate, requires all of it together.
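The combination logic matters as much as the signals themselves, and it can be sketched in a few lines. This is an assumed decision shape, not a product API: any single dirty signal flags the run, and a missing signal means the system cannot render an "appropriate" verdict at all.

```python
# The five context domains from the list above.
SIGNALS = ("identity", "data", "model_behavior", "posture", "environment")

def decide(signals: dict) -> str:
    """Hypothetical policy decision: 'appropriate' requires a full,
    clean set of signals; partial evidence is itself a finding."""
    missing = [s for s in SIGNALS if s not in signals]
    if missing:
        return f"insufficient context: missing {missing}"
    violations = [s for s, clean in signals.items() if not clean]
    return "appropriate" if not violations else f"flag: {violations}"

# Clean identity hygiene alone doesn't clear a run that mishandled data:
print(decide({"identity": True, "data": False, "model_behavior": True,
              "posture": True, "environment": True}))  # flag: ['data']
```

The design choice worth noting is that absence of a signal is treated as a failure mode, not a pass, mirroring the argument that no single domain is sufficient on its own.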
Why This Matters More As Agents Get More Autonomous
The urgency here scales with agentic complexity. A simple RAG pipeline with a narrow scope and human review at every step is a manageable risk surface. A multi-agent workflow where orchestrators delegate to sub-agents, those sub-agents invoke tools with their own identity contexts, and the whole chain operates over sensitive enterprise data without synchronous human oversight: that's a different category of problem entirely.
A2A patterns in particular deserve more attention than they're currently getting. When your internal agent calls an external agent to complete part of a task, you've introduced an identity boundary you may not fully control, a data flow you may not be able to observe, and a behavioral surface you certainly didn't test. The authorization layer may see a valid token exchange. The runtime context layer should be asking whether this delegation was consistent with the originating agent's declared purpose, and flagging when it isn't.
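A delegation check of the kind described can be sketched as a guard that runs after the token exchange succeeds. The purpose schema and agent names below are hypothetical; the point is that a valid credential exchange and a purpose-consistent delegation are separate questions.

```python
def delegation_consistent(origin_purpose: dict, delegation: dict) -> bool:
    """Hypothetical runtime guard: even with a valid A2A token exchange,
    the delegation must fit the originating agent's declared purpose."""
    return (delegation["target_agent"] in origin_purpose["approved_delegates"]
            and delegation["task"] in origin_purpose["declared_tasks"])

purpose = {
    "approved_delegates": {"internal-summarizer"},
    "declared_tasks": {"summarize_review_notes"},
}

# Both delegations could present valid tokens; only the first fits purpose.
ok = delegation_consistent(purpose, {"target_agent": "internal-summarizer",
                                     "task": "summarize_review_notes"})
bad = delegation_consistent(purpose, {"target_agent": "external-exporter",
                                      "task": "export_comp_data"})
print(ok, bad)  # True False
```

In practice this check would sit alongside, not replace, the authorization layer: the token exchange answers "may this call happen", the guard answers "does this call make sense".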
The Right Frame: Authorization Is Necessary. Context Is Sufficient.
This isn't an argument against identity security. NHI hygiene matters. Reducing the static credential surface matters. Governance over which agents can authenticate to which systems matters.
But identity security, on its own, answers the question of access. In a world where agents are executing multi-step workflows autonomously, touching sensitive data, and operating across trust boundaries that didn't exist three years ago, appropriateness is the harder and more important question.
The organizations that figure out how to answer it in runtime, not just at provisioning time, will be the ones with actual visibility into what their AI systems are doing. The rest will have authorization logs and unanswered questions.
See It in Practice
If this reframing resonates, there are two good next steps depending on where you are in the conversation.
If you want to see how Zenity assembles runtime context across identity, data, model behavior, posture, and environment in your own environment, book a demo.
If you want to go deeper on what the RSA conversations revealed and what the path forward actually looks like in enterprise deployments, watch our webinar From RSA to Reality: AI Agent Security in the Enterprise, where we unpack the week's themes and get concrete about what teams should be doing now.
Same prompt. Same agent. Different risk. The difference wasn't in the identity. It was in everything else.