
Gartner published the inaugural Hype Cycle for Agentic AI last week (and yes, we’re included in two subcategories - Agentic AI Security and Guardian Agent). A few things worth noting.
It's inaugural, Gartner publishes over 130 Hype Cycles a year, and standing up a new one signals that a space has earned its own map. And it dropped in April, months ahead of the June - August window when these things usually appear. The fact that it arrived “ahead of schedule” tells you something: agentic AI isn't moving on anyone's schedule.
The question worth asking isn't whether agentic AI security is needed, that debate is over. We know AI agent security is needed. Now, it’s up to security leaders to decide whether they build a secure agentic AI infrastructure, or if they build an agentic AI ecosystem open to risk. The question is whether the right answer gets built before the wrong one becomes the default.
The Agent Was Always the Risk
A few years ago, the industry conversation about AI security was almost entirely about the model layer, prompt injection, training data, hallucinations, and model safety. These are real concerns, but they're not where enterprises get hurt.
Enterprises get hurt when an agent connected to their ERP, their email, their customer data takes an action it shouldn't. When it gets manipulated into doing something the business never authorized. When it operates at machine speed across systems that took decades to build, and nobody's watching.
Here's what that actually looks like:
- A sales rep asks their AI assistant a routine question, like, "What are my latest prospect engagements?"
- The assistant, connected to the company's CRM, scans the records, hits a specially crafted entry planted by an attacker, and proceeds to replace customer email addresses across the entire database with an attacker-controlled domain.
- Silently, automatically, the sales rep never saw it happen.
That's not a model safety problem. That's not a training data problem. That's an agent problem. The thing that perceives, decides, and acts on behalf of a user, connected to all systems and tools “at their fingertips”.
Agents Need Boundaries, Not Guardrails
The industry keeps reaching for one word: guardrails. It sounds like a boundary, but it isn’t.
A guardrail is a statistical model trained to identify and block certain behaviors. It works most of the time. Attackers don't care about most of the time. We've demonstrated this at Black Hat and RSAC, bypassing guardrails across Microsoft Copilot, Google Gemini, Salesforce Einstein, ChatGPT, and Cursor, not with novel exploits but with basic manipulation. We reframed a request to steal secrets as a "treasure hunt" looking for "apples." Cursor complied. The guardrail never fired.
What agents actually need are enforced boundaries:deterministic, contextually intelligent limits on what an agent can do, operating outside the agent's own reasoning loop. Not asking the model whether it's about to do something bad. Making certain outcomes structurally impossible regardless of what the model decides.
But here's the nuance the market keeps missing: an enforced boundary without context is just a blunt block. To make the right enforcement decision at the right moment, you need to understand who is acting, what they're doing, what data is involved, where it's going, and (most critically) why. Intent is the hardest signal to infer and the most important one. It's what separates a legitimate agent workflow from a manipulated one that looks identical on the surface.
This is the actual engineering problem. Not "how do we add guardrails" but "how do we build a policy engine that understands enough context to make a real enforcement decision before the action executes." That engine has to work at runtime, inline, and across every deployment pattern. And it has to be informed by what's happening at build time too, because the decisions made when an agent is configured determine the attack surface it carries into production.
The concepts of taint analysis, formal verification, and least-privilege enforcement have existed in software security for decades. They work, but the challenge is applying them to systems that are non-deterministic by design and that operate across organizational boundaries at a speed that makes human review impractical. That's the actual frontier.
The Stages Are Right, the Stack Isn’t
You don't need to throw out how you think about security to secure agents. Govern, Identify, Protect, Detect, Respond,those stages still apply. What doesn't apply is the legacy tooling built to execute against them, because that tooling was designed for a fundamentally different threat model.
When an agent modifies a CRM record, sends a Slack message, and calls an external API as part of a single autonomous workflow, a SIEM sees three unrelated events. A CASB sees a data transfer. A DLP tool might flag the output. None of them see the chain of reasoning, the tool invocations, the delegated identity, or the intent behind the sequence, which is precisely where the attack lives. The security team is left with fragments and no way to connect them.
This is the same pattern that played out with every prior wave of technological expansion. The internet needed network security built for network-native threats. The endpoint era needed endpoint security built for endpoint-native threats. Cloud needed cloud security built for the specific way cloud environments are constructed, misconfigured, and attacked. Each time, the incumbents tried to stretch existing tools to fit the new surface. Each time, purpose-built won. Not because the underlying security thinking changed, but because the new systems had fundamentally different properties that existing tools couldn't see.
Where This Goes
There are two things I believe are inevitable.
The first is that human oversight of agents doesn't scale. Most enterprises think about agentic AI security as a human problem - someone reviews the logs, someone approves the actions, someone watches for anomalies. That works when you have ten agents. It breaks down when there are hundreds, thousands.
Securing agents everywhere means the security itself has to become agentic. Guardian agents (autonomous systems that monitor, reason over, and respond to agent behavior in real time) aren't a product category on a Gartner chart. They're the logical conclusion of taking agent security seriously at scale. You can’t hire your way out of this problem.
The second is that the industry needs to bring DFIR-grade rigor to agentic incidents. When an endpoint gets compromised, security teams have a playbook: forensics, chain-of-custody, root cause analysis, session reconstruction. Today, when an agent gets manipulated, most organizations don’t have that playbook ready. They don’t have full traceability of what the agent reasoned, what it decided, what it acted on. No session replay. No risk-scored evidence chain that holds up to scrutiny.
That has to change, not because it's a nice capability to have, but because without it you can’t investigate, attribute, or improve. An agentic incident should be as reconstructible as an endpoint compromise. Right now, it isn't even close.
It’s no secret that the market is moving fast. The Gartner report is a signal, not a finish line. The organizations that treat it as a finish line (that check the agentic AI security box and move on) are the ones that will be explaining an incident they couldn't see coming and couldn't reconstruct after the fact.
Don't wait for a security incident to occur to discover what your tools can't see. Connect with Zenity today to see how you can prevent threats and manage risk proactively.
All ArticlesRelated blog posts

Zenity Joins CoSAI: Why Agentic AI Standards Need Practitioners at the Table
The agentic AI security standards your enterprise will adopt in the next 18 months are being written right now,...

Securing AI Where It Acts: Why Agents Now Define AI Risks
AI agent security risks are emerging as a critical challenge in enterprise AI adoption. As agents move beyond generating...

From Policy Planning to Agentic Action: Providing an Execution Roadmap for the President’s Agentic AI Security Priorities
On March 6, 2026, the White House released its National Cybersecurity Strategy. While the document is relatively...
Secure Your Agents
We’d love to chat with you about how your team can secure and govern AI Agents everywhere.
Get a Demo