The Agentic AI Governance Blind Spot: Why the Leading Frameworks Are Already Outdated

Approach any security, technology or business leader and they will stress the importance of governance. It’s a concept echoed across board conversations, among business and technology executives and, of course, within our own echo chamber of cybersecurity. For example, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) has a page dedicated to Cybersecurity Governance, which it defines as:
“Cybersecurity governance is a comprehensive cybersecurity strategy that integrates with organizational operations and prevents the interruption of activities due to cyber threats or attacks.”
The same emphasis applies to AI, with countless publications, frameworks and industry talks stressing the need for AI governance. One of our industry’s most reputable and widely cited publications, the IBM Cost of a Data Breach Report from 2025, even carried the subtitle “The AI Oversight Gap”. That report found that 97% of organizations that reported an AI-related security incident lacked proper AI access controls, and 63% of organizations lacked AI governance policies to manage AI or prevent the proliferation of shadow AI.
The problem?
We’ve got a critical gap, and not just in AI governance at a high level, as IBM cites, but specifically around agentic AI, the aspect of AI poised to truly transform enterprise and digital business environments, and where risks truly materialize through autonomy and action. We’re told this is the “decade of agents” and we’re seeing widespread adoption of and excitement around agentic AI, yet the fundamental resources the industry uses and points to for AI governance are completely devoid of any mention of agents at all.
The three most cited resources in AI governance today, the NIST AI Risk Management Framework, the EU AI Act, and ISO 42001, share something in common beyond their influence.
None of them contain a single mention of agentic AI.
Enter any conversation about AI governance and you’ll hear people rattle off the usual frameworks: NIST AI RMF, ISO 42001 and the EU AI Act. We’ve got these frameworks, standards and resources, so surely we should be good to go, right?
As it turns out, not so much.

Do a quick Control + F search for “agent”, “agentic” and similar terms and you’ll find zero results in all three of the most commonly cited frameworks, standards and regulatory requirements for AI.
Not one reference to autonomous agents. Not one mention of multi-agent systems. Not one acknowledgment of AI that doesn't just generate outputs but takes actions, makes decisions, and operates across enterprise systems with real-world consequences.
This isn't a minor editorial gap.
It's a structural failure in the governance landscape at the exact moment organizations need guidance the most.
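If you want to reproduce that check yourself, here is a minimal sketch that counts agent-related terms across local plain-text copies of the three documents. The file names are placeholders, and the counts you get will depend on which version and text extraction you use.

```python
# Minimal sketch: count agent-related terms in local plain-text copies of the
# NIST AI RMF, ISO 42001 and the EU AI Act. The file paths are placeholders.
import re
from pathlib import Path

# Matches "agent", "agents" and "agentic" (and the "agent" in "multi-agent").
PATTERN = re.compile(r"\bagent(?:s|ic)?\b", re.IGNORECASE)
DOCS = ["nist_ai_rmf.txt", "iso_42001.txt", "eu_ai_act.txt"]  # hypothetical paths

for doc in DOCS:
    text = Path(doc).read_text(encoding="utf-8", errors="ignore")
    print(f"{doc}: {len(PATTERN.findall(text))} agent-related matches")
```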
There are various reasons for this: several of these resources were written years ago at this point, others go through heavy-handed bureaucratic processes to be updated or changed, and the uncomfortable truth is that they are often written by people disconnected from the technical field or the practitioner community.
So while enterprises look to close their AI governance gaps, develop coherent strategies to manage AI risk, and lay out the strategic roadmaps, policies and processes to steer their organizations securely through the AI era, they’re often flying blind, guided by publications entirely devoid of any mention of the riskiest and most critical aspect of modern AI: agents.
The World Moved, the Frameworks Didn’t

These publications were written for a model-centric age of AI, with a heavy focus on terms such as safety, alignment and bias, or on risks such as direct prompt injection, data leakage and jailbreaking. To be clear, these were and still are valid risks: prompt injection remains an unsolved problem, organizations are rightly concerned about sensitive data exposure, and models are still routinely jailbroken.
The problem is that agents dramatically expand the attack surface and the available attack vectors, and the ramifications are far more devastating as well. Agents don’t just take prompts and return outputs like simple input/output machines; they have autonomy, take actions, use tools and present amplified risks.
This includes direct system and data access, external interactions, irreversible transactions, the potential for cascading failures and agents going rogue, and broader supply chain challenges introduced by plugins and skills, the latter of which now have burgeoning marketplaces around them, complicating an already challenging software supply chain landscape.
This is a point I tried to make in a recent piece of mine titled “Securing AI Where it Acts: Why Agents Now Define AI Risk” at Cloud Security Alliance’s AI Summit.
I’m not alone in noticing this gap; it is being called out by industry-leading organizations such as the National Association of Corporate Directors (NACD) in a piece titled “Agentic AI: A Governance Wakeup Call”.
In this piece, NACD argues that regulatory compliance becomes more complex in a world where AI systems take thousands of actions daily without human review, and that the legacy compliance approaches of periodic audits, approval workflows and after-the-fact reviews simply won’t cut it for systems making operational decisions in real time. These are points I’ve made myself as well, in an article titled “GRC is Ripe for a Revolution: A Look at Why Governance, Risk and Compliance Lives in the Dark Ages and How It Can Be Fixed”.
Why Agentic AI Changes the Risk Calculus
The AI landscape of 2026 and beyond looks nothing like the one these frameworks were built for. We've moved from a world of models producing text and images to a world of agents executing multi-step workflows, invoking tools, accessing sensitive data, making API calls, and chaining decisions together, often with minimal human oversight. The gap between "AI that suggests" and "AI that acts" is not incremental, it's a fundamental shift in the risk surface, and our governance infrastructure hasn't caught up.
Models on their own have a relatively bounded blast radius. For example, a model may hallucinate and a human may catch it before the output is used or exposed somewhere it shouldn’t be. A model may produce biased output and get reviewed and flagged. Humans can feasibly function as a chokepoint between AI output and real-world impact.
That isn’t the case with agentic AI, which removes that chokepoint; humans instead become the bottleneck, because approaches such as Human-in-the-Loop (HITL) simply can’t scale at machine speed. Agents can browse the web, execute code, query databases, send emails or messages, modify digital infrastructure, and interact with other agents. That changes the risk profile categorically.
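To make that concrete, here is a minimal sketch of a runtime permission gate for agent tool calls, assuming a simple in-process dispatcher. The tool names, policy values and helper callbacks are illustrative assumptions rather than the API of any particular agent framework.

```python
from dataclasses import dataclass

# Per-tool policy: "allow" runs immediately, "human_approval" escalates to a
# reviewer, "deny" blocks. Reversible, low-impact tools can run autonomously;
# anything irreversible or externally visible requires a human decision.
TOOL_POLICY = {
    "search_web": "allow",
    "query_database": "allow",
    "send_email": "human_approval",
    "execute_payment": "human_approval",
    "delete_records": "deny",
}

@dataclass
class ToolCall:
    agent_id: str
    tool: str
    arguments: dict

def dispatch(call: ToolCall, request_approval, execute, audit_log):
    """Route an agent's tool call through policy before anything executes."""
    decision = TOOL_POLICY.get(call.tool, "deny")  # default-deny unknown tools
    audit_log.append({"agent": call.agent_id, "tool": call.tool, "decision": decision})
    if decision == "deny":
        return {"status": "blocked", "reason": f"{call.tool} is not permitted"}
    if decision == "human_approval" and not request_approval(call):
        return {"status": "rejected", "reason": "human reviewer declined"}
    return {"status": "executed", "result": execute(call)}
```

The key design choice is default-deny: a tool the policy doesn’t know about never executes, and anything irreversible or externally visible is routed to a human before it runs.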
We do have excellent resources such as OWASP’s Agentic AI Top 10, which addresses many of these agent-specific risks, but it only went live at the end of 2025 and is far from being baked into leading frameworks and standards such as ISO 42001, the EU AI Act and NIST’s AI RMF. The problem isn’t entirely lost on these groups: NIST’s Center for AI Standards and Innovation (CAISI) released an RFI in January 2026 seeking input on the secure development and deployment of agentic AI systems. That said, anyone who has been involved in these efforts knows they can take many months to materialize, given the broad input, structured review processes and procedures that must be followed prior to publication.
In the meantime, as enterprises rapidly adopt agentic architectures and workflows and navigate this new era of risk and complexity, the security, legal and compliance leaders who reach for their governance frameworks to understand how to manage these risks will walk away bewildered and uninformed.
Flying Blind at the Worst Possible Time
The consequence is that organizations are adopting agentic AI right now, whether they realize it or not, through shadow adoption and bottom-up usage, as is common with emerging and innovative technologies and as the IBM report cited above makes evident.
We’re trying to govern the future with frameworks written for the past.
These legacy frameworks, built around a model-centric approach, don’t account for autonomous action and agency. Data governance requirements don’t address the reality of agents dynamically accessing and combining disparate data sources at runtime, and transparency requirements designed for static systems can’t keep pace with agents whose behavior is emergent and tied to context.
Where We Go From Here
None of this is meant to diminish the amazing work of the professionals and practitioners who have spent countless hours creating these frameworks, resources and standards for the community. It is, however, an acknowledgment that they are incomplete and missing the most critical aspect of AI risk in modern enterprise environments.
It’s common for standards bodies and compliance frameworks to take time to evolve, reflecting the bolted-on rather than built-in paradigm the broader security industry itself represents. That said, the shorter we can make the cycle between technological innovation and governance modernization, the better our opportunity to truly govern and mitigate risks rather than just pretend we are.
Until that happens, however, the burden falls on organizations to recognize these gaps and build their own agentic AI governance approaches, grounded in the foundational people, process and technology paradigm. That means developing governance controls focused on agent autonomy, agent-to-agent interaction, tool use, permission boundaries and runtime behavioral monitoring.
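As a rough illustration of the runtime behavioral monitoring piece, the sketch below scans an agent audit log for behavior a governance team would want to catch. It assumes each log entry records an agent identifier, tool name, policy decision and ISO-format timestamp; the thresholds, tool names and alert hook are assumptions for the example.

```python
from collections import defaultdict
from datetime import datetime, timedelta

MAX_CALLS_PER_MINUTE = 30  # crude per-agent rate threshold; tune per environment
SENSITIVE_TOOLS = {"send_email", "execute_payment", "modify_infrastructure"}

def detect_anomalies(audit_log, alert):
    """Scan a time-ordered audit log and flag suspicious agent behavior."""
    recent = defaultdict(list)  # agent_id -> timestamps within the last minute
    for entry in audit_log:
        ts = datetime.fromisoformat(entry["timestamp"])
        agent, tool = entry["agent"], entry["tool"]

        # Flag bursts of activity far beyond what a human could plausibly review.
        recent[agent] = [t for t in recent[agent] if ts - t < timedelta(minutes=1)]
        recent[agent].append(ts)
        if len(recent[agent]) > MAX_CALLS_PER_MINUTE:
            alert(f"{agent} exceeded {MAX_CALLS_PER_MINUTE} tool calls per minute")

        # Flag sensitive tool use that did not pass through an approval decision.
        if tool in SENSITIVE_TOOLS and entry.get("decision") != "human_approval":
            alert(f"{agent} invoked sensitive tool '{tool}' without approval")
```

In practice this would run continuously against streaming telemetry rather than a batch log, but the principle is the same: agent behavior is observed and bounded at runtime rather than reconstructed after the fact in a periodic audit.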
Waiting for these frameworks and standards to catch up isn’t a strategy, it is a liability.
The most dangerous thing in enterprise AI right now isn’t an ungoverned agent, it’s an organization that believes its agents are governed because it checks the boxes on frameworks that don’t even acknowledge their existence.