
Key takeaways:
- AI agents are already part of daily work for large portions of the enterprise workforce, with adoption spreading across IT, security, customer service, engineering, and beyond.
- Shadow AI agents are emerging before governance is in place, creating accountability gaps that compound when incidents occur.
- Scope violations are a routine operational condition, not an edge case, and nearly half of organizations have already felt the consequences.
- When incidents occur, detection and response timelines are far longer than most security teams would expect or accept.
- Compliance frameworks are filling the governance gap, but the data reveals a significant distance between regulatory alignment and actual operational readiness.
For years, conversations about AI security risks were framed as forward-looking. Organizations were told to prepare for a future where autonomous agents would act on their behalf, access sensitive systems, and make consequential decisions without human intervention at every step. That future, it turns out, is now.
A new survey report published by the Cloud Security Alliance (CSA) in partnership with Zenity, titled Enterprise AI Security Starts With AI Agents, paints a detailed picture of where enterprise AI adoption actually stands in 2026.
The findings, drawn from 445 IT and security professionals across organizations of varying sizes and industries, tell a story that should give every security leader pause: AI agents are already embedded in core workflows, already exceeding intended permissions on a regular basis, and the governance and detection mechanisms needed to manage them are still catching up.
This isn’t a warning about what could happen. It’s a report on what is happening right now.
AI Agents Are Part of Daily Work, at Scale
AI agent adoption in enterprises is more widespread than many security teams realize, with agents embedded in daily workflows across a variety of functions, including IT, engineering, customer service, operations, and executive teams.
The operational challenge lies not only in usage breadth, but in the deployment of multiple agentic platforms, each with its own permissions, configurations, and telemetry. The report details the resulting fragmentation and how it undermines an organization's ability to apply consistent security policy.
Shadow AI Agents Are Not a Future Problem
One of the more striking findings in the report is how early unsanctioned AI agents appear relative to the overall scale of deployment. Organizations do not have to reach some threshold of formal adoption before shadow AI becomes a concern. It’s already there, often in significant numbers, operating in environments where visibility and ownership structures are still being established.
The report breaks down exactly how many unsanctioned agents organizations are already dealing with, how that number scales with organization size, and what the ownership picture looks like across both sanctioned and unsanctioned agents. The accountability gaps the data reveals have direct implications for how incidents get investigated and contained, and how long they take to resolve.
Scope Violations Are the New Normal
The report's findings on AI agent behavior are among its most important. The question of how often agents exceed their intended permissions, taking actions beyond what they were designed or authorized to do, is central to understanding the actual risk surface of agentic AI. And the answer, based on the survey data, is sobering.
For example, a procurement agent that was supposed to research vendor options sends an actual quote request. A support agent designed to retrieve information begins modifying records.
These are not hypothetical failure modes. They represent behaviors organizations are already regularly reporting.
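The report does not prescribe controls, but the failure mode is easy to picture as an enforcement problem. As a purely illustrative sketch (the agent names, action names, and allowlist structure below are hypothetical, not drawn from the report), a minimal scope guard might check each proposed action against the permissions an agent was actually granted:

```python
# Hypothetical sketch of a per-agent action allowlist. Names and
# structure are illustrative, not taken from the CSA/Zenity report.

ALLOWED_ACTIONS = {
    # The procurement agent is scoped to research only...
    "procurement-research-agent": {"search_vendors", "read_catalog"},
    # ...and the support agent to read-only record access.
    "support-lookup-agent": {"read_ticket", "read_customer_record"},
}


class ScopeViolation(Exception):
    """Raised when an agent proposes an action outside its grant."""


def authorize(agent_id: str, action: str) -> None:
    """Reject any action that is not in the agent's allowlist."""
    allowed = ALLOWED_ACTIONS.get(agent_id, set())
    if action not in allowed:
        # In practice this is also the point to emit telemetry, since
        # the report highlights how long violations go undetected.
        raise ScopeViolation(f"{agent_id} attempted out-of-scope action: {action}")


# The scope violations described above would be caught here:
authorize("procurement-research-agent", "search_vendors")  # permitted
try:
    authorize("procurement-research-agent", "send_quote_request")
except ScopeViolation as err:
    print(err)  # out-of-scope action blocked and surfaced
```

A static allowlist like this is the simplest possible control. The fragmentation described earlier, with each agentic platform carrying its own permission model and telemetry, is exactly what makes even this baseline hard to apply uniformly.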
The full report includes the specific frequency data, the breakdown of incident rates over the past 12 months, and the detection and response timelines organizations are working within when those incidents occur. The numbers on how long it takes to identify and contain an AI agent incident, and what is happening during that window, are particularly worth careful reading.
Compliance Is Filling the Governance Gap and Showing Its Limits
While most organizations have some AI governance in place, it is often incomplete and relies heavily on regulatory frameworks designed for non-autonomous systems.
HIPAA, the NIST AI Risk Management Framework, SOC 2, and other familiar compliance structures are shaping how organizations approach AI agent oversight. While there is logic in relying on established frameworks, the report data suggests these frameworks were not designed for the autonomy and complexity of modern AI agents. As a result, organizations may achieve compliance without actually securing their AI operations, exposing themselves to unforeseen risks.
The gap between what organizations are doing and what they feel ready for is one of the report's more telling findings. Download the report to see how that gap is measured, and what it looks like across different organization sizes and industries.
What the Data Means for Security Teams
Taken together, the findings point to a structural shift that security teams are already navigating, whether they have formally acknowledged it or not. AI agents are no longer peripheral tools. They are part of the core digital workforce, and they are generating a category of risk that existing controls, processes, and frameworks were not built to handle.
The report does not just surface the problem. It provides the data security leaders need to make the case internally for dedicated AI agent security investment, benchmark their organization's posture against peers, and identify the most critical gaps.
If your organization is deploying AI agents, or planning to, the CSA and Zenity report is required reading. The full data is inside.