Securing AI Where It Acts: Why Agents Now Define AI Risk

In the first round of the AI gold rush, most conversations about AI security centered on models: large language models, training data, hallucinations, and prompt safety. That focus made sense when AI was largely confined to generating text, images, or recommendations. But that era is already giving way to something far more consequential.
AI agents are quickly becoming the primary way AI shows up inside organizations, and the potential use cases they’re being considered for are vast and diverse. They may reset passwords, route incidents, process disputes, provision access, summarize and act on emails, browse the web, and chain together complex workflows with little or no human intervention. The ecosystem is also widely exploring their potential within security itself, with use cases across categories such as SecOps, AppSec, and GRC.
This shift fundamentally changes the security equation. And it’s why, at the Cloud Security Alliance’s upcoming AI Summit, I am excited to deliver a keynote titled “Securing AI Where It Acts: Why Agents Now Define AI Risk.”
The core premise is simple but critical: AI agents are the operational expression of AI itself. If models are intelligence, agents are execution. And execution is where risk becomes real.
The Rapid Rise of AI Agents
Agent adoption is accelerating faster than most governance and security programs can track, and we run the risk of perpetuating the age-old “bolted on rather than built in” security paradigm. Enterprises today routinely operate multiple agentic platforms in parallel: agents built on common SaaS platforms like Microsoft Copilot Studio, Salesforce Einstein, and ChatGPT Enterprise; agents on cloud platforms like AWS Bedrock AgentCore and Microsoft Foundry; browser-based agents like ChatGPT Atlas and Perplexity Comet; and the widely popular agentic coding tools that run on developer endpoints.
What’s driving this growth is clear: agents reduce manual work, speed decisions, and connect AI directly to business outcomes. They don’t just answer questions; they do the work. Think of them as giving LLMs arms and legs, turning answers into actions.
That’s also why agents are now the most important AI security concern. When AI systems can act across systems, identities, and data, small failures can cascade quickly.
Why Agents Are Fundamentally Distinct
To understand why agents define AI risk, it helps to be precise about what they are, and what they are not.
First, the nots.
Agents are not chatbots. Chatbots respond to user inputs and stop, whereas agents keep acting after the initial request, often across multiple steps and systems, dynamically deciding how to fulfill the prompt.
Agents are not RPA. RPA workflows follow deterministic scripts, while agents are non-deterministic, adapting decisions based on context, memory, and tool responses.
Agents are not traditional apps. Apps execute predefined logic, while agents reason, choose paths dynamically, and invoke tools semi- or fully autonomously.
Agents are not models, though they extend them. Models generate outputs, while agents combine models with identity, tools, data, memory, and autonomy.
This combination is what makes agents powerful, but also what makes them risky. They operate in environments that were never designed for autonomous decision-makers. The sketch below shows how the pieces fit together.
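To make that combination concrete, here is a minimal, runnable sketch of an agent loop in Python. Everything in it (`call_model`, `TOOLS`, the hard-coded steps) is hypothetical rather than any vendor’s SDK; the point is only the shape: a model decides, tools execute, memory carries results forward, and the loop grants bounded autonomy.

```python
# Minimal, illustrative agent loop: a model plus tools, memory, and autonomy.
# All names here (call_model, TOOLS, etc.) are hypothetical, not a real SDK.

def reset_password(user: str) -> str:
    """Stand-in for a privileged action the agent can take."""
    return f"password reset for {user}"

TOOLS = {"reset_password": reset_password}

def call_model(prompt: str, memory: list[str]) -> dict:
    """Stand-in for an LLM call that returns the agent's next step."""
    # A real model would reason over the prompt and memory; we hard-code
    # one tool call followed by a final answer to keep the sketch runnable.
    if not memory:
        return {"action": "tool", "tool": "reset_password", "args": {"user": "alice"}}
    return {"action": "finish", "answer": "Done: " + memory[-1]}

def run_agent(request: str) -> str:
    memory: list[str] = []          # persistent context across steps
    for _ in range(5):              # autonomy, bounded by a step budget
        step = call_model(request, memory)
        if step["action"] == "finish":
            return step["answer"]
        result = TOOLS[step["tool"]](**step["args"])  # tool invocation
        memory.append(result)       # results feed back into later decisions
    return "step budget exhausted"

print(run_agent("Reset Alice's password"))
```

Notice that the loop, not the model, is where data access, tool invocation, and persistence live. That is exactly the layer most security programs have never had to inspect.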
Why Agents Now Define AI Risk
Security risk emerges not when a model generates text, but when an agent:
- Accesses sensitive data
- Invokes privileged tools
- Chains actions across systems
- Makes impactful decisions without human review
- Operates continuously in production
Agents are the defining layer of AI security because agents are where AI crosses from abstraction into consequence. This is also why traditional controls struggle: static reviews, design-time policies, and post-incident alerts were never meant to govern autonomous, adaptive systems. That shift makes runtime visibility and context paramount, as the guard sketched below illustrates.
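As one illustration of what a runtime control might look like, here is a minimal sketch of a tool-invocation guard: every tool call is checked at the moment of execution instead of only at design time. The policy set, function names, and `human_approved` flag are all assumptions for the example, not a reference implementation.

```python
# Hypothetical runtime guard: policy is enforced when the agent acts,
# not just when the agent is designed. All names are illustrative.

PRIVILEGED_TOOLS = {"provision_access", "process_dispute"}

def guarded_invoke(agent_id: str, tool_name: str, args: dict, tools: dict):
    """Invoke a tool on the agent's behalf, enforcing policy at runtime."""
    if tool_name not in tools:
        raise PermissionError(f"{agent_id}: unknown tool {tool_name!r}")
    if tool_name in PRIVILEGED_TOOLS and not args.pop("human_approved", False):
        # Impactful actions are routed to a human before they execute.
        raise PermissionError(f"{agent_id}: {tool_name!r} requires human review")
    return tools[tool_name](**args)

def provision_access(user: str, role: str) -> str:
    """Stand-in for a privileged action."""
    return f"{user} granted {role}"

tools = {"provision_access": provision_access}

# The approved call succeeds; the same call without approval would raise.
print(guarded_invoke("it-agent", "provision_access",
                     {"user": "bob", "role": "reader", "human_approved": True}, tools))
```

The design choice worth noting is that the check sits between the agent’s decision and its effect, which is precisely the gap that static reviews and post-incident alerts leave open.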
The Most Common Risks Practitioners Are Seeing
While I’m excited to dig deeper next week at the AI Summit, several patterns are consistently emerging across agent deployments:
- Prompt injection and indirect manipulation
- Unsafe or unintended tool invocation
- Sensitive data leakage across contexts
- Over-privileged agent access and excessive autonomy
- Memory poisoning and context drift
- Human-in-the-loop (HITL) bottlenecks
- Lack of visibility into what agents actually did
These risks show up precisely because agents operate at runtime, where traditional controls lose effectiveness. Even a basic audit trail of agent actions, as sketched below, starts to close the visibility gap.
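To make the last item on that list concrete, here is one small, hypothetical slice of runtime visibility: a wrapper that records every tool call an agent makes, so “what did the agent actually do?” has an answer. The `audited` decorator and log schema are illustrative assumptions; a production deployment would ship these events to a SIEM rather than a Python list.

```python
# Hypothetical audit wrapper: every tool invocation an agent makes is
# appended to a trail. Names and schema are illustrative, not a product API.

import json
import time
from functools import wraps

def audited(agent_id: str, log: list[dict]):
    """Wrap a tool so each call is recorded in an audit trail."""
    def decorator(tool):
        @wraps(tool)
        def wrapper(*args, **kwargs):
            result = tool(*args, **kwargs)
            log.append({
                "ts": time.time(),
                "agent": agent_id,
                "tool": tool.__name__,
                "args": kwargs or list(args),
                "result": str(result)[:200],  # truncate potentially sensitive payloads
            })
            return result
        return wrapper
    return decorator

audit_log: list[dict] = []

@audited("helpdesk-agent", audit_log)
def route_incident(ticket_id: str) -> str:
    """Stand-in for an action the agent takes autonomously."""
    return f"ticket {ticket_id} routed to tier 2"

route_incident("INC-1234")
print(json.dumps(audit_log, indent=2))
```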
How the Industry Is Responding: Standards and Frameworks Catch Up
The broader security ecosystem is beginning to reflect this shift.
- OWASP has expanded its focus from LLM risks to include the Top 10 for Agentic Applications, recognizing that agent behavior introduces distinct threat classes.
- MITRE ATLAS has added agent-specific techniques covering tool abuse, credential harvesting, data poisoning, and agent hijacking.
- NIST, through the AI RMF and related guidance, increasingly emphasizes lifecycle risk, autonomy, and real-world impact.
These efforts all point to the same conclusion: agentic security is AI security.
Why “Securing AI” Means Securing Agents
A central theme of the talk, and of this moment in AI security, is that agents are what define AI. A simple way that I’ve come to understand this space is that models enable intelligence and agents operationalize and humanize it.
If security programs stop at model evaluation, they miss where AI actually interacts with the enterprise and where risks truly materialize.
Join the Conversation at CSA’s AI Summit
At the Cloud Security Alliance AI Summit, I’m excited to expand on these ideas, share real-world observations, and provide practical guidance for security and governance teams navigating the agent era.
Attendees will leave with:
- A clear mental model for agentic AI risk
- A framework for distinguishing agents from prior automation
- Insight into how standards bodies are evolving
- Practical steps to mature AI security programs
If your organization is deploying AI agents, or plans to, you won’t want to miss it.
👉 Register for the session and join the discussion at the CSA AI Summit.