Agentic AI Security Isn't Just A Technical Problem - It's a Strategic One

Portrait of Dina Durutlic
Dina Durutlic
Cover Image

If you’ve started exploring how to secure AI agents in your environment (or even just reading about it), you likely already know that it’s not as straightforward as applying traditional approaches and tools. AI agents aren’t just another workload or API to monitor, they’re dynamic, semi-autonomous/autonomous entities operating at the intersection of user intent, agent behavior, and enterprise systems.

And not all AI agents are created equal or secure.

Some agents are custom-built and deeply embedded into business processes. Others, like Microsoft 365 Copilot or Salesforce Einstein, are commercial off-the-shelf (COTS) solutions. Each demands a slightly different approach to security.

This nuance is exactly what makes agentic AI security so critical (though admittedly a bit cumbersome to understand). It’s not just about controlling access to systems; it’s about understanding what the agent can do, how it behaves, and how its actions might evolve over time. And to do that effectively security teams need to evaluate both build-time and runtime.

This is a theme echoed in Gartner’s recent research, How to Secure Custom-Built AI Agents. While their report focuses on securing homegrown agents, many principles (like access control, runtime defense, etc.) are just as critical to COTS AI agents in the enterprise.


Commercial Off-The-Shelf (COTS) AI Agents Are Not “Set and Forget”

When organizations adopt commercial AI agents like Microsoft 365 Copilot, it’s easy to assume they’re secure by default. After all, these are enterprise-ready tools from well-known vendors.

But enterprise-ready doesn’t mean risk-free.

These agents operate with dynamic inputs, broad access to enterprise data, and the ability to act on behalf of users. Many are tightly integrated with messaging apps, files, and business workflows making it harder to monitor exactly what they’re doing and when. They aren’t static systems; they’re active participants in your environment.

That’s why COTS agents still require enterprise-side controls especially around who can use them, what they can access, and how they behave in real-world use. Even if you can’t modify their core logic, you can and should configure clear access boundaries, establish prompt usage policies, and ensure visibility into agent activity.


Custom AI Agents Demand Full Lifecycle Security

Custom-built AI agents (e.g. like those developed with Microsoft Copilot Studio, etc.) introduce a different level of complexity and risk. These agents are deeply embedded into business logic, trained on enterprise-specific context, and capable of executing autonomous actions.

Here, the organization owns both the design and the deployment, making it essential to address security at both build-time and runtime.

This is where buildtime security is crucial and allows security teams to:

  • Define enforceable policies around identity scope, data access, and prompt hygiene
  • Govern how agents are constructed, integrated, and deployed
  • Ensure configurations align with internal risk and compliance frameworks

Now enter runtime security.

In order to truly have solid AI agent security in place, you need to address the dynamic nature of AI agents with runtime visibility and enforcement. Inspection and enforcement ensure AI agents don’t just behave securely in theory, but in practice. This includes detecting and responding to prompt injection attempts (both direct and indirect), privilege escalations, hidden instructions, and least privilege violations. When runtime insights are mapped across the attack chain, from recon to exfiltration, security teams can pinpoint threats whether they stem from external attackers, trusted insiders, or an overly curious AI agent.


In short, visibility without action isn’t enough and action without proper guardrails leaves potential gaps. Together, buildtime and runtime security provide the full picture. Buildtime sets safe defaults; runtime ensures those defaults hold up under real-world usage.

Why “Agency” Should be a Security Priority

One of the trickiest aspects of securing AI agents is understanding their agency, their ability to make decisions and take action. Unlike static apps, AI agents can interpret instructions, reference previous interactions, and even execute on processes with multiple steps. That opens the door to new risks, especially if they have been targeted by an attacker.

In addition to the buildtime and runtime security measures, security teams need the ability to profile each AI agent, understanding its identity, access patterns, data exposure, connections, and behavior baseline. This type of profiling enables anomaly detection that’s deeply contextual, helping distinguish between normal automation and risky deviations.

Don’t Wait for AI Agents to Backfire

AI agents are already operating inside of your environment - most likely anyway. They’re not waiting for your security strategy to catch up.

But the good news is that you don’t need to boil the ocean. Start with visibility. Get clarity on who’s using agents, how they are being triggered, and what permissions they have. From here, build a strategy that brings together buildtime and runtime security.

All Articles

Related blog posts

Secure Your Agents

We’d love to chat with you about how your team can secure
and govern AI Agents everywhere.

Book Demo