Securing Homegrown Agents in Runtime: The Value of Zenity + Microsoft Foundry

Andrew Silberman

Key Takeaways

  • Next phase of the partnership between Zenity and Microsoft: Zenity and Microsoft Foundry are expanding their integration to deliver production-grade runtime security for enterprises building homegrown AI agents.
  • Data risks when adopting agents are vast: Agents accessing enterprise systems can expose or move sensitive data across contexts.
  • Prompt injection risks are non-deterministic: Attackers can manipulate agents through malicious prompts or context poisoning, steering them into destructive actions.
  • Tool and credential risks are sprawling: Agents invoking tools and APIs can lead to unauthorized actions or secret exposure.

How the integration works: Zenity integrates with the Foundry control plane to inspect agent behavior and enforce security policies inline at runtime.

Over the past year, Microsoft Foundry has emerged as a cornerstone for enterprises building and deploying homegrown agents at scale. Organizations across industries are using Foundry to move beyond experimentation and into production, creating AI agents that can reason, invoke tools, access enterprise data, and automate complex workflows.

As AI adoption accelerates, a new reality is setting in: once AI systems gain agency, the security and governance model must evolve with them.

In November, Zenity announced an integration with Foundry that introduced inline prevention, proving that it’s possible to block risky agent behavior in real time. Since then, customer demand, real-world usage, and deeper collaboration with Microsoft have pushed this partnership forward.

Today, Zenity and Microsoft are expanding that vision into a production-grade security model for Foundry agents, enabling enterprises to scale agents safely while protecting them from runtime threats.

From Experimentation to Production: How Customers Are Using Microsoft Foundry

Customers use Foundry to build agents that connect to the systems that run the business, performing tasks like:

  • Retrieving and summarizing data from SharePoint and OneDrive
  • Automating ticket triage, remediation, and access workflows
  • Chaining agents with internal APIs and SaaS platforms

In these environments, Foundry provides what customers need to build and scale: a robust platform with built-in identity and access integration, plus developer tooling for agent creation and orchestration. What customers are finding, however, is that risks appear and evolve at runtime.

What’s Changed Since November

The November announcement focused on one critical question: Can security controls be enforced in the moment, as agents invoke tools?

Initial adopters of the integration have been able to achieve:

  • Inline blocking of prompt injection attempts
  • Prevention of unsafe outputs
  • Enforcement of basic policy violations
  • Protection along specific invocation paths

Since then, enterprise adoption of Foundry has expanded even more, and so has the threat surface. With this expanded partnership, Zenity’s capabilities evolve in meaningful ways:

  • From point prevention to end-to-end, agent-aware protection
  • From selective inline checks to broad runtime enforcement
  • From answering “can we block this?” to “how do we safely run agentic AI in production?”

This partnership is all about operationalizing security for agents at enterprise scale.

The Core Problem: Risk Emerges When Agents Act

Risk emerges when agents chain actions, invoke tools, and touch real systems. Legacy security controls, which are designed for static applications, APIs, or post-hoc monitoring, are simply not built for this kind of autonomous, adaptive behavior.

Frank Dickson, IDC, adds: “Agentic AI introduces a new class of nonhuman identities that must be authenticated, authorized, and governed. As agents retrieve data, invoke tools, and take action across systems, security must evolve as well. Connectivity standards such as MCP enable integration, but organizations also need inline protections that are native to agentic AI so activity can be validated, monitored, and controlled as it occurs within enterprise workflows.”

Security teams need a way to detect, disrupt, and prevent threats as they happen, not after damage has already occurred.

Zenity integrates where agents actually act, providing continuous, agent-aware, natively inline protection through its expanded partnership with Microsoft Foundry.

Expanded Inline Use Cases: Protecting Agents at Runtime

As agents interact with enterprise data, tools, and systems in real time, security controls must operate at the same moment agents act, evaluating context, behavior, and intent before risky actions can execute. Zenity and Foundry integrate together to help manage several key categories of risk.

Inline Prevention of Data Leakage

Foundry agents frequently connect to SharePoint, OneDrive, databases, and internal APIs. While this enables powerful automation, it also introduces risk.

Zenity prevents data leakage by tracing how data flows through each agent to identify and stop destructive or exfiltrating actions before they execute. The platform does this by:

  • Inspecting agent actions before data leaves trusted boundaries
  • Blocking sensitive data exfiltration
  • Preventing unsafe cross-context data sharing
  • Enforcing policies tied to identity and data sensitivity

For example, once untrusted data enters an agent's context, Zenity can mark it as tainted and restrict agent actions based on what is in the context window, removing a bad actor's ability to trigger destructive operations (e.g., dropping production databases or wiping data). This ensures that agents don't accidentally or maliciously send data where it doesn't belong.
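The taint-based restriction described above can be sketched in a few lines. Everything below is illustrative: the class and tool names are assumptions for this example, not Zenity or Foundry APIs. The idea is that once untrusted data enters the context window, destructive operations are no longer allowed.

```python
# Illustrative sketch only; names are hypothetical, not real Zenity/Foundry APIs.
# Once untrusted ("tainted") data enters the agent's context, destructive
# operations are blocked, while benign actions remain available.

DESTRUCTIVE_TOOLS = {"drop_database", "delete_records", "wipe_storage"}

class AgentContext:
    def __init__(self) -> None:
        self.tainted = False  # flips to True once untrusted data is ingested

    def ingest(self, data: str, trusted: bool) -> None:
        if not trusted:
            self.tainted = True

def allow_tool_call(ctx: AgentContext, tool: str) -> bool:
    # A tainted context cannot trigger destructive operations.
    if ctx.tainted and tool in DESTRUCTIVE_TOOLS:
        return False
    return True

ctx = AgentContext()
ctx.ingest("quarterly report", trusted=True)
assert allow_tool_call(ctx, "drop_database")      # clean context: allowed

ctx.ingest("<email from unknown external sender>", trusted=False)
assert not allow_tool_call(ctx, "drop_database")  # tainted context: blocked
assert allow_tool_call(ctx, "summarize")          # non-destructive: still fine
```

The key design choice is that the decision depends on runtime state (what has entered the context), not on static configuration, which is what distinguishes this model from legacy access controls.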

Inline Protection Against Prompt Injection and Agent Hijacking

Attackers increasingly target agents through prompts designed to bypass guardrails, override system instructions, or trigger unintended behavior.

Zenity spots and disrupts these attacks by:

  • Detecting anomalous agent behavior across action chains
  • Preventing tool-mediated prompt injection
  • Identifying context poisoning and memory manipulation
  • Blocking jailbreak and hijack attempts in real time

This moves protection beyond static prompt validation and into behavioral enforcement at runtime.
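One way to picture behavioral enforcement, as opposed to static prompt validation, is as a check on the sequence of tool calls an agent makes rather than on prompt text alone. The sketch below is a simplified assumption of how such a check might look; the task names, tool names, and function are hypothetical, not part of any real product API.

```python
# Hypothetical sketch: behavioral enforcement over action chains.
# Each declared task has an expected set of tools; a mid-chain call outside
# that set is treated as a possible hijack and blocked. All names are
# illustrative assumptions, not real Zenity/Foundry identifiers.

EXPECTED_TOOLS = {
    "summarize_docs": {"fetch_document", "summarize"},
    "triage_tickets": {"read_ticket", "update_ticket", "notify_owner"},
}

def check_action(task: str, tool: str) -> str:
    expected = EXPECTED_TOOLS.get(task, set())
    return "allow" if tool in expected else "block"

assert check_action("summarize_docs", "summarize") == "allow"
# A summarization agent suddenly sending email is anomalous for its task,
# a common signature of tool-mediated prompt injection:
assert check_action("summarize_docs", "send_email") == "block"
```

A real system would evaluate far richer signals (context, memory, call ordering), but the principle is the same: the enforcement point is the agent's behavior at runtime, not the wording of its prompt.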

Inline Control of Tool Invocation

Agents are only useful if they can invoke tools, but tools are also one of the most attractive attack surfaces. To help keep tool invocation in line with corporate policy, Zenity can enforce which tools agents can invoke, under what conditions, and with what data. This prevents:

  • Destructive or disruptive actions
  • Unauthorized API calls
  • Over-privileged behavior

Least privilege is enforced dynamically, based on context and policy, not just static configuration.

Secret and Credential Exposure Prevention

Agents often require credentials, tokens, or API keys to perform their duties. These secrets are common weak points, even in environments with strong identity controls.

Zenity prevents:

  • Accidental leakage of secrets through prompts or outputs
  • Malicious extraction of credentials via agent manipulation
  • Exposure of authentication data across agent workflows

This closes a critical gap that traditional NHI and identity tools don’t cover at the agent layer.
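Output-side secret scanning, one of the mechanisms implied above, can be sketched with pattern-based redaction. The patterns below are common public token formats chosen for illustration; they are not Zenity's actual detection rules.

```python
# Illustrative output-side secret redaction; the patterns are assumptions,
# not Zenity's real detection logic.
import re

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),            # API-key-style tokens
    re.compile(r"AKIA[0-9A-Z]{16}"),               # AWS access key IDs
    re.compile(r"(?i)bearer\s+[a-z0-9._-]{20,}"),  # bearer tokens
]

def redact(output: str) -> str:
    """Replace anything matching a known secret pattern before the agent's
    output leaves the trusted boundary."""
    for pattern in SECRET_PATTERNS:
        output = pattern.sub("[REDACTED]", output)
    return output

msg = "Use key sk-AbCdEfGhIjKlMnOpQrSt123456 to call the API."
assert "[REDACTED]" in redact(msg)
assert "sk-AbCdEfGh" not in redact(msg)
```

In practice this kind of check sits inline, on prompts, tool inputs, and outputs alike, so a secret never crosses an agent workflow in the clear.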

Why This Matters: Differentiation at the Agent Layer

Microsoft provides customers with a powerful platform, and Zenity adds the agent-aware security, runtime enforcement, inline prevention, and cross-agent visibility needed to adopt agents securely.

Together, Zenity and Microsoft enable enterprises to move faster, not by accepting more risk, but by controlling it where it actually emerges.

As AI agents become embedded in enterprise operations, security and governance must move from static controls to continuous, runtime protection. The expanded Zenity + Microsoft Foundry partnership reflects this shift, helping customers scale agentic AI with confidence.


Secure Your Agents

We’d love to chat with you about how your team can secure and govern AI Agents everywhere.

Get a Demo