How Zenity Helps Enterprises Apply AI TRiSM to AI Agents

Dina Durutlic

The future isn’t human vs. machine; it’s humans trying to govern machines. As AI agents grow more autonomous (replying to emails, writing code, granting access, making decisions) the real threat isn’t a malicious model. It’s the absence of controls. AI agents don’t come with built-in security policies. They don’t ask for permission. They simply do what they’re told (sometimes correctly, sometimes dangerously) because no guardrails told them otherwise.

So where do you start?

If reading that gives you slight panic (or has already kept you up at night), you’re not alone. Many security teams see the upside of AI but feel unprepared to secure it, and fewer still know where to begin. Because it’s not just about threat detection. It’s about visibility, accountability, and enforcing policies before something unexpected happens.

That’s where Gartner’s AI TRiSM (Trust, Risk, and Security Management) framework comes in. While frameworks exist to help organizations manage AI, most focus on model safety, ethical principles, or GenAI usage guidelines. AI TRiSM, by contrast, offers a practical guide for managing the associated risks across five layers. It connects people and process requirements with actionable technical controls - especially across the top two layers, where AI governance and runtime enforcement come into play. That’s exactly where Zenity specializes. Security leaders are already putting it into practice, especially as GenAI shifts from experimentation to enterprise usage.

Zenity is purpose-built to address the top of this stack - the emerging layers of AI Governance and AI Runtime Inspection & Enforcement. While traditional tools are still important for your broader security strategy, they aren’t designed to handle the dynamic nature of AI agents. Zenity is. Let’s break down what that looks like.

Top Layer: AI Governance - Security Starts With Visibility and Control

AI Governance isn’t a checkbox, it’s a living system of policies, configurations, and ongoing assessments that dictate how AI is adopted, managed, and monitored. Gartner defines AI governance capabilities as including cataloging AI assets, mapping data lineage, approving and attesting usage, and continuously evaluating posture and compliance.

Zenity’s AI Security Posture Management module addresses this layer head-on. The platform discovers and continuously takes inventory of AI agents across the enterprise. These agents are scored for risk, evaluated for misconfigurations, and mapped to organizational policies. For example, Zenity flags agents with excessive permissions, public access to proprietary models, or usage patterns that violate compliance requirements.
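To make the posture idea concrete, here is a minimal sketch of how a configuration-driven risk check could work. The field names, weights, and thresholds are illustrative assumptions, not Zenity’s actual schema or scoring model.

```python
# Hypothetical sketch: scoring an AI agent's posture from its configuration.
# Field names and weights are invented for illustration.

def score_agent_posture(agent: dict) -> dict:
    """Return a risk score and the misconfigurations that drove it."""
    findings = []
    if agent.get("permissions") == "admin":
        findings.append("excessive permissions")
    if agent.get("public_access", False):
        findings.append("public access to proprietary model")
    if not agent.get("owner"):
        findings.append("no accountable owner on record")
    # Simple additive score: each finding raises risk by one band (capped at 10).
    score = min(10, 3 * len(findings))
    return {"agent": agent.get("name", "unknown"),
            "score": score, "findings": findings}

example = {"name": "hr-helper", "owner": "it-ops",
           "permissions": "admin", "public_access": True}
result = score_agent_posture(example)
```

In practice a posture engine evaluates many more signals and maps each finding to an organizational policy, but the shape is the same: inventory in, scored findings out.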

But governance isn’t just about knowing what exists, it’s about understanding how it's being used. That’s where Zenity’s AI Observability comes in. Zenity tracks which users interact with which agents, what data is exchanged, and what downstream tools are invoked. This traceability builds the audit trail that regulators are beginning to require, and gives organizations the foundation to enforce responsible AI usage. In Gartner’s terms, Zenity directly supports AI governance by:

  • Maintaining a real-time AI catalog with risk scoring and metadata
  • Enabling policy definition and enforcement across AI agents
  • Supporting attestation, approval workflows, and compliance reporting

So, simply put, Zenity doesn’t just tell you that AI agents are in use, it gives you the levers to govern effectively.

Layer 2: AI Runtime Inspection and Enforcement - From Black Box to Defensible Usage

The second layer in Gartner’s AI TRiSM pyramid focuses on what happens when AI agents run - specifically, how they behave in production, whether they deviate from expectations, and how violations are detected and mitigated in real time.

Zenity’s AI Detection & Response module is built precisely for this purpose. It inspects every action and interaction, surfaces suspicious behavior, and enables immediate policy enforcement. Each interaction is broken down into “steps” with metadata - who did what, when, through which client, and what data was accessed.
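One way to picture such a “step” record is a small structured object carrying exactly the metadata described above. This is an illustrative data model, not Zenity’s actual one; every field name here is an assumption.

```python
# Illustrative only: a runtime "step" record capturing who did what, when,
# through which client, and what data was accessed.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class InteractionStep:
    user: str                   # who performed the action
    action: str                 # what was done (e.g. "invoke_tool")
    client: str                 # through which client (e.g. "web", "teams")
    data_accessed: list = field(default_factory=list)  # files or datasets touched
    timestamp: str = ""         # when, filled in automatically if omitted

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

step = InteractionStep(user="jdoe", action="invoke_tool",
                       client="web", data_accessed=["payroll.xlsx"])
```

A stream of records like this is what makes the audit trail and the downstream rule evaluation possible.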

Zenity then applies a robust, continuously evolving rule engine, mapped to frameworks like OWASP LLM Top 10 and MITRE ATLAS, to detect runtime anomalies. If a user sends a sensitive file to a public GPT, for example, or if an agent executes an unauthorized function, or if a jailbreak attempt is detected, Zenity raises a finding with full context, severity scoring, and recommendations for response. Aligning to Gartner’s Market Guide, Zenity supports runtime enforcement through:

  • Policy-based inspection of agent inputs, outputs, and function calls
  • Real-time threat detection with evidence and incident context
  • Automated or manual enforcement options, such as alerting and blocking
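The detection logic in the examples above can be sketched as a small rule engine: each rule matches a pattern in a step and emits a finding with severity and context. The rules and severities below are invented for illustration; real mappings to OWASP LLM Top 10 and MITRE ATLAS are far richer.

```python
# Hypothetical rule-engine sketch: match a runtime step against policy rules
# and emit findings. Rule IDs and severities are illustrative assumptions.

RULES = [
    {"id": "sensitive-upload", "severity": "high",
     "match": lambda s: s["action"] == "file_upload"
                        and s.get("target") == "public_gpt"},
    {"id": "unauthorized-function", "severity": "critical",
     "match": lambda s: s["action"] == "function_call"
                        and not s.get("authorized", False)},
]

def inspect(step: dict) -> list:
    """Return one finding per rule the step violates."""
    return [{"rule": r["id"], "severity": r["severity"], "step": step}
            for r in RULES if r["match"](step)]

# A user sends a sensitive file to a public GPT:
findings = inspect({"action": "file_upload",
                    "target": "public_gpt", "user": "jdoe"})
```

Because each finding carries the full step, responders get the who/what/when context alongside the violated rule rather than a bare alert.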

The power of Zenity’s runtime model is its unification with governance. Findings are not isolated events, they link back to posture, ownership, and policy violations, creating a closed-loop system of detection and response.

Layers 3 - 5: Information Governance, Infrastructure, and Traditional Controls - Zenity as a Force Multiplier

Zenity isn’t designed to replace foundational enterprise security controls. It’s designed to work with them. The bottom three layers of the AI TRiSM framework (Information Governance, Infrastructure and Stack, and Traditional Technology Protection) are critical for safeguarding data, enforcing access controls, and protecting workloads at the compute and network layers. But they aren’t built to address the risks posed by autonomous agents acting across cloud platforms, productivity tools, and end-user devices.

Zenity complements these foundational TRiSM layers by:

  • Detecting risky data flows and AI behaviors that bypass traditional DLP, DSPM, or IAM tools (like agents that connect unsanctioned tools to enterprise systems)
  • Highlighting ungoverned agents (or shadow AI) in environments that operate outside formal security oversight, yet still access sensitive business data or trigger downstream automations
  • Flagging vulnerabilities and misconfigurations in agents (like over-permissioned agents, misaligned integrations, etc.) that could open unintentional attack paths

Zenity acts as a force multiplier by filling these visibility and control gaps: it integrates with these tools, exporting runtime findings, posture changes, and violations into existing SOC workflows.
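Exporting a finding into a SOC workflow usually means normalizing it into a flat event a SIEM can ingest. The payload shape below is an assumption for illustration, not a documented Zenity export format.

```python
# Sketch: normalize a runtime finding into a generic JSON event for a SIEM.
# The event schema here is hypothetical.
import json

def to_soc_event(finding: dict) -> str:
    """Flatten a finding into a JSON event string for SOC ingestion."""
    event = {
        "source": "ai-agent-security",
        "type": finding.get("rule", "unknown"),
        "severity": finding.get("severity", "info"),
        "details": finding.get("step", {}),
    }
    return json.dumps(event, sort_keys=True)

payload = to_soc_event({"rule": "sensitive-upload", "severity": "high",
                        "step": {"user": "jdoe", "target": "public_gpt"}})
```

From there, routing is whatever the SOC already uses - a webhook, a log shipper, or a ticketing integration.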

Wrapping Your Head Around Where to Start

Gartner’s AI TRiSM framework should act as a playbook, or at least a guide, to help you orient your AI agent security strategy. If your current tools cover only the bottom of the pyramid, you’re exposed where it matters most - at the top, where AI agent behaviors are shaped, executed, and potentially exploited. According to Gartner, the top two layers of the framework are consolidating into a distinct market segment - and for good reason. These are the layers that allow enterprises to align usage with business intent, detect violations before they cause damage, and confidently scale adoption.

Whether you’re looking to inspect GPT usage in ChatGPT Enterprise, trying to figure out what AI agents are running and where, or simply need a partner to talk through the right strategy for your organization, Zenity can bring clarity, control, and defensibility to the table. We’d love to show you.



Secure Your Agents

We’d love to chat with you about how your team can secure and govern AI Agents everywhere.

Book Demo