2025 Gartner SRM Summit: From Gatekeeper to Enabler. How Security Leaders Can Embrace AI Agents with Confidence

The 2025 Gartner Security & Risk Management Summit was both a wake-up call and an opportunity for anyone responsible for securing the future of AI. With over 1,700 AI use cases now reported across federal agencies and enterprise adoption growing at a breakneck pace, the message was clear: AI is no longer on the horizon. It’s here, it’s active, and it needs securing.
TRiSM Takes Center Stage
Gartner’s AI Tech Sandwich and TRiSM frameworks made waves. TRiSM, short for Trust, Risk, and Security Management, emerged as the “essential slice” in the AI sandwich, binding together data, platforms, and governance. Whether an enterprise is bringing its own AI (BYOAI), relying on embedded AI, or building from scratch, TRiSM was framed as non-negotiable.
This approach reflects a growing understanding that securing AI agents isn't just about hardening infrastructure or writing better prompts. It's about building governance directly into how AI is developed and deployed, treating trust and oversight as first-class citizens in the AI lifecycle.
AI Agents and the Rise of Autonomy
Talks from TIAA and others emphasized a new reality: agentic AI is more than smart automation; it is independent, goal-seeking software. These agents are capable of taking initiative, writing their own scripts, and completing multi-step tasks with minimal human oversight.
This brings incredible opportunities for efficiency and scalability. But it also introduces a new class of risks, from rogue agent behavior and sensitive data leakage to adversarial prompt injection and misuse of embedded decision-making logic. Many of the sessions focused on how these risks upend traditional insider threat models and require continuous runtime oversight. It is clear that successful AI agent security programs will need to understand the end-to-end behavior of agents and develop new threat models that treat agents as part of the digital workforce, as sketched below.
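To make the "digital workforce" framing concrete, here is a minimal, hypothetical sketch of auditing an agent the way an insider-threat program would audit a human employee: every action is tied to an agent identity and recorded so its end-to-end behavior can be reviewed. The class and function names (AgentAction, AgentAuditLog) are illustrative assumptions, not a reference to any specific product or API.

```python
# Hypothetical sketch: auditing an AI agent like a member of the digital workforce.
# All names here are illustrative, not tied to any real product or API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentAction:
    agent_id: str          # the agent's "workforce" identity
    tool: str              # e.g. "send_email", "query_crm"
    target: str            # resource the action touches
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class AgentAuditLog:
    """Collects every action an agent takes so end-to-end behavior can be reviewed."""
    def __init__(self) -> None:
        self._actions: list[AgentAction] = []

    def record(self, action: AgentAction) -> None:
        self._actions.append(action)

    def actions_for(self, agent_id: str) -> list[AgentAction]:
        return [a for a in self._actions if a.agent_id == agent_id]

# Usage: the agent shows up in the insider-threat model like any other identity.
log = AgentAuditLog()
log.record(AgentAction(agent_id="finance-copilot", tool="query_crm", target="accounts/eu"))
log.record(AgentAction(agent_id="finance-copilot", tool="send_email", target="external:partner.com"))

for action in log.actions_for("finance-copilot"):
    print(f"{action.timestamp.isoformat()} {action.tool} -> {action.target}")
```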
Securing Agents Requires Shift Everywhere
If there was a common thread throughout the summit, it was this: security must evolve from static policy to dynamic enforcement. Organizations can no longer rely on security by design alone. With GenAI and agentic systems proliferating, risk isn’t just in how models are trained; it’s in how they reason, act, and behave in real time.
Speakers repeatedly highlighted the need for layered security approaches, clear accountability for AI decisions, and improved visibility into how agents process and act on enterprise data. AI governance is no longer optional; it’s foundational.
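As a rough illustration of what dynamic enforcement can look like at runtime, a lightweight guard can sit between an agent and its tools and evaluate each call as it happens, rather than trusting design-time controls alone. The policy rules and names below (POLICIES, guarded_call) are hypothetical assumptions for the sketch, not any vendor's API.

```python
# Hypothetical sketch of runtime (dynamic) policy enforcement for agent tool calls.
# The policy rules and function names are illustrative only.
from typing import Callable

# Map each tool to a predicate that decides whether a specific call is allowed.
POLICIES: dict[str, Callable[[dict], bool]] = {
    "send_email": lambda args: not args.get("to", "").endswith("@external.example"),
    "read_file": lambda args: args.get("path", "").startswith("/approved/"),
}

def guarded_call(tool: str, args: dict, execute: Callable[[dict], str]) -> str:
    """Evaluate policy at the moment the agent acts, not only at design time."""
    allow = POLICIES.get(tool)
    if allow is None or not allow(args):
        return f"BLOCKED: {tool} with {args} violates runtime policy"
    return execute(args)

# The same tool is allowed or blocked depending on the arguments at runtime.
print(guarded_call("read_file", {"path": "/approved/report.txt"}, lambda a: f"read {a['path']}"))
print(guarded_call("send_email", {"to": "attacker@external.example"}, lambda a: "sent"))
```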
The Zenity Perspective: Securing AI Agents Everywhere
At Zenity, we believe this moment marks a fundamental turning point for the enterprise. AI agents aren’t just a productivity play; they’re a new layer of enterprise infrastructure, one that must be secured just like endpoints, identities, or cloud platforms.
Our mission is to give security teams the tools to govern and protect AI agents across their full lifecycle, from build time to runtime.
We’re proud that our research was cited by Gartner, and prouder still of what it represents: a shared commitment to building an AI-powered future that’s as secure as it is innovative.
Gartner laid out the roadmap. We're here to help you walk it.