The 2025 Gartner Security & Risk Management Summit was a wake-up call, and an opportunity, for anyone responsible for securing the future of AI. With over 1,700 AI use cases now reported across federal agencies and enterprise adoption growing at a breakneck pace, the message was clear: AI is no longer on the horizon. It’s here, it’s active, and it needs securing.
Gartner’s AI Tech Sandwich and TRiSM frameworks made waves. TRiSM, short for Trust, Risk, and Security Management, emerged as the “essential slice” in the AI sandwich, binding together data, platforms, and governance. Whether an enterprise is adopting bring-your-own AI (BYOAI), embedding AI into existing products, or building from scratch, TRiSM was framed as non-negotiable.
This approach reflects a growing understanding that securing AI agents isn't just about hardening infrastructure or writing better prompts. It's about building governance directly into how AI is developed and deployed, treating trust and oversight as first-class citizens in the AI lifecycle.
Talks from TIAA and others emphasized a new reality: agentic AI is more than smart automation; it is independent, goal-seeking software. These agents are capable of taking initiative, writing their own scripts, and completing multi-step tasks with minimal human oversight.
This brings incredible opportunities for efficiency and scalability. But it also introduces a new class of risks, from rogue agent behavior and sensitive data leakage to adversarial prompt injection and misuse of embedded decision-making logic. Many of the sessions focused on how these risks upend traditional insider threat models and require continuous runtime oversight. It is clear that successful AI agent security programs will need to understand the end-to-end behavior of agents and develop new threat models that treat agents as part of the digital workforce.
If there was a common thread throughout the summit, it was this: security must evolve from static policy to dynamic enforcement. Organizations can no longer rely on security by design alone. With GenAI and agentic systems proliferating, risk isn’t just in how models are trained, it’s in how they reason, act, and behave in real time.
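To make the static-versus-dynamic distinction concrete, here is a minimal sketch of runtime enforcement: every tool call an agent attempts is checked against policy at the moment of execution, rather than relying on design-time review alone. All names here (`Policy`, `ToolCall`, `runtime_guard`, the example tools and keywords) are illustrative assumptions, not any vendor’s actual API.

```python
# Hypothetical sketch: dynamic (runtime) policy enforcement for an AI agent.
# Actions are evaluated as they happen, not only when the agent is built.

from dataclasses import dataclass, field

@dataclass
class ToolCall:
    tool: str        # e.g. "send_email", "run_script" (illustrative names)
    arguments: dict  # arguments the agent supplied for this call

@dataclass
class Policy:
    allowed_tools: set                          # tools this agent may invoke
    blocked_keywords: set = field(default_factory=set)  # leakage markers

def runtime_guard(call: ToolCall, policy: Policy) -> bool:
    """Return True if the agent's action may proceed, False to block it."""
    if call.tool not in policy.allowed_tools:
        return False  # tool not approved for this agent
    flat_args = " ".join(str(v) for v in call.arguments.values()).lower()
    # Block calls whose arguments contain sensitive markers (data leakage).
    return not any(kw in flat_args for kw in policy.blocked_keywords)

policy = Policy(allowed_tools={"search_docs", "summarize"},
                blocked_keywords={"api_key", "password"})

print(runtime_guard(ToolCall("summarize", {"text": "quarterly report"}), policy))  # True
print(runtime_guard(ToolCall("run_script", {"cmd": "rm -rf /"}), policy))          # False
```

A real enforcement layer would of course go far beyond keyword matching (behavioral baselines, identity context, human-in-the-loop escalation), but the shape is the same: a decision point evaluated per action, at runtime.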
Speakers repeatedly highlighted the need for layered security approaches, clear accountability for AI decisions, and improved visibility into how agents process and act on enterprise data. AI governance is no longer optional, it’s foundational.
At Zenity, we believe this moment marks a fundamental turning point for the enterprise. AI agents aren't just a productivity play, they're a new layer of enterprise infrastructure, one that must be secured just like endpoints, identities, or cloud platforms.
Our mission is to give security teams the tools to govern and protect AI agents across their full lifecycle, from build time to runtime.
We’re proud that our research was cited by Gartner, and prouder still of what it represents: a shared commitment to building an AI-powered future that’s as secure as it is innovative.
Gartner laid out the roadmap. We're here to help you walk it.