Zenity Joins CoSAI: Why Agentic AI Standards Need Practitioners at the Table

Rock Lambros

The agentic AI security standards your enterprise will adopt in the next 18 months are being written right now, inside working groups most CISOs have never heard of.

The Coalition for Secure AI (CoSAI), an OASIS Open Project with more than 45 sponsor organizations, including Google, Microsoft, NVIDIA, IBM, and Meta, is producing the frameworks, reference architectures, and secure design patterns that will define how autonomous agents operate inside enterprise environments. Zenity is now a General Sponsor, and I’m joining the Project Governing Board with Chris Hughes as my backup. Here’s why that matters to you.

The Standards Gap

Gartner named agentic AI oversight the number-one cybersecurity trend for 2026. Its 4Q25 forecast projects agentic AI spending to overtake chatbot and assistant spending by 2027 and reach $752.7 billion by 2029.

Forrester’s 2026 cybersecurity predictions go further, warning that an agentic AI deployment will cause a publicly disclosed breach this year, leading to employee terminations. Two analyst firms, different methodologies, same conclusion.

The spending numbers tell only half the story. The other half is a governance vacuum. According to a Gartner poll of 147 CIOs and IT leaders from May 2025, 24% of respondents had already deployed AI agents, another 50% were actively experimenting, and 17% planned to deploy by the end of 2026. That means 91% of organizations are somewhere on the agentic AI adoption curve. The security standards and design patterns those organizations need to deploy agents safely? They’re still being drafted.

This is the gap we’ve spent the last two years working to close from the practitioner side, through OWASP and MITRE ATLAS. CoSAI is where that practitioner work connects to the open specifications enterprises will actually operationalize.

What CoSAI is Building

CoSAI operates under OASIS Open, which means its outputs carry the governance rigor and intellectual property protections that enterprise procurement and legal teams require. OWASP produces community-driven guidance. MITRE ATLAS catalogs adversarial techniques. Both are indispensable references. CoSAI takes the next step: it produces open specifications and design patterns with formal governance, contributor license agreements, and a Project Governing Board where every sponsoring organization has an equal vote on what becomes an official work product.

The coalition organizes its work across four workstreams.

  • Workstream 1 extends SLSA provenance to AI models.
  • Workstream 2 develops defender frameworks for AI-driven threats.
  • Workstream 3 focuses on AI security risk governance.
  • Workstream 4, launched in mid-2025, focuses on secure design patterns for agentic systems, including threat models, reference architectures, and the security implications of agent-to-agent interactions.

Workstream 4 has already published the “Principles for Secure-by-Design Agentic Systems” and the MCP Security whitepaper, which identify 12 core threat categories and nearly 40 distinct threats across the Model Context Protocol.

A question for CISOs evaluating their agentic AI governance posture: are you designing your agent security architecture around ad hoc vendor guidance, or around the open specifications the industry’s largest technology companies are converging on?

Why Zenity, and Why Now?

If you’ve followed Zenity’s standards work, this move won’t surprise you. If you haven’t, here’s the short version: we’ve spent the last two years building the practitioner evidence base that CoSAI’s workstreams will operationalize.

On the OWASP side, Zenity’s CTO, Michael Bargury, co-leads the AI Vulnerability Scoring Standard (AIVSS) within the OWASP GenAI Security Project. I serve as a core team member of the Agentic Security Initiative, with Chris Hughes serving as a distinguished expert and reviewer. I am also a project author of the OWASP AI Exchange.

When OWASP released the Top 10 for Agentic Applications in December 2025, it reflected input from more than 100 security researchers and practitioners, including direct contributions from Zenity on agent-specific threats and mitigations. Zenity’s own Keren Katz co-led the effort, with Kayla Underkoffler leading the number-one entry, ASI01: Agent Behavior Hijack. Microsoft’s agentic failure modes now reference the initiative’s threat and mitigation document. NVIDIA’s Safety and Security Framework for Real-World Agentic Systems draws on the initiative’s threat modeling guide. GoDaddy deployed the initiative’s Agentic Naming Service proposal to production.

On the MITRE ATLAS side, Zenity researchers have contributed 14 agent-focused techniques and subtechniques to the framework, along with multiple case studies, including the SesameOp case study (AML.CS0042) that documents a novel backdoor technique leveraging the OpenAI Assistants API for command and control. The October 2025 collaboration expanded ATLAS from traditional model-centric attacks to execution-layer threats specific to autonomous agents.

Before Zenity’s contributions, ATLAS had limited coverage of the ways agents interact with enterprise infrastructure. Techniques such as context poisoning, memory manipulation, and thread injection now give security teams a shared language for threats they’ve been observing in production deployments without a formal taxonomy.

These two bodies of work form the analytical foundation. CoSAI is the operational layer. The Agentic Top 10 tells you what to worry about. MITRE ATLAS tells you how attackers execute. CoSAI’s workstreams translate both into the design patterns, governance models, and reference architectures your engineering teams can implement. That combination, what I call the standards credibility stack, is what gives a security vendor’s guidance weight when you’re evaluating it against your enterprise requirements.

What a PGB seat actually means

CoSAI’s Project Governing Board includes one voting representative from each sponsoring organization: Google, Microsoft, NVIDIA, IBM, Meta, PayPal, Amazon, Anthropic, OpenAI, Cisco, Wiz, Intel, and dozens more. The board is co-chaired by David LaBianca from Google and Omar Santos from Cisco. Every sponsor, Premier and General alike, gets an equal vote on official work products. Zenity’s vote on the specifications that govern agentic AI security carries the same weight as any hyperscaler’s. OASIS governance ensures the outputs reflect a broad consensus, not any single company’s preferences.

I’m not joining to observe. The hyperscalers bring a platform-level perspective. The model providers bring inference-layer insight. What’s been less represented is the view from inside the enterprise agent deployment itself, where agents authenticate to services, invoke tools, make decisions, and interact with sensitive data using credentials and permissions that traditional IAM models were never designed to govern. That’s the perspective I’ll bring to Workstream 3 and Workstream 4.

The Competitive Reality You Should Understand

The CoSAI member list is public. If you scroll through the General Sponsors, you’ll recognize names from across the agentic AI security market. Multiple companies that compete directly with Zenity already have seats at this table, contributing to workstreams and influencing the specifications that enterprises will reference in RFPs and security architecture reviews.

The competitive dynamic within CoSAI reveals something important: the vendors who specialize in this space have independently concluded that open standards participation isn’t optional. The problems are too new, the attack surfaces too complex, and the pace of agentic adoption too fast for any single company to credibly define the governance model alone.

CISOs evaluating vendors shouldn’t stop at whether a vendor participates in standards bodies. They should ask what the vendor contributes. Can it point to specific techniques in MITRE ATLAS that its research produced? Can it trace its product capabilities back to threat models it helped build? When a CoSAI workstream publishes a reference architecture, does the vendor’s approach align because it helped write it, or because it retrofitted its marketing to match? These are the diligence questions that separate vendors with genuine standards credibility from those who list a logo on a membership page.

Where the Industry is Converging

Multiple signals are pointing in the same direction. The Cloud Security Alliance launched CSAI at RSA Conference 2026 with a mission specifically focused on “Securing the Agentic Control Plane.” CSAI immediately announced a collaboration with CoSAI, including a seat on the Technical Steering Committee. Different organizations, different governance structures, different funding models, all arriving at the same conclusion that agentic systems require purpose-built security frameworks.

Gartner’s Top Strategic Technology Trends for 2026 identified AI Security Platforms as one of the most critical emerging technologies, predicting that more than 50% of enterprises will use them by 2028. They also predict that 70% of AI applications will use multi-agent systems by 2028, which means the complexity of agent-to-agent interaction is about to compound exponentially.

Real incidents have already materialized: Asana’s tenant isolation flaw affected up to 1,000 enterprises, WordPress plugins exposed over 100,000 sites to privilege escalation through MCP-based interactions, and researchers demonstrated prompt injection attacks through support tickets that exposed private database tables.

If your security architecture for AI agents doesn’t reference at least the OWASP Agentic Top 10, the relevant MITRE ATLAS techniques, and CoSAI’s secure-by-design principles, you’re operating without the shared language that the rest of the industry is adopting.

What I’ll Contribute to Workstreams 3 & 4

In Workstream 3, the risk governance workstream, the current challenge is translating AI-specific risk categories into the language that enterprise risk committees already use. CISOs don’t need another risk taxonomy. They need one that maps to existing reporting structures, board-level risk metrics, and compliance obligations. That translation requires someone who has sat in the CISO seat, presented to audit committees, and negotiated risk acceptance decisions with business unit leaders. I’ve done that across a 30-year career and outlined it in my book, “The CISO Evolution: Business Knowledge for Cybersecurity Leaders.”

In Workstream 4, the gap is in runtime governance. An AI agent with access to your CRM, email, and financial systems represents a decision-making node with the authority to chain actions across systems at speeds human reviewers can’t match. Gartner’s Avivah Litan captured this in her guardian agents research, noting that “humans cannot keep up with the potential for errors and malicious activities” as enterprises adopt multi-agent systems.

The design patterns for governing that behavior, from credential scoping and tool invocation policies to context integrity validation and anomaly detection across agent decision chains, are what Workstream 4 needs to codify next. Zenity’s production telemetry across enterprise agent deployments provides the empirical foundation for that work.
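To make those patterns concrete, here is a minimal sketch of what a deny-by-default tool invocation policy with credential scoping and a chain-depth bound might look like. Every name in it (`AgentPolicy`, `check_invocation`, the example tools and scopes) is invented for illustration and does not come from any CoSAI specification or Zenity product.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Hypothetical per-agent policy: which tools the agent may call,
    and which credential scopes each tool call may carry."""
    allowed_tools: dict = field(default_factory=dict)  # tool name -> set of allowed scopes
    max_chain_depth: int = 3  # bound on how deep an agent-to-agent action chain may run

def check_invocation(policy: AgentPolicy, tool: str, scope: str, chain_depth: int) -> bool:
    """Deny by default: permit only explicitly scoped tool calls within the chain bound."""
    if chain_depth > policy.max_chain_depth:
        return False  # crude anomaly guard: the decision chain is longer than policy allows
    return scope in policy.allowed_tools.get(tool, set())

# Example: an agent allowed to read CRM contacts, but not to send email
# and not to act deep inside a long multi-agent chain.
policy = AgentPolicy(allowed_tools={"crm.read": {"contacts:read"}})
print(check_invocation(policy, "crm.read", "contacts:read", chain_depth=1))   # allowed
print(check_invocation(policy, "email.send", "mail:send", chain_depth=1))     # denied: tool not in policy
print(check_invocation(policy, "crm.read", "contacts:read", chain_depth=5))   # denied: chain too deep
```

A production pattern would obviously need far more (signed policy distribution, context integrity checks, audit logging), but the core idea the workstream needs to standardize is exactly this shape: explicit allowlists evaluated at every tool invocation rather than broad standing credentials.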

What You Should Do With This Information

Don’t treat this blog as a vendor announcement. Treat it as a signal about where the industry is heading and where you need to position your security program.

First, audit your current agentic AI security framework references. If your agent governance model doesn’t incorporate the OWASP Top 10 for Agentic Applications, map it this quarter. If your threat modeling for AI systems doesn’t include MITRE ATLAS agent-specific techniques, your threat models have blind spots. If your security architecture reviews don’t reference CoSAI’s Principles for Secure-by-Design Agentic Systems, add them.

Second, assess your standards alignment. The specifications CoSAI is producing will increasingly appear in enterprise RFPs, regulatory guidance, and analyst evaluation criteria. Ask your vendors which standards bodies they contribute to and, more importantly, what they’ve contributed. A membership badge without substantive technical contributions is a marketing asset, not a credibility signal.

Third, engage directly. CoSAI’s technical participation is free and open to all developers. You don’t need to be a sponsor to contribute to workstreams, review drafts, or submit feedback. The GitHub repositories for Workstream 3 and Workstream 4 are public. The mailing lists are open. If agentic AI security is relevant to your organization, the cost of participation is your time. The cost of sitting out is deploying agents on a governance framework you had no role in shaping.

Key Takeaway: The open specifications that will govern agentic AI security across the enterprise are being written now, and the companies shaping those specifications are the ones deploying and securing agents in production today.

Zenity is contributing production-derived intelligence on agentic AI threats to the same standards bodies that enterprises use to evaluate their security posture. If you want to understand how our platform capabilities align with the OWASP Agentic Top 10, MITRE ATLAS agent techniques, and CoSAI’s emerging design patterns, start at zenity.io and explore how we’re translating standards participation into the security controls your agent deployments need.
