Advancing AI Security: Zenity’s Contributions to MITRE ATLAS’ First 2026 Update

MITRE ATLAS has become a critical resource for cybersecurity leaders navigating the rapidly evolving world of AI-enabled systems. Traditional threat models were built for human-initiated workflows, APIs, and infrastructure, and are no longer sufficient to describe modern AI attacks.
MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) provides a globally recognized framework for understanding, categorizing, and defending against attacks on AI. For security practitioners, CISOs, and researchers, ATLAS plays a role similar to the MITRE ATT&CK framework, but tailored specifically to AI, machine learning, and agentic systems. It translates abstract AI risks into concrete, actionable techniques that defenders can reason about, monitor, and mitigate.
In the first MITRE ATLAS update of 2026, Zenity researchers contributed substantially to expanding the framework’s coverage of agentic AI threats, reflecting a reality we see across enterprises: AI agents are operational, privileged, and deeply embedded in business workflows. These contributions add clarity and rigor to a threat class that has previously been poorly defined, if not entirely invisible.
Why Agentic Security Matters Now
AI agents differ from traditional AI models in one critical way: they act. Agents can browse the web, invoke tools, access APIs, read and write data, authenticate to services, and make decisions with limited or no human oversight. They are increasingly used in IT operations, customer support, finance, healthcare, and software development, often operating with broad, implicit permissions to “get the job done.”
This autonomy fundamentally changes the attack surface. Rather than simply manipulating inputs or stealing model outputs, adversaries can now target:
- The tools agents use
- The credentials agents rely on
- The data agents consume and generate
- The decisions agents make at runtime
Zenity’s contributions to MITRE ATLAS focus precisely on this new frontier.
New MITRE ATLAS Techniques Introduced with Zenity Contributions
AI Service API (AML.T0096)
Modern AI agents are tightly coupled to service APIs, LLM APIs, orchestration layers, data services, and third-party tools. This technique documents how attackers can exploit AI service APIs as part of broader attack chains, leveraging existing infrastructure to live off the land, operate stealthily, and maintain persistent access for espionage, reconnaissance, and more.
Zenity also contributed to a new MITRE ATLAS case study: SesameOp (AML.CS0042), which documents a novel backdoor technique leveraging the OpenAI Assistants API for command and control, and served as the inspiration for the AI Service API technique.
Based on publicly documented research and analysis, this case study shows how adversaries can repurpose agent infrastructure as a covert control channel, blending malicious activity into legitimate AI workflows. Rather than relying on traditional C2 infrastructure, attackers can hide in plain sight, using agent APIs, task orchestration, and assistant logic to issue commands and receive responses.
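The covert-channel pattern described above can be sketched in a few lines. This is a hypothetical, simplified illustration, not the SesameOp implant: an in-memory `FakeAssistantService` class stands in for a legitimate assistants-style API (threads, messages) so the flow can be shown without network access or credentials. The key idea is that commands and results travel as ordinary-looking messages on a legitimate service.

```python
# Hypothetical sketch of an AI service API used as a C2 dead drop.
# FakeAssistantService is a stand-in for a real assistants/threads API;
# no real service, endpoint, or credential is used here.

class FakeAssistantService:
    """Simulates message threads on a legitimate AI service."""
    def __init__(self):
        self.threads = {}

    def post_message(self, thread_id, role, content):
        self.threads.setdefault(thread_id, []).append(
            {"role": role, "content": content})

    def list_messages(self, thread_id):
        return self.threads.get(thread_id, [])


def implant_poll(service, thread_id, handlers):
    """Implant side: treat 'operator' messages as commands.

    `handlers` maps command names to callables; results are posted back
    to the same thread, blending C2 traffic into normal API usage.
    """
    results = []
    for msg in service.list_messages(thread_id):
        if msg["role"] != "operator":
            continue
        cmd = msg["content"]
        if cmd in handlers:
            output = handlers[cmd]()
            service.post_message(thread_id, "implant", output)
            results.append(output)
    return results
```

To a network monitor, the implant's traffic is just API calls to a trusted AI service, which is exactly why this technique is hard to spot with infrastructure-centric controls.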
AI Agent Tool Credential Harvesting (AML.T0098)
Agents connect to a variety of tools and data sources that often store or access credentials in order to perform actions autonomously (think SharePoint, OneDrive). This technique formalizes how attackers with access to an agent can abuse its connected tools to retrieve credentials, secrets, API keys, and more.
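To see why agent-reachable document stores are such a rich target, consider how little effort harvesting takes once an attacker can make an agent fetch files. The sketch below uses a dict of document texts as a stand-in for files an agent tool could retrieve; the patterns are illustrative only, and real secret scanners use far richer rule sets.

```python
import re

# Illustrative secret patterns only; names and rules are examples,
# not a production detection set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "bearer_token":   re.compile(r"Bearer\s+[A-Za-z0-9._-]{20,}"),
    "password_field": re.compile(r"password\s*[:=]\s*\S+", re.IGNORECASE),
}

def harvest_candidates(documents):
    """Return secret-like strings found in agent-reachable documents.

    `documents` maps a source name (a file an agent tool could fetch)
    to its text content.
    """
    findings = []
    for source, text in documents.items():
        for label, pattern in SECRET_PATTERNS.items():
            for match in pattern.findall(text):
                findings.append((source, label, match))
    return findings
```

The same scan works defensively: auditing what an agent's tools can reach, before an attacker does, is one concrete mitigation for this technique.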
AI Agent Tool Data Poisoning (AML.T0099)
Bad actors will often try to poison agents by placing malicious or inaccurate content, data, or files on a victim’s system, where an agent can later invoke them. Poisoning can also take the form of prompt injections or phishing attacks, allowing bad actors to hijack the agent or have it ‘turn’ on unknowing users as they interact with it.
Data Destruction via AI Agent Tool Invocation (AML.T0101)
Bad actors can also use existing capabilities of tools to destroy data and files on specific systems in an effort to disrupt an agent, systems, network infrastructure, services, and more.
AI Agent Clickbait (AML.T0100)
Adversaries may also lure AI browsers into taking unintended actions (clicks, code copies, navigating to certain pages, etc.) by exploiting how agents interpret UI content, visual cues, or prompts embedded in sites. In doing so, agent browsers can be led to copy and execute malicious code, sometimes directly in the user’s OS.
Deep Dive: AI Agent Clickbait and the Rise of Agentic Browsers
AI Agent Clickbait (AML.T0100) represents a new class of attack that does not exist in traditional cybersecurity models. It exploits the fact that agents increasingly browse the web, read documents, and interact with user interfaces on behalf of humans.
In human browsing, clickbait relies on curiosity, urgency, or deception. In agentic browsing, the attack surface is even more subtle—and potentially more dangerous.
Agentic browsers and web-enabled agents are designed to:
- Follow links
- Read and summarize content
- Click buttons
- Download files
- Authenticate into portals
- Execute workflows based on what they “see”
As such, attackers can craft web pages, documents, or UI elements specifically optimized to manipulate machine decision-making, not human judgment. Examples include:
- Hidden instructions embedded in HTML or metadata
- Malicious links framed as required next steps
- UI elements that trigger tool invocation
- Content designed to override agent goals or context
Because agents lack human intuition, skepticism, and situational awareness, they may comply with malicious instructions that appear logically consistent or task-aligned. In enterprise environments where agentic browsers are used for procurement, research, ticket resolution, or operations, this creates a powerful new entry point for attackers.
As agentic browsers become more popular and embedded into enterprise copilots, workflow tools, and automation platforms, this attack vector will only grow. AI Agent Clickbait formalizes this risk for defenders, making it visible and actionable for the first time.
What This Means for Cybersecurity Leaders
MITRE ATLAS exists to serve as a practical blueprint for defense. The addition of these agent-focused techniques signals a clear shift in how the industry must think about AI risk.
Security leaders should be asking:
- Where do AI agents have autonomous access today?
- What tools, credentials, and data can they invoke?
- How are agent decisions monitored at runtime?
- What happens if an agent behaves unexpectedly, or maliciously?
Traditional controls like network monitoring, IAM, and application security remain necessary, but they are no longer sufficient on their own. Agentic systems require agent-aware security models: ones that understand goals, tools, context, and decision flow.
Moving Forward
Zenity’s contributions to MITRE ATLAS’ first 2026 update reflect a broader mission: ensuring that agentic AI security is treated as core AI security, not a niche concern. As organizations continue to deploy autonomous agents across the enterprise, frameworks like MITRE ATLAS must evolve, and the security community must evolve with them.
By formalizing these threats, MITRE ATLAS gives defenders a shared language. By contributing to it, Zenity is helping ensure that language reflects the real risks facing modern enterprises.
The era of agentic AI is here. The era of securing it must follow just as quickly.