Advancing MITRE ATLAS AI Security Through Zenity’s Contributions

Andrew Silberman

MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a globally recognized AI security framework that catalogs adversarial techniques targeting artificial intelligence systems. Similar in structure to MITRE ATT&CK but purpose-built for AI, machine learning, and agentic systems, ATLAS translates abstract AI risks into concrete, actionable attack techniques that security teams can monitor and mitigate.

As AI becomes embedded in business workflows, MITRE ATLAS AI security has shifted from a research-oriented framework to a practical operational resource. Traditional threat models were designed for human-initiated workflows, static applications, and infrastructure-centric attacks. They do not fully account for autonomous agents that invoke tools, access data dynamically, and operate continuously across systems.

The first MITRE ATLAS update of 2026 reflects this evolution. As agentic AI systems increasingly act across tools, identities, memory, and services, ATLAS expands its taxonomy to capture attack paths that emerge at the orchestration and execution layers. This progression reinforces a critical reality for cybersecurity leaders: modern AI risk cannot be understood through legacy models alone. It requires frameworks that describe how adversaries target autonomy, delegation, and runtime behavior in real-world enterprise environments.

In the first MITRE ATLAS update of 2026, Zenity researchers contributed substantially to expanding the framework’s coverage of agentic AI threats, reflecting a reality we see across enterprises: AI agents are operational, privileged, and deeply embedded in business workflows. These contributions add clarity and rigor to a threat class that has previously been poorly defined, if not entirely invisible.

Why Agentic Security Matters for Enterprise Risk

AI agents differ from traditional AI models in one critical way: they act. Agents can browse the web, invoke tools, access APIs, read and write data, authenticate to services, and make decisions with limited or no human oversight. They are increasingly used in IT operations, customer support, finance, healthcare, and software development, often operating with broad, implicit permissions to “get the job done.”

This autonomy fundamentally changes the attack surface. Rather than simply manipulating inputs or stealing model outputs, adversaries can now target:

  • The tools agents use
  • The credentials agents rely on
  • The data agents consume and generate
  • The decisions agents make at runtime

Zenity’s contributions to MITRE ATLAS focus precisely on this new frontier.

New MITRE ATLAS Techniques Shaped by Zenity Contributions

AI Service API (AML.T0096)

Modern AI agents are tightly coupled to service APIs, LLM APIs, orchestration layers, data services, and third-party tools. This technique documents how attackers can exploit AI service APIs as part of broader attack chains, living off existing infrastructure to stay stealthy and maintain persistent access for espionage, reconnaissance, and more.

Zenity also contributed to a new MITRE ATLAS case study: SesameOp (AML.CS0042), which documents a novel backdoor technique leveraging the OpenAI Assistants API for command and control, and served as the inspiration for the AI Service API technique.

Based on publicly documented research and analysis, this case study shows how adversaries can repurpose agent infrastructure as a covert control channel, blending malicious activity into legitimate AI workflows. Rather than relying on traditional C2 infrastructure, attackers can hide in plain sight, using agent APIs, task orchestration, and assistant logic to issue commands and receive responses.
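To make the defensive side concrete, here is a minimal, hypothetical sketch (not part of the ATLAS entry) of the kind of egress check that can surface Assistants-API-style covert channels: flagging hosts that reach AI service APIs without a sanctioned reason. The domain list, approved-host list, and log format are all assumptions for illustration.

```python
# Hypothetical sketch: flag egress to AI service APIs from hosts that have no
# sanctioned reason to call them. Domain list, host list, and log schema are assumptions.
from dataclasses import dataclass

AI_API_DOMAINS = {"api.openai.com", "api.anthropic.com", "generativelanguage.googleapis.com"}
APPROVED_HOSTS = {"agent-gateway-01", "agent-gateway-02"}  # hosts expected to reach AI APIs

@dataclass
class EgressEvent:
    src_host: str
    dst_domain: str
    bytes_out: int

def find_suspect_ai_api_traffic(events: list[EgressEvent]) -> list[EgressEvent]:
    """Return egress events where an unapproved host talks to an AI service API."""
    return [
        e for e in events
        if e.dst_domain in AI_API_DOMAINS and e.src_host not in APPROVED_HOSTS
    ]

if __name__ == "__main__":
    sample = [
        EgressEvent("agent-gateway-01", "api.openai.com", 2048),      # expected traffic
        EgressEvent("hr-workstation-17", "api.openai.com", 512_000),  # suspicious traffic
    ]
    for event in find_suspect_ai_api_traffic(sample):
        print(f"ALERT: {event.src_host} -> {event.dst_domain} ({event.bytes_out} bytes)")
```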

AI Agent Tool Credential Harvesting (AML.T0098)

Agents connect to a variety of tools and data sources (think SharePoint, OneDrive) that often store or access credentials in order to act autonomously. This technique formalizes how attackers with access to an agent can illicitly retrieve data from its connected tools to gather credentials, secrets, API keys, and more.
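One simple mitigation is to screen what an agent pulls back from connected tools before it is returned to the agent or its caller. The sketch below is a hypothetical, deliberately minimal version of that check; the patterns are illustrative assumptions, not an exhaustive secret detector.

```python
# Hypothetical sketch: screen agent tool output for credential-like strings
# before handing it back to the agent. Patterns are illustrative, not exhaustive.
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S{20,}"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_tool_output(text: str) -> list[str]:
    """Return the names of credential patterns found in a tool's response."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

def filter_tool_output(text: str) -> str:
    """Withhold responses that appear to contain secrets."""
    hits = scan_tool_output(text)
    if hits:
        return f"[REDACTED: response withheld, matched {', '.join(hits)}]"
    return text
```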

AI Agent Tool Data Poisoning (AML.T0099)

Bad actors will often try to poison agents by placing malicious or inaccurate content, data, or files on a victim’s system, where an agent can later ingest them. This can also include prompt injections or phishing attacks that poison an agent, allowing attackers to hijack it or have it ‘turn’ on unsuspecting users as they interact with it.
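As a purely illustrative (and intentionally naive) example of one layer of defense, the sketch below checks content for common prompt-injection phrasing before an agent ingests it. The marker phrases are assumptions and will miss obfuscated or translated payloads; this is one control among many, not a complete answer.

```python
# Hypothetical sketch: naive pre-ingestion check for prompt-injection phrasing
# in files or records an agent is about to read. Marker phrases are assumptions.
INJECTION_MARKERS = [
    "ignore previous instructions",
    "disregard your system prompt",
    "you are now",
    "forward this to",
]

def looks_poisoned(content: str) -> bool:
    """Return True if the content matches known injection phrasing."""
    lowered = content.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def ingest_for_agent(content: str, source: str) -> str | None:
    """Quarantine content that matches known injection phrasing."""
    if looks_poisoned(content):
        print(f"Quarantined content from {source} for review")
        return None
    return content
```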

Data Destruction via AI Agent Tool Invocation (AML.T0101)

Bad actors can also abuse the existing capabilities of an agent’s tools to destroy data and files on targeted systems, disrupting the agent itself, the systems it touches, network infrastructure, services, and more.
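A common mitigation pattern is to gate destructive operations behind explicit approval. The sketch below is a hypothetical policy gate: the tool names, the DESTRUCTIVE_ACTIONS set, and the require_human_approval() hook are assumptions standing in for an organization’s own approval workflow.

```python
# Hypothetical sketch: gate destructive agent tool invocations behind an approval hook.
# Tool names, DESTRUCTIVE_ACTIONS, and require_human_approval() are assumptions.
DESTRUCTIVE_ACTIONS = {"delete_file", "drop_table", "purge_mailbox", "wipe_share"}

def require_human_approval(tool: str, args: dict) -> bool:
    """Placeholder for a ticketing or chat-ops approval flow."""
    print(f"Approval requested for {tool} with {args}")
    return False  # deny by default in this sketch

def invoke_tool(tool: str, args: dict, registry: dict):
    """Run a tool on the agent's behalf, blocking destructive calls without approval."""
    if tool in DESTRUCTIVE_ACTIONS and not require_human_approval(tool, args):
        raise PermissionError(f"Destructive tool '{tool}' blocked pending approval")
    return registry[tool](**args)
```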

AI Agent Clickbait (AML.T0100)

Adversaries may also lure AI browsers into taking unintended actions (clicks, code copies, navigating to certain pages, etc.) by exploiting how agents interpret UI content, visual cues, or prompts embedded in the sites they visit. In doing so, agentic browsers can be led to copy and execute malicious code, sometimes directly on the user’s operating system.

AI Agent Clickbait and the Rise of Agentic Browsers

AI Agent Clickbait (AML.T0100) represents a new class of attack that does not exist in traditional cybersecurity models. It exploits the fact that agents increasingly browse the web, read documents, and interact with user interfaces on behalf of humans.

What AI Agent Clickbait Exploits in Agentic Browsing Behavior

In human browsing, clickbait relies on curiosity, urgency, or deception. In agentic browsing, the attack surface is even more subtle—and potentially more dangerous.

Agentic browsers and web-enabled agents are designed to:

  • Follow links
  • Read and summarize content
  • Click buttons
  • Download files
  • Authenticate into portals
  • Execute workflows based on what they “see”

How Adversaries Manipulate Interfaces and Content for AI Agents

As such, attackers can craft web pages, documents, or UI elements specifically optimized to manipulate machine decision-making, not human judgment. Examples include:

  • Hidden instructions embedded in HTML or metadata
  • Malicious links framed as required next steps
  • UI elements that trigger tool invocation
  • Content designed to override agent goals or context

Hypothetical example: An enterprise deploys an internal agentic browser to automate vendor research for procurement teams. While reviewing supplier documentation, the agent encounters a web page containing hidden instructions embedded in page metadata that appear task-relevant. The agent follows the instruction, downloads a file, and invokes an internal tool to process it. An unauthorized workflow is unintentionally executed that traditional web security controls never inspect, because no human interaction occurred.
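One of the simpler checks an agentic browsing pipeline could run before acting on a page is flagging text that is present in the DOM or metadata but effectively invisible to a human. The sketch below is a hypothetical illustration using BeautifulSoup (a third-party parser); the heuristics are assumptions and real pages would need far more robust handling.

```python
# Hypothetical sketch: flag "invisible" page content an agent might obey but a
# human would never see. Heuristics are illustrative assumptions, not a defense.
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

def hidden_instruction_candidates(html: str) -> list[str]:
    """Return snippets of page content that look like instructions hidden from humans."""
    soup = BeautifulSoup(html, "html.parser")
    findings = []

    # Meta tags carrying imperative text aimed at the agent rather than the user.
    for meta in soup.find_all("meta"):
        content = (meta.get("content") or "").strip()
        if content and "instruct" in content.lower():
            findings.append(f"meta: {content[:80]}")

    # Elements styled to be invisible but still readable by a parser.
    for el in soup.find_all(style=True):
        style = el["style"].replace(" ", "").lower()
        if "display:none" in style or "font-size:0" in style:
            text = el.get_text(strip=True)
            if text:
                findings.append(f"hidden element: {text[:80]}")

    return findings
```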

Why Agentic Browsers Increase Enterprise Exposure

Agents lack human intuition, skepticism, and situational awareness. They may comply with malicious instructions that appear logically consistent or task-aligned. This creates a powerful new entry point for attackers in enterprise environments where agentic browsers are used for procurement, research, ticket resolution, and operations.

Internal workflows are becoming increasingly reliant on agentic browsers embedded in enterprise copilots and automation platforms. As that happens, attacks that manipulate how agents interpret interfaces move from edge cases to systemic security risks.

What MITRE ATLAS AI Security Means for Cybersecurity Leaders

MITRE ATLAS exists to serve as a practical blueprint for defense. The addition of these agent-focused techniques signals a clear shift in how the industry must think about AI risk.

Security leaders should be asking the following questions. They are no longer theoretical exercises; they are a practical way to assess how AI agents operate, what they can access, and where unseen risk may already exist.

  • Where do AI agents have autonomous access today?
  • What tools, credentials, and data can they invoke?
  • How are agent decisions monitored at runtime?
  • What happens if an agent behaves unexpectedly, or maliciously?

Answering them requires visibility into agent behavior at runtime, including how agents use tools, credentials, and context as they make decisions inside enterprise systems.
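What that visibility looks like varies by platform. As a purely illustrative sketch, the wrapper below records each tool invocation an agent makes (tool name, arguments, identity, and outcome) so runtime behavior can be audited; the event fields are assumptions, and a real deployment would ship these records to a SIEM rather than a local logger.

```python
# Hypothetical sketch: wrap agent tool calls so every invocation is logged with
# the identity and arguments involved. Event fields are assumptions.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

def audited(tool_name: str, agent_id: str, acting_identity: str):
    """Decorator that emits an audit record for each call to the wrapped tool."""
    def wrapper(func):
        def inner(*args, **kwargs):
            record = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "agent_id": agent_id,
                "identity": acting_identity,
                "tool": tool_name,
                "kwargs": kwargs,
            }
            try:
                result = func(*args, **kwargs)
                record["outcome"] = "success"
                return result
            except Exception as exc:
                record["outcome"] = f"error: {exc}"
                raise
            finally:
                audit_log.info(json.dumps(record, default=str))
        return inner
    return wrapper
```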

Traditional controls like network monitoring, IAM, and application security remain necessary, but they are no longer sufficient on their own. Agentic systems require agent-aware security models: ones that understand goals, tools, context, and decision flow.

The Future of Agentic AI Security and MITRE ATLAS

Zenity’s contributions to MITRE ATLAS’ first 2026 update reflect a broader mission: ensuring that agentic AI security is treated as core AI security, not a niche concern. As organizations continue to deploy autonomous agents across the enterprise, frameworks like MITRE ATLAS must evolve, and the security community must evolve with them.

For security leaders, this means treating agent-aware AI security as a foundational capability, not an optional enhancement, as autonomous systems become inseparable from core enterprise operations.

By formalizing these threats, MITRE ATLAS gives defenders a shared language. By contributing to it, Zenity is helping ensure that language reflects the real risks facing modern enterprises.

The era of agentic AI is here. The era of securing it must follow just as quickly.

MITRE ATLAS AI Security FAQs

How does the 2026 MITRE ATLAS update change AI threat modeling priorities?

The 2026 update shifts focus from model-centric attacks to execution-layer exposure. Threat modeling now must account for autonomous workflow chaining, delegated authority persistence, and API-level orchestration risk. Security teams can no longer limit assessments to model inputs and outputs; they must evaluate how AI systems interact with enterprise infrastructure over time.

How can organizations operationalize MITRE ATLAS inside an existing security program?

Rather than treating ATLAS as a reference document, mature teams map ATLAS techniques to:

  • Existing detection telemetry
  • Identity governance controls
  • API monitoring systems
  • SaaS activity logs
  • Incident response playbooks

This creates traceability between AI-specific attack patterns and concrete defensive coverage gaps.
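One lightweight way to start is a plain mapping from each ATLAS technique to the telemetry and controls that cover it, so gaps become explicit. The sketch below uses technique IDs from this update, but the control and telemetry names are placeholders for an organization’s own detections and log sources.

```python
# Hypothetical sketch: a minimal technique-to-control map. Control and telemetry
# names are placeholders; an empty entry is an explicit coverage gap.
ATLAS_COVERAGE = {
    "AML.T0096": {  # AI Service API
        "telemetry": ["egress proxy logs", "API gateway logs"],
        "controls": ["AI endpoint allowlist"],
    },
    "AML.T0098": {  # AI Agent Tool Credential Harvesting
        "telemetry": ["SaaS audit logs"],
        "controls": ["secret scanning on tool output"],
    },
    "AML.T0101": {  # Data Destruction via AI Agent Tool Invocation
        "telemetry": [],
        "controls": [],
    },
}

def coverage_gaps(mapping: dict) -> list[str]:
    """Return technique IDs with no mapped telemetry or controls."""
    return [tid for tid, cov in mapping.items() if not cov["telemetry"] and not cov["controls"]]

print(coverage_gaps(ATLAS_COVERAGE))  # ['AML.T0101']
```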

What visibility gaps does MITRE ATLAS help uncover?

MITRE ATLAS often reveals blind spots in:

  • Agent-to-agent interactions
  • Cross-tenant API invocation
  • Embedded browser automation
  • Delegated service account usage
  • Autonomous task escalation paths

By aligning enterprise controls to ATLAS techniques, organizations can identify where AI activity bypasses traditional monitoring layers.

How should CISOs communicate MITRE ATLAS-aligned AI risk to boards?

Board-level discussions should focus on:

  • Scope of autonomous system deployment
  • Degree of delegated authority in production
  • Exposure concentration in high-privilege SaaS ecosystems
  • Alignment of AI governance controls to recognized frameworks

Referencing MITRE ATLAS provides a standardized vocabulary that reduces ambiguity when describing emerging AI risk.

Does MITRE ATLAS help quantify AI security maturity?

Indirectly, yes. Organizations can assess maturity by evaluating:

  • Percentage of relevant ATLAS techniques mapped to monitoring controls
  • Incident response readiness for AI-specific scenarios
  • Agent inventory completeness
  • Coverage of AI-related API telemetry

This enables structured benchmarking rather than ad-hoc AI risk assessments.
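For the first of those measures, a deliberately simple sketch is shown below: the share of relevant ATLAS techniques that have at least one mapped monitoring control. Which techniques count as “relevant,” and the example sets used here, are organizational assumptions rather than anything the framework dictates.

```python
# Hypothetical sketch: share of relevant ATLAS techniques with at least one
# mapped monitoring control. The input sets are placeholder assumptions.
def technique_coverage(relevant: set[str], mapped: set[str]) -> float:
    """Percentage of relevant techniques with at least one mapped control."""
    if not relevant:
        return 0.0
    return 100.0 * len(relevant & mapped) / len(relevant)

relevant_techniques = {"AML.T0096", "AML.T0098", "AML.T0099", "AML.T0100", "AML.T0101"}
mapped_to_controls = {"AML.T0096", "AML.T0098"}
print(f"{technique_coverage(relevant_techniques, mapped_to_controls):.0f}% coverage")  # 40% coverage
```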

How does MITRE ATLAS influence vendor evaluation for AI security platforms?

Security leaders increasingly assess whether vendors:

  • Map detections to ATLAS techniques
  • Provide visibility into agent behavior
  • Correlate identity, tool invocation, and data access
  • Support investigation workflows aligned with agentic attack patterns

Framework alignment is becoming a procurement signal of AI security capability depth.

What long-term trend does the 2026 update signal for enterprise AI defense?

The update reflects a broader shift: AI security is converging with identity security, API security, and SaaS governance. As AI systems embed deeper into operational workflows, defensive strategies must integrate across these domains rather than treating AI as an isolated technology category.

Why is framework alignment important as AI regulation evolves?

Regulators increasingly expect demonstrable governance over automated decision systems. Using established frameworks such as MITRE ATLAS supports defensible security posture documentation and demonstrates proactive risk modeling tied to industry-recognized standards.
