0Click Attacks: When TTPs Resurface Across Platforms

Portrait of Greg Zemlin
Greg Zemlin
Cover Image

If there’s one lesson security teams should take from recent disclosures, it’s this: AI agent attack techniques don’t disappear - they resurface, across vendors and platforms, with only small variations. What researchers called out months ago is showing up again, now in Salesforce as the ForcedLeak vulnerability.

The Image Rendering Technique

On June 18, 2023, Johann Rehberger demonstrated how a Bing agent could be compromised via prompt injection, enabling data exfiltration through image rendering. It has been more than two years, and the technique remains relevant today.

Introduction to the 0click Compromise

At Black Hat USA 2024, Zenity CTO Michael Bargury showed how Microsoft 365 Copilot could be steered to exfiltrate sensitive data, phish users, and move laterally inside an environment using everyday interactions and prompt manipulation. The live research showed how easy it is to turn a routine assistant use into a 0click compromise.

EchoLeak: 0-click Exfiltration Against M365 Copilot

On June 11, 2025, Aim Labs published a vulnerability called EchoLeak against Microsoft 365 Copilot. The attack showed how a crafted email could poison Copilot’s context and exfiltrate data by embedding sensitive content into a markdown image URL. By leveraging a trusted Microsoft domain, the payload bypassed Copilot’s Content Security Policy, and when rendered, the client automatically sent a request to the attacker-controlled endpoint - leaking data with zero user interaction.

For a deeper dive into the TTPs demonstrated in EchoLeak, see the analysis in: A Reminder That AI Agent Risks Are Here to Stay.

Prompt Mines Against Salesforce Einstein

In August this year at Black Hat USA, Zenity revealed several AI agent vulnerabilities spanning multiple platforms. Among them was Prompt Mines, an attack targeting Salesforce: by exploiting unauthenticated web-forms like Web-to-Case or Email-to-Case, attackers can plant hidden instructions in CRM records. These instructions stay silent until a user query brings them into Einstein’s context, where they can trigger record corruption or manipulation.

ForcedLeak: Same Techniques Target Salesforce Agentforce

On September 25, 2025, Noma Security published ForcedLeak, an exploit chain targeting Salesforce Agentforce. Researcher Sasi Levi, from Noma Security, demonstrated how a malicious payload could be injected through Web-to-Lead, remain hidden in CRM records, and then execute when a user asked Agentforce a related question, mirroring the trigger described in the Prompt Mines writeup. Once triggered, the agent exfiltrated sensitive information by embedding it into a markdown image URL. As in EchoLeak, the exfiltration relied on bypassing Content Security Policy and abusing trusted domains - but in this case, this was achieved by purchasing an expired trusted domain to capture the data.

The Recurring TTPs

Once again, we see the same TTPs used in ForcedLeak:

  • Retrieval Content Crafting: building a payload that is structured to manipulate Agentforce when later retrieved.
  • Acquire Infrastructure: purchasing an expired, previously trusted domain and setting up an endpoint to listen to incoming requests.
  • RAG Poisoning: submitting the crafted payload into Web-to-Lead records contaminated the CRM retrieval corpus.
  • LLM Prompt Injection: the stored payload contained indirect instructions that, when retrieved, caused Agentforce to package and exfiltrate CRM data.
  • LLM Jailbreak: the injected instructions bypassed Agentforce guardrails, forcing the model to perform behaviors that exposed sensitive data.
  • Abuse Trusted Sites: the exploit relied on purchasing an expired domain whitelisted by the Content Security Policy.
  • Image Rendering: the payload encoded CRM values into a markdown image URL so rendering triggered an automatic HTTP request that transmitted the data to the attacker’s domain.

The last two are especially critical, and will likely keep resurfacing as long as agents are allowed to render or call out to “safe” external resources.

Why this problem won’t go away

The uncomfortable truth is that these risks cannot be patched away completely. As long as AI agents mix untrusted input, retrieval systems, and resource rendering, attackers will keep combining those primitives into new exploits. That is why organizations need a dedicated security layer around agents rather than relying on vendor fixes alone. Monitor the specific TTPs in the AI Agents Attack Matrix and ensure your controls provide coverage for the relevant techniques at each stage of an attack. We invite the community to contribute new techniques and procedures to this open-source framework so the catalog stays current and our collective defenses grow stronger.

All Articles

Secure Your Agents

We’d love to chat with you about how your team can secure and govern AI Agents everywhere.

Get a Demo