Remote Copilot Execution

Worried About Security Implications of Microsoft Copilot?

You’ve come to the right place to prevent Remote Copilot Execution (RCE) and promptware

Unique Challenges

Imagine a world where SQL injection attacks can be performed in natural language AND on the most powerful apps we’ve ever seen. No account compromise required.

As demonstrated at BlackHat 2024 by our CTO Michael Bargury, hackers can easily perform RAG poisoning and indirect prompt injection leading to remote code execution attacks to fully control Microsoft Copilot and other AI apps. In the race to get AI in the hands of all business users, security teams are left with four distinct challenges:

AI is Extremely Powerful

Whether its end users interacting with an enterprise copilot like Copilot for M365, or building their own on Copilot Studio, AI gains sweeping access to your data on your behalf, to be used at its discretion

Everyone Uses AI

Nearly every large enterprise already leverages Copilot, giving business users access to corporate data, and over 10,000 organizations use Copilot Studio to enable anyone to build their own copilots

Controls are Irrelevant

AppSec tools focused on code scanning can’t help address the new attack surface that AI introduces, and least privilege and data classification controls are easily circumvented

Prompt Injection

When bad actors interact with copilots they can trick it into giving up control… and data with malicious prompts leading to remote copilot execution and promptware

Scenarios

What Can Possibly Go Wrong?

RAG poisoning and RCE attacks allow for bad actors to remotely control copilots. Given copilot’s vast access, this effectively means compromising employee accounts via something as simple as sending them an email. These attacks can poison datasets, intercept prompts and gain access to huge amounts of sensitive data and identities. We see this playing out in the below scenarios:

Scenario 1

Automate spear phishing to find Copilot collaborators, find interactions, and craft responses to get someone to click a malicious link

Scenario 2

An external hacker gets an RCE on the copilot interaction of a user in the finance department right before an earnings call. 

Scenario 3

An external hacker gets an RCE to have Copilot provide users with the attacker’s malicious phishing site, when the user asks for navigation guidance

In these attack scenarios, bad actors can get Microsoft Copilot to do whatever they want, placing your data, identities, and enterprise at risk. 

The Solution

What Can Be Done?

Security teams need to focus their energy on implementing an AI Trust Layer to prevent RCEs and data poisoning by building an AppSec program for all AI apps and copilots, including capabilities to:

Identify any copilots that are over-exposed on the public internet that can be ripe for prompt injection

Detect jailbreak and prompt injection attacks in real-time

Detect RAG poisoning and neutralize hidden instructions before they can impact copilot conversations

Thwart RCE attacks, identify malicious AI agents, supply chain compromise, and data exfiltration

Want to assess your risk?

If you’re looking to kickstart your enterprise copilot security program, schedule a free assessment now!