Navigating AI Agent Security Amid Evolving Regulations

The landscape of artificial intelligence (AI) governance is undergoing significant change, particularly around the rise of AI Agents: autonomous systems that can independently make decisions and execute tasks. Recently, a key executive order on AI safety was rescinded; it had required developers to share safety test results with federal agencies and mandated comprehensive assessments of AI-related risks. This policy shift moves responsibility for AI safety from federal oversight to individual organizations. According to a recent Zenity report, organizations now use an average of seven different AI and low-code platforms, resulting in roughly 80,000 deployed apps per enterprise, compared with about 500 SaaS apps in similar organizations.

At the same time, the European Union is moving in the opposite direction, advancing the EU AI Act, which aims to regulate AI technologies comprehensively. The act categorizes AI applications by risk level and imposes stringent requirements on high-risk systems. Organizations operating within or collaborating with EU entities must stay informed about these developments to ensure compliance.

Organizations are now tasked not only with securing these agents but also with ensuring compliance in an evolving regulatory landscape. With legislation shifting rapidly across regions, determining where to begin can be challenging. In this blog, we aim to provide clarity and actionable insights to help navigate the complexities of securing AI Agents.

Implications for Organizations

With the absence of federal mandates in the U.S. and the introduction of rigorous regulations in the EU, organizations must proactively address AI Agent security and compliance on their own. Relying solely on external guidelines is no longer sufficient. One of the key risks outlined in Zenity’s report is that business users, often with little coding experience, can create AI Agents and applications without adequate security guardrails. This opens the door to misconfigurations that can compromise sensitive data or enterprise systems. For instance, Zenity Labs uncovered how security mechanisms in tools like Microsoft 365 Copilot can be bypassed to create malicious hyperlinks, showing how even minor gaps in AI Agent security can lead to exploitable vulnerabilities. Rather than blocking these tools, enterprises should implement robust internal processes to identify and mitigate the risks specific to AI Agents.

Proactive Measures for AI Agent Security

  1. Comprehensive Risk Assessment: Regularly evaluate AI Agents as they are introduced to the enterprise to identify potential vulnerabilities. Understanding the specific risks associated with your AI Agents is crucial for implementing effective safeguards. Zenity Labs has recently identified real-world attack paths on AI systems centered around prompt injection, reconnaissance, targeted RAG poisoning, and more. They have also identified risks directly linked to the citizen development motion through the OWASP Top 10 for low-code/no-code platforms. Common misconfigurations when building AI Agents include authorization misuse, authentication failures, and improper data handling. Addressing these risks early is critical to preventing larger security incidents and remaining compliant.
  2. Continuous Monitoring: Implement mechanisms to detect and respond to anomalies in AI Agent behavior promptly, so that deviations are addressed before they escalate into significant issues. Recent discoveries, such as Zenity Labs’ research into Remote Copilot Execution (RCE) in AI agents like Microsoft 365 Copilot, highlight the importance of robust monitoring to identify and mitigate potential exploitation vectors. They also underscore how critical comprehensive audit trails are to remaining compliant with current or future legislation; a minimal sketch of this kind of audit record appears after this list.
  3. Employee Training: Educate your workforce about AI Agent security best practices. An informed team is better equipped to recognize and mitigate potential threats arising from AI integration. By involving all teams, not just developers and IT, everyone can see and harness the power of AI in a way that aligns with corporate policy and, where relevant, regulatory requirements.
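
To make the monitoring and audit-trail point concrete, here is a minimal Python sketch of what recording and screening agent tool calls might look like. The tool names, agent identifiers, and thresholds are illustrative assumptions rather than part of any specific platform; a real deployment would consume events from your agent platform’s own event stream instead of wrapping calls by hand.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("agent-audit")

# Hypothetical allow-list of tools this agent is approved to call.
APPROVED_TOOLS = {"search_knowledge_base", "summarize_document"}
# Hypothetical payload cap, used as a crude data-exfiltration signal.
MAX_PAYLOAD_CHARS = 10_000


def audited_tool_call(agent_id: str, tool_name: str, payload: dict) -> bool:
    """Record an agent tool call as a structured audit event and flag simple policy violations."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "tool": tool_name,
        "payload_chars": len(json.dumps(payload)),
    }

    # Calls to unapproved tools may indicate misconfiguration or
    # prompt-injection-driven behavior; oversized payloads may indicate
    # data leaving the environment.
    if tool_name not in APPROVED_TOOLS:
        record["alert"] = "unapproved_tool"
    elif record["payload_chars"] > MAX_PAYLOAD_CHARS:
        record["alert"] = "oversized_payload"

    audit_log.info(json.dumps(record))
    return "alert" not in record


if __name__ == "__main__":
    audited_tool_call("hr-helper-01", "search_knowledge_base", {"query": "vacation policy"})
    audited_tool_call("hr-helper-01", "send_external_email", {"to": "attacker@example.com"})
```

Even a simple, append-only log like this gives security teams something to alert on and auditors something to review; richer approaches would correlate events per agent over time and baseline normal behavior.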

Leveraging Zenity’s Security Assessment Hub

Because the security and compliance landscape around a rapidly evolving technology like AI changes constantly, Zenity created the Security Assessment Hub. The Hub is home to 10 free, open-source tools that help security, trust and safety, AI, and data science teams identify and understand immediate risks:

  • Identify Risks: Surface critical risks and threats within AI Agents and low-code/no-code platforms, providing valuable insights into potential vulnerabilities. For example, Copilot Hunter, launched at Black Hat 2024, illuminates any agents or bots that are publicly accessible to the internet, which can dramatically increase an organization’s attack surface; a minimal sketch of this kind of exposure check appears after this list.
  • Support Governance Efforts: The insights provided by the tools in the Hub can inform policies and practices to strengthen oversight and decision-making specific to AI agents.
  • Assist Compliance Initiatives: Highlight risks that could lead to non-compliance, offering a foundational step toward addressing regulatory requirements for AI Agents.
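
To illustrate what an exposure check of this kind does, here is a short Python sketch that probes a list of agent endpoints without credentials and flags any that respond. The endpoint URLs and request shape are hypothetical placeholders, not Copilot Hunter’s actual implementation; in practice the inventory of endpoints would come from your platforms’ APIs or an asset inventory.

```python
import requests

# Hypothetical inventory of agent/bot endpoints to test; replace with the
# endpoints exported from your low-code/agent platforms.
AGENT_ENDPOINTS = [
    "https://example.com/bots/hr-helper/api/messages",
    "https://example.com/bots/it-support/api/messages",
]


def check_unauthenticated_access(url: str, timeout: float = 5.0) -> str:
    """Send an unauthenticated request and classify the endpoint's exposure."""
    try:
        resp = requests.post(url, json={"type": "message", "text": "ping"}, timeout=timeout)
    except requests.RequestException as exc:
        return f"unreachable ({exc.__class__.__name__})"
    # 401/403 suggest authentication is enforced; a 2xx response means the
    # endpoint accepts anonymous traffic and should be reviewed.
    if resp.status_code in (401, 403):
        return "auth required"
    if 200 <= resp.status_code < 300:
        return "PUBLICLY ACCESSIBLE - review required"
    return f"unexpected status {resp.status_code}"


if __name__ == "__main__":
    for endpoint in AGENT_ENDPOINTS:
        print(endpoint, "->", check_unauthenticated_access(endpoint))
```

Running a check like this on a schedule, and comparing results over time, turns a one-off audit into continuous visibility into which agents are reachable from outside the organization.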

As regulations around AI evolve, organizations must take a proactive stance on securing AI Agents. By conducting thorough risk assessments, continuously monitoring AI Agents, and educating employees, enterprises can safeguard their operations. Tools like Zenity’s Security Assessment Hub empower organizations to identify vulnerabilities and take the first critical step toward building secure and compliant AI Agent systems.
