AI Agents Take DC: Inside Washington’s Developing Agentic Security Agenda

Kayla Underkoffler

AI Agents have become one of the most discussed emerging technologies in enterprise environments, and now, they’ve captured the attention of policymakers in Washington, DC. Over the past several weeks, a series of developments have brought AI Agents into the national spotlight, particularly through the lens of cybersecurity and regulatory preparedness.

This post summarizes the key updates from the nation’s capital, Zenity’s recent engagements in Washington, in partnership with OpenPolicy, and the steps security leaders should take today to prepare for what’s coming next.

AI Agents Are Gaining Policy Momentum

Between June 6 and June 12, Washington, DC, became a focal point for discussions on the cybersecurity implications of AI Agents. A new Executive Order, a series of high-level federal meetings, and a Homeland Security subcommittee hearing all emphasized the urgency of understanding and mitigating the risks posed by agentic AI systems.

This momentum reinforces what we at Zenity have long believed: while AI Agents hold immense potential for driving efficiency and innovation, their widespread deployment must be accompanied by robust security frameworks and lifecycle oversight. At present, those frameworks are lagging behind adoption, which introduces a gap that demands immediate attention.

Executive Order Signals a Shift Toward Technical Risk Mitigation

On June 6, 2025, the Trump Administration released the Sustaining Select Efforts to Strengthen the Nation’s Cybersecurity Executive Order (EO). While the EO covers a broad range of cybersecurity priorities, it is fundamentally aimed at sustaining and institutionalizing federal efforts to strengthen the nation’s cybersecurity posture: enhancing the detection and mitigation of cyber threats, integrating the management of AI-related software vulnerabilities, and updating sanctions authorities to deter malicious cyber activity targeting the United States and its allies. For artificial intelligence, the EO introduces two pivotal directives:

  1. Elevate attention to vulnerability management for AI systems.
  2. Shift focus from content moderation to technical risk mitigation.

Although AI Agents are not mentioned by name, this EO acknowledges the systemic risks introduced by AI technologies and reflects a growing awareness within the federal government that AI security must evolve beyond content governance and into software assurance, infrastructure hardening, and attack surface reduction.

Zenity in DC: Engaging on the Front Lines of AI Security Policy

On June 9–10, as a member of the OpenPolicy Ecosystem, Zenity participated in a series of strategic meetings, facilitated by OpenPolicy, on Capitol Hill and with federal agency stakeholders. Our conversations spanned a range of security topics, with a clear and growing interest in the unique risks posed by AI Agents.

Across discussions with majority and minority staff, as well as leaders from NIST and the Office of Management and Budget (OMB), several consistent themes emerged:

  • The need for secure-by-design AI practices that apply from agent build time through runtime.
  • An interest in real-world examples of agent-based vulnerabilities and threat modeling approaches.
  • A strong desire for technical guardrails to support secure innovation at scale.

This was Zenity’s second policy-focused trip to DC with the OpenPolicy Coalition, and the difference was clear: the conversation is maturing rapidly from conceptual concerns about generative AI to concrete strategies for agentic AI security.

Homeland Security Hearing: AI Agent Security on the Record

On June 12, the House Committee on Homeland Security held a subcommittee hearing focused on AI security, including specific discussions about AI Agents. The hearing explored both the transformative potential and the critical risks of agentic systems.

One of the most resonant moments came from Jonathan Dambrot, CEO of Cranium AI, who stated:

“Security must be embedded throughout the life cycle of AI agents—from the moment they’re conceived and built, to the moment they’re deployed and run.”

Additionally, in response to a question from Chairman Garbarino about Microsoft’s Copilot Studio and secure agent development, Mr. David Faehl, Microsoft’s Federal Security Chief Technology Officer (CTO), underscored the importance of user-centric guardrails that enable secure configurations by default. This reflects a growing industry consensus that security must be accessible, not just enforceable, for everyone, including the new class of builders made up of business users.

What This Means for the Regulatory Landscape

While these recent developments signal progress, there is still a lack of clarity in existing AI security frameworks, especially when it comes to agentic systems. Current regulatory and standards guidance does not directly address how organizations should secure AI Agents throughout their lifecycle.

That, however, is beginning to change. The Executive Order calls for a review and potential update of two foundational NIST publications:

  • NIST SP 800-53, which outlines security and privacy controls for federal systems, and
  • NIST SP 800-218, the Secure Software Development Framework (SSDF), which applies broadly to organizations building and deploying software.

Additionally, the U.S. Executive Branch is expected to release an AI Action Plan in mid-to-late summer, which will likely include new guidance on AI governance, risk management, and security controls. These updates will help close the current standards gap, but security leaders cannot afford to wait.

What Organizations Should Do Now

The pace of AI adoption means that most enterprises are already experimenting with agents, often without comprehensive security practices in place. While future standards will help, organizations must take proactive steps today to reduce risk and prepare for eventual compliance.

Here’s where to start:

  • Define and document approved AI Agent use cases. This includes specifying the data they can access, their permissions, and intended business outcomes.
  • Build an inventory pipeline for agents. Ensure that every agent created in your environment is automatically logged with details about its function, model, and integrations (see the sketch after this list).
  • Establish a secure-by-default baseline that accounts for all the build processes and technology in your environment (whether “off the shelf” or custom built). Provide developers and business users with templates and configuration guidelines that minimize security missteps.
  • Instrument monitoring and observability. Implement runtime logging, anomaly detection, and integration with your existing SIEM and incident response workflows.
  • Incorporate AI Agents into insider threat programs. Treat them as digital employees with access control policies, privilege boundaries, and behavioral monitoring.
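
To make the inventory point concrete, here is a minimal sketch in Python of what an agent inventory record and logging step could look like. The field names, the JSON Lines file, and the example agent are illustrative assumptions, not a prescribed schema or any specific vendor's format.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

# Hypothetical inventory record for an AI Agent; the fields are illustrative only.
@dataclass
class AgentRecord:
    agent_id: str                 # unique identifier in your environment
    owner: str                    # accountable human or team
    purpose: str                  # approved business use case
    model: str                    # underlying model or agent platform
    integrations: list[str] = field(default_factory=list)  # connected systems
    data_scopes: list[str] = field(default_factory=list)   # data the agent may access
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_agent(record: AgentRecord, inventory_path: str = "agent_inventory.jsonl") -> None:
    """Append a newly created or newly discovered agent to a JSON Lines inventory."""
    with open(inventory_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

if __name__ == "__main__":
    log_agent(
        AgentRecord(
            agent_id="hr-onboarding-copilot-001",
            owner="people-ops@example.com",
            purpose="Answer new-hire policy questions",
            model="gpt-4o",
            integrations=["SharePoint", "ServiceNow"],
            data_scopes=["HR policy documents"],
        )
    )
```

In most environments, the natural next step would be feeding these records into an existing CMDB or asset inventory rather than a flat file, so that agents show up alongside the rest of your attack surface.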

Security strategies must assume that not every risk can be stopped at the perimeter. Defense-in-depth, with layered controls across architecture, data, and identity, is essential for mitigating threats tied to autonomous, intelligent systems.
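
One way to make that defense-in-depth concrete at the observability layer is to emit a structured event for every agent action and flag anything outside an agent's approved scope before it reaches your SIEM. The sketch below is illustrative only; the event fields, the allow list, and the example action are assumptions rather than a particular platform's schema.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative runtime event logger for agent activity; ship these events to your SIEM.
logger = logging.getLogger("agent_runtime")
logging.basicConfig(level=logging.INFO, format="%(message)s")

# Hypothetical per-agent allow list of approved actions.
ALLOWED_ACTIONS = {"read_document", "send_summary_email"}

def record_agent_event(agent_id: str, action: str, target: str) -> None:
    """Emit a structured, timestamped event for every agent action."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "target": target,
        "out_of_policy": action not in ALLOWED_ACTIONS,  # flag for anomaly triage
    }
    logger.info(json.dumps(event))

# Example: an action outside the allow list is logged with out_of_policy set to true.
record_agent_event("hr-onboarding-copilot-001", "export_database", "payroll_db")
```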

Conclusion

AI Agents are not a hypothetical risk; they’re already shaping how work gets done. As Washington begins laying the foundation for formal oversight, security leaders must act with urgency to secure these systems before policy mandates arrive.

By embedding secure-by-design practices today, security leaders have the opportunity to stay ahead of the regulatory curve, not just complying with future governance from Washington, DC, but helping to inform it. Acting now means reducing risk, enabling safe innovation, and positioning your organization as a model for responsible AI adoption.

The agents are coming to Washington, but those who move early won’t just meet the moment; they’ll lead it.
