AI Agents have become one of the most discussed emerging technologies in enterprise environments, and now, they’ve captured the attention of policymakers in Washington, DC. Over the past several weeks, a series of developments have brought AI Agents into the national spotlight, particularly through the lens of cybersecurity and regulatory preparedness.
This post summarizes the key updates from the nation’s capital, Zenity’s recent engagements in Washington, in partnership with OpenPolicy, and the steps security leaders should take today to prepare for what’s coming next.
Between June 6 and June 12, Washington, DC became a focal point for discussions on the cybersecurity implications of AI Agents. A new Executive Order, a series of high-level federal meetings, and a Homeland Security subcommittee hearing all emphasized the urgency of understanding and mitigating risks posed by agentic AI systems.
This momentum reinforces what we at Zenity have long believed: while AI Agents hold immense potential for driving efficiency and innovation, their widespread deployment must be accompanied by robust security frameworks and lifecycle oversight. At present, those frameworks are lagging behind adoption, which introduces a gap that demands immediate attention.
On June 6, 2025, the Trump Administration released the Sustaining Select Efforts to Strengthen the Nation’s Cybersecurity Executive Order (EO). While the EO covers a broad range of cybersecurity priorities, it is fundamentally aimed at sustaining and institutionalizing federal efforts to strengthen the nation’s cybersecurity posture: enhancing the detection and mitigation of cyber threats, integrating the management of AI-related software vulnerabilities, and updating sanctions authorities to deter malicious cyber activity targeting the United States and its allies. For artificial intelligence, the EO introduces two pivotal directives:
Although AI Agents are not mentioned by name, this EO acknowledges the systemic risks introduced by AI technologies and reflects a growing awareness within the federal government that AI security must evolve beyond content governance and into software assurance, infrastructure hardening, and attack surface reduction.
On June 9–10, as a member of the OpenPolicy Ecosystem, Zenity participated in a series of strategic meetings on Capitol Hill and with federal agency stakeholders, facilitated by OpenPolicy. Our conversations spanned a range of security topics, with a clear and growing interest in the unique risks posed by AI Agents.
Across discussions with majority and minority staff, as well as leaders from NIST and the Office of Management and Budget (OMB), several consistent themes emerged:
This was Zenity’s second policy-focused trip to DC with the OpenPolicy Coalition, and the difference was clear: the conversation is maturing rapidly from conceptual concerns about generative AI to concrete strategies for agentic AI security.
On June 12, the House Committee on Homeland Security held a subcommittee hearing focused on AI security, including specific discussions about AI Agents. The hearing explored both the transformative potential and the critical risks of agentic systems.
One of the most resonant moments came from Jonathan Dambrot, CEO of Cranium AI, who stated:
“Security must be embedded throughout the life cycle of AI agents—from the moment they’re conceived and built, to the moment they’re deployed and run.”
Additionally, in response to a question from Chairman Garbarino about Microsoft’s Copilot Studio and secure agent development, Mr. David Faehl, Microsoft’s Federal Security Chief Technology Officer (CTO), underscored the importance of user-centric guardrails that enable secure configurations by default. This reflects a growing industry consensus that security must be accessible, not just enforceable, for everyone, including the new class of builders made up of business users.
While these recent developments signal progress, there is still a lack of clarity in existing AI security frameworks, especially when it comes to agentic systems. Current regulatory and standards guidance does not directly address how organizations should secure AI Agents throughout their lifecycle.
That, however, is beginning to change. The Executive Order calls for a review and potential update of two foundational NIST publications:
Additionally, the U.S. Executive Branch is expected to release an AI Action Plan in mid-to-late summer, which will likely include new guidance on AI governance, risk management, and security controls. These updates will help close the current standards gap, but security leaders cannot afford to wait.
The pace of AI adoption means that most enterprises are already experimenting with agents, often without comprehensive security practices in place. While future standards will help, organizations must take proactive steps today to reduce risk and prepare for eventual compliance.
Security strategies must assume that not every risk can be stopped at the perimeter. Defense-in-depth, with layered controls across architecture, data, and identity, is essential for mitigating threats tied to autonomous, intelligent systems.
AI Agents are not a hypothetical risk; they’re already shaping how work gets done. As Washington begins laying the foundation for formal oversight, security leaders must act with urgency to secure these systems before policy mandates arrive.
By embedding secure-by-design practices today, security leaders have the opportunity to stay ahead of the regulatory curve, not just complying with future governance from Washington, DC, but helping to inform it. Acting now means reducing risk, enabling safe innovation, and positioning your organization as a model for responsible AI adoption.
The agents are coming to Washington, but those who move early won’t just meet the moment; they’ll lead it.