RSA and DC Dispatches: Agentic AI Security Is the Story, Government Policy Needs to Catch Up

Key Takeaways:
- Agentic AI security has emerged as the defining cybersecurity challenge at RSA 2026 and in global policy discussions.
- There is a critical gap in comprehensive government frameworks covering the full AI agent lifecycle across environments.
- Policy lag is riskier than ever, as agentic AI is rapidly embedded in critical infrastructure like healthcare, finance, and defense.
- Current efforts (US NIST, UK, Singapore, Spain, etc.) are fragmented and exploratory, lacking unified strategy.
- Governments must deliver holistic, lifecycle-based policies and scale industry best practices through coordinated public-private frameworks.
Fresh off two weeks of back-to-back meetings in Washington, DC, and on the floor/in the wings of the RSA Conference, one theme echoed through nearly every conversation I had with senior government officials and public policy leaders from global technology companies: agentic AI security is the defining emerging security challenge of this moment — and policy is not keeping pace.
What the Meetings Revealed
Over the course of these engagements, I met with stakeholders spanning government agencies, multilateral bodies, and some of the most consequential technology companies shaping the future of AI deployment. The conversations were candid, forward-looking, and, at times, sobering.
The central takeaway: There is a significant and growing gap in overarching agentic AI security policy frameworks — frameworks that holistically account for managing agentic risks across cloud, endpoint, and homegrown environments. Individual initiatives exist, but no government has yet produced a comprehensive, lifecycle-spanning approach to agentic AI security risk management.
RSA 2026: Agentic AI Security Takes Center Stage
If there was any doubt that agentic AI security has arrived as a mainstream concern, RSA Conference put that to rest. AI security was the core emerging theme of the show, in keynotes, in the Innovation Sandbox, in vendor booths, and in the hotel lobby/coffee shop conversations that often matter most. The industry has clearly recognized that autonomous, multi-agent AI systems introduce a qualitatively different threat surface than traditional software or even earlier generations of AI tooling.
And yet, despite the urgency radiating from the private sector, governments remain largely at a loss for how to address these risks systematically.
There are several reasons for this policy latency. First, policy has always lagged behind technical innovation. Governments are hesitant to regulate a technology whose impact has not yet been fully realized or researched. For some, the EU AI Act serves as a cautionary tale about the drawbacks of regulating toward an ambiguous objective. That caution is a reasonable posture toward hard, restrictive regulation, which would carry real market implications.
However, it is important that governments take a more forward-leaning approach to soft policy: setting clear government objectives for agentic AI security. In this technology space, the window between deployment and potential harm has compressed dramatically, which means the cost of policy lag is no longer linear; it can be exponential.
Another reason governments have not yet taken up the agentic AI security banner is their uncertainty about government use cases for this technology. How will ministries and agencies actually use AI agents, and what is their responsibility vis-à-vis the critical infrastructure providers they oversee? Finally, how does agentic AI deviate from current government approaches to mitigating risks posed by new technologies? Again, these are fair questions. I would argue that, in the case of agentic AI security, critical infrastructure dependency is emerging before governance. The internet took decades to become critical infrastructure. The cloud took years. Agentic AI is being embedded into critical systems (healthcare, finance, defense, energy, etc.) on a timeline of months. The dependency curve is arriving ahead of the governance curve by a wider margin than we've seen before, and the consequences of a systemic failure in those sectors are categorically more severe than, say, a social media content moderation failure.
Perhaps most importantly, the norms, architectures, and standards being established today will lock in the trajectory of agentic AI security for years to come. With the internet, we can point to moments like the lack of authentication in early TCP/IP and the permissive defaults of early browsers, where foundational insecurity got baked in and proved extraordinarily difficult to remediate. We are at that moment for agentic AI right now. The argument for policy isn't that lag is unusual; it's that this particular lag, at this particular moment, will be uniquely hard to recover from.
Where Governments Are — and Aren't
To be fair, some governments are actively exploring best practices, and those efforts deserve recognition:
- Singapore has moved with characteristic deliberateness, producing a Model AI Governance Framework for Agentic AI through IMDA that begins to define the contours of this problem space.
- The United States has initiated a NIST CAISI Agentic AI Security Request for Information, signaling that federal agencies are at least beginning to ask the right questions. NIST is also currently soliciting comments on Applying Identity Standards and Best Practices to AI Agents, due April 2nd, 2026.
- The United Kingdom, through its AI Security Infrastructure Call for Information, is in an earlier phase, working to better understand the use cases involved in deploying and subsequently securing these technologies before prescribing solutions.
- The Spanish Data Protection Agency released a white paper on Agentic Artificial Intelligence from the Perspective of Data Protection, taking into account common characteristics of AI agents, such as autonomy, environmental perception, action-taking, proactivity, planning and reasoning, and memory and adaptability.
These are meaningful steps. But they are either ad hoc, addressing only a limited slice of the overall risk landscape posed by agentic AI, or exploratory in nature, seeking to better understand the scope of the challenges facing governments and critical infrastructure.
The Fragmentation Problem
This fragmentation mirrors a pattern playing out in the private sector as well. Many companies are pursuing agentic AI security capabilities, but these efforts tend to be siloed: focused on discrete portions of the agent lifecycle (operation, deployment, testing and evaluation, monitoring, governance) or scoped to particular verticals, platforms, or environments (cloud/SaaS, large language models, edge/endpoint).
That's not a criticism of any individual effort. Specialized, deep work in each of these areas is genuinely valuable. In fact, we need more rigorous research and best practice development in areas that remain underexplored, including:
- Secure software development for agents. How do you build secure-by-design agentic systems? Several existing government efforts prioritize security in software development (NIST SSDF, EU Cyber Resilience Act, UK SbD approach, Singapore’s Infrastructure Protection Act, etc.), but none of them provides specific guidance on implementing security when developing AI agents.
- Agent-to-agent discovery. How do agents find, authenticate, and establish trust with one another at scale? This is a technical challenge that will require both cross-industry and multilateral engagement to address.
- Dynamic policy evolution. How do governance frameworks adapt in real time as agent capabilities and behaviors change? Agentic AI introduces intent and autonomy into the technological deployment equation, neither of which can be addressed through static security mechanisms.
The problem is not the depth of individual efforts; it's the absence of a connective framework that ties them together into a coherent, actionable whole.
What Government Policy Needs to Do
Governments need to develop policy that accomplishes two things simultaneously:
First, provide a high-level, authoritative framework outlining the full scope of agentic AI security risk management that spans the entire agent lifecycle, cuts across deployment environments, and is legible to both technical practitioners and senior decision-makers.
Second, actively surface and elevate industry best practices in order to accelerate adoption. The private sector is generating real knowledge about what works. Policy can serve as the transmission mechanism that scales those insights.
The right policy architecture likely involves three mutually reinforcing components:
- Statutory framework: establishing baseline requirements and authority, potentially beginning as an NDAA amendment and maturing into durable executive branch policy, with analogous approaches adopted by other leading government ministries/departments.
- Technical best practices: developed and maintained by bodies like NIST, DSIT, CSA Singapore, and ENISA, kept current with the pace of technological change (to the extent possible).
- Industry coordination: formal and informal mechanisms for public-private collaboration, threat intelligence sharing, and iterative feedback between practitioners and policymakers.
The Bottom Line
RSA Conference 2026 made clear that agentic AI security is no longer a niche concern for a handful of forward-leaning researchers. It is the central frontier of AI risk, and the private sector knows it.
Government policy needs to take notice. Not with point solutions or one-off inquiries, but with the kind of durable, comprehensive frameworks that can provide structure to an ecosystem that is currently building critical infrastructure without a shared map.
The conversations I had this past week gave me reason for cautious optimism. The awareness is there. The urgency is building. Now we need the policy ecosystem to match.