Bridging Innovation and Policy: Zenity’s Strategic Discussions in Washington DC

Representing Zenity in Washington DC
I recently had the fantastic opportunity to represent Zenity in a round of strategic discussions with legislative and policy leaders in Washington DC as part of the OpenPolicy Coalition. Zenity recently partnered with OpenPolicy and joined the ecosystem in an effort to bridge the gap between bleeding-edge innovation and traditional policy.
A Personal Journey to the Capital
As a west coast girl born and raised, the first opportunity I had to go to Washington, DC came through a youth organization when I was in middle school. Following that trip, I didn’t return to DC until I joined the Marine Corps and ended up being stationed in Quantico, VA. So, it was a special opportunity for me to return to the capital in my civilian career to contribute to the critical topics that drive National Security.
Engaging with Congress on Critical Topics
During a two-day “fly-in” with the OpenPolicy team, we met with 11 different Members of Congress from both the House of Representatives and the Senate, as well as their staff, to discuss critical matters surrounding topics like AI Security, Critical Infrastructure, and Post Quantum Cryptography. Each discussion started with an overview of OpenPolicy and the representatives from each company. The team could cover a vast range of topics, so the first goal of each conversation was to understand what our audience cared about most. Topics of interest from the Member teams included AI security and the emerging threats presented by rapidly evolving technologies such as AI Agents and other Non-Human Identities, the need to adopt Post Quantum Cryptography, and Operational Technology and Critical Infrastructure security needs.
Building Core Security Baselines for AI
When it came to AI Security, the tone of the conversations generally revolved around building core security baselines while simultaneously enabling adoption of AI capabilities. Our shared perspective was that without basic security guidelines in place, adoption would be slowed, if not completely blocked. This is consistent with what we see in the private sector as well. As we discussed the basics, we kept returning to the foundations of security that are generally missing from the existing policy landscape, down to the level of maintaining an inventory of high-value assets. Most organizations still struggle with just that step: creating and maintaining an inventory. So, while we were discussing bleeding-edge technology, the truth is that security without an understanding of the attack surface leaves unimaginable gaps, and that is as basic as it gets. We encouraged policymakers to consider implementing a baseline security standard for AI capabilities being adopted by the US Government.
Addressing AI Agents and Insider Threats
With AI Agents top of mind for many folks I speak with in the security community, I was glad to serve as a resource in these discussions on the security of agentic systems, given their nuances and specific risks. AI Agents introduce new threats into the environment, and we cannot forget or exclude agents built through citizen development platforms, which enable both professional and citizen developers to build and customize agents. Just as we have established secure development standards for code-based development, this democratization of building AI Agents, which not only have sweeping access to internal data but can also reason and act based on their own guardrails, deserves attention through security standards and policy. As this is a relatively new topic for policymakers (as it is in the private sector), there is likely to be a need for further education on the landscape and the threats that go hand in hand with it.
Another critical topic that emerged in multiple discussions was Insider Threat. While Insider Threat programs are well established within large enterprises, they are built around human users. The introduction of Agentic AI adds a new dimension to this paradigm. To assess the insider threat risk posed by AI Agents, they must be included in the established Insider Threat programs that exist today. While AI Agents are designed to behave like humans (and in fact often assume the identity of a human user), there are unique factors that make it critical not only to include them in established Insider Threat programs, but to develop models that encapsulate the essence of what agentic systems do. This is certainly a call to action for policymakers and standards bodies alike.
Bipartisan Efforts and Future Collaborations
Throughout every conversation we had over the two-day period, the common theme of bipartisan effort was ever-present. This was a positive note for me personally, especially because those bipartisan efforts are rooted in the agreement that both the private and public sectors have much to gain from embracing AI. The conversations we had largely centered on the theme of security working within the organization to enable AI rather than being seen as a blocker, which is also a nice brand refresh for how security teams are perceived.
I left encouraged that there will be no rest in the pursuit of National Security and in ensuring that the United States presents a strong front on cybersecurity and emerging threats. Contributing as a technical resource for policymakers and standards bodies alike is a critical effort for Zenity, and we look forward to our partnership with the OpenPolicy Coalition to make a positive impact on the future of cybersecurity standards around the globe.