Why Detection? Why Now? Key Takeaways from the NIST NCCoE Public COI Working Session

In April, I had the amazing opportunity to participate in a unique AI security event put on by the National Cybersecurity Center of Excellence (NCCoE). That event was all about getting the community together to discuss what a Cyber AI Profile should look like as an overlay to the NIST Cybersecurity Framework (CSF) 2.0.
As crazy as it sounds, conducting deep dives into the unique needs AI introduces for standards and frameworks is basically my jam. So, with the continuation of this effort, I was more than happy to join another round of conversation, this one focused on how the overlay could help practitioners secure AI system components within their organizations. And that's just what a group of 300+ folks did.
Two Themes Emerged
Detection is the Key
First, there was a clear uptick in the perceived importance of Detection as a core function of AI security, a marked difference from the first conversation held in April. For background, within the Cybersecurity Framework the Detect function includes the categories of Continuous Monitoring and Adverse Event Analysis, and is defined as: “Possible cybersecurity attacks and compromises are found and analyzed.” So why did Detection come up now, when it wasn’t seen as that important in April? My theory is that we have officially stepped into the realm of operational AI Agents. And since the purpose of the work session was to discuss how the profile should be used to secure AI system components, it stands to reason that AI Agents be taken into account.
Back in April, Agents were mentioned as part of the AI system overall; however, they weren’t prevalent and relatable enough to really emphasize the needs they uniquely present. Specifically, the need for detection when it comes to monitoring what an AI Agent is up to in a network has become critical, and at the same time, the actual detection methodologies are only just being developed. Monitoring the behavior and actions of AI Agents in the environments they roam requires different lenses and tools than other technologies, due to the vast web that agents can spin autonomously in order to achieve their goals. We will be better equipped to monitor, detect, and respond to AI Agent activity when we accept that Agents are more on par with the human users we secure than with traditional SaaS applications and technologies.
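To make that concrete, here is a minimal, illustrative sketch of what treating an Agent like a user could look like: baseline the actions an agent normally takes, then flag deviations, much as user behavior analytics does for people. The event names, log format, and threshold below are hypothetical assumptions for illustration, not a prescribed methodology.

from collections import Counter

# Hypothetical tool-call log for one agent, captured during a trusted
# observation window. Real telemetry would carry far more context.
baseline_events = [
    "search_docs", "search_docs", "summarize", "send_email",
    "search_docs", "summarize", "send_email", "search_docs",
]

# New activity to evaluate against the baseline.
new_events = ["search_docs", "export_database", "send_email"]

def build_baseline(events):
    """Relative frequency of each action in the observation window."""
    counts = Counter(events)
    total = sum(counts.values())
    return {action: n / total for action, n in counts.items()}

def flag_anomalies(events, baseline, min_freq=0.05):
    """Flag actions rarely or never seen during baselining."""
    return [e for e in events if baseline.get(e, 0.0) < min_freq]

baseline = build_baseline(baseline_events)
print(flag_anomalies(new_events, baseline))  # -> ['export_database']

The point isn’t the few lines of Python; it’s the framing. Agent activity becomes a behavioral profile to baseline and watch, the way we already treat human identities, rather than a static application to scan.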
Stakeholders Must Be Identified and Aligned
Another point where my opinion varied from some of the other attendees was within the Govern function. To paraphrase, the question was: which of the categories does AI introduce a significant change to? The Govern categories are: Organizational Context; Risk Management Strategy; Roles, Responsibilities, and Authorities; Policy; Oversight; and Cybersecurity Supply Chain Risk Management. From what we observe working with organizations today, Organizational Context actually requires much more attention when it comes to AI in the organization than meets the eye. But… why?
By CSF 2.0 definition, Organizational Context means: “The circumstances — mission, stakeholder expectations, dependencies, and legal, regulatory, and contractual requirements — surrounding the organization’s cybersecurity risk management decisions are understood.” What we see today is that AI requires corralling many stakeholders who may not have been closely aligned to risk strategy before. The implementation of AI is a top-down initiative, from the board and top-level executives down to the operator ranks. If the organization hasn’t come together to define the “circumstances” of its AI adoption and implementation strategy, there’s little to no chance of the rest of the story falling in line. Organizations need shared stakeholder alignment on AI use cases, on the projected ROI of implementing AI, and on the workforce adjustments needed to put all the required pieces in place. If these components aren’t aligned, how is the security leader in the organization supposed to build a risk strategy? Risk strategies built on generic technology implementations miss the key component of context.
So, while most attendees chose categories like Risk Management Strategy as the ones requiring the most attention with AI implementation, my view is that if the goals and mission of using AI aren’t set within the context of the organization to start, the rest of the functions turn into theoretical exercises.
Parting Thoughts
Just believe me when I say that four hours of in-depth conversation like the above flew by with the group that participated. I’m looking forward to future collaboration initiatives with NIST, and I greatly appreciate NIST’s commitment to seeking direct input from the practitioners who implement these controls and strategies on a daily basis. If you’re interested in learning more and participating, there are a few initiatives NIST is currently running. Follow the NCCoE Cyber AI Profile workstream for updates on this initiative. There’s also an ongoing effort to develop an AI overlay for NIST SP 800-53 Rev 5, another critical effort I’m grateful to see open engagement on, and I encourage others to get involved.