
Generative AI capabilities continue to make their way into every organization, offering increasingly useful ways to help employees and contractors be more productive. That includes advancing fully automated vulnerability remediation, which, with the power of generative AI, can account for an organization's unique environments and usage in real time.
While copilots, such as those introduced by Microsoft, GitHub, Salesforce, and others, give both professional and citizen developers impressive power to generate context-aware code completions and suggestions that save time, they also 'hallucinate': wrong information can be inserted into applications, and flat-out wrong suggestions, when blindly trusted, can lead organizations astray. As with any new technological capability, these tools must be met with strong processes and informed people to fully harness their power.
However, finding the right balance is critical, and organizations often weight one side too heavily between productivity and control. Traditional security controls like urgent patches, encryption, and WAFs are necessary, but they can also lag behind productivity, particularly in the fast-paced world of application development. On the people side, centralized application security teams often see problems in isolation and cannot foresee the organizational consequences of applying a specific fix to a specific application (nor should they be expected to).
Further, environment-level changes can have sweeping effects on individual applications, and each decision to increase security must be weighed against potential dips in efficiency or productivity. AI-generated mitigations can reduce the cost of remediation, but the risk of applying ill-advised or ill-fitting mitigations will always exist. This raises the question of how much organizations should trust AI copilots, and many will need to analyze carefully where in that middle ground they want to live.
Read more from my latest monthly DarkReading column here.