Making Sense of AI in Cybersecurity
Unless you have been living under a rock, you have seen, heard, and interacted with Generative AI in the workplace. On top of that, nearly every company is saying something to the effect of “our AI platform can help achieve better results, faster,” making it very confusing to know who is for real and who is simply riding the massive tidal wave that is Generative AI. This is only exacerbated in cybersecurity, where a plethora of companies tout AI capabilities, the ability to help secure and govern AI, or both. All of that noise, however, obscures what is actually top of mind for so many CISOs and security teams: enabling the business to use Generative AI to optimize productivity, automate processes, and much more.
In this blog, we’ll make some distinctions between the different types of AI security to help you make sense of it all so you can improve security without hindering business processes.
AI-Driven Security
The first, and likely the most common, approach is vendors positioning their products, services, and platforms as driven by AI. AI-driven platforms are ones that use AI to do things like:
- Prioritize alerts based on end-user behavior
- Remediate vulnerabilities in real time
- Detect anomalies (a minimal sketch of this idea follows the list)
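As an illustration of the kind of anomaly detection these platforms advertise, here is a minimal sketch using scikit-learn’s IsolationForest on made-up login telemetry. The features, thresholds, and data are assumptions for illustration only, not any particular vendor’s implementation.

```python
# Minimal sketch: flagging anomalous logins with an Isolation Forest.
# The features (hour of day, failed attempts, MB downloaded) are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical baseline of normal login events: [hour_of_day, failed_attempts, mb_downloaded]
baseline = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 20], [11, 0, 15],
    [16, 0, 9], [13, 1, 11], [15, 0, 18], [10, 0, 14],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline)

# Score new events: a 3 a.m. login with many failures and a huge download
# should stand out from the baseline.
new_events = np.array([[9, 0, 13], [3, 7, 900]])
for event, label in zip(new_events, model.predict(new_events)):
    status = "anomalous" if label == -1 else "normal"
    print(f"event {event.tolist()} -> {status}")
```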
While important, the vast majority of these tools do not actually help secure the AI tools and platforms themselves, such as Microsoft Copilot Studio, Salesforce, ServiceNow, OpenAI, and more. What is clear is that enterprises see the value of Gen AI and are injecting it into their own products and services, which leads to the next layer of security that is needed.
Securing the AI Models Themselves
Other security teams are focused on securing the actual AI model or tool itself. If someone is able to ‘poison the well’ of data that the AI model relies on to function, the model’s answers and behavior can be skewed. Securing an AI or large language model (LLM) involves implementing measures to protect it from a variety of potential threats. Here are some key considerations for a security team focused on securing the AI/LLM itself:
- Access Control. This includes limiting access to the AI/LLM to authorized personnel only and implementing authentication methods to ensure that only authorized users can interact with and modify the model (a minimal sketch combining this with logging follows the list).
- Data Security. For security teams, this means protecting the training data used to develop the AI/LLM, ensuring it is kept confidential and secure. This can be done with encryption for data at rest and in transit to prevent unauthorized access.
- Architecture Security. This will entail regular updates and patches of the AI/LLM’s underlying software and libraries to address known vulnerabilities.
- Monitoring and Logging. Within the AI/LLM platform, security and governance teams can implement monitoring and logging mechanisms to track model usage and identify any suspicious activities, including alerts for unusual patterns or behaviors that may indicate a security threat.
- Privacy Compliance. Last but not least, ensure that the AI/LLM complies with privacy regulations and standards, especially if it processes sensitive or personal data. Security teams can accomplish this by implementing anonymization and data minimization techniques to reduce privacy risks.
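To make the access control and monitoring points above more concrete, below is a minimal sketch of a gateway that checks a caller’s role before passing a prompt to a model and writes every request to an audit log. The role names, user mapping, and the call_llm placeholder are assumptions for illustration, not a prescribed implementation.

```python
# Minimal sketch: role-based access control plus request logging in front of an LLM.
# Role names, the call_llm() placeholder, and the user mapping are illustrative assumptions.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("llm_audit")

# Hypothetical mapping of users to roles allowed to query or modify the model.
ROLE_PERMISSIONS = {
    "analyst": {"query"},
    "ml_engineer": {"query", "fine_tune"},
}
USER_ROLES = {"alice": "analyst", "bob": "ml_engineer"}


def call_llm(prompt: str) -> str:
    """Placeholder for the actual model invocation."""
    return f"(model response to: {prompt})"


def gated_llm_call(user: str, action: str, prompt: str) -> str:
    role = USER_ROLES.get(user)
    allowed = role is not None and action in ROLE_PERMISSIONS.get(role, set())
    # Log every attempt, allowed or not, so unusual patterns can be alerted on later.
    audit_log.info(
        "time=%s user=%s role=%s action=%s allowed=%s prompt_chars=%d",
        datetime.now(timezone.utc).isoformat(), user, role, action, allowed, len(prompt),
    )
    if not allowed:
        raise PermissionError(f"{user!r} is not authorized to {action}")
    return call_llm(prompt)


print(gated_llm_call("alice", "query", "Summarize last week's incident reports"))
```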
By addressing these aspects, a security team can enhance the overall security posture of the AI/LLM and minimize the risks associated with its deployment and operation. However, while these controls help secure the AI/LLM itself, they fail to take into account the things that people can build using AI.
Securing Apps, Automations, and Connections
As Generative AI takes enterprises by storm, more and more people are using it in their development processes. This is no longer limited to professional developers, either, as citizen developers are building apps, automations, data flows, and more by simply asking a Copilot to build them. And much like when someone uses ChatGPT for help writing, say, a blog or a sales email, an auto-generated application requires human inspection to make sure it fits the organization’s context, processes, and, critically, security protocols.
Here are some things for security teams to consider when integrating application security controls into the world of Generative AI.
- Identify each and every resource that is being built with the help of Gen AI. These require special attention and education for end-users.
- Make sure that apps and automations that require access to sensitive data are tagged accordingly, and fitted with appropriate access, identity, and anomaly detection tools.
- Assess each individual resource for risk to help security teams prioritize alerts, violations, and more.
- Ensure that each app is only shared with the right people, as many low-code and no-code platforms have default permissions that enable everyone throughout the tenant or directory to access and use apps created by citizen developers (a minimal sketch of this kind of check follows the list).
- Implement guardrails to ‘shift left’ and meet developers where they are, ensuring that as they build things with Gen AI, they meet organizational requirements for security.
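As a rough illustration of the tagging and sharing checks above, here is a minimal sketch that scans a hypothetical inventory of citizen-developed apps for tenant-wide sharing and sensitive data sources. The inventory format and field names (shared_with, data_sources) are assumptions, not any specific low-code platform’s API.

```python
# Minimal sketch: flagging citizen-developed apps that are shared tenant-wide
# or touch sensitive data sources. The inventory format and field names
# (shared_with, data_sources) are hypothetical assumptions, not a real API.
SENSITIVE_SOURCES = {"hr_records", "payment_data", "customer_pii"}

# Hypothetical inventory, e.g. exported from a low-code platform's admin console.
apps = [
    {"name": "Vacation Tracker", "owner": "citizen_dev_1",
     "shared_with": "everyone", "data_sources": ["hr_records"]},
    {"name": "Lunch Poll", "owner": "citizen_dev_2",
     "shared_with": ["team_ops"], "data_sources": ["sharepoint_list"]},
]

for app in apps:
    findings = []
    if app["shared_with"] == "everyone":
        findings.append("shared with the entire tenant by default")
    touched = SENSITIVE_SOURCES.intersection(app["data_sources"])
    if touched:
        findings.append(f"touches sensitive data: {', '.join(sorted(touched))}")
    if findings:
        print(f"[REVIEW] {app['name']} (owner: {app['owner']}): " + "; ".join(findings))
    else:
        print(f"[OK] {app['name']}")
```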
This is a new frontier that continues to evolve, seemingly by the day. Here at Zenity, we’re focused on helping our customers empower professional and citizen developers to build useful apps, automations, and business processes without needing to write custom code. As Generative AI makes its way into all of these different platforms, centralized visibility, risk assessment, and governance have never been more important for keeping up with the speed and volume of business-led development. Come chat with us to learn how we might be able to help!