Who is Securing the Apps Built by Generative AI?


The rise of low-code/no-code platforms has empowered business professionals to address their own needs without relying on IT. Now, the integration of generative AI into these platforms expands what they can build and lowers the barrier to entry even further. But as everyone becomes a developer, a question follows: who is securing what they build?

Business users have already begun using generative AI tools such as ChatGPT to speed up tasks like writing PR pitches and prospecting emails. While data governance and legal obstacles slow enterprise adoption, business users are folding generative AI into their daily work without waiting for approval. Developers, on the other hand, have been using generative AI to write and improve code through tools like GitHub Copilot. There, the developer remains essential: evaluating the generated code and integrating it into existing systems still requires technical expertise.

This disparity between business professionals and developers highlights the need for low-code/no-code platforms to bridge the gap. Acting as translators between generative AI and business users, these platforms turn prompts into applications and automations that business professionals can readily evaluate and adjust. Major low-code/no-code vendors have already introduced AI copilots that generate applications from text input, and analysts expect AI assistance to drive significant growth in low-code/no-code development. These platforms also integrate easily with enterprise environments, giving the generated applications access to corporate data and operations.

The convergence of low-code/no-code and AI empowers business professionals and moves us closer to a future where every interaction with AI results in a tangible application that plugs into business workflows and can be shared among users. But the growing number of applications created by business users poses security challenges. Security teams have traditionally focused on applications developed by IT; the shift toward citizen development demands a new approach. Rather than trying to ban citizen development or requiring approval for every application and data access, a better solution is to give business users a safe environment in which to leverage generative AI and low-code/no-code. That means implementing automated guardrails that handle security concerns quietly in the background, so business users can focus on pushing the business forward while risk stays under control.

Read more from our CTO, Michael Bargury, in his monthly DarkReading column here.
