
Key Takeaways
- The EU AI Act is the world's first comprehensive AI regulation. It takes a risk-based approach, organizing AI systems into four tiers: unacceptable, high, limited, and minimal risk.
- Its purpose is to ensure AI systems used in the EU are safe, transparent, non-discriminatory, and subject to meaningful human oversight.
- The EU AI Act prohibits specific AI practices outright, including social scoring, subliminal manipulation, and most uses of real-time remote biometric identification in public spaces.
- The EU AI Act applies extraterritorially: if your AI system's output is used in the EU, you're in scope, regardless of where your organization is headquartered.
- CISOs play a critical role in driving compliance, from AI inventory and risk classification to vendor governance and runtime controls.
As AI agents take on increasingly autonomous roles across the enterprise, the regulatory environment is moving fast to catch up. The European Union's Artificial Intelligence Act (EU AI Act) is the world's first comprehensive legal framework for AI regulation, and its reach extends well beyond European borders. For CISOs and security leaders, understanding the EU AI Act isn't just a compliance checkbox; it's a foundational requirement for governing AI systems responsibly and protecting the organization from both legal and operational risk.
This article breaks down what the EU AI Act is, why it was created, how it categorizes risk, what it prohibits, and whether it applies to organizations outside the European Union. If your organization deploys, develops, or purchases AI systems, this is recommended reading.
Understanding the EU AI Act
The EU AI Act entered into force on August 1, 2024, establishing a legally binding regulatory framework for AI systems across the European Union. Unlike narrower regulations that govern specific sectors or use cases, the EU AI Act applies horizontally across industries and AI types. Its goal is to ensure that AI systems deployed in the EU are safe, transparent, traceable, non-discriminatory, and subject to human oversight.
The EU AI Act's risk-based architecture was deliberately designed to balance innovation with protection. Not every AI application carries the same potential for harm, and the regulation reflects that. A spam filter and a predictive policing algorithm are both AI systems, but they warrant very different levels of scrutiny. The EU AI Act codifies that distinction into law.
Full applicability of most provisions is set for August 2, 2026, though enforcement is already underway in phases:
- February 2, 2025: Prohibitions on unacceptable AI practices came into effect.
- August 2, 2025: Governance rules and obligations for general-purpose AI (GPAI) models became applicable.
- August 2, 2026: Most remaining high-risk AI system requirements take effect.
- August 2, 2027: The extended transition period ends for certain high-risk AI systems embedded in products regulated under other EU legislation.
For CISOs and security leaders, the phased timeline isn't a reason to wait. The compliance obligations for high-risk systems are substantial, requiring risk management frameworks, technical documentation, data governance practices, human oversight mechanisms, and conformity assessments. Building that readiness takes time.
The Purpose Behind the Regulation
The EU AI Act was built on the recognition that AI systems, left ungoverned, can cause real harm to individuals, to fundamental rights, and to societal trust. The European Parliament's stated priorities were clear: AI systems used in the EU should be safe, transparent, traceable, non-discriminatory, and overseen by humans rather than running fully on autopilot.
Three driving concerns shaped the regulation:
- Fundamental rights protection: AI systems that make or influence decisions about people — who gets hired, who receives credit, who is flagged by law enforcement — carry real potential to discriminate, manipulate, or cause lasting harm. The regulation draws hard boundaries around the most dangerous use cases.
- Trust and transparency: For AI to be adopted responsibly, the people interacting with AI systems, whether as users, subjects, or operators, need to understand what those systems are doing. The EU AI Act embeds transparency obligations across risk tiers.
- Market integrity: By establishing consistent requirements across the EU, the EU AI Act creates a level playing field and reduces the risk of regulatory arbitrage, where organizations deploy AI in whichever jurisdiction has the weakest rules.
For security leaders, the EU AI Act's purpose maps closely onto what good AI governance looks like in practice: inventory your AI systems, understand what they do and how they decide, ensure humans remain in control of consequential decisions, and build accountability into every layer of the AI lifecycle.
The Risk-Based Framework: Four Tiers of AI Classification
The EU AI Act organizes AI systems into four risk tiers. Each tier carries different compliance obligations, and understanding where your AI systems fall is the starting point for any compliance program.
Unacceptable risk: prohibited AI practices
At the top of the risk hierarchy are AI practices so dangerous, or so contrary to fundamental values, that the EU has banned them outright. These prohibitions are covered in detail in the next section.
High risk: heavily regulated
High-risk AI systems are the most significant category for most enterprises. These are AI systems that pose a meaningful risk of harm to health, safety, or fundamental rights. They include AI used in:
- Critical infrastructure, such as energy grids, water supply, and transportation safety components.
- Medical devices and clinical decision-support tools.
- Recruitment, candidate screening, and employment-related assessments.
- Educational assessments and access to educational institutions.
- Law enforcement and predictive policing.
- Access to essential services such as credit, insurance, and social benefits.
- Remote biometric identification systems, such as those used in law enforcement or large-scale identity verification contexts.
- Border control and migration management, including AI used to assess traveler risk, process asylum and visa applications, and detect or identify individuals at borders.
- Administration of justice.
Providers and deployers of high-risk AI systems must implement robust risk management processes, maintain detailed technical documentation, ensure data governance practices minimize bias, design for human oversight, conduct conformity assessments before deployment, register systems in an EU database, and monitor systems post-deployment.
These aren't lightweight requirements. For CISOs overseeing enterprise AI adoption, this means building AI governance capabilities that extend from procurement through production.
Limited risk: transparency obligations
AI systems in this tier, including chatbots and AI-generated content tools that interact directly with users, must meet transparency requirements. Operators must inform users they're interacting with an AI system. For deepfakes and other AI-generated synthetic media, disclosure obligations apply.
Minimal risk: unregulated
The majority of AI applications in commercial use today fall into this category. Spam filters, AI-enabled video games, and recommendation engines face no mandatory compliance obligations under the EU AI Act. That said, the growing use of agentic AI systems means organizations should reassess their AI inventory regularly. A system's risk classification can change as its capabilities and deployment context evolve.
General-purpose AI models
The EU AI Act includes a dedicated framework for GPAI models: large foundation models capable of performing a broad range of tasks and integrated into downstream applications. Providers of GPAI models must maintain technical documentation, comply with EU copyright law, and publish a summary of training data content. Models that pose systemic risk face additional requirements, including adversarial testing, incident reporting, and cybersecurity safeguards.
What the EU AI Act Prohibits
Article 5 of the EU AI Act establishes eight AI practices considered fundamentally incompatible with EU values and human dignity. These practices are banned entirely, with no path to compliance. They became applicable on February 2, 2025, with penalties enforceable from August 2025. Organizations must ensure their AI systems don't engage in any of the following:
- Subliminal manipulation and deceptive techniques. AI systems that deploy subliminal techniques beyond a person's conscious awareness, or that use purposefully manipulative or deceptive methods to distort behavior in ways that cause harm.
- Exploitation of vulnerabilities. AI that exploits the vulnerabilities of specific groups, such as children, people with disabilities, or those in economically precarious situations, to distort behavior in ways that may cause harm.
- Social scoring systems. AI that evaluates or classifies individuals based on their social behavior, personal characteristics, or inferred traits to produce scores that are then used to treat those individuals unfavorably in unrelated contexts or in ways that are unjustified or disproportionate.
- Predictive policing by profiling. AI used to assess or predict the likelihood of an individual committing a criminal offense based solely on profiling or the evaluation of personality traits and characteristics.
- Mass facial recognition databases. AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or closed-circuit TV footage.
- Emotion inference in sensitive contexts. AI systems that infer emotions in the workplace or in educational institutions, with limited exceptions for safety purposes.
- Biometric categorization based on sensitive attributes. AI systems that categorize individuals based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.
- Real-time remote biometric identification in public spaces. Law enforcement use of live biometric identification systems in publicly accessible spaces is prohibited, with narrow, strictly defined exceptions.
For CISOs, these prohibitions are a direct call to action: conduct an audit of every AI system in your environment, including third-party tools and AI capabilities embedded in enterprise platforms. If any system exhibits these behaviors, even as a secondary function, it needs to be remediated or removed before it creates legal exposure. Not sure where to start? Try this checklist for AI agent governance.
Does the EU AI Act Apply Outside the EU?
This is the question many security leaders in the United States, the United Kingdom, and other non-EU markets are asking. The short answer: yes, if your AI system's output is used in the EU.
The extraterritorial scope of the EU AI Act is deliberately designed to mirror the approach taken by the EU's General Data Protection Regulation (GDPR). Under Article 2, the EU AI Act applies to:
- Providers placing AI systems on the EU market or putting them into service in the EU, regardless of where those providers are established.
- Deployers of AI systems that have their place of establishment or are located in the EU.
- Providers and deployers established outside the EU where the output produced by the AI system is used in the EU.
That third point is the one most organizations outside Europe underestimate. The trigger isn't where your company is incorporated, where your servers sit, or whether you have a physical presence in Europe. It's whether your AI system generates an output, such as a decision, a recommendation, a score, or a piece of generated content, that is used inside the EU.
Consider these scenarios:
- A US-based HR platform that screens job candidates for European employers.
- A UK-based SaaS company whose AI-driven fraud detection tool is deployed by a German bank.
- A Canadian AI provider offering APIs consumed by EU developers.
All of these are in scope. Non-EU providers of high-risk AI systems are also required to designate an authorized representative within the EU under Article 22. That representative assumes responsibility for ensuring the provider's compliance obligations are met within the EU's jurisdiction.
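To make that scope test concrete, here is a minimal Python sketch encoding the three scenarios above. The function name and boolean fields are our own simplification of Article 2, not language from the regulation itself:

```python
def in_eu_ai_act_scope(provider_in_eu: bool,
                       deployer_in_eu: bool,
                       output_used_in_eu: bool) -> bool:
    # Simplified Article 2 test: any one trigger is enough.
    return provider_in_eu or deployer_in_eu or output_used_in_eu

scenarios = [
    # (description, provider in EU, deployer in EU, output used in EU)
    ("US HR platform screening candidates for EU employers", False, False, True),
    ("UK SaaS fraud detection tool deployed by a German bank", False, True, True),
    ("Canadian AI provider whose APIs are consumed by EU developers", False, False, True),
]

for description, *flags in scenarios:
    verdict = "in scope" if in_eu_ai_act_scope(*flags) else "out of scope"
    print(f"{description}: {verdict}")
```

All three print "in scope": in each case at least one trigger, most often the output reaching the EU, is satisfied.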
The compliance stakes are material. Penalties for violations of prohibited AI practices reach up to €35 million or 7% of global annual turnover, whichever is higher. That's a higher ceiling than GDPR. Violations of high-risk system obligations carry fines of up to €15 million or 3% of global annual turnover. For multinationals and global SaaS providers, these aren't hypothetical numbers.
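As a back-of-the-envelope illustration of how those ceilings scale, here is a minimal sketch; the EUR 2 billion turnover figure is hypothetical:

```python
def penalty_ceiling_eur(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    # The EU AI Act ceiling is the HIGHER of a fixed amount or a
    # percentage of global annual turnover.
    return max(fixed_cap_eur, turnover_eur * pct)

turnover = 2_000_000_000  # hypothetical global annual turnover

# Prohibited practices: up to EUR 35M or 7% of turnover.
print(penalty_ceiling_eur(turnover, 35_000_000, 0.07))  # 140000000.0

# High-risk obligations: up to EUR 15M or 3% of turnover.
print(penalty_ceiling_eur(turnover, 15_000_000, 0.03))  # 60000000.0
```

For a company that size, the percentage term, not the fixed cap, sets the ceiling.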
Given the EU AI Act's anticipated influence on global AI governance, much like how GDPR reshaped global data privacy practices, security leaders should treat EU AI Act compliance not as a regional concern, but as a component of their global AI risk management strategy.
The CISO's Role in EU AI Act Compliance
The EU AI Act places significant compliance obligations on both providers and deployers of AI systems. CISOs sit at the intersection of both roles in most enterprise environments: overseeing the security and governance of AI tools the organization uses internally, while advising on the governance of AI systems deployed to customers or integrated into products.
Key areas where security leaders drive compliance:
- AI inventory and classification. Understanding what AI systems exist across the enterprise, including Shadow AI adopted by individual teams without formal approval, and classifying each by risk tier under the EU AI Act. Detection after deployment isn't governance. A minimal inventory sketch follows this list.
- Vendor and supply chain governance. Assessing AI providers and embedding EU AI Act compliance requirements into procurement and vendor contracts, particularly for tools that may qualify as high-risk under the EU AI Act.
- Risk management and documentation. Building the risk management frameworks, technical documentation, and data governance practices that the EU AI Act mandates for high-risk systems. Security teams already have many of the foundational capabilities; the work lies in extending them to cover AI-specific requirements.
- Human oversight design. Ensuring that high-risk AI systems are designed and deployed with meaningful human oversight mechanisms, not just in theory, but in practice. The agent is the new endpoint, and security controls must extend to govern what AI agents can and can't do autonomously.
- Ongoing monitoring and incident response. Building the post-deployment monitoring and incident detection capabilities needed to identify and report serious incidents. From build time to runtime, governance must be continuous.
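To ground the first of those areas, here is a minimal, illustrative inventory record; the types and field names below are our own sketch, not a schema mandated by the EU AI Act:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    # The EU AI Act's four risk tiers.
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    owner_team: str
    vendor: Optional[str]     # None for internally built systems
    use_case: str             # e.g. "candidate screening"
    output_used_in_eu: bool   # the extraterritorial-scope trigger
    risk_tier: RiskTier
    formally_approved: bool   # False flags potential Shadow AI

def needs_full_high_risk_controls(record: AISystemRecord) -> bool:
    # High-risk systems in EU scope need the full control set: risk
    # management, documentation, human oversight, conformity
    # assessment, registration, and post-market monitoring.
    return record.risk_tier is RiskTier.HIGH and record.output_used_in_eu
```

Even a sketch this small makes the governance questions explicit: who owns each system, what tier it falls in, and whether its output reaches the EU.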
The EU AI Act's requirements align closely with what good AI security looks like in practice: know your environment, classify by risk, apply controls proportionate to impact, and maintain continuous visibility into system behavior.
Organizations that invest in AI security and governance now will be better positioned to meet both the letter and the spirit of the regulation as full enforcement takes hold.
Agentic AI and the EU AI Act
The rise of agentic AI introduces new complexity for EU AI Act compliance. AI agents are systems that can take autonomous actions, execute multi-step workflows, access sensitive data, and interact with other systems on behalf of users, and they don't map neatly onto the static AI deployment models the regulation was originally designed around.
An AI agent used in recruitment, contract review, credit decisioning, or access control may qualify as a high-risk system under the EU AI Act, regardless of whether the organization thinks of it as an "agent" or a "workflow automation tool." The risk classification is determined by what the system does and the context in which it operates, not what it's called.
CISOs need to apply the same rigor to AI agent governance that they apply to privileged access management: define what each agent can do, enforce hard boundaries around consequential actions, log all agent activity, and ensure human oversight mechanisms are in place for decisions that affect individuals' rights, safety, or access to services.
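As a minimal sketch of that pattern, assuming a simple per-agent action allowlist (the agent name, actions, and policy structure are hypothetical, not any particular product's API):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-governance")

# Hypothetical per-agent policy: actions the agent may take on its own
# versus actions that require a human in the loop.
AGENT_POLICY = {
    "hr-screening-agent": {
        "autonomous": {"read_resume", "summarize_resume"},
        "human_approval": {"reject_candidate", "advance_candidate"},
    },
}

def authorize(agent: str, action: str, human_approved: bool = False) -> bool:
    # Enforce hard boundaries around consequential actions and log
    # every decision so agent activity is auditable.
    policy = AGENT_POLICY.get(agent)
    if policy is None:
        log.warning("DENY %s/%s: unknown agent (possible Shadow AI)", agent, action)
        return False
    if action in policy["autonomous"]:
        log.info("ALLOW %s/%s", agent, action)
        return True
    if action in policy["human_approval"] and human_approved:
        log.info("ALLOW %s/%s (human approved)", agent, action)
        return True
    log.warning("DENY %s/%s: human approval required or action out of scope", agent, action)
    return False
```

The point isn't this particular data structure; it's that consequential agent actions pass through an enforcement point that can say no and that leaves an audit trail.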
From build time to runtime, the EU AI Act demands that organizations treat AI governance as an ongoing operational discipline, not a one-time compliance exercise. For agentic AI, that means governance controls that travel with the agent, adapt to context, and provide continuous visibility into what agents are doing and why.
Moving From Awareness to Readiness
The EU AI Act represents a generational shift in how AI systems are governed. For CISOs, it's both a mandate and an opportunity: a mandate to bring the same rigor to AI governance that security teams apply to other high-risk domains, and an opportunity to establish AI security as a strategic function within the organization.
The organizations that will be most prepared when full enforcement arrives in 2026 are those that have already built the foundational capabilities: comprehensive AI inventory, risk-based classification, vendor governance frameworks, technical documentation practices, and runtime controls that enforce policy from build time to runtime.
Securing AI agents across the enterprise, the supply chain, and the full lifecycle from development to production isn't just good security practice. Under the EU AI Act, it's the law.
How Zenity Helps with EU AI Act Compliance
Zenity's AI agent security and governance platform provides the visibility, posture management, and runtime controls organizations need to meet the EU AI Act's requirements, from AI discovery and risk classification to continuous monitoring and enforcement of agent behavior at runtime.
Want to learn more about navigating agentic deployment under the EU AI Act? Register for our upcoming webinar on May 21, 2026.