The EU AI Act is the world's first comprehensive law governing artificial intelligence. It entered into force on August 1, 2024, and classifies AI systems by risk level — from banned systems to minimal-risk tools — with obligations that scale with the risk and phase in over several years. If you deploy AI to EU users, build AI products, or use AI APIs in your business, the EU AI Act applies to you.
This guide explains what the law requires, who it covers, and what the key deadlines are — without the legal jargon.
What the EU AI Act Is
The EU AI Act (Regulation 2024/1689) is a regulation passed by the European Parliament and Council. Unlike a directive, it applies directly in all EU member states without needing national implementation legislation.
Its core logic: the higher the risk an AI system poses to people's rights, safety, or health, the more it is regulated.
The law covers AI systems — software that can generate outputs (predictions, recommendations, decisions, content) for a given set of objectives. It does not cover AI research, national security AI, or AI used solely for military or defense purposes.
The Four Risk Tiers
Tier 1: Unacceptable Risk — Banned
These AI systems are prohibited outright in the EU:
- Social scoring — assigning scores to people based on behavior, social status, or personal characteristics, whether by public authorities or private actors
- Real-time biometric identification in public spaces for law enforcement (with narrow exceptions)
- Emotion recognition in workplaces and educational institutions
- AI that exploits vulnerabilities of people based on age, disability, or social or economic situation
- Biometric categorization based on sensitive attributes (race, religion, political opinion, sexual orientation)
- Predictive policing based on profiling or personality traits rather than objective factual data
If your AI system does any of these, it cannot operate in the EU. These prohibitions have applied since February 2, 2025.
Tier 2: High Risk — Heavily Regulated
High-risk AI systems can operate, but only after meeting significant requirements before deployment. The law defines high-risk AI in two ways:
1. AI safety components of regulated products — AI embedded in products already subject to EU safety regulation: machinery, medical devices, vehicles, aviation, toys, elevators, and others. These follow the existing product safety regulation timelines.
2. AI systems in Annex III domains — eight areas where AI decisions significantly affect individuals:
- Biometrics (remote biometric identification, biometric categorization)
- Critical infrastructure (energy, water, transport, digital infrastructure)
- Education (admissions, assessment, evaluation of learners)
- Employment and HR (recruitment, screening, performance evaluation, promotion, task allocation)
- Essential services (credit scoring, life and health insurance, emergency services)
- Law enforcement (risk assessment, polygraphs, crime analytics, evidence assessment)
- Migration and asylum (risk assessment, verification of documents, examination of applications)
- Administration of justice and democratic processes (legal research, dispute resolution, influencing elections)
If your AI system substantially influences decisions in these domains, it is high-risk.
What high-risk systems must do:
- Implement a risk management system and maintain it throughout the lifecycle
- Conduct data governance — document training data, test for bias
- Produce technical documentation before deployment
- Maintain an event log for traceability (see the logging sketch below)
- Ensure transparency — inform deployers what the system does and its limitations
- Implement human oversight mechanisms
- Achieve accuracy, robustness, and cybersecurity standards
- Register in the EU AI database before deployment
- Undergo a conformity assessment (self-assessment for most, third-party for biometric systems)
The full high-risk requirements apply from August 2, 2026.
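What the event-log requirement means in practice: capture enough structured events that you can reconstruct what the system decided, when, and under whose oversight. A minimal sketch, assuming a hypothetical CV-screening system; the schema and field names are illustrative, since the Act mandates automatic logging but does not prescribe a format:

```python
# Minimal structured audit log for traceability. Field names are
# illustrative assumptions; the AI Act requires logging, not this schema.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

def log_decision_event(system_id: str, input_ref: str, output: str,
                       confidence: float, reviewer: str | None = None) -> None:
    """Append one traceable decision event to the audit log."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,      # reference to the input, not the data itself
        "output": output,
        "confidence": confidence,
        "human_reviewer": reviewer,  # ties into the human-oversight requirement
    }
    logger.info(json.dumps(event))

# Hypothetical usage: a CV-screening system shortlists an applicant
log_decision_event("cv-screener-v2", "application:4711", "shortlist",
                   confidence=0.87, reviewer="hr_analyst_03")
```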
Tier 3: Limited Risk — Disclosure Required
This tier covers AI systems that pose specific transparency risks. The primary examples: chatbots and AI-generated content.
- AI systems that interact with humans must disclose they are AI (unless obvious from context)
- AI-generated images, video, or audio that depict real people or events (deepfakes) must be labeled as AI-generated, as must AI-generated text published to inform the public on matters of public interest
- Emotion recognition systems must disclose their use to those affected
These rules apply from August 2, 2026.
Tier 4: Minimal Risk — No Obligations
The vast majority of AI systems: spam filters, AI in video games, AI-powered recommendation systems (Netflix, Spotify), AI writing assistants used for personal productivity, AI grammar checkers. These have no mandatory obligations under the EU AI Act.
Providers of minimal-risk systems may voluntarily adopt a code of conduct. No enforcement mechanism applies.
General-Purpose AI Models (GPAI)
The EU AI Act creates a separate category for foundation models — AI models trained on large datasets that can perform many tasks: GPT-4, Claude, Gemini, Llama, Mistral, and similar.
All GPAI providers must:
- Maintain technical documentation
- Comply with EU copyright law for training data
- Publish a sufficiently detailed summary of the content used for training
- Implement a policy for respecting copyright opt-outs
GPAI models with systemic risk — defined as those trained with more than 10²⁵ floating-point operations, roughly the scale of GPT-4 and above (see the back-of-the-envelope estimate after this list) — must additionally:
- Conduct adversarial testing (red-teaming)
- Report serious incidents to the European AI Office
- Ensure cybersecurity protections
- Report on energy consumption
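To get a feel for where the 10²⁵ FLOP threshold sits, a common rule of thumb for dense transformers is that training compute ≈ 6 × parameters × training tokens. A minimal sketch, assuming purely illustrative model sizes (none of these figures are official):

```python
# Back-of-the-envelope training-compute estimate. The rule of thumb
# FLOPs ~= 6 * parameters * tokens and all model sizes below are
# illustrative assumptions, not official figures or legal tests.

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs; presumption of systemic risk above this

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

examples = {
    "7B params, 2T tokens": training_flops(7e9, 2e12),        # ~8.4e22
    "70B params, 15T tokens": training_flops(70e9, 15e12),    # ~6.3e24
    "1.8T params, 13T tokens": training_flops(1.8e12, 13e12), # ~1.4e26
}

for name, flops in examples.items():
    side = "above" if flops > SYSTEMIC_RISK_THRESHOLD else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({side} the 1e25 threshold)")
```

On this estimate, most of today's small and mid-size models fall well below the threshold; only frontier-scale training runs cross it.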
GPAI model rules have applied since August 2, 2025.
Who Enforces the EU AI Act
European AI Office — oversees GPAI models and the consistent application of the regulation across member states. Created in early 2024 within the European Commission.
National competent authorities — each EU member state designates one or more authorities to supervise the AI Act at the national level. Germany, for example, plans to assign this role to the Federal Network Agency (Bundesnetzagentur), and France has floated CNIL among its candidates. (The UK is not subject to the EU AI Act post-Brexit, so its AI Safety Institute plays no role here.)
Penalties:
- Unacceptable risk violations: up to €35 million or 7% of global annual turnover, whichever is higher
- High-risk system violations: up to €15 million or 3% of global annual turnover, whichever is higher
- Providing incorrect information to authorities: up to €7.5 million or 1.5% of global annual turnover, whichever is higher
For SMEs and startups, the cap is instead the lower of the fixed amount and the percentage of turnover.
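To make the cap arithmetic concrete, here is a minimal sketch; the turnover figures are invented, and the code encodes the summary above, not legal advice:

```python
# Illustrative fine-cap calculation. Turnover figures are hypothetical.
# Non-SMEs: the cap is the HIGHER of the fixed amount and the turnover
# percentage; SMEs and startups: the LOWER of the two.

TIERS = {
    "prohibited_practice":   (35_000_000, 0.07),
    "high_risk_violation":   (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def fine_cap(violation: str, turnover_eur: float, is_sme: bool) -> float:
    fixed, pct = TIERS[violation]
    pick = min if is_sme else max
    return pick(fixed, pct * turnover_eur)

# Large company, EUR 2B turnover: max(35M, 140M) -> 140M cap
print(fine_cap("prohibited_practice", 2_000_000_000, is_sme=False))
# Startup, EUR 5M turnover: min(35M, 350k) -> 350k cap
print(fine_cap("prohibited_practice", 5_000_000, is_sme=True))
```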
Key Dates at a Glance
| Date | What Applies |
|---|---|
| August 1, 2024 | Law enters into force |
| February 2, 2025 | Prohibited AI systems must comply |
| August 2, 2025 | GPAI model rules apply |
| August 2, 2026 | High-risk AI system requirements, limited-risk transparency rules |
| August 2, 2027 | High-risk AI in Annex I regulated products (products already under EU safety law) |
What This Means for Small Teams
Most small teams are not affected by the high-risk tier. If you use AI for internal productivity (writing, summarizing, coding), customer support chatbots, or content generation, you are in the minimal-risk or limited-risk tier. Your primary obligation is disclosure: tell users when they are talking to AI.
You are affected by the high-risk tier if your product does any of the following:
- Screens or ranks job applicants
- Scores creditworthiness or insurance risk
- Makes decisions about access to essential public services
- Evaluates students for admission or assessment
- Processes biometric data for identification
If that describes your AI system, the August 2026 deadline is your planning horizon. You need a conformity assessment, a risk management system, technical documentation, and an EU AI database registration before deployment in the EU.
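A rough way to run this self-check is to turn the list above into yes/no questions. A minimal sketch; the trigger wording paraphrases this guide, not the legal text of Annex III, and a positive answer means "talk to a lawyer", not a definitive classification:

```python
# Rough high-risk self-check based on the triggers listed above.
# Wording paraphrases this guide, not the legal text of Annex III.

HIGH_RISK_TRIGGERS = [
    "Screens or ranks job applicants",
    "Scores creditworthiness or insurance risk",
    "Decides access to essential public services",
    "Evaluates students for admission or assessment",
    "Processes biometric data for identification",
]

def potentially_high_risk(answers: dict[str, bool]) -> bool:
    """True if any trigger applies; treat as a prompt for legal review."""
    return any(answers.get(t, False) for t in HIGH_RISK_TRIGGERS)

# Hypothetical example: a hiring tool that ranks applicants
answers = {"Screens or ranks job applicants": True}
if potentially_high_risk(answers):
    print("Plan for August 2, 2026: conformity assessment, risk management,"
          " technical documentation, EU AI database registration.")
```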
You are affected by the GPAI rules if you are developing a foundation model — training on large datasets to produce a general-purpose model. Startup teams building fine-tuned models on top of foundation models are generally not GPAI providers; they are deployers subject to the high-risk rules if their fine-tuned model is used in a high-risk domain.
You are affected by transparency rules regardless of risk tier if you run a customer-facing chatbot. The chatbot must disclose it is an AI system. This is already standard practice, but it must be in place by August 2026.
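In practice, disclosure can be as simple as a fixed message at the start of every session. A minimal sketch; the wording, function names, and session handling are illustrative assumptions, not prescribed by the Act:

```python
# Minimal AI-disclosure pattern for a customer-facing chatbot.
# Message wording and session handling are illustrative assumptions.

AI_DISCLOSURE = ("You are chatting with an AI assistant. "
                 "Type 'agent' at any time to reach a human.")

def first_message(session: dict) -> str:
    """Return the opening message, leading with the AI disclosure once."""
    if not session.get("disclosure_shown"):
        session["disclosure_shown"] = True
        return AI_DISCLOSURE
    return "How can I help you today?"

session: dict = {}
print(first_message(session))  # disclosure, shown once per session
print(first_message(session))  # normal greeting afterwards
```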
How the EU AI Act Differs from GDPR
GDPR regulates how you handle personal data. The EU AI Act regulates how you build and deploy AI systems. They overlap but are not the same law.
GDPR still applies when your AI processes personal data — which most AI systems do. The EU AI Act adds requirements on top, not instead of, GDPR. A high-risk AI system that processes personal data must comply with both.
The biggest practical difference: GDPR is about data rights (consent, access, deletion). The EU AI Act is about system design (risk assessment, human oversight, documentation). You need both.
Next Steps
If you are trying to assess whether your AI system is high-risk, work through the EU AI Act Annex III High-Risk Checklist — it maps each of the eight Annex III domains to specific system types.
For a full compliance roadmap for small teams, see the EU AI Act Compliance Guide for Small Teams, which covers the complete requirements in deployment order.
If you build on top of a GPAI model (OpenAI, Anthropic, Google), see GPAI Obligations Under the EU AI Act to understand what your API provider must provide and what you inherit as the deployer.
