The EU AI Act's high-risk provisions for AI used in employment, healthcare, financial services, and education become enforceable on August 2, 2026. For small teams that deploy AI in any of these categories for EU residents (regardless of where your company is based), compliance is a near-term legal obligation, not a future planning item.
At a glance:
- The EU AI Act applies to non-EU companies if their AI affects EU residents.
- High-risk classification is automatic for AI in hiring, credit scoring, clinical decision support, and education, all listed in Annex III.
- Required steps before August 2026: classify your AI tools, obtain vendor EU Declarations of Conformity, implement human oversight, add individual notice language, and register high-risk systems.
- The EU Digital Omnibus deadline extension has not been enacted; August 2026 applies.
This guide covers what the EU AI Act requires, who is in scope, what the risk tiers mean, and what each compliance obligation means in practice for a small team without a dedicated legal department.
Who Is in Scope for the EU AI Act
The EU AI Act applies to:
- Providers: companies that develop or place AI systems on the EU market (AI vendors and developers)
- Deployers: companies that use AI systems in a professional context that affects EU residents (employers, lenders, healthcare providers, SaaS companies with EU customers or employees)
- Importers and distributors: companies that bring non-EU AI products into the EU market
Territorial scope: The EU AI Act is extraterritorial. If you are based in the US, UK, or elsewhere but:
- Deploy AI to EU customers
- Use AI tools that affect EU employees
- Process EU resident data through AI for consequential decisions
…you are in scope for the relevant provisions.
What this means for small US/UK/global teams:
- A fintech startup offering AI credit decisions to EU customers: in scope for Annex III credit scoring provisions
- A startup using AI hiring tools to screen EU-based job applicants: in scope for Annex III employment provisions
- A health tech company using AI clinical tools for EU patients: in scope for Annex III safety component provisions
- A team using an internal AI writing assistant that has no EU-facing function: generally out of scope for high-risk provisions
The Four Risk Tiers
The EU AI Act classifies AI into four risk levels, each with different obligations:
Unacceptable Risk — Prohibited
These AI applications are banned entirely in the EU:
- Real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions)
- AI systems that exploit vulnerabilities of specific groups (age, disability, social/economic situation)
- Social scoring systems used by public authorities
- AI that manipulates human behavior below conscious awareness
- AI used to infer political opinions, religious beliefs, or sexual orientation from biometric data
- AI-based predictive policing based purely on profiling
No conformity path exists for prohibited AI — deployment is illegal.
High Risk — Full Compliance Required
AI systems listed in Annex III are automatically high-risk. The full compliance framework applies before deployment.
Annex III categories:
| Category | Examples |
|---|---|
| Biometric identification | Remote biometric identification (where not prohibited), emotion recognition (except in workplace/education, where it is prohibited) |
| Critical infrastructure | AI managing electricity grids, water supply, traffic |
| Education | AI that determines access to education, assesses students |
| Employment | AI used in hiring, promotion, termination, task assignment, monitoring |
| Essential services | AI for credit scoring, insurance risk, social benefits eligibility |
| Law enforcement | AI for risk assessment of individuals, polygraph tests |
| Migration and border | AI for assessing visa applications, detecting threats |
| Administration of justice | AI to assist courts in researching facts or law |
Medical devices and safety components of products covered by other EU harmonisation legislation listed in Annex I (for example, the EU MDR or machinery rules) are also high-risk.
Limited Risk — Transparency Obligations Only
AI systems with interaction-based risks must disclose that they are AI:
- Chatbots must tell users they are talking to AI
- AI-generated images/video/audio that could be mistaken for real must be labeled (deepfakes)
- AI that generates text published for the public must disclose AI generation
For most internal business AI tools in this category, the primary obligation is ensuring users know when they are interacting with AI.
Minimal Risk — No Specific Requirements
Most AI tools fall here: spam filters, AI-powered search, recommendation systems, AI writing assistants used internally. The EU AI Act imposes no specific obligations on minimal-risk AI, though GDPR and other legislation still apply.
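As a first pass at classification, a small team can encode this triage as a script over its AI inventory. The sketch below is illustrative only: the keyword sets are placeholders, not the Act's authoritative lists (those are Article 5 for prohibited practices and Annex III for high-risk categories), and every PROHIBITED or HIGH hit needs legal review.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative keyword sets -- the authoritative lists are Article 5 and Annex III.
PROHIBITED_USES = {
    "social_scoring", "subliminal_manipulation",
    "workplace_emotion_recognition", "predictive_policing_profiling",
}
ANNEX_III_USES = {
    "hiring", "promotion", "worker_monitoring", "credit_scoring",
    "insurance_risk", "student_assessment", "clinical_decision_support",
}
TRANSPARENCY_USES = {"chatbot", "content_generation", "deepfake"}

def classify(use_case: str, affects_eu_residents: bool) -> RiskTier | None:
    """Rough first-pass triage; not a substitute for legal review."""
    if not affects_eu_residents:
        return None  # likely out of scope for the high-risk provisions
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in ANNEX_III_USES:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

inventory = {"resume-screener": "hiring", "support-bot": "chatbot",
             "spam-filter": "spam_filtering"}
for tool, use in inventory.items():
    print(tool, classify(use, affects_eu_residents=True))
```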
What High-Risk Compliance Requires
For each high-risk AI system, the following obligations apply before EU deployment:
1. Conformity Assessment
The provider (AI vendor) must conduct a conformity assessment demonstrating the system meets EU AI Act requirements. For deployers (companies using the AI tool), the obligation is to obtain and verify the vendor's documentation.
What to request from vendors:
- EU Declaration of Conformity
- Technical documentation (model description, training data overview, performance metrics)
- Instructions for use (including known limitations and appropriate deployment conditions)
- Bias testing results and methodology
- Log of conformity assessment
If a vendor cannot provide an EU Declaration of Conformity for a system you want to deploy in the high-risk category, that system is not EU AI Act compliant. You cannot deploy it for high-risk purposes under EU law.
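To keep these requests auditable, a minimal vendor dossier tracker can record which artifacts each vendor has actually delivered. This is a sketch; the artifact names mirror the request list above and are our own labels, not official terminology.

```python
from dataclasses import dataclass, field

# The five artifacts mirror the request list above; names are our own.
REQUIRED_ARTIFACTS = (
    "eu_declaration_of_conformity",
    "technical_documentation",
    "instructions_for_use",
    "bias_testing_results",
    "conformity_assessment_log",
)

@dataclass
class VendorDossier:
    vendor: str
    system: str
    received: set[str] = field(default_factory=set)

    def missing(self) -> list[str]:
        return [a for a in REQUIRED_ARTIFACTS if a not in self.received]

    def deployable(self) -> bool:
        # Per the rule above: no EU Declaration of Conformity,
        # no high-risk deployment.
        return "eu_declaration_of_conformity" in self.received

dossier = VendorDossier("Acme HR AI", "resume-screener")
dossier.received.add("technical_documentation")
print(dossier.deployable(), dossier.missing())
```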
2. Risk Management System
High-risk AI providers must maintain an ongoing risk management system throughout the lifecycle of the system:
- Identify and analyze known and foreseeable risks
- Test and evaluate the system against those risks
- Adopt risk mitigation measures
- Communicate residual risks to users/deployers
For deployers, this means: document the risks specific to your deployment context, add deployment-specific mitigations (additional human checks, restricted use cases), and review risk documentation when the vendor updates the system.
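One way to keep that deployer-side documentation honest is a simple risk register. The structure below is our own illustration, not a mandated format; the example row is hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DeploymentRisk:
    """One row of a deployer-side risk register (structure is ours, not mandated)."""
    risk: str            # known or foreseeable risk in *your* context
    mitigation: str      # deployment-specific control
    residual: str        # what remains after mitigation
    last_reviewed: date  # re-review whenever the vendor ships an update

register = [
    DeploymentRisk(
        risk="Screener under-ranks resumes with career gaps",
        mitigation="Human review of all rejections below threshold score",
        residual="Reviewer anchoring on the AI score",
        last_reviewed=date(2026, 5, 1),
    ),
]
```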
3. Data Governance
Training, validation, and testing data for high-risk AI must be:
- Relevant, representative of the intended use case
- Free from errors to the degree possible
- Appropriate for the geographic, contextual, and demographic scope of deployment
- Screened for known biases
Deployers are not responsible for the vendor's training data — but you are responsible for ensuring the system is appropriate for your specific deployment context, including demographic representativeness for your affected population.
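A rough way to operationalize this is to compare the demographic mix of your affected population against the subgroups covered in the vendor's bias testing. The sketch below is a simplified illustration; the group labels and the materiality threshold are assumptions you would set for your own context.

```python
def coverage_gaps(
    deployment_mix: dict[str, float],  # share of each subgroup you affect
    evaluated_groups: set[str],        # subgroups in the vendor's bias tests
    min_share_requiring_evidence: float = 0.02,  # illustrative threshold
) -> list[str]:
    """Return subgroups material to your deployment but absent from vendor testing."""
    return [
        group
        for group, share in deployment_mix.items()
        if share >= min_share_requiring_evidence and group not in evaluated_groups
    ]

gaps = coverage_gaps(
    deployment_mix={"18-24": 0.15, "25-54": 0.70, "55+": 0.15},
    evaluated_groups={"18-24", "25-54"},
)
print(gaps)  # ['55+'] -> ask the vendor for evidence, or restrict use
```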
4. Technical Documentation
Providers must maintain technical documentation covering:
- Description of the AI system and its intended purpose
- Description of the components and architecture
- Training methodology and data used
- Performance metrics and evaluation results
- Known limitations and foreseeable misuse
- Changes made to the system over its lifecycle
As a deployer, request this documentation from vendors. The absence of technical documentation is a red flag — either the vendor has not completed it, or they are unwilling to share it.
5. Human Oversight
High-risk AI systems must be designed to allow effective human oversight. This is not a procedural checkbox — it means:
- The system can be monitored during operation
- Humans can intervene, override, or stop the system
- Affected individuals can request human review of AI-assisted decisions
- Human reviewers have access to the relevant data and model reasoning, not just the AI's output
A "human in the loop" who only sees the AI score without access to the underlying data or reasoning does not satisfy the human oversight requirement. The human must be able to exercise independent judgment.
6. Accuracy, Robustness, and Cybersecurity
High-risk AI must achieve appropriate levels of accuracy for the intended purpose, be robust against errors and inconsistencies, and be resilient to attempts to manipulate its outputs.
For deployers, this means: test the AI in your specific context before deployment, document acceptable accuracy thresholds, and establish monitoring to detect performance degradation.
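A simple rolling-window monitor against your documented threshold can serve as the degradation check. This sketch assumes you can label after the fact whether individual AI outputs were correct; the threshold and window size are yours to set from pre-deployment testing.

```python
from collections import deque

class AccuracyMonitor:
    """Rolling-window accuracy check against a documented threshold (sketch)."""

    def __init__(self, threshold: float, window: int = 500):
        self.threshold = threshold           # from your pre-deployment testing
        self.outcomes = deque(maxlen=window)

    def record(self, ai_was_correct: bool) -> None:
        self.outcomes.append(ai_was_correct)

    def degraded(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False                     # not enough data yet
        return sum(self.outcomes) / len(self.outcomes) < self.threshold

monitor = AccuracyMonitor(threshold=0.90)
for correct in [True] * 440 + [False] * 60:
    monitor.record(correct)
print(monitor.degraded())  # True -> investigate, pause, or escalate to vendor
```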
7. Transparency and Individual Notice
Deployers of high-risk AI must:
- Inform affected individuals that they are subject to a high-risk AI system decision
- Provide a meaningful explanation of the decision
- Inform affected individuals of their right to request human review
This obligation applies at the point of the decision — not buried in terms of service. The notice must be understandable to a person without AI technical knowledge.
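A minimal notice generator is one way to keep this language consistent across decision points. The template wording below is our illustrative draft, not official EU AI Act language; have counsel review whatever you ship.

```python
# Illustrative draft wording -- not official EU AI Act template language.
NOTICE_TEMPLATE = (
    "This decision about your {decision_type} was made with the assistance of "
    "an AI system ({system_name}). Key factors: {key_factors}. "
    "You have the right to request a human review of this decision. "
    "To do so, contact {contact}."
)

def individual_notice(decision_type: str, system_name: str,
                      key_factors: list[str], contact: str) -> str:
    return NOTICE_TEMPLATE.format(
        decision_type=decision_type,
        system_name=system_name,
        key_factors="; ".join(key_factors),
        contact=contact,
    )

print(individual_notice(
    decision_type="loan application",
    system_name="credit scoring model",
    key_factors=["debt-to-income ratio", "length of credit history"],
    contact="reviews@example.com",  # hypothetical contact address
))
```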
8. Post-Market Monitoring
After deployment, providers must monitor real-world performance and report serious incidents. Deployers must cooperate with post-market monitoring obligations and report serious incidents to providers promptly.
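A lightweight incident record keeps the deployer side of this obligation auditable. The fields below are our own sketch; check Article 73 and your vendor contract for the authoritative reporting requirements and timelines.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SeriousIncident:
    """Deployer-side incident record (fields are ours, not mandated)."""
    system: str
    occurred_at: datetime
    description: str
    harm: str                     # e.g. wrongful denial of credit, safety impact
    reported_to_provider: bool = False
    provider_reference: str = ""  # ticket/case ID once the vendor acknowledges

incident_log: list[SeriousIncident] = []
```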
9. EU AI Database Registration
Before placing a high-risk AI system on the EU market or putting it into service, it must be registered in the EU AI public database. Registration includes: system description, intended purpose, geographic scope, and responsible party contact information.
The EU AI database is publicly searchable. Registration is not optional — it is a prerequisite for compliant deployment of high-risk systems.
General-Purpose AI Models (GPAI)
The EU AI Act has separate provisions for General-Purpose AI (GPAI) models — large foundation models used for a wide range of tasks (GPT-4, Claude, Gemini, Llama, etc.).
GPAI providers (OpenAI, Anthropic, Google, Meta, etc.) must:
- Provide technical documentation about training data, energy use, and capabilities
- Comply with EU copyright law (training data transparency)
- Publish model evaluation results
GPAI models with "systemic risk" (estimated training compute > 10²⁵ FLOPs — current threshold) have additional obligations including adversarial testing, incident reporting, and cybersecurity measures.
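For a back-of-envelope sense of where that threshold sits, the common scaling-literature approximation of dense-transformer training compute (roughly 6 × parameters × training tokens) is useful. This is an estimate for intuition only, not the Act's measurement method.

```python
# Back-of-envelope check against the 10^25 FLOPs systemic-risk threshold,
# using the common "6 x parameters x training tokens" approximation for
# dense-transformer training compute (an estimate, not the Act's method).
SYSTEMIC_RISK_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

# e.g. a 70B-parameter model trained on 15T tokens (hypothetical figures):
estimate = training_flops(70e9, 15e12)
print(f"{estimate:.2e}", estimate > SYSTEMIC_RISK_FLOPS)
# 6.30e+24 False -> below the current threshold on this estimate
```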
What this means for deployers using GPAI APIs: The GPAI obligations fall on the model provider, not on you as a deployer. Your obligation is to use the GPAI in a way that is consistent with the provider's intended use and documentation. If you are building a high-risk AI application on top of a GPAI model, your application may be high-risk even if the underlying model is not.
Prohibited AI — Know What You Cannot Build
Before deploying any AI system, confirm it is not in the prohibited category. Prohibited AI is banned regardless of safeguards or documentation:
Real-time remote biometric ID: Using AI to identify people from live video feeds in public spaces is prohibited for law enforcement except in narrow circumstances (terrorism, abduction of a child, prosecution of specific serious crimes, subject to judicial authorization). Commercial real-time remote biometric identification is not covered by this specific law-enforcement prohibition, but it falls into the high-risk biometrics category under Annex III and faces significant GDPR constraints.
Emotion recognition in the workplace and education: AI systems that infer emotional states of workers or students are prohibited. Using facial recognition or physiological signal analysis to assess employee engagement, stress, or attitude is a prohibited application.
Social scoring: Any AI system used to evaluate or classify individuals based on their social behavior and assign them a score that affects their access to services, opportunities, or public life is prohibited.
Manipulative AI: AI that uses subliminal techniques to distort behavior, or that exploits psychological vulnerabilities to manipulate users into decisions against their interests, is prohibited.
If any of your AI tools' use cases could fall into these categories, get legal review before deploying.
Compliance Timeline and Deadlines
| Obligation | Enforcement Date |
|---|---|
| Prohibited AI ban | February 2, 2025 |
| GPAI model obligations | August 2, 2025 |
| High-risk AI obligations (Annex III) | August 2, 2026 |
| High-risk AI for regulated products | August 2, 2027 |
| EU AI Office enforcement operations | Ongoing from 2025 |
Important: The EU Digital Omnibus proposal would extend the August 2026 deadline to August 2027 for many high-risk AI systems. As of April 2026, this proposal has not been signed into law. The EU AI Act deadline extension guide tracks the legislative status. The safe assumption for compliance planning: August 2026 is the operative deadline.
Colorado AI Act: For US teams, the Colorado AI Act SB 24-205 uses similar high-risk categories with a June 30, 2026 initial compliance deadline — a month before the EU AI Act. The Colorado AI Act compliance guide covers the required transparency statement and individual notice templates.
High-Risk AI by Sector
Employment AI
AI used in hiring, promotion, termination, task assignment, or worker monitoring is automatically high-risk. This classification applies regardless of whether the decision is fully automated or AI-assisted.
What you must do before deploying hiring AI:
- Obtain vendor EU Declaration of Conformity
- Implement candidate disclosure in job postings
- Establish a mechanism for candidates to request human review
- Document bias testing results for your applicant pool
- Ensure human reviewers can meaningfully override AI recommendations
Full implementation guide: HR AI Governance: EU AI Act and EEOC Requirements
Healthcare AI
AI used as a safety component of a medical device or in clinical decision support affecting patient care is high-risk. AI classified as a medical device under EU MDR/IVDR has additional obligations.
What you must do before deploying clinical AI:
- Classify the AI system against both EU AI Act Annex III and EU MDR criteria
- Obtain vendor conformity documentation for each framework
- Implement clinician override mechanisms
- Document performance metrics by patient demographic subgroup
Full implementation guide: Healthcare AI Governance: HIPAA, EU AI Act, and FDA Requirements
Financial Services AI
AI used in credit scoring, insurance risk assessment, or determining access to financial services is high-risk (Annex III Section 5b). This applies to consumer lending AI, BNPL decisioning, mortgage underwriting AI, and insurance pricing models.
What you must do before deploying credit AI:
- Classify your credit AI against Annex III Section 5b
- Obtain vendor conformity documentation and bias testing results
- Implement applicant notification that AI was used
- Provide a meaningful explanation of AI credit decisions with specific reasons (the same requirement as CFPB adverse action notices); see the sketch after this list
- Implement human review mechanism for applicants who request reconsideration
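To illustrate what "specific reasons" can look like in practice, here is a sketch that maps the most negative per-feature score contributions (for an additive model, or SHAP-style attributions supplied by your vendor) to human-readable reasons. All feature names and reason text are illustrative assumptions.

```python
# Illustrative mapping from model features to human-readable reasons.
REASON_TEXT = {
    "debt_to_income": "Debt-to-income ratio too high",
    "credit_history_months": "Length of credit history too short",
    "recent_delinquencies": "Recent delinquencies on file",
}

def adverse_action_reasons(contributions: dict[str, float], top_n: int = 2) -> list[str]:
    """Most negative score contributions -> most specific reasons."""
    negative = sorted(
        (item for item in contributions.items() if item[1] < 0),
        key=lambda item: item[1],
    )
    return [REASON_TEXT.get(name, name) for name, _ in negative[:top_n]]

print(adverse_action_reasons(
    {"debt_to_income": -0.42, "credit_history_months": -0.10,
     "recent_delinquencies": -0.31, "income": 0.25}
))
# ['Debt-to-income ratio too high', 'Recent delinquencies on file']
```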
Full implementation guide: Fintech AI Governance: CFPB, FCRA, and EU AI Act Requirements
EU AI Act vs. Other Frameworks
The EU AI Act operates alongside, not instead of, other applicable regulations:
| Framework | What it covers | Relationship to EU AI Act |
|---|---|---|
| GDPR | Personal data processing | AI processing personal data must satisfy both; EU AI Act does not replace GDPR |
| EU MDR/IVDR | Medical devices | AI medical devices must comply with both; EU MDR is more specific for device safety |
| DORA (financial sector) | ICT risk management for financial entities | AI used in financial systems may need to satisfy both |
| NIS2 | Cybersecurity for critical infrastructure | AI in critical infrastructure must satisfy both |
| Colorado AI Act | US state law for high-risk AI | Similar categories; earlier deadline (June 30, 2026) |
For the comparison of EU AI Act and NIST AI RMF — and how to satisfy both with a single documentation set — see the EU AI Act vs NIST AI RMF guide.
What to Ask Your AI Vendors Right Now
For every AI tool your team uses in a high-risk category, request the following in writing before August 2026:
1. "Is this AI system classified as high-risk under EU AI Act Annex III?" The vendor should be able to answer this definitively.
2. "Can you provide your EU Declaration of Conformity for this system?" Required for high-risk systems placed on the EU market.
3. "What bias testing have you conducted, across which demographic groups, with what results?" Not just a "we test for bias" claim; ask for the actual methodology and results.
4. "How does the system support human oversight and the right to request human review?" The mechanism must exist at the vendor level before you can implement it.
5. "What individual notification do you support for affected individuals?" You need to inform individuals that AI was used; the vendor should have templates or support for this.
6. "Is this system registered in the EU AI database?" Required before EU deployment; ask for the registration ID.
If a vendor cannot provide a definitive classification (question 1), an EU Declaration of Conformity (question 2), and substantive bias testing results (question 3), they are not compliant for high-risk EU deployment, and you cannot legally use them in that context after August 2026.
Use the AI vendor due diligence checklist for the full 30-question review framework, including EU AI Act-specific questions for each category.
EU AI Act Compliance Checklist
Classification (Do First)
- List every AI tool in use, including embedded AI in SaaS products
- For each tool: does it affect EU residents? If no, likely out of scope for high-risk provisions
- For each tool: does it fall into any Annex III category (employment, healthcare, financial services, education, critical infrastructure, law enforcement, border control, biometrics)?
- For each Annex III match: confirm high-risk classification
- For GPAI models used (OpenAI API, Claude API, etc.): confirm you are using them within the provider's intended use documentation
Vendor Documentation (High-Risk Tools)
- Request EU Declaration of Conformity from each high-risk AI vendor
- Request technical documentation: model description, training data, performance metrics, limitations
- Request bias testing methodology and results
- Request instructions for use and known limitations
- Confirm vendor has registered the system in the EU AI database
- Document vendor responses and retain for audit
Deployment Controls (High-Risk Tools)
- Implement individual notice at point of AI-assisted decision
- Implement human review mechanism for affected individuals
- Confirm human reviewers have access to data and reasoning, not just AI output
- Document accuracy thresholds and monitoring plan
- Train relevant staff on human oversight obligations
Prohibited AI Check
- Confirm no deployed AI system falls into a prohibited category
- Specifically verify: no real-time biometric ID in public spaces, no emotion recognition in workplace/education, no social scoring, no manipulative AI
Ongoing Obligations
- Set up post-market monitoring for high-risk systems
- Establish serious incident reporting process (to vendor and EU AI Office if applicable)
- Schedule annual review of Annex III classifications as AI use expands
- Monitor EU AI Office enforcement guidance for interpretive updates
References
- EU AI Act — Full text: artificialintelligenceact.eu
- EU AI Act — Annex III: High-risk AI system categories
- EU AI Act — Article 6: Classification rules for high-risk AI systems
- EU AI Act — Article 9: Risk management system
- EU AI Act — Article 13: Transparency and information to deployers
- EU AI Act — Articles 49–51: GPAI model obligations
- EU AI Database: euaidb.eu (registration portal for high-risk systems)
- Related: EU AI Act Deadline Extension — What to Do Now — legislative status of the Digital Omnibus proposal and what to defer vs. complete now
- Related: EU AI Act vs NIST AI RMF — how to build one compliance program that satisfies both frameworks
- Related: Colorado AI Act Compliance Deadline 2026 — US equivalent with June 30, 2026 deadline; required transparency statement and notice templates
- Related: HR AI Governance — Annex III employment category: conformity requirements, EEOC disparate impact, candidate disclosure templates
- Related: Healthcare AI Governance — Annex III safety components category: HIPAA, FDA SaMD, and EU AI Act clinical AI
- Related: Fintech AI Governance — Annex III financial services category: CFPB, FCRA, and EU AI Act credit scoring
- Related: AI Vendor Due Diligence Checklist — 30-question framework including EU AI Act-specific vendor questions
- Related: AI Governance for Small Teams: Complete Guide — the full governance framework that EU AI Act compliance fits into
