Your AI system is high-risk under the EU AI Act if it falls into one of the eight Annex III categories and influences decisions that affect people's rights, access to services, or significant outcomes. The determination is use-case specific, not technology specific — the same AI model can be high-risk in one context and not high-risk in another. Work through the 12 questions below to classify each AI system you deploy.
At a glance: If any answer in this checklist is yes, your AI system is likely high-risk under Annex III. High-risk classification means: conformity assessment, Annex IV technical documentation, human oversight mechanism, and EU registration required — all before August 2, 2026. Deployers of third-party AI systems have the same classification obligation as developers.
How to Use This Checklist
Answer each question for a specific AI system and deployment context. A general-purpose model used for two different tasks may require two separate assessments. Document your answers — the classification reasoning is itself part of Annex IV technical documentation.
Important: The questions below reflect the Annex III categories as they stand under the current EU AI Act. The Digital Omnibus negotiations may narrow certain categories, but the August 2026 deadline applies to the law as it exists now. Complete your assessment against current obligations, not proposed amendments.
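Because every answer must be documented per system and per deployment context, some teams keep the classification record in a machine-readable form alongside the written memo. A minimal sketch in Python; the class and field names are illustrative, not mandated by the Act:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AnnexIIIAssessment:
    """One classification record per AI system *and* deployment context."""
    system_name: str
    deployment_context: str
    # Answers keyed by question ID ("Q1".."Q12"); True means "yes".
    answers: dict[str, bool] = field(default_factory=dict)
    rationale: str = ""
    assessed_on: date = field(default_factory=date.today)

    @property
    def likely_high_risk(self) -> bool:
        # Any "yes" answer points to a likely Annex III high-risk classification.
        return any(self.answers.values())

# Example: the same model assessed for one specific use case.
screening = AnnexIIIAssessment(
    system_name="resume-ranker-v2",
    deployment_context="candidate screening (Q6)",
    answers={f"Q{i}": (i == 6) for i in range(1, 13)},
    rationale="Scores applicants; influences hiring decisions.",
)
print(screening.likely_high_risk)  # True
```

A general-purpose model used for two tasks would produce two such records with different `deployment_context` values, matching the one-assessment-per-use-case rule above.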
Section 1: Biometric Identification and Categorization
Q1. Does your AI system identify individuals by their biometrics in real time in publicly accessible spaces?
Real-time remote biometric identification in publicly accessible spaces (face recognition, gait analysis) for law enforcement purposes is prohibited under Article 5, subject to narrow exceptions. Outside that prohibition, remote biometric identification systems are high-risk under Annex III, point 1, regardless of whether the operator is public or private.
Answer yes if: Your system identifies or verifies individuals in real time using biometric data in any space accessible to the public.
Not in scope: Biometric identification used strictly for user authentication in a controlled private system (e.g., Face ID to unlock a phone).
Q2. Does your AI system categorize individuals based on biometric data to infer protected characteristics?
Biometric categorization that deduces race, political opinions, religious or philosophical beliefs, sex life, sexual orientation, or trade union membership from biometric data is prohibited outright under Article 5. Biometric categorization according to other sensitive or protected attributes is high-risk under Annex III, as is emotion recognition from biometric signals (and emotion inference in workplaces and educational institutions is itself prohibited).
Answer yes if: Your system uses facial expressions, voice patterns, or other biometric signals to assign individuals to categories based on sensitive attributes, or to infer emotional states. If the system deduces one of the Article 5 characteristics above, treat it as prohibited, not merely high-risk.
Section 2: Critical Infrastructure
Q3. Does your AI system manage or influence safety-critical components of energy, water, gas, heating, or transport networks?
AI used to manage, monitor, or optimize critical infrastructure — power grids, water treatment, gas distribution, heating networks, traffic management — is Annex III high-risk when failure or malfunction could cause significant harm.
Answer yes if: Your AI system makes or influences operational decisions in any critical infrastructure sector, including automated fault detection that triggers shutdowns or rerouting.
Section 3: Education and Vocational Training
Q4. Does your AI system determine or influence access to educational institutions or vocational training?
AI used in admissions decisions, placement decisions, or certification processes for educational programs is high-risk. This includes systems that score applications, rank candidates, or flag individuals for rejection.
Answer yes if: Your AI system influences who is admitted to, continues in, or graduates from an educational or vocational training program — even if a human makes the final decision.
Common examples: Automated essay scoring used in admissions, student risk prediction models that affect academic progression, AI that screens vocational training applicants.
Q5. Does your AI system monitor, assess, or detect prohibited behavior by students?
AI used to monitor student behavior, detect cheating, or enforce academic integrity rules is high-risk when it influences outcomes with significant effects (suspension, expulsion, grade invalidation).
Answer yes if: Your AI system monitors students during exams, detects academic dishonesty, or flags student behavior for disciplinary review.
Section 4: Employment, Worker Management, and Access to Self-Employment
Q6. Does your AI system influence recruitment, selection, promotion, or termination decisions?
AI used in HR decisions affecting employment status is among the most clearly defined Annex III high-risk categories. This includes resume screening, candidate ranking, interview analysis, performance evaluation, and workforce reduction tools.
Answer yes if: Your AI system processes applications, scores candidates, recommends hiring decisions, evaluates employee performance, or identifies candidates for termination or promotion.
Important for deployers: If you use a third-party HR AI tool (such as an ATS with AI scoring, a video interview analysis tool, or an AI performance management platform), you are a deployer with Article 26 obligations. The vendor's high-risk classification does not substitute for your own assessment. See HR AI governance requirements for the full deployer checklist.
Q7. Does your AI system monitor or evaluate worker behavior, output, or compliance at scale?
AI used for continuous worker monitoring — tracking keystrokes, measuring output, monitoring location, analyzing communication patterns — is high-risk when used to make decisions affecting employment or to enforce compliance.
Answer yes if: Your AI system continuously monitors employee activity and produces outputs used in performance evaluation, disciplinary action, or employment decisions.
Section 5: Access to Essential Private Services and Benefits
Q8. Does your AI system influence creditworthiness assessment or credit scoring?
AI used in consumer credit decisions — loan applications, credit limit determinations, interest rate decisions — is high-risk under Annex III. This applies to any AI that processes data about individuals to assess their creditworthiness.
Answer yes if: Your AI system scores loan applicants, determines credit limits, prices credit products, or flags accounts for review based on behavioral or financial data. See fintech AI governance obligations for specific requirements.
Q9. Does your AI system influence access to health insurance, life insurance, or public benefit services?
AI used to determine eligibility, pricing, or coverage for insurance products or to determine eligibility for public benefits is high-risk when the decisions have significant effects on individuals.
Answer yes if: Your AI system processes individual data to make or recommend decisions about insurance coverage, benefit eligibility, or claims processing for health, life, or disability insurance.
Section 6: Law Enforcement
Q10. Does your AI system support law enforcement decisions — profiling, evidence assessment, risk prediction, or resource deployment?
AI used to support law enforcement activities — predictive policing, risk assessment tools used in criminal proceedings, polygraph-style lie detection, crime prediction — is high-risk. Annex III covers systems used by or on behalf of law enforcement authorities, which brings private contractors acting for those authorities into scope alongside the authorities themselves.
Answer yes if: Your AI system is used to assess risk of criminal behavior, support criminal investigations, evaluate evidence reliability, or guide law enforcement resource deployment.
Section 7: Migration, Asylum, and Border Control
Q11. Does your AI system influence migration, asylum, or visa decisions?
AI used to assess visa applications, asylum claims, or border crossing requests is high-risk when the outputs influence decisions about admission, rejection, or risk classification of individuals.
Answer yes if: Your AI system processes data about individuals to support decisions about their right to enter, remain in, or be returned from a country.
Section 8: Administration of Justice and Democratic Processes
Q12. Does your AI system assist in legal interpretation, dispute resolution, or influence democratic processes?
AI used to assist courts, tribunals, or democratic institutions — case outcome prediction, legal research tools used to influence decisions, election-related AI systems — is high-risk.
Answer yes if: Your AI system assists in legal interpretation, dispute resolution with binding outcomes, or is used in election-related processes that influence democratic participation.
What Your Answers Mean
All answers are No: Your AI system does not fall under Annex III high-risk categories based on the information assessed. Document this finding with the date and rationale — the classification memo is evidence of due diligence. Revisit the assessment if your use case changes.
One or more answers are Yes: Your AI system is likely high-risk under Annex III. Complete the following before August 2, 2026:
- Conformity assessment — internal-control self-assessment for most Annex III systems; a notified-body (third-party) assessment applies to biometric systems under Annex III, point 1, where harmonised standards are not applied in full.
- Annex IV technical documentation — documentation of purpose, training data, performance metrics, human oversight, monitoring, and transparency measures.
- Human oversight mechanism — a named person with the ability to monitor, override, and shut down the system; documented in the technical file.
- EU AI Act registration — high-risk AI systems must be registered in the EU database before deployment.
- Post-market monitoring — a process for tracking performance, collecting incidents, and updating the system in response to findings.
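The gating logic above — any "yes" triggers the full set of pre-deployment obligations — can be expressed as a small helper. A hypothetical sketch; the obligation strings paraphrase the bullets above:

```python
OBLIGATIONS = [
    "Conformity assessment",
    "Annex IV technical documentation",
    "Human oversight mechanism",
    "EU database registration",
    "Post-market monitoring",
]

def required_actions(answers: dict[str, bool]) -> list[str]:
    """Return the pre-deployment obligations triggered by any 'yes' answer.

    An empty list means: document the negative finding with date and
    rationale, and revisit the assessment if the use case changes.
    """
    return list(OBLIGATIONS) if any(answers.values()) else []

print(required_actions({"Q6": True}))   # all five obligations
print(required_actions({"Q6": False}))  # []
```

Note the all-or-nothing shape: a single "yes" on any of the twelve questions pulls in every obligation, which is why the classification step is worth doing carefully per use case.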
Classification Edge Cases
The system is third-party AI I use through an API or SaaS product. You are a deployer. Article 26 imposes independent obligations on deployers of high-risk AI, including: verifying the system has an EU Declaration of Conformity, ensuring use complies with the developer's instructions, implementing human oversight, and informing affected individuals. Contact your vendor to obtain their technical documentation and conformity declaration. If they cannot provide it, treat that as a risk signal.
The system informs decisions but a human makes the final call. Human-in-the-loop does not automatically remove high-risk classification. If the human routinely follows the AI recommendation without independent assessment, regulators may treat the process as effectively automated. Document the human oversight mechanism specifically: what information does the human reviewer receive, how long do they have to review, and what rate of overrides is occurring.
The system is used for internal decisions only, not consumer-facing. Internal use does not remove Annex III classification. Employment AI (Q6, Q7) explicitly covers workforce management. Many Annex III categories apply to decisions affecting employees, not just customers.
The system was deployed before the EU AI Act came into force. Timing matters here. Under Article 111, high-risk systems placed on the market or put into service before August 2, 2026 fall under the Act only if their design changes significantly after that date; systems intended for use by public authorities must be brought into compliance by August 2, 2030 in any case. Systems placed on the market from August 2, 2026 onward must comply from the start, and relying on legacy status is fragile: any significant modification triggers the full obligations.
Implementation Timeline for High-Risk Systems
| Milestone | When |
|---|---|
| Complete Annex III classification for all AI systems | Immediately |
| Identify vendor conformity documentation for third-party AI | Within 2 weeks |
| Begin Annex IV technical documentation drafting | Within 4 weeks |
| Human oversight mechanism documented | Within 6 weeks |
| EU database registration | Before August 2, 2026 |
| Post-market monitoring process active | By August 2, 2026 |
Full Compliance Checklist
- Annex III classification completed for every AI system in use
- Classification reasoning documented with date and rationale
- For systems classified as high-risk: vendor EU Declaration of Conformity obtained or requested
- Annex IV technical documentation drafted (purpose, training data, performance metrics, transparency)
- Human oversight mechanism documented (named oversight person, override procedure, monitoring cadence)
- EU AI Act database registration completed (or scheduled before August 2, 2026)
- Post-market monitoring process active with incident reporting path
- Affected individuals informed of AI decision-making where required
- Quarterly review scheduled to reassess classification as use cases evolve
Related Guidance
- EU AI Act compliance guide for small teams — complete checklist
- EU AI Act GPAI obligations: ChatGPT, Claude, Gemini
- EU AI Act digital omnibus deadline extension — what changed
- HR AI governance: EU AI Act high-risk hiring tools
- Fintech AI governance: credit scoring and CFPB requirements
- Healthcare AI governance: HIPAA, EU AI Act, and FDA SaMD
Classification is use-case specific. Consult legal counsel for binding determinations. This checklist reflects Annex III as in force under EU AI Act (Regulation 2024/1689). The Digital Omnibus may modify certain provisions; monitor EUR-Lex for final text.
