Answer these 10 questions in order. Stop at the first "Yes" — that determines your tier.
Question 1: Does your AI system do something the EU AI Act explicitly bans?
The EU AI Act prohibits specific AI practices outright. These are banned for all providers and deployers, effective February 2, 2025.
Banned AI practices:
- Real-time remote biometric identification of people in publicly accessible spaces for law enforcement purposes (narrow exceptions apply, e.g. searching for victims of specific serious crimes)
- Social scoring systems — using AI to rate individuals based on social behavior or personal characteristics for government or equivalent purposes
- AI that exploits psychological vulnerabilities, age-related weaknesses, or disabilities to manipulate behavior in harmful ways
- Predictive policing systems that assess an individual's risk of committing a crime based solely on profiling or personality traits, not objective facts about actual behavior
- Emotion recognition in workplaces or educational institutions (exceptions for medical and safety purposes)
- AI that infers sensitive attributes (race, political opinion, sexual orientation) from biometric data in most contexts
If yes → Outcome: UNACCEPTABLE RISK
You cannot legally deploy this system for EU residents. Period. The penalty for deploying a prohibited AI system is up to €35 million or 7% of global annual turnover, whichever is higher, under Article 99.
If no → Continue to Question 2.
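The Article 99 ceiling works as the higher of two numbers, which is worth seeing concretely. A minimal illustrative calculation (the function name is ours, not from the Act):

```python
def max_article99_fine(annual_turnover_eur: float) -> float:
    """Upper bound of an Article 99 fine for a prohibited-practice
    violation: EUR 35 million or 7% of worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# A company with EUR 2 billion turnover faces a ceiling of EUR 140 million,
# because 7% of turnover exceeds the EUR 35 million floor.
print(max_article99_fine(2_000_000_000))  # → 140000000.0
```

For smaller companies the flat €35 million figure dominates: 7% of €100 million is only €7 million, so the ceiling stays at €35 million.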
Question 2: Does your AI system appear on the Annex III High-Risk List?
Annex III lists the specific AI application areas that are automatically classified as high-risk. This is not a judgment call — if your system does what Annex III describes, it is high-risk.
Annex III High-Risk categories:
| Annex III area | Examples |
|---|---|
| Biometric identification | Face recognition, gait analysis, fingerprint matching |
| Critical infrastructure | AI managing power grids, water systems, transport networks |
| Education | AI for student admissions, scoring, assessments, monitoring |
| Employment | AI resume screening, job matching, promotion/termination decisions |
| Essential services | AI credit scoring, insurance risk rating, mortgage eligibility |
| Law enforcement | AI predicting criminal risk, evidence analysis, person identification |
| Migration and border control | AI assessing asylum applications, border risk scoring |
| Administration of justice | AI assisting judicial decisions, prioritizing cases |
If your AI system is used in any of these areas for EU residents → Outcome: HIGH-RISK
If no → Continue to Question 3.
Question 3: Is your AI system a safety component of a regulated product?
If your AI is embedded in a product already regulated under EU law — a medical device, vehicle, aviation system, marine equipment, or similar — and performs a safety function, it is high-risk regardless of Annex III.
Examples:
- AI in a pacemaker or insulin pump that adjusts dosage
- AI in a vehicle's collision avoidance or autonomous driving system
- AI in industrial machinery that controls safety shutoffs
If yes → Outcome: HIGH-RISK
If no → Continue to Question 4.
Question 4: Is your AI a general-purpose AI model (GPAI)?
A general-purpose AI model is trained on large amounts of data and can perform a wide range of tasks — foundation models such as GPT-4, Claude, and Gemini. If you are building or fine-tuning a GPAI model, you have GPAI obligations (separate from the four risk tiers). If you are using a GPAI model via API, these obligations fall on the model provider, not you.
If you are building or fine-tuning a GPAI model: See EU AI Act GPAI rules — this self-assessment covers deployer obligations, not GPAI provider obligations.
If you are using a GPAI via API: Continue to Question 5.
Question 5: Does your AI system interact with people who might not know it is AI?
Answer yes if your AI system does any of the following:
- Communicates with users via text, voice, or video in a way that could be mistaken for human communication
- Generates synthetic images, audio, or video (deepfakes) presented as real content
- Creates content that could be mistaken for human-authored work in a context where origin matters
If yes → Outcome: LIMITED-RISK
You must disclose that the interaction is AI-generated. No conformity assessment required. See compliance steps below.
If no → Continue to Question 6.
Question 6: Is your AI system customer-facing (used by people outside your organization)?
AI systems used only internally by your own employees for productivity tasks — drafting emails, summarizing documents, generating code, analyzing data — are generally Minimal-Risk if they do not fall into Annex III categories.
AI systems deployed to external customers, users, or members of the public face more scrutiny, particularly if they influence decisions those people make.
If external and involves any consequential decision → review Questions 2–3 again carefully.
If internal only, no Annex III overlap → Continue to Question 7.
Question 7: Does your AI system affect access to opportunities, services, or rights?
If your AI influences (even partially) whether a person:
- Gets a job, promotion, or performance rating
- Receives a loan, insurance, or financial service
- Is admitted to an educational program
- Accesses government benefits or services
- Is subject to law enforcement scrutiny
...then this system is almost certainly Annex III high-risk, and Question 2 should have caught it. If you answered No to Question 2 and Yes here, re-read the Annex III table in Question 2 more carefully before proceeding.
Still no after re-checking → Continue to Question 8.
Question 8: Does your AI system process biometric data?
Biometric data includes facial images, fingerprints, voice patterns, iris scans, gait analysis, and behavioral characteristics that can identify individuals. If your system processes this data for identification or categorization purposes, it is almost certainly Annex III high-risk.
If yes → likely HIGH-RISK — re-check Question 2.
If no → Continue to Question 9.
Question 9: Are any EU residents affected by your AI system's outputs?
If your company has no EU customers, no EU employees, and your AI system's outputs are never seen by or used to make decisions about EU residents, the EU AI Act does not apply to your current deployment.
If no EU residents affected → Outcome: OUT OF SCOPE (for now)
Document this assessment. If your EU reach changes, re-run this assessment.
If EU residents are affected → Continue to Question 10.
Question 10: What does your AI actually do?
If you have reached Question 10, your system is not banned, not on the Annex III list, not a safety component, not a GPAI you are building, and does not interact with people in a deceptive way.
Likely outcome: MINIMAL-RISK
Most AI used for internal productivity lands here. Examples: AI writing assistants, AI code completers, AI that summarizes meeting notes, AI that generates marketing copy for human review, AI chatbots clearly labeled as AI.
No mandatory compliance obligations under the EU AI Act. Voluntary codes of conduct exist but are not required.
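The ten questions above form a stop-at-first-yes decision tree, which can be sketched in code. This is a simplified illustration of the flow, not legal logic — the `Answers` structure and field names are hypothetical, and Questions 6–8 (which only route you back to earlier questions) are folded into the earlier checks:

```python
from dataclasses import dataclass

@dataclass
class Answers:
    """Yes/no answers to the screening questions (hypothetical structure)."""
    prohibited_practice: bool      # Q1: does something the Act bans
    annex_iii_area: bool           # Q2: on the Annex III high-risk list
    safety_component: bool         # Q3: safety component of a regulated product
    building_gpai: bool            # Q4: building or fine-tuning a GPAI model
    interacts_undisclosed: bool    # Q5: could be mistaken for human/real content
    eu_residents_affected: bool    # Q9: outputs reach people in the EU

def classify(a: Answers) -> str:
    """Walk the questions in order; stop at the first 'yes'."""
    if a.prohibited_practice:
        return "UNACCEPTABLE RISK"
    if a.annex_iii_area or a.safety_component:
        return "HIGH-RISK"
    if a.building_gpai:
        return "GPAI provider obligations (outside this assessment)"
    if a.interacts_undisclosed:
        return "LIMITED-RISK"
    if not a.eu_residents_affected:
        return "OUT OF SCOPE"
    return "MINIMAL-RISK"
```

For example, an internal meeting summarizer with EU employees answers no to everything except Q9 and lands on Minimal-Risk; a clearly-unlabeled customer chatbot stops at Q5 with Limited-Risk.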
Your Outcome and Next Steps
Outcome A: UNACCEPTABLE RISK
You must stop deployment. Review the prohibited practice list in Question 1 and identify what aspect of your system triggers the ban. In most cases, the system needs to be redesigned or abandoned — not adjusted.
Immediate actions:
- Halt deployment to EU residents
- Document the assessment in your AI governance record
- If you have already deployed, consult legal counsel about remediation and disclosure obligations
Outcome B: HIGH-RISK
You must complete conformity assessment before the high-risk obligations take effect on August 2, 2026. The obligations are:
| Obligation | What it means in practice |
|---|---|
| Risk management system | Documented process for identifying and mitigating AI risks — must be updated throughout the lifecycle |
| Technical documentation | Full technical record of the system: purpose, architecture, training data, performance, limitations |
| Data governance | Training and validation data must be relevant, representative, and free from bias that could cause discrimination |
| Transparency | Users must receive information about system capabilities and limitations |
| Human oversight | A human must be able to monitor, intervene, and override the system |
| Accuracy and robustness | System must be tested for accuracy, resilience to errors, and consistency |
| Cybersecurity | System must be protected against adversarial attacks |
| EU registration | High-risk systems must be registered in the EU AI Act database |
| EU Declaration of Conformity | Provider must issue a declaration that the system meets all requirements |
Next steps:
- Use the EU AI Act August 2026 compliance checklist for a step-by-step implementation guide
- If you are the deployer (using a third-party AI tool), request the EU Declaration of Conformity from your vendor
- If you are the provider (building the AI), you must complete conformity assessment yourself
Outcome C: LIMITED-RISK
Transparency obligations apply: AI chatbots must identify themselves as AI, and deepfakes must be labeled.
| System type | Required disclosure |
|---|---|
| AI chatbot or virtual assistant | Must inform users they are interacting with AI at the start of the interaction |
| Emotion recognition system | Must inform affected persons that the system is analyzing emotions |
| Deepfake video or audio | Must be labeled as artificially generated or manipulated |
| AI-generated text (some cases) | Must be labeled when presented as factual content in contexts affecting public discourse |
Implementation: Add a clear disclosure at the start of any AI-human interaction. A short banner ("You are chatting with an AI assistant") satisfies this obligation. No conformity assessment required.
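A minimal sketch of what that disclosure looks like in practice, assuming a simple chat-session wrapper (function names and the transcript structure are illustrative, not from any specific framework):

```python
AI_DISCLOSURE = "You are chatting with an AI assistant."

def start_session() -> list[str]:
    """Open a chat transcript with the AI disclosure as the very first
    message, before any user interaction begins."""
    return [AI_DISCLOSURE]

def record_turn(transcript: list[str], user_message: str, ai_reply: str) -> list[str]:
    """Append one user/assistant exchange to the transcript."""
    transcript.append(f"user: {user_message}")
    transcript.append(f"assistant: {ai_reply}")
    return transcript
```

The key design point is that the disclosure is emitted by `start_session`, so no code path can reach a user exchange without it.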
Outcome D: MINIMAL-RISK
No mandatory obligations. You are encouraged (not required) to follow a voluntary code of conduct. For governance purposes, document this assessment result so you can demonstrate you considered EU AI Act applicability.
Best practice actions (not required):
- Record this assessment result in your AI tool register
- Set a reminder to re-run this assessment if the AI system's purpose or scope changes
- Apply basic data hygiene (don't send unnecessary personal data to AI tools)
Document Your Assessment
For any AI system you assess, record:
AI System: [Name / tool / model]
Assessment date: [Date]
Assessed by: [Name]
EU residents affected: Yes / No
Annex III review: [Which categories reviewed, which apply]
Risk tier outcome: [Unacceptable / High-Risk / Limited-Risk / Minimal-Risk / Out of Scope]
Compliance actions required: [List or N/A]
Next review date: [Date — recommend annually or when system changes]
This record is evidence of governance due diligence. For high-risk systems, it is also required as part of the technical documentation.
References
- EU AI Act (Regulation (EU) 2024/1689) — EUR-Lex
- Annex III high-risk AI systems list — Article 6 and Annex III of Regulation (EU) 2024/1689
- EU AI Act prohibited practices — Article 5 of Regulation (EU) 2024/1689
- GPAI obligations — Articles 51–55 of Regulation (EU) 2024/1689
- Related: EU AI Act compliance guide for small teams
- Related: EU AI Act August 2026 compliance checklist
