The EU AI Act's August 2, 2026 deadline is 89 days away. Here is exactly what needs to be in place:
Providers and deployers of high-risk AI systems (Annex III) must have the following in place before August 2, 2026:
| Requirement | What it means | Who is responsible |
|---|---|---|
| Risk management system | Documented process for identifying, evaluating, and mitigating AI risks — ongoing, not one-time | Provider or deployer |
| Technical documentation | Full documentation of system design, training data, performance benchmarks, limitations | Provider primarily |
| Conformity assessment | Self-assessment verifying compliance, signed EU declaration of conformity | Provider |
| EU AI database registration | Register the system publicly before deployment | Provider (deployer in some cases) |
| Human oversight | Mechanisms allowing a human to monitor, intervene, and override the system | Deployer |
| Transparency to deployers | Provider must supply instructions on intended use, limitations, and oversight requirements | Provider |
Minimal-risk teams (most small teams): No mandatory obligations. If you run a customer-facing chatbot, disclose it is AI. That is all.
Already in effect (August 2025): General-purpose AI model (GPAI) rules — applies to providers of foundation models, not companies using them.
Step 1: Determine Your Risk Tier (15 Minutes)
Before doing anything else, confirm which tier you are in. The EU AI Act has four tiers — only one requires significant action by August 2026.
Are you high-risk (Annex III)? Your AI system is high-risk if it makes or substantially influences decisions in:
- Employment/HR: AI that screens resumes, ranks candidates, or evaluates employee performance
- Credit/essential services: AI that scores creditworthiness or risk for insurance, banking, or essential public services
- Education: AI that evaluates students for admission, assessment, or vocational training progression
- Biometric identification: AI that identifies people from images, voice, or behavioral data in real time
- Critical infrastructure: AI embedded in energy, water, transport, or digital infrastructure management
- Law enforcement: AI used to assess risk, predict crime, or evaluate evidence
- Migration and asylum: AI that evaluates applications or assesses travel document authenticity
- Democratic processes: AI that influences elections or political campaigns
If none of the above describes your AI use, you are likely minimal-risk. Your only obligation from August 2026 is chatbot disclosure.
The practical test: Does your AI system make or substantially influence a decision that has a significant effect on a person's access to employment, education, credit, or essential services? If yes, you are probably high-risk.
Step 2: Risk Management System
Deadline: In place before deployment, ongoing thereafter.
A risk management system under the EU AI Act is a documented, iterative process — not a one-time audit. It must cover the full lifecycle of your AI system.
Minimum required components:
- Risk identification — document potential harms your system could cause, including discrimination, privacy violations, and output errors
- Risk estimation and evaluation — assess likelihood and severity of each identified harm, accounting for the intended use population
- Risk control measures — document mitigations: data validation, human oversight steps, output review requirements, user notification procedures
- Residual risk assessment — after controls, document what risk remains and why it is acceptable
- Testing results — evidence of pre-deployment testing, including test data, metrics, and pass/fail thresholds
- Monitoring plan — how you will detect problems in production and who is responsible
Format: The EU AI Act does not specify a format. A structured document (Google Doc, Notion page, PDF) with version control and a named owner is sufficient for small teams.
Who owns it: For deployers using a third-party AI system (e.g., an AI hiring tool from a vendor), the vendor is responsible for the technical risk management. You, as the deployer, are responsible for documenting your use case, your oversight procedures, and your monitoring plan.
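The components above do not require any particular tooling, but keeping the register machine-readable makes version control and review trivial. A minimal sketch in Python (the structure and field names are illustrative, not a format the Act mandates):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One row in the risk register: a harm, its assessment, and its controls."""
    harm: str              # e.g. "biased ranking disadvantages a protected group"
    likelihood: str        # e.g. "low" / "medium" / "high"
    severity: str
    controls: list[str]    # mitigations applied before deployment
    residual_risk: str     # what remains after controls, and why it is acceptable
    owner: str             # named person responsible for monitoring this risk

@dataclass
class RiskRegister:
    system_name: str
    version: str
    last_reviewed: date
    entries: list[RiskEntry] = field(default_factory=list)

    def open_high_risks(self) -> list[RiskEntry]:
        """Entries still rated high after controls; these block deployment sign-off."""
        return [e for e in self.entries if e.residual_risk.lower().startswith("high")]
```

A named owner and a `last_reviewed` date per register are what turn a one-time audit into the ongoing process the Act expects.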
Step 3: Technical Documentation
Deadline: Completed before deployment.
Technical documentation is the evidence package that demonstrates your high-risk AI system meets EU AI Act requirements. For deployers using a third-party AI system, the provider should supply the core documentation — your responsibility is to supplement it with deployer-specific information.
What technical documentation must include (Article 11):
- General description of the system and its intended purpose
- Design specifications: architecture, development choices, algorithms
- Training, validation, and test data description — including steps to examine, clean, and filter data
- Performance benchmarks: accuracy metrics, failure rates, bias testing results
- Known and foreseeable risks
- Changes made to the system after initial conformity assessment
- EU declaration of conformity (signed)
For deployers relying on third-party AI systems: request this documentation from your vendor. If the vendor cannot supply it, that is a red flag for EU AI Act compliance.
Practical shortcut for small teams: If you are deploying an AI hiring or scoring tool from a vendor, ask the vendor directly:
"Can you provide your EU AI Act technical documentation package and EU declaration of conformity? We need this before deploying your system in the EU."
If the vendor does not have this ready by July 2026, assume their system is not compliant and evaluate alternatives.
Step 4: EU AI Database Registration
Deadline: Before deployment, and before August 2, 2026 for already-deployed systems.
All high-risk Annex III systems must be registered in the EU AI database before deployment in the EU. This is a public register maintained by the European Commission.
What you register:
- System name and description
- System provider (name, address, contact)
- Intended purpose and use context
- Risk management system summary
- EU declaration of conformity reference
Who registers:
- Providers (companies that develop the AI system, or place it on the market under their own name): register the system
- Deployers (businesses that use the system in their products or services): in some cases, notably public bodies, register their specific deployment
Where: The EU AI Act database is accessible at database.ec.europa.eu/ai-office (check current URL — the database was launched in 2025).
Timeline: Registration processing can take 2-4 weeks. Submit by July 1, 2026 to have confirmation before the August 2 deadline.
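A simple pre-submission check helps avoid losing a week to an incomplete application. The sketch below mirrors the registration fields listed above; the field names are illustrative placeholders, not the portal's actual schema:

```python
# Fields paraphrasing the registration requirements listed above (illustrative names).
REQUIRED_FIELDS = [
    "system_name", "description", "provider_name", "provider_address",
    "provider_contact", "intended_purpose", "risk_management_summary",
    "declaration_of_conformity_ref",
]

def missing_fields(record: dict) -> list[str]:
    """Return the fields still empty before opening the registration portal."""
    return [f for f in REQUIRED_FIELDS if not record.get(f, "").strip()]
```

Run it against your draft record during the June window so any gaps surface before the July 1 submission target.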
Step 5: Human Oversight Mechanisms
Deadline: In place before deployment.
The EU AI Act requires that high-risk AI systems be designed to allow effective human oversight. This is an operational requirement — not just a policy statement.
What "human oversight" means under Article 14:
- A human must be able to understand the system's outputs well enough to detect anomalies
- A human must be able to ignore, override, or reverse the system's outputs
- A human must be able to intervene and halt the system's operation
- The system must be monitorable — the human must have visibility into what the system is doing
Practical implementation for small teams:
| High-risk use case | Minimum human oversight |
|---|---|
| AI resume screening | Human reviews shortlist; AI cannot reject without human confirmation |
| AI credit scoring | Human reviews final score; adverse action letter must include explanation |
| AI customer service with significant effects | Human escalation path available within one interaction |
| AI fraud detection | Human reviews all true-positive flags before action is taken |
Document the oversight procedure: for each high-risk AI use case, write a one-paragraph procedure explaining who oversees the AI output, how they review it, and what triggers escalation to a human decision.
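The oversight pattern in the first table row, AI recommends but cannot reject on its own, comes down to one gate in the decision path. A minimal Python sketch (names and decision labels are illustrative):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ScreeningResult:
    candidate_id: str
    ai_recommendation: str   # "advance" or "reject"
    ai_rationale: str        # shown to the reviewer so outputs are interpretable

def final_decision(result: ScreeningResult,
                   human_review: Callable[[ScreeningResult], str]) -> str:
    """The AI output never becomes a rejection on its own: a named human
    must confirm, override, or escalate every adverse recommendation."""
    if result.ai_recommendation == "advance":
        return "advance"              # positive outcomes can pass through
    decision = human_review(result)   # reviewer sees recommendation + rationale
    assert decision in {"reject", "advance", "escalate"}
    return decision
```

The key design choice is that `human_review` receives the rationale, not just the label: Article 14 requires the reviewer to understand the output well enough to detect anomalies, not merely rubber-stamp it.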
Step 6: Conformity Assessment and EU Declaration of Conformity
Deadline: Before deployment.
For most Annex III high-risk systems, conformity assessment is a self-assessment — you do it yourself, without a third-party auditor.
Exception: certain biometric systems may require third-party assessment by a notified body.
What the self-assessment involves:
- Complete the technical documentation (Step 3)
- Verify your system meets the EU AI Act requirements for your specific Annex III category
- Complete the EU declaration of conformity — a signed document stating that the system complies with the EU AI Act
- Apply the CE marking if your system is embedded in a physical product subject to CE marking rules
EU declaration of conformity template (required content):
EU Declaration of Conformity
System name: [AI system name]
Provider: [Company name, address, contact]
This declaration is issued under the sole responsibility of [Company name].
The object of the declaration: [brief system description]
The AI system described above is in conformity with Regulation (EU) 2024/1689
(EU Artificial Intelligence Act), specifically the requirements applicable to
high-risk AI systems under Annex III, [specify relevant category].
Signed by: [Name, Title]
Date: [Date]
Place: [City, Country]
For deployers using third-party systems, the provider signs the declaration. Ask your vendor for a copy. If they cannot provide one, they are not compliant.
August 2026 Compliance Timeline
Work backward from August 2, 2026:
| Date | Action |
|---|---|
| Now — May 15 | Step 1: Confirm risk tier. If minimal-risk, stop — just implement chatbot disclosure. |
| May 15 – June 1 | Steps 2–3: Draft risk management system and technical documentation. Request vendor documentation. |
| June 1 – July 1 | Steps 4–5: Submit EU database registration. Implement and document human oversight procedures. |
| July 1 – July 15 | Step 6: Complete conformity assessment. Prepare and sign EU declaration of conformity. |
| July 15 – Aug 2 | Final review: verify all six steps are complete and documented. Audit-ready. |
| August 2, 2026 | Deadline. Systems without documentation are non-compliant. |
What Most Small Teams Actually Need to Do
If you use AI for internal productivity (writing, coding, research, summarizing):
- You are minimal-risk. No obligations under the EU AI Act except chatbot disclosure (if applicable).
- Action: none beyond disclosure.
If you run a customer-facing chatbot:
- You are limited-risk. One obligation: disclose the chatbot is AI when a user interacts with it.
- Action: add visible AI disclosure to the chat interface.
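The disclosure itself is a one-line change wherever the session starts. A minimal sketch, assuming a server-side session store (names and wording are illustrative):

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def opening_messages(session_state: dict) -> list[str]:
    """Emit the AI disclosure once, at the start of the interaction,
    before any assistant message reaches the user."""
    messages = []
    if not session_state.get("disclosed"):
        messages.append(AI_DISCLOSURE)
        session_state["disclosed"] = True
    return messages
```

Showing it once per session, before the first assistant reply, is the point; burying it in a terms-of-service page is not a visible disclosure.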
If you use an AI hiring tool, credit scoring system, or student evaluation AI:
- You are high-risk. All six steps above apply.
- Action: complete Steps 1–6 before August 2, 2026. Start now — Step 3 (vendor documentation) is the longest lead time.
If you are a US company serving EU users:
- The EU AI Act applies to you. Territorial scope follows the same logic as GDPR.
- Action: assess whether any AI you deploy to EU users is Annex III high-risk.
Use the EU AI Act Annex III High-Risk Checklist to map your specific AI system to the relevant Annex III category. For the full compliance roadmap, the EU AI Act Compliance Guide for Small Teams covers each requirement in deployment order.
References
- European Parliament and Council — Regulation (EU) 2024/1689 (EU AI Act)
- European AI Office — EU AI Database
- NIST — AI Risk Management Framework 1.0
- Related: EU AI Act plain-English guide — what the law says in plain language
- Related: EU AI Act human oversight requirements — Article 14 implementation
