The EU Digital Omnibus negotiations moved quickly in March. The EU Council agreed its position on March 13 and the European Parliament confirmed its stance on March 26. Trilogue talks between the two institutions began the same week, with a provisional deal targeted for April 28, 2026. The headline proposal: extend the main EU AI Act compliance deadline for high-risk AI systems from August 2, 2026 to December 2027 (stand-alone systems) or August 2028 (systems embedded in products).
For small teams watching from the sidelines, this sounds like breathing room. It is not — yet.
Key Takeaways
- The August 2, 2026 EU AI Act deadline for high-risk AI systems is still legally in force until a final Omnibus deal is signed and published.
- Trilogue negotiators are targeting April 28 for a deal; that leaves only weeks of negotiating time, and trilogues can stall.
- The proposed SME documentation carve-out (under 750 employees / €150M turnover) is not yet law; do not rely on it.
- Article 50 transparency obligations (watermarking AI-generated content, disclosing chatbot interactions) have applied since February 2, 2025 and are unchanged by the Omnibus.
- The right posture: continue compliance preparations, treat any deadline extension as a bonus, not a plan.
Summary
The Digital Omnibus package is the most significant proposed change to the EU AI Act since it entered into force. It would extend deadlines, simplify documentation for smaller organizations, and add new prohibitions. But it remains a negotiating text, not law. Organizations that paused compliance work betting on an extension will be caught unprepared if trilogue collapses or produces a narrower deal than the drafts suggest.
This article explains what is proposed, what is already in force, and the minimum steps every small team should be completing right now regardless of how Omnibus lands.
What the Digital Omnibus Actually Proposes
The Digital Omnibus is a broad EU package aimed at reducing administrative burden across several digital regulations simultaneously. For the AI Act specifically, the Council's agreed position proposes:
Deadline extensions for high-risk AI compliance:
- Stand-alone high-risk AI systems: compliance deadline moves from August 2, 2026 to December 31, 2027
- High-risk AI systems embedded in regulated products (medical devices, machinery, etc.): deadline moves to August 2, 2028
- General-purpose AI (GPAI) model obligations: no proposed change — these applied from August 2025
Lighter documentation for SMEs:
- Organizations with fewer than 750 employees and less than €150M annual turnover would qualify for simplified Annex IV technical documentation
- This carve-out only applies if it survives trilogue in its current form
New prohibitions added:
- AI-generated child sexual abuse material (CSAM) explicitly prohibited
- Non-consensual intimate imagery generated by AI explicitly prohibited
What is not changing:
- Article 50 transparency obligations (already in force since February 2025)
- The prohibition on unacceptable-risk AI practices (biometric categorization, social scoring, real-time remote biometric identification) — in force since February 2025
- GPAI model transparency and safety obligations for frontier models
What Is Already In Force Right Now
Two sets of EU AI Act requirements are already active and unaffected by the Omnibus debate:
Since February 2, 2025:
- Bans on unacceptable-risk AI (Article 5): subliminal manipulation, exploitation of vulnerable groups, real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions), and social scoring
- Transparency for certain AI systems (Article 50): users must be told when they are interacting with a chatbot (unless obvious); AI-generated content must be marked as AI-generated in a machine-readable format
Since August 2, 2025:
- General-purpose AI model providers must maintain technical documentation, comply with EU copyright law, and publish usage policy summaries
- Frontier GPAI model providers (>10²⁵ FLOPs training compute, the Act's systemic-risk presumption threshold) face additional safety evaluations and incident reporting obligations
If your team deploys a customer-facing chatbot, generates marketing or legal content using AI, or uses a tool built on a GPAI model, you already have live obligations. If you have not yet mapped which tools you use and where their outputs go, an AI tool register is the right starting point.
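An AI tool register can start as a simple structured record that you later grow into classification and documentation work. A minimal sketch in Python; the field names and example entries are illustrative, not mandated by the Act:

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One row in an AI tool register (illustrative fields, not prescribed by the Act)."""
    name: str
    vendor: str
    use_case: str
    owner: str              # accountable person or team
    customer_facing: bool   # triggers Article 50 chatbot disclosure review
    output_published: bool  # AI-generated content leaving the org needs marking

register = [
    AIToolRecord("support-bot", "ExampleVendor", "customer support chat",
                 "CX team", customer_facing=True, output_published=False),
    AIToolRecord("copy-assistant", "ExampleVendor", "marketing copy drafts",
                 "Marketing", customer_facing=False, output_published=True),
]

# Tools whose outputs trigger live Article 50 obligations today
article50_review = [t.name for t in register
                    if t.customer_facing or t.output_published]
print(article50_review)  # → ['support-bot', 'copy-assistant']
```

Even this flat structure answers the two questions that matter right now: which tools face users directly, and which tools produce content that leaves the organization.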
The Risks of Waiting
Three scenarios explain why pausing compliance is the wrong call:
Scenario 1: Trilogue stalls. The Council and Parliament have different positions on several points, including how broadly state aid rules interact with AI Act obligations and the exact scope of the SME carve-out. If talks stall past June, there will be no Omnibus deal in place before the August 2 deadline.
Scenario 2: The SME carve-out is narrowed. The Parliament's position on SME documentation relief is less generous than the Council's. Final text could limit or condition the carve-out in ways that exclude your organization.
Scenario 3: You need the preparation time anyway. Even if the deadline extends to December 2027, you will need to complete an AI system inventory, classify your systems under Annex III, draft technical documentation, and establish post-market monitoring before the new deadline. Starting in Q3 2027 for a year-end deadline is still very tight.
Governance Goals for Your Team
Regardless of how the Omnibus lands, a small team's AI Act compliance preparation should deliver these outcomes by August 2026. If you do not yet have a written AI governance policy, an AI governance policy template gives you a structured starting point:
- AI system inventory completed: every AI system your organization deploys is catalogued, with ownership and use-case documented
- Risk classification done: each system assessed against Annex III high-risk categories; classification rationale documented
- Article 50 compliance live: customer-facing chatbots identify themselves as AI; any AI-generated content your organization publishes is marked in a machine-readable format
- Annex IV documentation started: for any high-risk systems, draft the technical documentation even if you expect a deadline extension — it will reveal gaps that take time to fix
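The classification step above is easiest to defend later if the rationale is captured as data rather than buried in meeting notes. A minimal sketch; the category labels are abbreviated and illustrative, so check them against the Act's Annex III text:

```python
# Abbreviated Annex III category labels — illustrative, not the authoritative list
ANNEX_III_CATEGORIES = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

def classify(system_name: str, touches: set, rationale: str) -> dict:
    """Record a high-risk assessment together with its written rationale."""
    unknown = touches - ANNEX_III_CATEGORIES
    if unknown:
        raise ValueError(f"unrecognised categories: {unknown}")
    return {
        "system": system_name,
        "annex_iii_categories": sorted(touches),
        "high_risk_candidate": bool(touches),
        "rationale": rationale,
    }

memo = classify("cv-screening-tool", {"employment"},
                "Ranks job applicants; Annex III employment category applies.")
print(memo["high_risk_candidate"])  # → True
```

A system that touches no Annex III category still gets a memo with an empty category list; a documented "not high-risk" decision is as valuable to an auditor as a documented "high-risk" one.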
Risks to Watch
Guidance vacuum: The Commission's transparency code of practice for AI-generated content, originally expected in early 2026, has been delayed. Organizations implementing Article 50 watermarking therefore have limited official guidance on acceptable technical standards.
Vendor pass-through obligations: If you use a third-party AI system that your vendor has classified as high-risk, you are a deployer with your own obligations (Article 26). Many vendor contracts do not yet address who provides what documentation to whom. Use an AI vendor due diligence checklist to audit what your suppliers can actually provide.
Classification ambiguity: The Commission's Article 6 guidance on what counts as "high-risk" under Annex III arrived late. Edge cases, such as AI-assisted HR tools, credit scoring decision aids, and safety component sub-systems, will remain genuinely unclear until further interpretive guidance is published.
Controls: What to Actually Do
This week:
- Confirm whether any AI systems you deploy fall into Annex III high-risk categories (employment, education, essential services, law enforcement, migration, justice, critical infrastructure)
- Verify your customer-facing chatbots comply with Article 50 disclosure requirements — if they do not, this is an active violation
- Review your AI-generated content publication workflow for machine-readable marking
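For the marking review, one interim approach some teams use is attaching provenance metadata using IPTC's Digital Source Type vocabulary, which defines a "trainedAlgorithmicMedia" term for AI-generated material. A hedged sketch; this is a placeholder convention, not an officially endorsed Article 50 format, since the Commission's technical standard is still pending:

```python
import json

def mark_ai_generated(content: str, tool: str) -> dict:
    """Wrap content for publication with machine-readable provenance metadata.
    Uses the IPTC digitalSourceType vocabulary as an interim convention until
    an official Article 50 technical standard is published."""
    return {
        "content": content,
        "metadata": {
            "digitalSourceType":
                "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
            "generator": tool,  # which AI tool produced the content
        },
    }

post = mark_ai_generated("Quarterly summary draft...", tool="copy-assistant")
print(json.dumps(post["metadata"], indent=2))
```

Whatever convention you pick, the point of the review is that every publication pipeline applies it automatically; marking that depends on a human remembering to tag content will fail.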
This quarter (before August):
- Complete an AI system inventory covering every tool, API, and model used internally and externally
- For each Annex III candidate, draft a risk classification memo with the reasoning
- Begin Annex IV technical documentation — even a partial draft reveals what you need from vendors
- Add EU AI Act review to your standard vendor due diligence checklist
Ongoing:
- Monitor trilogue progress — a deal signed before August changes timelines but not the work
- Subscribe to the European AI Office's publication list for guidance documents
Checklist (Copy/Paste)
- Map all AI systems in use — internal tools, customer-facing products, and third-party APIs
- Classify each system against Annex III high-risk categories; document rationale
- Verify Article 50 chatbot disclosure compliance for all customer-facing AI
- Implement machine-readable AI-generated content marking for published material
- Begin Annex IV technical documentation for any Annex III high-risk candidates
- Review vendor contracts for deployer obligation pass-throughs
- Add "EU AI Act high-risk classification" to vendor onboarding checklist
- Monitor trilogue — target deal date is April 28, 2026
- Do not pause preparations pending Omnibus outcome
Implementation Steps
- Week 1: Run an AI system inventory workshop. Pull every tool from expense systems, engineering wikis, and IT asset registers. Aim for completeness over perfection.
- Week 2: Apply the Annex III checklist to each system. Flag anything that touches employment decisions, access to education, essential services, or consumer credit.
- Week 3: Audit Article 50 compliance for live chatbots and AI-generated content pipelines.
- Month 2: Engage vendors on Annex IV documentation. Request technical documentation summaries; track which vendors cannot provide them.
- Month 3: Complete draft technical documentation for any high-risk systems. Run a gap review against the standard Annex IV structure.
- Ongoing: Assign a single owner for EU AI Act compliance tracking. Review trilogue updates as they come.
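The Month 3 gap review can be run as a simple checklist against the Annex IV section headings. A minimal sketch; the headings are abbreviated here, so check the Act's annex for the authoritative list:

```python
# Abbreviated Annex IV section headings — illustrative, not the authoritative list
ANNEX_IV_SECTIONS = [
    "general_description", "development_process", "monitoring_and_control",
    "risk_management", "lifecycle_changes", "standards_applied",
    "declaration_of_conformity", "post_market_plan",
]

def gap_review(draft_sections: set) -> list:
    """Return the Annex IV sections still missing from a documentation draft."""
    return [s for s in ANNEX_IV_SECTIONS if s not in draft_sections]

missing = gap_review({"general_description", "risk_management"})
print(f"{len(missing)} sections still to draft: {missing}")
```

Running this per system at the end of Month 3 gives the compliance owner a concrete backlog, and makes visible which gaps are internal drafting work versus documentation you are still waiting on from vendors.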
Frequently Asked Questions
Q: If the Omnibus deal passes before August, do we need to do anything differently? A: The core compliance work — inventory, classification, documentation, Article 50 checks — is required under both scenarios. An extension changes the deadline by which regulators can enforce, not the underlying obligations. Complete the work and use any extension as time to improve quality, not to start.
Q: Does the SME carve-out apply to us if we are under 750 employees? A: Possibly, but only if the carve-out survives trilogue in its current form and your organization meets both the headcount and revenue thresholds. Do not design your compliance programme around a provision that is not yet law.
Q: We only use third-party AI tools, not build our own. Does the AI Act apply? A: Yes. As a deployer of high-risk AI systems, Article 26 imposes obligations on you: ensuring the system is used in accordance with its instructions, monitoring performance, and informing your own users. Deployers are also subject to Article 50 transparency obligations.
Q: What counts as AI-generated content under Article 50? A: The Act covers synthetic audio, image, video, and text generated by AI. Marketing copy, legal summaries, social media posts, and reports generated using AI tools may all be in scope. The technical standard for machine-readable marking is still being finalized by the European AI Office.
Q: What is the penalty for non-compliance with Article 50? A: Fines for violations of transparency obligations can reach €15M or 3% of global annual turnover, whichever is higher. For small teams, the proportionality provisions mean enforcement action will likely target egregious or repeat violations first — but the reputational risk of a formal finding applies to organizations of all sizes.
References
- EU Council press release — Council agrees position to streamline rules on AI: https://www.consilium.europa.eu/en/press/press-releases/2026/03/13/council-agrees-position-to-streamline-rules-on-artificial-intelligence/
- EU AI Act implementation timeline and August 2026 deadline analysis (Kennedy's Law, March 2026): https://www.kennedyslaw.com/en/thought-leadership/article/2026/the-eu-ai-act-implementation-timeline-understanding-the-next-deadline-for-compliance/
- EU AI Act official text — EUR-Lex: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
- NIST AI Risk Management Framework: https://www.nist.gov/system/files/documents/2023/01/26/AI%20RMF%201.0.pdf
- European AI Office — AI Act guidance and standards tracker: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence