AI Policy Desk · Governance

EU AI Act Deadline Extension: Digital Omnibus for Small Teams


The EU Digital Omnibus negotiations moved quickly in March. The EU Council agreed its position on March 13 and the European Parliament confirmed its stance on March 26. Trilogue talks began the same week, with a provisional deal targeted for April 28, 2026. The headline proposal: extend the main compliance deadline for high-risk AI systems from August 2, 2026 to December 2027 (stand-alone systems) or August 2028 (systems embedded in products).

For small teams watching from the sidelines, this sounds like breathing room. It is not — yet.

Summary

The Digital Omnibus package is the most significant proposed change to the EU AI Act since the Act came into force. It would extend deadlines, simplify documentation for smaller organizations, and add new prohibitions. But it remains a negotiating text, not law. Organizations that paused compliance work betting on an extension will be caught unprepared if trilogue collapses or produces a narrower deal than the drafts suggest.

This article explains what is proposed, what is already in force, and the minimum steps every small team should be completing right now regardless of how Omnibus lands.

What the Digital Omnibus Actually Proposes

The Digital Omnibus is a broad EU package aimed at reducing administrative burden across several digital regulations simultaneously. For the AI Act specifically, the Council's agreed position proposes:

Deadline extensions for high-risk AI compliance: the August 2, 2026 deadline would move to December 2027 for stand-alone high-risk systems and to August 2028 for high-risk systems embedded in products.

Lighter documentation for SMEs: simplified technical documentation requirements for organizations under the headcount and revenue thresholds.

New prohibitions added: the package would add to the Act's existing list of prohibited AI practices.

What is not changing: the requirements already in force (the prohibitions and the GPAI obligations, covered below) are not reopened, and the package does not remove the underlying high-risk obligations themselves.

What Is Already In Force Right Now

Two sets of EU AI Act requirements are already active and unaffected by the Omnibus debate:

Since February 2, 2025: the prohibitions on unacceptable-risk AI practices (Article 5) and the AI literacy obligations (Article 4).

Since August 2, 2025: the obligations on providers of general-purpose AI (GPAI) models, together with the governance and penalties framework.

If your team deploys a customer-facing chatbot, generates marketing or legal content using AI, or uses a tool built on a GPAI model, you already have live obligations. If you have not yet mapped which tools you use and where their outputs go, an AI tool register is the right starting point.
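A register does not need to be elaborate to be useful. As an illustrative sketch (the field names and example entries below are assumptions for this article, not an official schema), a small team could keep the register as a dataclass-backed CSV that lives in version control:

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class AIToolRecord:
    # Illustrative fields; adapt to your own governance policy.
    tool_name: str             # e.g. "Support chatbot"
    vendor: str                # supplier name, or "internal"
    model_basis: str           # e.g. "GPAI model via API"
    use_case: str              # where outputs go (support replies, marketing copy, ...)
    output_audience: str       # "internal" or "customer-facing"
    annex_iii_candidate: bool  # touches employment, credit, essential services?

def write_register(records, path="ai_tool_register.csv"):
    """Dump the register to CSV so it can be versioned and reviewed."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(AIToolRecord)])
        writer.writeheader()
        writer.writerows(asdict(r) for r in records)

register = [
    AIToolRecord("Support chatbot", "Acme AI", "GPAI model via API",
                 "customer support replies", "customer-facing", False),
]
write_register(register)
```

The point of the structure is that every later step (Annex III screening, Article 50 audits, vendor documentation requests) can be driven off the same file.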

The Risks of Waiting

Three scenarios explain why pausing compliance is the wrong call:

Scenario 1: Trilogue stalls. The Council and Parliament differ on several points, including how broadly state aid rules interact with AI Act obligations and the exact scope of the SME carve-out. If talks stall past June, no Omnibus deal will be in place before the August 2 deadline, and the original dates will apply.

Scenario 2: The SME carve-out is narrowed. The Parliament's position on SME documentation relief is less generous than the Council's. Final text could limit or condition the carve-out in ways that exclude your organization.

Scenario 3: You need the preparation time anyway. Even if the deadline extends to December 2027, you will need to complete an AI system inventory, classify your systems under Annex III, draft technical documentation, and establish post-market monitoring before the new deadline. Starting in Q3 2027 for a year-end deadline is still very tight.

Governance Goals for Your Team

Regardless of how the Omnibus lands, a small team's AI Act compliance preparation should deliver these outcomes by August 2026:

- A complete AI system inventory covering internally built and third-party tools.
- An Annex III risk classification for every system in the inventory.
- Draft technical documentation for any system classified as high-risk.
- Post-market monitoring arrangements for high-risk systems.
- Article 50 transparency measures for chatbots and AI-generated content.

If you do not yet have a written AI governance policy, an AI governance policy template gives you a structured starting point.

Risks to Watch

Guidance vacuum: The Commission's transparency code of practice for AI-generated content, originally expected in early 2026, has been delayed. Organizations implementing Article 50 watermarking have limited official guidance on acceptable technical standards.

Vendor pass-through obligations: If you use a third-party AI system that your vendor has classified as high-risk, you are a deployer with your own obligations (Article 26). Many vendor contracts do not yet address who provides what documentation to whom. Use an AI vendor due diligence checklist to audit what your suppliers can actually provide.

Classification ambiguity: The Commission's Article 6 guidance on what counts as "high-risk" under Annex III is overdue. Edge cases (AI-assisted HR tools, credit scoring decision aids, safety component sub-systems) remain genuinely unclear until the Commission publishes interpretive guidance.

Controls: What to Actually Do

This week:

- Start the AI system inventory: pull every tool from expense systems, engineering wikis, and IT asset registers.

This quarter (before August):

- Classify each inventoried system against Annex III.
- Audit Article 50 compliance for live chatbots and AI-generated content pipelines.
- Request Annex IV documentation summaries from vendors and track which cannot provide them.
- Draft technical documentation for any high-risk systems.

Ongoing:

- Assign a single owner for EU AI Act compliance tracking and review trilogue updates as they come.

Checklist (Copy/Paste)

[ ] AI system inventory complete (all tools, all teams)
[ ] Annex III classification applied to every inventoried system
[ ] Article 50 transparency audit done for chatbots and AI-generated content
[ ] Annex IV documentation requested from every AI vendor
[ ] Draft technical documentation complete for high-risk systems
[ ] Post-market monitoring plan in place for high-risk systems
[ ] Named owner assigned for EU AI Act compliance tracking

Implementation Steps

  1. Week 1: Run an AI system inventory workshop. Pull every tool from expense systems, engineering wikis, and IT asset registers. Aim for completeness over perfection.
  2. Week 2: Apply the Annex III checklist to each system. Flag anything that touches employment decisions, access to education, essential services, or consumer credit.
  3. Week 3: Audit Article 50 compliance for live chatbots and AI-generated content pipelines.
  4. Month 2: Engage vendors on Annex IV documentation. Request technical documentation summaries; track which vendors cannot provide them.
  5. Month 3: Complete draft technical documentation for any high-risk systems. Run a gap review against the standard Annex IV structure.
  6. Ongoing: Assign a single owner for EU AI Act compliance tracking. Review trilogue updates as they come.
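Step 2's flagging pass can be sketched as a simple keyword screen over the inventory. The keyword lists below are illustrative assumptions, not the legal test; treat any hit as a prompt for human legal review, never as a classification in itself:

```python
# Illustrative Annex III screening pass. The categories mirror the areas
# named in step 2 above; the keywords are assumptions, not legal criteria.
ANNEX_III_FLAGS = {
    "employment": ["hiring", "recruit", "promotion", "performance review"],
    "education": ["admission", "exam", "grading"],
    "essential_services": ["benefits", "utilities", "emergency"],
    "credit": ["credit", "loan", "scoring"],
}

def screen(use_case: str) -> list[str]:
    """Return the Annex III areas whose keywords appear in a use-case description."""
    text = use_case.lower()
    return [area for area, words in ANNEX_III_FLAGS.items()
            if any(w in text for w in words)]

inventory = {
    "CV screening assistant": "ranks candidates during hiring",
    "Marketing copy generator": "drafts social media posts",
}
flags = {name: screen(desc) for name, desc in inventory.items()}
```

Anything that comes back flagged goes on the "escalate to counsel" pile; anything that comes back clean still gets a sanity check, since keyword screens miss paraphrases.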

Frequently Asked Questions

Q: If the Omnibus deal passes before August, do we need to do anything differently? A: The core compliance work (inventory, classification, documentation, Article 50 checks) is required under both scenarios. An extension changes when the obligations become enforceable, not the obligations themselves. Complete the work and use any extension as time to improve quality, not to start.

Q: Does the SME carve-out apply to us if we are under 750 employees? A: Possibly, but only if the carve-out survives trilogue in its current form and your organization meets both the headcount and revenue thresholds. Do not design your compliance programme around a provision that is not yet law.

Q: We only use third-party AI tools, not build our own. Does the AI Act apply? A: Yes. If you deploy high-risk AI systems, Article 26 imposes deployer obligations on you: ensuring the system is used in accordance with its instructions, monitoring performance, and informing your own users. Deployers are also subject to Article 50 transparency obligations.

Q: What counts as AI-generated content under Article 50? A: The Act covers synthetic audio, image, video, and text generated by AI. Marketing copy, legal summaries, social media posts, and reports generated using AI tools may all be in scope. The technical standard for machine-readable marking is still being finalized by the European AI Office.

Q: What is the penalty for non-compliance with Article 50? A: Fines for violations of transparency obligations can reach €15M or 3% of global annual turnover, whichever is higher. For small teams, the proportionality provisions mean enforcement action will likely target egregious or repeat violations first — but the reputational risk of a formal finding applies to organizations of all sizes.
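The "whichever is higher" rule means the effective cap stays at the flat EUR 15M floor until global turnover passes EUR 500M (3% of EUR 500M = EUR 15M), after which the percentage branch takes over. A quick sketch of that arithmetic, using the figures quoted in this article (the actual penalty in any case is set by the enforcing authority):

```python
def article_50_fine_cap(global_turnover_eur: float) -> float:
    """Maximum fine for Article 50 transparency violations:
    EUR 15M or 3% of global annual turnover, whichever is higher."""
    return max(15_000_000, 0.03 * global_turnover_eur)

article_50_fine_cap(100_000_000)    # small firm: the flat EUR 15M floor applies
article_50_fine_cap(1_000_000_000)  # large firm: the 3% branch dominates
```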

References

  1. EU Council press release — Council agrees position to streamline rules on AI: https://www.consilium.europa.eu/en/press/press-releases/2026/03/13/council-agrees-position-to-streamline-rules-on-artificial-intelligence/
  2. EU AI Act implementation timeline and August 2026 deadline analysis (Kennedy's Law, March 2026): https://www.kennedyslaw.com/en/thought-leadership/article/2026/the-eu-ai-act-implementation-timeline-understanding-the-next-deadline-for-compliance/
  3. EU AI Act official text — EUR-Lex: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
  4. NIST AI Risk Management Framework: https://www.nist.gov/system/files/documents/2023/01/26/AI%20RMF%201.0.pdf
  5. European AI Office — AI Act guidance and standards tracker: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence