Editor’s Note: The EU’s biggest AI rules just got their first major reset, pending formal adoption. In the early hours of May 7, 2026, Council and Parliament negotiators landed a provisional agreement on the Digital Omnibus on AI that, if adopted before the original deadline, would push Annex III high-risk obligations from Aug. 2, 2026, to Dec. 2, 2027, and push Annex I product-embedded AI to Aug. 2, 2028. The deal also adds an Article 5 categorical ban on AI systems that create child sexual abuse material or non-consensual intimate imagery — a prohibition that, per law-firm analyses, also reaches general-purpose generative AI providers absent reasonable technical and policy controls. Some legal analyses read the provisional agreement as creating a Dec. 2, 2026, compliance window for related obligations, but the precise Article 5 timing should be verified against the final legal text.
For cybersecurity, data privacy, regulatory compliance and eDiscovery professionals, the agreement matters on three fronts. The deferral changes vendor compliance and procurement timing for AI-augmented identity, fraud, hiring and credit-decisioning tools across regulated sectors. The bias-mitigation legal basis expands sensitive-data processing options on a strict-necessity standard. And the Article 5 expansion creates a new layer of regulatory exposure for matters where AI-generated CSAM or non-consensual intimate imagery surfaces in litigation or internal investigations.
Watch June for formal adoption, Official Journal publication, and the AI Office’s first enforcement guidance on the new prohibition.
Content Assessment: EU AI Act deal would delay high-risk rules to 2027, ban abusive AI content
Information: 94%
Insight: 92%
Relevance: 92%
Objectivity: 93%
Authority: 92%
Overall: 93% — Excellent
A short percentage-based assessment of the positive reception of the recent article from ComplexDiscovery OÜ titled, "EU AI Act deal would delay high-risk rules to 2027, ban abusive AI content."
Industry News – Artificial Intelligence Beat
EU AI Act deal would delay high-risk rules to 2027, ban abusive AI content
ComplexDiscovery Staff
EU lawmakers reached a political deal in the early hours of May 7 on the Digital Omnibus on AI that, if formally adopted, would push the bloc’s flagship high-risk provisions into late 2027 and outlaw AI systems that generate child sexual abuse material or non-consensual intimate imagery. The agreement, struck around 4:30 a.m. Brussels time after a trilogue that ran past midnight, ends weeks of doubt over whether the EU AI Act would land on its original Aug. 2, 2026, schedule.
If formally adopted before the original deadline, it will not. Annex III obligations — covering AI used in hiring, credit scoring, biometric identification, education, essential services and law enforcement — would apply from Dec. 2, 2027, the Council of the EU and the European Parliament said. AI embedded in regulated products covered by sectoral safety law, the Annex I category that includes medical devices, machinery and lifts, would shift to Aug. 2, 2028. The Cypriot Council Presidency, which negotiated the deal alongside Parliament, said the agreement “ensures legal certainty and a smoother and more harmonized implementation of the rules across the Union,” according to the Council press service. The provisional agreement still requires formal endorsement and a legal-linguistic scrub; the original Aug. 2, 2026, schedule applies until adoption.
For cybersecurity, information governance and eDiscovery professionals tracking how the AI Act lands on global vendors and their customers, the calendar reset matters less than the fine print attached to it.
A new Article 5 ban on AI-generated CSAM and nudifier tools
Article 5 of the AI Act, which lists prohibited AI practices, would be expanded to include a new categorical ban on AI systems that generate child sexual abuse material, or that depict the intimate parts of an identifiable person, or such a person engaged in sexually explicit activities, without that person’s consent. The prohibition reaches images, video and audio. It targets “nudifier” applications directly. Law-firm analyses by Lewis Silkin and Hogan Lovells read the new Article 5 language as exposing providers of general-purpose generative AI tools to liability if they do not implement reasonable technical and policy controls to prevent such content generation. Some legal analyses read the provisional agreement as creating a Dec. 2, 2026, compliance date for related obligations, but the precise Article 5 timing should be verified against the final legal text.
Whatever the final timing, the prohibition deserves attention from corporate counsel and digital forensics teams. If adopted as expected, the Article 5 ban would arrive well before the deferred high-risk deadlines, even though the precise compliance window remains subject to final-text confirmation. The ban converts conduct already harmful under member-state criminal codes into a categorical AI Act violation. Penalties under Article 99 reach 35 million euros, or 7 percent of total worldwide annual turnover, whichever is higher. Legal teams managing internal investigations or eDiscovery matters that surface AI-generated CSAM or non-consensual intimate imagery now face a layered analysis: criminal exposure, civil liability and regulatory exposure under Article 5, in addition to platform takedown duties and victim-notification obligations under member-state law.
How the deal nearly broke down
The deal closed a file that nearly broke down. On April 28, the second political trilogue collapsed at hour 12, with Council and Parliament unable to agree on the conformity-assessment architecture for AI in regulated products. The dispute turned on whether Section A products such as machinery and medical devices should remain in a combined AI Act and sectoral assessment, or shift to Section B for primarily sectoral handling, according to a post-mortem by international law firm Bird & Bird and analysis from Belgian technology-law firm Timelex. The Cypriot Presidency reopened the file the following week. The May 7 compromise allows conformity-assessment bodies to submit a single application and undergo a single assessment for designation under both the AI Act and the relevant Annex I product legislation — a streamlined designation route the Council described as a way to avoid duplicative procedures in regulated sectors.
What else changed in the package
Several other elements of the package would reach legal-tech vendors and their corporate customers fast. Regulatory exemptions previously reserved for small and medium-sized enterprises would extend to small mid-cap companies, broadening the universe of providers that benefit from lighter compliance treatment. Sensitive-data processing for bias detection and mitigation would be allowed across all AI systems, not just high-risk ones, on a strict-necessity basis. The AI Office would gain reinforced supervisory authority for general-purpose AI systems where the model and the system are developed by the same provider, with carve-outs preserving national competence for law enforcement, border management, judicial authorities and financial institutions, according to the Council statement and IAPP analysis. Watermarking and labeling obligations for AI-generated content would take effect Dec. 2, 2026, after a three-month implementation window under the provisional agreement — tighter than the six-month grace period the Commission originally proposed.
Reactions split between industry and rights advocates
Brando Benifei, one of Parliament’s leading figures in the original AI Act negotiations, has argued that simplification should not weaken safeguards. Civil-society readings of Thursday’s deal have been less charitable. In an analysis published by TechPolicy.Press, Laura Caroli — formerly Benifei’s lead negotiator on the AI Act and now a senior fellow at the Wadhwani AI Center at the Center for Strategic and International Studies — warned that the deferral risks weakening Europe’s AI standardization process and pointed to AI used in hiring systems as one category where products placed on the market before the new compliance deadline could remain outside the AI Act indefinitely. Both the supportive and critical readings will outlive Thursday’s headlines.
What happens next
The package still requires formal endorsement from Council and Parliament plus a legal-linguistic scrub, expected within weeks rather than months. Once adopted and translated into the EU’s official languages, the amendments will be published in the Official Journal and enter into force three days later. The Cypriot Presidency, which holds the Council presidency until June 30, has said it intends to close the file before the next rotating presidency begins.
What practitioners should do this year
Practitioners watching this from outside Brussels should resist the temptation to treat the deferral as a pause. Annex III mapping work — identifying which existing systems fall within the high-risk categories, what gap-assessment evidence is needed, and which conformity-assessment route applies — remains the right work for this calendar year. Cybersecurity teams should pay close attention to the biometric identification and access-to-essential-services Annex III categories, where AI-augmented identity verification, fraud scoring and authentication tools commonly sit; vendor roadmap diligence on conformity-assessment readiness belongs in 2026 procurement reviews, not 2027 ones. Information governance teams have a narrow runway to map sensitive-data processing flows that will rely on the new bias-mitigation legal basis and to document the strict-necessity test as it is applied. And eDiscovery vendors and forensic teams should add the Article 5 nudifier and CSAM prohibition to their detection-and-response playbooks now, because the regulatory clock for the new prohibition is widely read as running faster than the high-risk one — though the exact Article 5 compliance window remains pending the final legal text.
Near-term milestones to track: formal adoption by Council and Parliament, expected in June; publication in the Official Journal; the AI Office’s first enforcement guidance on the new Article 5 prohibition; and harmonized standards delivery from the European Standardisation Organisations’ AI standards committee, CEN-CENELEC JTC 21, where standards work has run behind schedule and remains the practical foundation for high-risk conformity assessments.
Three jurisdictions, three clocks
The transatlantic backdrop sharpens the stakes. The Trump administration’s December 2025 executive order on AI tilts U.S. policy toward federal pre-emption and lighter regulatory weight, while the United Kingdom continues a principles-based approach. Multinational AI vendors selling into all three markets now plan around three different compliance clocks — and three different theories of what AI risk means.
What does it mean for compliance leaders that the EU’s most consequential AI rules now land in late 2027 — and for the broader question of whether deferral signals confidence or capitulation?
News sources
- Artificial intelligence: Council and Parliament agree to simplify and streamline rules (Council of the EU)
- AI Act Omnibus: What just happened and what comes next? (IAPP)
- EU legislators agree to delay for high-risk AI rules (Hogan Lovells)
- The Council and Parliament agree to slim down and delay parts of the EU AI Act (Lewis Silkin)
- EU AI Act Delayed: The Omnibus Deal Closed on 7 May 2026 (Modulos)
- What the EU AI Omnibus Deal Changes for the AI Act and What Lies Ahead (TechPolicy.Press)
- EU reaches AI Act omnibus deal to simplify high-risk compliance and ban nudification apps (INSIGHT EU MONITORING)
- Digital Omnibus on AI: the Provisional Agreement of 7 May 2026 (NicFab Blog)
- AI Act: deal on simplification measures, ban on nudifier apps (European Parliament)
- Digital Omnibus on AI Trilogue Stalls Ahead of the AI Act Deadline (Bird & Bird)
- EU AI Act gets its first real haircut – high-risk deadlines pushed to 2027 (PPC Land)
- The AI Omnibus deal: what survived the trilogue? (Timelex)
Assisted by GAI and LLM Technologies
Additional reading
- China’s Meta-Manus block adds new risk layer to cross-border AI diligence
- Stakeholder governance gets a stricter audit
- Andrew Haslam’s eDisclosure Systems Buyers Guide at 14: What the 1H 2026 update reveals
- A Complete Analysis of the Winter 2026 eDiscovery Pricing Survey
- The M&A Risk of Confusing Market Velocity with Marketing Capability
- Confidence Meets Complexity: Full Results from the 2H 2025 eDiscovery Business Confidence Survey
- Making the Subjective Objective: A Scoring Framework for Evaluating eDiscovery Vendor Viability in 2026
- eDiscovery Vendor Viability Scoring Tool: Making the Subjective Objective
- Beyond Public Cloud: The Enduring Case for Deployment Flexibility in eDiscovery
Source: ComplexDiscovery OÜ

ComplexDiscovery’s mission is to enable clarity for complex decisions by providing independent, data‑driven reporting, research, and commentary that make digital risk, legal technology, and regulatory change more legible for practitioners, policymakers, and business leaders.