TL;DR: The Ada Lovelace Institute warns that leaked EU Digital Omnibus proposals, framed as regulatory simplification, would fundamentally weaken GDPR and AI Act protections. Changes include redefining personal data to open anonymisation loopholes, legitimising AI training on sensitive data without consent, and permitting automated decision-making at organisational discretion. A coalition of 126 organisations urges the Commission to retract the proposals.
Personal Data Redefinition Creates Compliance Loopholes
The most consequential GDPR modification targets the definition of personal data itself. Currently, data is protected if it relates to an identified or identifiable person, regardless of who processes it. The proposed change would apply GDPR protections only where the specific data controller itself possesses the means to identify individuals.
This shift enables what the Institute describes as a “race to the bottom” in anonymisation standards. If controllers need not stress-test their anonymisation techniques, they can classify data as anonymous and share it with entities that possess stronger identification capabilities, exposing individuals to downstream abuse whilst the original controller claims compliance.
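To make the mechanism concrete, here is a minimal, hypothetical sketch of the kind of linkage attack the Institute is pointing at: a dataset the original controller treats as anonymous is re-identified by a downstream recipient holding auxiliary data. All names, fields and values are invented for illustration.

```python
import pandas as pd

# Dataset a controller releases as "anonymous": direct identifiers removed,
# but quasi-identifiers (zip, birth year, sex) retained.
released = pd.DataFrame({
    "zip":        ["1010", "1010", "2020"],
    "birth_year": [1985, 1990, 1985],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["diabetes", "asthma", "hypertension"],
})

# Auxiliary dataset held by a downstream recipient, e.g. a public register.
auxiliary = pd.DataFrame({
    "name":       ["A. Muster", "B. Beispiel"],
    "zip":        ["1010", "2020"],
    "birth_year": [1985, 1985],
    "sex":        ["F", "F"],
})

# A linkage attack: join on the quasi-identifiers to re-attach identities.
reidentified = auxiliary.merge(released, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
# A. Muster -> diabetes, B. Beispiel -> hypertension
```

Under the proposed controller-relative test, the releasing controller could deem this dataset anonymous because it cannot perform the join itself; the recipient plainly can.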
The proposal ignores the global and dynamic nature of data, which circulates through processing chains in which different entities hold varying identification capabilities. With separate proposals removing record-keeping obligations for organisations with fewer than 750 employees, regulators would face a diminished ability to investigate potential abuses.
Sensitive Data Protections Substantially Weakened
Article 9 modifications would limit special category data protections to information directly revealed by data subjects, excluding sensitive data inferred from correlations. Shopping preferences indicating pregnancy, location data revealing visits to hospitals or gay bars, and browsing history suggesting political affiliation would all lose protection.
This change legitimises profiling, behavioural targeting and predictive analytics through increased “comparison, cross-referencing or deduction” of sensitive information. The Institute warns this enables political targeting threatening democratic norms throughout Europe, with vulnerable populations particularly exposed to manipulation through inferred psychological and financial data.
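As a toy illustration of how unprotected inference would work in practice, consider this sketch; the signal list and threshold are invented, and real systems use statistical models over far richer data:

```python
# Hypothetical purchase histories: no single field is sensitive on its face.
purchases = {
    "user_42": ["unscented lotion", "prenatal vitamins", "cotton balls"],
    "user_43": ["coffee", "notebook"],
}

# A crude rule set standing in for the predictive models retailers deploy.
PREGNANCY_SIGNALS = {"unscented lotion", "prenatal vitamins", "cotton balls"}

def infer_pregnancy(basket: list[str]) -> bool:
    """Deduce a health-related (special category) fact from shopping data."""
    return len(PREGNANCY_SIGNALS.intersection(basket)) >= 2

for user, basket in purchases.items():
    print(user, "flagged:", infer_pregnancy(basket))
```

Under the proposed Article 9 reading, the inference (“likely pregnant”) would fall outside special category protection because the shopper never directly revealed it.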
AI Training Exemptions Override Data Protection
New GDPR Article 88c would establish AI development and operation as legitimate interests for processing personal data, giving organisations broad discretion to determine whether their interests interfere with individual rights. The formulation covers AI system “operation” without defining its scope, creating uncertainty about which processing activities fall outside the provision.
Further provisions would permit processing special category data for AI training where removal would require “disproportionate effort”, a determination left to AI developers’ own technical and financial assessments. This inverts the proportionality test: instead of individuals’ rights setting the bar for when processing is proportionate, companies’ self-assessed burden determines the level of protection.
The proposal mentions an unconditional right to object, but acknowledges that the messy nature of data scraping makes exercising that right unlikely in practice. Individuals whose data appears in scraped datasets may lack the knowledge or means to object, whilst companies can retain sensitive data if removal “requires re-engineering the AI system or AI model.”
Automated Decision-Making Legitimised Without Consent
Proposed Article 22 modifications would permit automated decision-making whenever data controllers deem it “necessary” for contractual relations, regardless of whether less intrusive means exist. This shifts the test from objective necessity under European law to organisational discretion.
The Institute cites employment contexts where automated systems allocating shifts and setting pay could operate without accounting for absence reasons or productivity variations, potentially leading to unfair practices, discrimination and limited appeal options. Companies could extend such systems to pre-hiring risk profiling, assessing prospective employees through CV-extracted information linked to existing worker data.
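A deliberately simplified sketch of the kind of system at issue (all names, fields and weights are invented): a shift allocator that scores workers on raw output and absences, with no field for why someone was absent and no appeal path.

```python
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    completed_tasks: int
    hours_absent: int  # the model sees a number, never the reason

def allocate_shifts(workers: list[Worker], open_shifts: int) -> list[str]:
    # Rank purely on output minus an absence penalty; sick leave and caring
    # duties silently depress the score, and no human reviews the outcome.
    ranked = sorted(
        workers,
        key=lambda w: w.completed_tasks - 2 * w.hours_absent,
        reverse=True,
    )
    return [w.name for w in ranked[:open_shifts]]

team = [Worker("Ana", 120, 16), Worker("Ben", 100, 0)]
print(allocate_shifts(team, 1))  # ['Ben'], despite Ana's higher raw output
```

Under the proposed regime, deploying such a system would turn on the controller’s own judgement of contractual necessity rather than an objective legal test.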
AI Act Transparency Requirements Reduced
Before the AI Act is even fully implemented, it faces modifications that would weaken visibility into high-risk systems. Providers that self-assess their systems as non-high-risk would no longer register them in the EU-wide database, reducing transparency and weakening accountability.
Quality Management System simplifications would extend to all SMEs, including startups, despite the fact that such organisations can quickly reach large operational scale. A less rigorous QMS could compromise documentation of safety elements, complicating audits and incident tracing.
Evidence Base Questioned
The Institute emphasises that streamlining cannot come at societal expense without evidence-based justification. After seven years, a short lifespan for any regulation, the GDPR rulebook remains immature, and enforcement is known to be under-resourced. Businesses have also invested significantly in compliance capabilities that the proposed changes would disrupt.
For the AI Act, which is only just entering implementation, little data exists to judge the Omnibus’s implications or to evidence a fundamental case for changing the rules. The Institute, alongside 126 organisations, urges the Commission to retract proposals that conflict with the Charter of Fundamental Rights and Convention 108, arguing that modifications of this kind require robust evidence and a comprehensive fundamental rights impact assessment rather than fast-track omnibus procedures.
The Digital Omnibus covering data and AI is expected to be officially published on 19 November, before proceeding to European Parliament and Council review. Even if some proposals are dropped, the fact that the Commission considered such profound changes merits scrutiny.
Source: Ada Lovelace Institute