Europe ready to sacrifice privacy to catch up on AI
In the race for AI, Brussels considers rolling back regulation
“A gift to Big Tech.” Max Schrems is furious over the European Commission’s plan to revise the General Data Protection Regulation (GDPR). Brussels’ stated goal is to give European companies a competitive edge in training generative AI models amid the global AI arms race. “This would be a massive downgrading of Europeans’ privacy ten years after the GDPR was adopted,” warns the Austrian activist and leading digital rights advocate.
The GDPR isn’t the only law in Brussels’ sights. As part of a new “digital omnibus” to be unveiled next week, the European Commission also intends to postpone the implementation of certain provisions of the AI Act — the landmark regulatory framework adopted in spring 2024 after fierce negotiations. The proposals will still need approval from EU member states and the European Parliament, both far from guaranteed.
“Freely given” and “informed” consent
In force since 2018, the GDPR remains one of the EU’s most emblematic pieces of legislation. Last year, former ECB president Mario Draghi had already called for its overhaul in his report on Europe’s competitiveness. Its cornerstone provision requires obtaining users’ “freely given” and “informed” consent before processing their personal data — a safeguard originally designed for online advertising, but now also applicable to generative AI.
In recent months, several tech companies have been testing the regulation’s limits. In May, Meta began using public posts and comments from Facebook and Instagram to “train and improve” its AI models. LinkedIn and Elon Musk’s xAI, which recently acquired X (formerly Twitter), have followed suit. None of these platforms ask for consent, instead invoking the GDPR’s “legitimate interest” clause, based on a contested interpretation of guidance from European data protection authorities.
AI as a “legitimate interest”
According to a leaked draft obtained by German outlet Netzpolitik, the Commission now proposes to eliminate this legal ambiguity by explicitly recognizing the use of personal data for AI training as a “legitimate interest.” In practice, this would mean companies would no longer need prior user consent. They would only have to offer an “unconditional right to object” — typically buried deep in privacy settings and rarely exercised in practice.
Another major shift concerns the definition of personal data. Information would no longer be treated as such if the company collecting it cannot directly identify the individual concerned, thereby removing it from the GDPR’s scope. Brussels also plans to relax the “enhanced protection” currently granted to sensitive data, which would apply only when such data “directly reveal” racial or ethnic origin, political opinions, health status, or sexual orientation.
France pushes back
The changes are being driven largely by Germany — paradoxically, one of the EU’s strongest privacy advocates. France, by contrast, is calling for “targeted adjustments” and firmly opposes “reopening the GDPR” itself. Poland, Sweden, and several other member states share this cautious stance, while Austria rejects any modification outright. Together, these opponents could form a blocking minority in the European Council, potentially derailing parts of the Commission’s plan.
The simplification push extends beyond the GDPR to other EU data laws, including the little-known ePrivacy directive — the rule responsible for those ubiquitous cookie consent banners. Citing user “fatigue,” Brussels proposes easing requirements for cookies used for analytics and security, and introducing consent alternatives.
Loosening the AI Act
A second front concerns the AI Act itself, which imposes a series of obligations on AI systems, especially those deemed to pose “systemic risks.” Some of its provisions have already taken effect, with others scheduled for gradual rollout through 2027. Facing delays in implementation, the Commission now wants to adjust the timetable to “address the uncertainty and challenges caused by the delay in availability of standards.” The specific measures and new deadlines remain to be detailed.
This represents a policy shift. Over the summer, the Commission had refused to postpone key rules despite the late publication of a Code of Practice to help developers comply. Brussels also proposes a one-year grace period to set up mandatory watermarking for AI-generated content. And it recommends easing documentation, registration, and monitoring requirements.