Why Meta is playing with fire in Europe
The company is once again testing the limits of the GDPR
Meta clearly likes to play with fire. Already heavily fined for violating the European General Data Protection Regulation (GDPR), the tech giant from Menlo Park is now taking another risk—one that could result in a significant penalty. Starting May 27, it plans to use messages and comments posted on Facebook and Instagram, but not private messages, to "train and improve" its generative AI models.
Crucially, it does not plan to ask users for their consent. To justify its decision, Meta is relying on an opinion issued in late 2024 by the European Data Protection Board (EDPB), but its interpretation is being contested. As a result, the consumer protection agency of North Rhine-Westphalia, Germany’s most populous region, issued a formal notice on Monday.
Meta isn’t alone. In April, Ireland’s data protection authority (DPC) launched an investigation into xAI, Elon Musk’s AI startup, which has officially acquired X (formerly Twitter). Both companies are racing to capitalize on their competitive edge: a massive reservoir of user messages—something OpenAI and Google have had to obtain through paid agreements.
“Strictly necessary”
In doing so, Meta and xAI are testing the boundaries of the GDPR. Implemented in 2018, before the rise of generative AI, the regulation requires “freely given” and “informed” consent from users before their personal data can be used. But Meta and X are choosing not to seek that consent, knowing most users likely wouldn’t give it. Instead, they are offering only an opt-out system.
Meta invokes the “legitimate interest” principle, a clause in the GDPR that allows data use without consent under certain conditions. This isn't a new tactic: the company previously tried the same argument to justify targeted advertising, but European courts struck it down. In this case, Meta claims it received a green light from the EDPB.
The Board, which brings together national data protection authorities, was consulted by Ireland’s DPC. It concluded in December that “legitimate interest” could apply to AI development, but only if data use is deemed “strictly necessary.” That leaves room for interpretation. Austrian privacy activist Max Schrems argues that Meta’s practices are “clearly in violation of the GDPR.”
Meta’s reversed position
Even Meta’s legal teams seem uncertain about their stance. When the company announced in March the European launch of Meta AI, its AI assistant, it promised that European users’ data would not be used to train its models. Yet it reversed that position less than a month later.
Another issue could work against Meta: “It cannot be ruled out that particularly sensitive information protected under the GDPR might be used in training the AI,” notes Christine Steffen from the consumer protection agency of North Rhine-Westphalia.
Meta’s strategy appears highly risky, and another fine cannot be ruled out. But the company may see this as an acceptable price to pay in order to stay competitive in AI, which is now central to its strategy.