Meta has halted its plan to use public data from its European users on Facebook and Instagram to train its Llama family of generative AI models. The decision, announced Friday, follows significant pressure from the Irish Data Protection Commission (DPC), Meta’s lead privacy regulator in the European Union, and complaints from privacy advocacy groups.
The company had intended to begin using public posts, photos, and their captions from adult users across the EU and the UK to train future versions of its Llama models. However, the plan drew immediate fire from the advocacy group NOYB (None of Your Business), which filed 11 complaints across various European countries, arguing that Meta’s “opt-out” system was overly complex and violated users’ rights under the General Data Protection Regulation (GDPR).
In response to the backlash, the DPC requested that Meta pause the training process. “We are disappointed by the request from the Irish Data Protection Commission… particularly since we incorporated regulatory feedback and the European DPAs have been informed since March,” Meta stated in a blog post. The company argued that withholding local data would result in a “second-rate experience” for European users and hinder innovation in the region.
This development marks a significant victory for privacy advocates and underscores the fundamental conflict between the massive data requirements of modern AI development and Europe’s stringent data protection laws. While Meta asserts its process is legally compliant, the regulatory intervention forces the company back to the drawing board. For now, the Llama models will not be trained on recent public content from European users, potentially impacting their ability to understand local languages, cultures, and current events with the same nuance as in other regions. The pause highlights the growing power of regulators to shape the future of artificial intelligence within their borders.