Austrian privacy non-profit noyb (none of your business) has sent Meta’s Irish headquarters a cease-and-desist letter, threatening the company with a class action lawsuit if it proceeds with its plans to use users’ data to train its artificial intelligence (AI) models without an explicit opt-in.
The move comes weeks after the social media behemoth announced its plans to train its AI models using public data shared by adults across Facebook and Instagram in the European Union (E.U.) starting May 27, 2025, after it paused the efforts in June 2024 following concerns raised by Irish data protection authorities.
“Instead of asking consumers for opt-in consent, Meta relies on an alleged ‘legitimate interest’ to just suck up all user data,” noyb said in a statement. “Meta may face massive legal risks – just because it relies on an ‘opt-out’ instead of an ‘opt-in’ system for AI training.”
The advocacy group further noted that Meta AI is not compliant with the General Data Protection Regulation (GDPR) in the region, and that, besides claiming a “legitimate interest” in taking user data for AI training, the company is also limiting the right to opt out to the period before the training has started.
Noyb also pointed out that even if only 10% of Meta’s users expressly agreed to hand over their data for this purpose, it would amount to enough data points for the company to learn E.U. languages.
It’s worth pointing out that Meta previously claimed that it needed to collect this information to capture the diverse languages, geography, and cultural references of the region.
“Meta starts a huge fight just to have an opt-out system instead of an opt-in system,” noyb’s Max Schrems said. “Instead, they rely on an alleged ‘legitimate interest’ to just take the data and run with it. This is neither legal nor necessary.”
“Meta’s absurd claims that stealing everyone’s personal data is necessary for AI training is laughable. Other AI providers do not use social network data – and generate even better models than Meta.”
The privacy group also accused the company of moving ahead with its plans by putting the onus on users to object, and pointed out that national data protection authorities have largely stayed silent on the legality of AI training without consent.
“It therefore seems that Meta simply moved ahead anyways – taking another huge legal risk in the E.U. and trampling over users’ rights,” noyb added.
In a statement shared with Reuters, Meta rejected noyb’s arguments, stating they are wrong on the facts and the law, and that it has provided E.U. users with a “clear” option to object to their data being processed for AI training.
This is not the first time Meta’s reliance on GDPR’s “legitimate interest” to collect data without explicit opt-in consent has come under scrutiny. In August 2023, the company agreed to change the legal basis from “legitimate interest” to a consent-based approach to process user data for serving targeted ads for people in the region.
The disclosure comes as the Belgian Court of Appeal ruled that the Transparency and Consent Framework, used by Google, Microsoft, Amazon, and other companies to obtain consent for data processing for personalized advertising purposes, is illegal across Europe, citing violations of several GDPR principles.