EU Approves Meta’s Use Of Social Media Content To Train AI
Artificial intelligence feeds on data. But how far can it draw from our digital lives? The answer is taking a new turn in Europe. Meta has just received approval from European regulators to train its AI models on public content shared by its users. A decision that raises as many technological hopes as ethical questions.
Meta crosses a regulatory milestone: AI fueled by social networks
The green light given to Meta, whose AI assistant recently launched in France, is not a blank check. The company will be able to use public posts, comments, and queries addressed to its AI assistant on Facebook, Instagram, WhatsApp, and Messenger. But beware: private messages between close contacts and data from users under 18 remain off-limits. A crucial distinction, presented as a “guarantee” by Meta.
Why this choice? The company argues that Europe’s linguistic and cultural diversity demands models capable of grasping local dialects, the deadpan humor of Nice, or hyperlocal references from Berlin. “Without this variety, AI would become an awkward tool, unable to understand the nuances that make our platforms come alive,” it explains. A compelling argument, but not enough to ease concerns.
European users nevertheless have a way out: an opt-out form, promised to be easily accessible.
It remains to be seen whether this mechanism will be as visible as a fleeting Instagram story… or buried in the depths of account settings. Meta insists it wants to “empower” its users. A delicate balance between transparency and practicality.
A changing regulatory context: between opportunities and challenges
This authorization does not come out of nowhere. It closes a standoff that began in July 2023, when the NGO None of Your Business secured a freeze of the project, denouncing the possible retroactive exploitation of personal data.
After months of negotiations, Meta convinced the European Data Protection Board that its approach respected the legal framework. A turnaround that illustrates the tensions between innovation and privacy.
Meta is not a lone pioneer. Google and OpenAI have already used European data for their AI, while X (formerly Twitter) had to give up training Grok on EU user information. The difference? Meta is relying on legitimacy by example: “We follow established players”, it emphasizes. A clever strategy, but one that could feed a vicious cycle of normalization.
In this shifting landscape, the EU is attempting to set guardrails. Its Artificial Intelligence Act, which came into effect in August 2024, regulates data quality, security, and privacy. Yet investigations are multiplying, such as the one targeting Google for possible shortcomings in the development of its models. Proof that the current framework remains a work in progress, where each authorization sets a precedent to watch. All the more so as Google is also changing its rules for crypto ads in Europe under MiCA, a sign that digital regulation is advancing on multiple fronts.