Meta delays plans to train AI using European user data
Meta won't continue with plans to train AI models using European user data following backlash from privacy groups
Meta has confirmed it will pause plans to train AI systems using data from EU citizens and won't launch Meta AI in the region for the time being.
Outside the EU and UK, the company will continue with its plans to train its AI on users' Facebook and Instagram posts. The tech giant’s justification for using social media data for AI training is that this is content that users have chosen to make public.
"We are following the example set by others, including Google and OpenAI, both of which have already used data from Europeans to train AI," the company said.
"Our approach is more transparent and offers easier controls than many of our industry counterparts already training their models on similar publicly available information."
The company claimed that under the UK’s Data Protection Act and the EU’s GDPR, it was entitled to collect the data on the legal basis of ‘legitimate interests’.
"Specifically, we have legitimate interests in processing data to build these services and this means that people can object using a form found in our Privacy Centre if they wish," it said.
However, data protection authorities aren't happy. The Irish Data Protection Commission (DPC), which regulates Meta in the EU, expressed concerns, along with several national data protection authorities across the region.
Meanwhile, privacy campaign group Noyb filed complaints in 11 European countries, asking their respective data protection authorities to prevent the move before it comes into force at the end of the month.
Now, Meta has bowed to the pressure and announced that it will pause its plans in the UK and EU.
"We’re disappointed by the request from the Irish Data Protection Commission (DPC), our lead regulator, on behalf of the European DPAs, to delay training our large language models (LLMs) using public content shared by adults on Facebook and Instagram — particularly since we incorporated regulatory feedback and the European DPAs have been informed since March," the company said in a statement.
Meta insisted that the pushback is a “step backwards for European innovation” and that it “further delays” bringing the benefits of AI to people across Europe.
Despite the move, Meta said it's still confident its approach complies with European laws and regulations. In addition, the tech giant said that without including local information it would only be able to offer people a 'second-rate experience', leading it to suspend the launch of Meta AI in the UK and EU.
While the move might be seen as an attempt to call the EU's bluff, it's been welcomed by the DPC, which said it will continue to engage with Meta on the issue.
The UK’s Information Commissioner’s Office (ICO) also welcomed the decision to delay the AI training plans.
"In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset," says Stephen Almond, the ICO's executive director for regulatory risk.
“We will continue to monitor major developers of generative AI, including Meta, to review the safeguards they have put in place and ensure the information rights of UK users are protected.”
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.