LinkedIn backtracks on AI training rules after user backlash
UK-based LinkedIn users will now get the same protections as those elsewhere in Europe


LinkedIn has suspended the use of UK user data for AI training following a fierce backlash from digital rights campaigners, users, and regulators.
The social network recently changed its privacy policy, introducing an ‘opt-out’ setting governing the use of customer data to train its internal AI models.
In an update to its policies, the Microsoft-owned firm said the use of user data would help improve generative AI features. The change prompted an outcry among users on social media, with digital rights campaigners urging users to opt out of the scheme.
"Like other features on LinkedIn, when you engage with generative AI powered features we process your interactions with the feature, which may include personal data (e.g., your inputs and resulting outputs, your usage information, your language preference, and any feedback you provide)," LinkedIn said in its FAQs.
The UK’s Information Commissioner’s Office (ICO) expressed concerns over the move, with the regulator noting that the opt-out approach wasn’t sufficient to protect user privacy.
Similarly, digital rights group Open Rights Group complained the opt-out model “proves once again to be wholly inadequate to protect our rights”.
“The public cannot be expected to monitor and chase every single online company that decides to use our data to train AI," said legal and policy officer Mariano delli Santi.
"Opt-in consent isn't only legally mandated, but a common-sense requirement."
LinkedIn U-turn welcomed by privacy watchdog
LinkedIn has since backed down, however, and will no longer apply the policy in the UK, along with the EU, the European Economic Area, and Switzerland.
In a statement, Blake Lawit, SVP and general counsel at LinkedIn, said the company has changed its user agreement to include more details on its content recommendation and content moderation practices, along with new provisions relating to generative AI.
The privacy policy, meanwhile, now has more information on how user data is harnessed to develop products and services. This includes details on how user data is used to train AI models.
"We are not enabling training for generative AI on member data from the European Economic Area, Switzerland, and the United Kingdom, and will not provide the setting to members in those regions until further notice,” Lawit added.
The ICO has welcomed the decision, noting in a statement that LinkedIn has taken on board key concerns raised about its approach to AI training.
"We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its UK users,” said Stephen Almond, ICO executive director, regulatory risk.
"In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset."
The use of customer data to train AI models has become a controversial issue. Earlier this year, Meta halted the use of UK Facebook and Instagram user data for AI training after the ICO raised concerns.
The company has since resumed using UK data under an altered consent model, claiming it satisfies the ICO's demands.
The ICO said it will continue to monitor the situation with Meta - and the same, Almond said, will be true of LinkedIn.
“We will continue to monitor major developers of generative AI, including Microsoft and LinkedIn, to review the safeguards they have put in place and ensure the information rights of UK users are protected," he said.
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.