LinkedIn backtracks on AI training rules after user backlash

LinkedIn logo and branding pictured on a smartphone screen held by shadowed hands.
(Image credit: Getty Images)

LinkedIn has suspended the use of UK user data for AI training following a fierce backlash from digital rights campaigners, users, and regulators.

The social network recently changed its privacy policy to use an ‘opt-out’ setting for the use of customer data to train its internal AI models.

In an update to its policies, the Microsoft-owned firm said the use of user data would help improve generative AI features. The change prompted a backlash among users on social media, with digital rights campaigners urging users to opt out of the scheme.

"Like other features on LinkedIn, when you engage with generative AI powered features we process your interactions with the feature, which may include personal data (e.g., your inputs and resulting outputs, your usage information, your language preference, and any feedback you provide)," LinkedIn said in its FAQs.

The UK’s Information Commissioner’s Office (ICO) expressed concerns over the move, with the regulator noting that the opt-out approach wasn’t sufficient to protect user privacy.

Similarly, digital rights group the Open Rights Group complained that the opt-out model “proves once again to be wholly inadequate to protect our rights”.

“The public cannot be expected to monitor and chase every single online company that decides to use our data to train AI," said legal and policy officer Mariano delli Santi.

"Opt-in consent isn't only legally mandated, but a common-sense requirement."

LinkedIn U-turn welcomed by privacy watchdog

LinkedIn has since backed down, however, and will no longer apply the policy in the UK, along with the EU, the European Economic Area, and Switzerland.

In a statement, Blake Lawit, SVP and general counsel at LinkedIn, said the company has changed its user agreement to include more details on its content recommendation and content moderation practices, along with new provisions relating to generative AI.

The privacy policy, meanwhile, now has more information on how user data is harnessed to develop products and services. This includes details on how user data is used to train AI models.

"We are not enabling training for generative AI on member data from the European Economic Area, Switzerland, and the United Kingdom, and will not provide the setting to members in those regions until further notice,” Lawit added.

The ICO has welcomed the decision, noting in a statement that LinkedIn has taken on board key concerns raised about its approach to AI training.

"We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its UK users,” said Stephen Almond, ICO executive director, regulatory risk.

"In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset."

The use of user data to train AI models has become a controversial issue. Earlier this year, Meta halted the use of UK Facebook and Instagram user data for this purpose after the ICO raised concerns.

The company has since started using UK data once again under an altered consent model, claiming it satisfied the ICO's demands.

The ICO said it will continue to monitor the situation with Meta, and the same, Almond said, will be true of LinkedIn.

“We will continue to monitor major developers of generative AI, including Microsoft and LinkedIn, to review the safeguards they have put in place and ensure the information rights of UK users are protected," he said.

Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
