LinkedIn lawsuit over AI model training withdrawn

LinkedIn logo and branding pictured on a smartphone screen held by shadowed hands.
(Image credit: Getty Images)

A lawsuit accusing LinkedIn of using customer data to train its AI models has been dropped.

Last month, a class action case was filed against the Microsoft-owned professional networking firm in a California federal court on behalf of LinkedIn user Alessandro De La Torre.

The suit accused the company of disclosing Premium customers' private messages to third parties to train generative AI models.

It sought compensation of $1,000 per Premium user for alleged violations of the US federal Stored Communications Act, along with unspecified additional sums for breach of contract and breach of California's Unfair Competition Law (UCL).

However, following disclosures from LinkedIn, De La Torre has filed a notice of dismissal without prejudice.

Sarah Wight, VP of legal for litigation, competition, and enforcement at LinkedIn, confirmed the withdrawal of the case in a LinkedIn post.

"Sharing the good news that a baseless lawsuit against LinkedIn was withdrawn earlier today. It falsely alleged that LinkedIn shared private member messages with third parties for AI training purposes," she wrote.

"We never did that. It is important to always set the record straight."

What was LinkedIn accused of?

The lawsuit arose after LinkedIn changed its privacy practices last year, opting users in by default to allow third parties to use their personal data to train AI.

The change was made quietly and prompted a fierce backlash from both users and privacy campaigners at the time.

Customers in Canada, the EU, the EEA, the UK, Switzerland, Hong Kong, and mainland China were exempted from the data sharing, but those in the US were not.

De La Torre has now accepted that the data did not include the content of private messages.

"LinkedIn’s belated disclosures here left consumers rightly concerned and confused about what was being used to train AI," Eli Wade-Scott, managing partner at Edelson PC, which represented De La Torre, told Reuters.

"Users can take comfort, at least, that LinkedIn has shown us evidence that it did not use their private messages to do that. We appreciate the professionalism of LinkedIn’s team."

Scraping data for AI training has become a recurring flashpoint for big tech in the last couple of years. Not only are there concerns about user privacy, there are also arguments over whether it constitutes fair use or copyright infringement.

Two years ago, Getty Images filed a lawsuit against Stability AI, claiming it had unlawfully scraped millions of Getty images to train its AI model, Stable Diffusion, without proper consent or a license.


Meanwhile, OpenAI and Meta have both been sued amid claims they used data from pirated books to train their AI models, thereby infringing authors' copyright.

Meta also remains under close scrutiny from the UK's Information Commissioner's Office (ICO) after promising to make it simpler for users to object to the processing of their data and to give them a longer window in which to do so.

Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.