LinkedIn faces lawsuit amid claims it shared users' private messages to train AI models
The professional networking app described the allegations as "false claims with no merit"


LinkedIn has been accused of using members' private messages to train AI models in a US lawsuit filed on behalf of the professional networking app's Premium users.
Filed in a California federal court on behalf of LinkedIn user Alessandro De La Torre, the lawsuit accuses the company of breaching its contractual promises by disclosing Premium customers' private messages to third parties to train generative AI models.
"Given its role as a professional social media network, these communications include incredibly sensitive and potentially life-altering information about employment, intellectual property, compensation, and other personal matters," the filing reads.
"Microsoft is the parent company of LinkedIn, and Defendant claims it disclosed its users’ data to third-party 'affiliates' within its corporate structure, and in a separate instance, more cryptically to 'another provider'. LinkedIn did not have its Premium customers’ permission to do so."
In a statement given to ITPro, a spokesperson for LinkedIn said: “These are false claims with no merit.”
The story behind the LinkedIn lawsuit
The case hinges on a change LinkedIn made to its privacy practices last year, under which users were opted in by default to allowing third parties to use their personal data to train AI.
According to the lawsuit, the change was initially made quietly, only appearing in the company's privacy policy in September after a backlash from users and privacy campaigners.
The company exempted customers in Canada, the EU, the EEA, the UK, Switzerland, Hong Kong, and mainland China from the data sharing, but not those in the US.
"Like other features on LinkedIn, when you engage with generative AI powered features we process your interactions with the feature, which may include personal data (e.g., your inputs and resulting outputs, your usage information, your language preference, and any feedback you provide)," the company said in its FAQs.
The change wasn't universally welcomed, however, with the UK’s Information Commissioner’s Office (ICO) noting that the opt-out approach wasn’t sufficient to protect user privacy.
Digital rights campaigners Open Rights Group also complained the opt-out model “proves once again to be wholly inadequate” to protect user rights.
The lawsuit seeks compensation of $1,000 per Premium user for alleged violations of the US federal Stored Communications Act, along with an unspecified additional sum for breach of contract and breach of California's Unfair Competition Law (UCL).
It also calls for the company to delete all AI models trained using improperly collected data.
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.