Nearly 50 million Europcar customer records put up for sale on the dark web – or were they?
Europcar denies alleged breach, claiming the exfiltrated data was fabricated. Experts are now arguing over whether AI is to blame.


A database containing nearly 50 million customer records reportedly stolen from car rental service Europcar has been put up for sale on a hacking forum, but questions have been raised over the data’s authenticity.
If legitimate, the leak would be one of the largest data breaches in recent years, and the nature of the stolen information would have exposed customers to a wide range of attacks.
As pointed out by Reddit’s head of security Matt Johansen on X (formerly Twitter), car rental companies require customers to hand over a lot of personally identifiable information (PII), including passports and driver’s licenses, which are much harder to rotate than exposed passwords.
Europcar, however, has said the database is fake, stating that records included in a sample of the data do not match those it has on file and that none of the leaked email addresses correspond to entries in its own database.
In addition, Europcar speculated that the data may have been synthesized using generative AI, pointing to a series of discrepancies that look like hallucinations.
For example, the data includes non-existent addresses, ZIP codes that don’t match addresses, and both first and last names that do not correspond to those used in email addresses.
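For illustration, the sketch below shows the kind of automated consistency checks a defender might run over a leaked sample to surface discrepancies like these. The field names, the sample record, and the ZIP-prefix lookup are hypothetical placeholders, not drawn from the dataset in question.

```python
# Hypothetical sketch: flag records whose internal fields contradict each other,
# the sort of "hallucination" signal described above. All data here is made up.
records = [
    {"first_name": "Anna", "last_name": "Schmidt",
     "email": "john.miller82@example.com",      # name does not match email
     "city": "Paris", "zip_code": "10115"},     # Berlin-style ZIP paired with Paris
]

# Toy ZIP-prefix-to-city table, for illustration only.
ZIP_PREFIX_TO_CITY = {"75": "Paris", "10": "Berlin"}

def flags_for(record: dict) -> list[str]:
    """Return red flags suggesting a record may be synthetic."""
    flags = []
    local_part = record["email"].split("@")[0].lower()
    if (record["first_name"].lower() not in local_part
            and record["last_name"].lower() not in local_part):
        flags.append("name does not appear in email address")
    expected_city = ZIP_PREFIX_TO_CITY.get(record["zip_code"][:2])
    if expected_city and expected_city != record["city"]:
        flags.append("ZIP code does not match stated city")
    return flags

for rec in records:
    print(rec["email"], flags_for(rec))
```

Checks like these cannot prove a dataset is fabricated on their own, but a high rate of internal contradictions across a sample is a strong signal that the records were generated rather than stolen.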
At the time of writing, the authenticity of the data has not been verified, but Huseyin Can Yuceel, security researcher at Picus Security, appeared confident the data was created using generative AI tools.
“The Europcar security incident unfolded like a classic Scooby-Doo unmasking. In the space of a couple of hours, the infosec community went from analyzing the impact of one of the biggest data breaches of all time, to exposing an AI-powered hoax.”
Can Yuceel argued the incident represents a novel attack vector in which threat actors use generative AI to create fake datasets designed to deceive and extort businesses.
“A far cry from initial reports of a data breach involving 50 million customers, this incident should be classified as an attempted social engineering attack. In social engineering attacks, it’s common for adversaries to manipulate their victims into sharing confidential information or executing malware to compromise the target system,” he explained.
“In this case, it seems as though attackers tried to create panic and pressure their target into paying ransom for a false claim that they stole sensitive customer data.”
Questioning the role of AI in fabricating data for extortion attempts
This debacle highlights AI’s offensive potential, according to Can Yuceel, who said businesses should take note of the extortion technique and adjust their incident response procedures accordingly.
“Adversaries are quick to adopt new techniques and tools, and the use of AI in cyber-attacks is becoming more commonplace. We, as defenders, should expect more AI-powered cyber-attacks in the near future.”
Other security experts have challenged this interpretation, however, arguing that AI’s role in fabricating the stolen data is unclear and that the dataset could have been produced using long-established techniques.
Troy Hunt, founder and CEO at data breach site Have I Been Pwned, warned against jumping to the conclusion that AI was integral to this attack, citing his previous work on generating dummy data using software company Red Gate’s SQL data generator technology.
Hunt noted that many of the email addresses were not synthesized at all, but lifted from records exposed in previous data breaches, which he argued shows AI was not needed to generate the leaked email addresses.
Regardless of AI’s role in the attack, the recommendation for businesses looking to avoid falling for fabricated data breaches remains the same: always compare the allegedly stolen records with internal databases to confirm the veracity of the breach before acting.
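A minimal sketch of that cross-check is shown below: compare a sample of the allegedly leaked email addresses against the internal customer list and report the overlap. The in-memory sets stand in for what would, in practice, be queries against production systems.

```python
# Minimal sketch of the cross-check described above. The sample data and the
# in-memory "internal_customers" set are placeholders for real database queries.
leaked_sample = {"john.miller82@example.com", "anna.schmidt@example.org"}
internal_customers = {"real.customer@example.net"}

matches = leaked_sample & internal_customers
match_rate = len(matches) / len(leaked_sample) if leaked_sample else 0.0

print(f"{len(matches)} of {len(leaked_sample)} sampled records match internal data "
      f"({match_rate:.0%}); a near-zero overlap points to a fabricated leak.")
```

A near-total overlap would indicate a genuine breach requiring disclosure, while a near-zero overlap, as Europcar reported, suggests the claim is an extortion attempt built on fake data.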
As such, Can Yuceel praised Europcar’s response to the incident, which involved cross-checking the purportedly stolen data against its internal database.
“It appears that Europcar did its due diligence and followed the incident response best practices by confirming whether these claims were true. After analyzing the claim, they found that the data was fake and confirmed there was no breach.”

Solomon Klappholz is a former staff writer for ITPro and ChannelPro. He has experience writing about the technologies that facilitate industrial manufacturing, which led to him developing a particular interest in cybersecurity, IT regulation, industrial infrastructure applications, and machine learning.