ChatGPT answers still can’t be trusted as EU regulators cite GDPR non-compliance
ChatGPT still returns inaccurate responses to users, EU regulators have warned


ChatGPT’s responses are still littered with inaccuracies and OpenAI has not done enough to remedy the issue to make the platform GDPR compliant, according to EU regulators.
According to a task force set up specifically to examine ChatGPT, the content OpenAI’s flagship chatbot generates is likely to be taken as fact even when its answers to user queries are inaccurate.
“The outputs provided by ChatGPT are likely to be taken as factually accurate by end users, including information relating to individuals, regardless of their actual accuracy,” the task force said.
Regulators advised OpenAI to take the necessary steps to ensure that users understand that ChatGPT’s “generated text, although syntactically correct, may be biased or made up”.
Though OpenAI has taken some measures, according to the watchdog, it has not done enough to make the platform fully compliant with GDPR standards and principles.
“Although the measures taken in order to comply with the transparency principle are beneficial to avoid misinterpretation of the output of ChatGPT, they are not sufficient to comply with the data accuracy principle, as recalled above,” the task force said.
The ChatGPT task force was set up last year by the European Data Protection Board (EDPB) “to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities”.
The move followed enforcement action by the Italian data protection authority in April 2023, which saw ChatGPT temporarily banned in Italy. That ban has since been lifted.
Accuracy is an ongoing concern in generative AI
Jeff Watkins, CPO and CTO of xDesign, told ITPro that accuracy is a difficult issue for generative AI platforms and large language models (LLMs).
Under GDPR, reasonable measures must be taken to remove inaccurate data, and regulators have grown concerned that some generative AI platforms are in breach of the legislation.
“If the model itself is generating inaccurate data due to hallucinations, trying to fix these errors is like playing whack-a-mole with a probability engine,” Watkins said.
ChatGPT has proven susceptible to such inaccuracies, with 2023 research finding the platform answered coding questions incorrectly 52% of the time.
While OpenAI’s CEO has described inaccuracies and hallucinations as part of the “magic” of generative AI, organizations still need to weigh the practical implications of inaccuracy, especially with regard to GDPR.
And while notable industry figures such as Dell CTO John Roese have recently claimed that hallucinations are no longer a major issue, platforms such as GitHub Copilot are still returning problematically erroneous code.
According to Watkins, generative AI developers will need to prove their credentials going forward when it comes to data accuracy, showing users “they’re improving on inaccuracies and providing sources”.

George Fitzmaurice is a former Staff Writer at ITPro and ChannelPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.