AI recruitment tools are still a privacy nightmare – here's how the ICO plans to crack down on misuse
The ICO has issued guidance for recruiters and AI developers after finding that many are mishandling data


The UK's Information Commissioner’s Office (ICO) has issued guidance on the use of AI recruitment tools following a wide-ranging review.
With AI increasingly being used to source potential candidates, summarize CVs, and score applicants, the ICO said it has become concerned that these tools can put job applicants' privacy and information rights at risk.
Some AI tools were not processing personal information fairly, for example, by allowing recruiters to filter out candidates with certain protected characteristics.
Others were even inferring characteristics such as gender and ethnicity from a candidate’s name, rather than asking for the information.
Meanwhile, some AI recruitment tools collected far more personal information than necessary and retained it indefinitely to build large databases of potential candidates without their knowledge.
The ICO has now made nearly 300 recommendations, such as ensuring personal information is processed fairly and kept to a minimum, and clearly explaining to candidates how their information will be used by the AI tool.
"AI can bring real benefits to the hiring process, but it also introduces new risks that may cause harm to jobseekers if it is not used lawfully and fairly. Our intervention has led to positive changes by the providers of these AI tools to ensure they are respecting people’s information rights," said Ian Hulme, ICO director of assurance.
"Our report signals our expectations for the use of AI in recruitment, and we're calling on other developers and providers to also action our recommendations as a priority. That’s so they can innovate responsibly while building trust in their tools from both recruiters and jobseekers."
Some of the most important measures, said the ICO, include completing a Data Protection Impact Assessment (DPIA), to understand, address and mitigate any potential privacy risks or harms.
Organizations must also identify an appropriate lawful basis for collecting data, such as consent or legitimate interests. Special category data, such as racial or ethnic origin or health data, involves extra conditions.
Recruiters must identify the controller and processor of personal information, set explicit and comprehensive written instructions for providers to follow, and check that the provider has mitigated bias.
Candidates should be told how an AI tool will process their personal information, and recruiters must ensure that the tool collects only the minimum amount of personal information required to achieve its purpose, and that it won't be used in any other ways.
The developers concerned have all accepted, or partially accepted, the ICO's recommendations.
“We are actively working to implement the specific actions agreed with the ICO in our audit plan," said one. "For example, we are making sure to provide the relevant information regarding the use of AI in our privacy policies and evaluating the steps taken to minimize bias when training and testing our AI tools."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.