AI recruitment tools are still a privacy nightmare – here's how the ICO plans to crack down on misuse

Concept image showing a robotic hand dropping a man in a suit into the garbage, signifying AI recruitment tools.
(Image credit: Getty Images)

The UK's Information Commissioner’s Office (ICO) has issued guidance on the use of AI recruitment tools following a wide-ranging review.

With AI increasingly being used to source potential candidates, summarize CVs, and score applicants, the ICO said it has become concerned that these tools can cause problems for job applicants in terms of their privacy and information rights.

Some AI tools were not processing personal information fairly; for example, some allowed recruiters to filter out candidates with certain protected characteristics.

Others were even inferring characteristics such as gender and ethnicity from a candidate’s name, rather than asking for the information.

Meanwhile, some AI recruitment tools collected far more personal information than necessary and retained it indefinitely to build large databases of potential candidates without their knowledge.

The ICO has now made nearly 300 recommendations, such as ensuring personal information is processed fairly and kept to a minimum, and clearly explaining to candidates how their information will be used by the AI tool.

"AI can bring real benefits to the hiring process, but it also introduces new risks that may cause harm to jobseekers if it is not used lawfully and fairly. Our intervention has led to positive changes by the providers of these AI tools to ensure they are respecting people’s information rights," said Ian Hulme, ICO director of assurance.

"Our report signals our expectations for the use of AI in recruitment, and we're calling on other developers and providers to also action our recommendations as a priority. That’s so they can innovate responsibly while building trust in their tools from both recruiters and jobseekers."

Some of the most important measures, said the ICO, include completing a Data Protection Impact Assessment (DPIA) to understand, address, and mitigate any potential privacy risks or harms.

Organizations must also identify an appropriate lawful basis for collecting data, such as consent or legitimate interests. Special category data, such as racial or ethnic origin or health data, is subject to extra conditions.

Recruiters must identify who the controller and processor of personal information are, set explicit and comprehensive written instructions for providers to follow, and check that the provider has mitigated bias.

Candidates should be told how an AI tool will process their personal information, and recruiters must ensure that the tool collects only the minimum amount of personal information required to achieve its purpose, and that it won't be used in any other way.

The developers concerned have accepted, or partially accepted, all of the ICO's recommendations.

“We are actively working to implement the specific actions agreed with the ICO in our audit plan," said one. "For example, we are making sure to provide the relevant information regarding the use of AI in our privacy policies and evaluating the steps taken to minimize bias when training and testing our AI tools."

Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.