Industry "delighted" with UK's 'landmark' anti-bias AI standard
The algorithmic transparency standard will now enter a pilot scheme for closer scrutiny
The UK government has developed an algorithmic transparency standard that aims to eliminate biases in artificial intelligence (AI) applications.
The standard was built by the Cabinet Office's Central Digital and Data Office (CDDO), along with the Centre for Data Ethics and Innovation (CDEI), and will now go through a pilot scheme to identify weaknesses and improve it further.
The government said the UK is now one of the first countries in the world to build such a standard, which aims to make it easier to scrutinise the algorithmic decision-making of AI models and, ultimately, build public trust in the technology.
For years, there have been calls from many corners of the technology industry for transparency in AI's algorithmic decision-making, including from the likes of the Alan Turing Institute and the Ada Lovelace Institute.
The news has been warmly welcomed and there are hopes the standard may influence AI in the private sector too.
"Organisations are increasingly turning to algorithms to automate or support decision-making," said Adrian Weller, programme director for AI at The Alan Turing Institute and member of the Centre for Data Ethics and Innovation’s Advisory Board. "We have a window of opportunity to put the right governance mechanisms in place as adoption increases.
"This is why I’m delighted to see the UK government publish one of the world’s first national algorithmic transparency standards," he added. "This is a pioneering move by the UK government, which will not only help to build appropriate trust in the use of algorithmic decision-making by the public sector, but will also act as a lever to raise transparency standards in the private sector."
The development of the standard follows a CDEI review into bias in algorithmic decision-making, which recommended that the UK government mandate a transparency obligation on public sector organisations with respect to decisions made about individuals.
Several government departments will be involved in the pilot scheme over the coming months before the CDDO asks for formal endorsement from the Data Standards Authority later in 2022.
The move delivers on commitments made in the National AI Strategy and National Data Strategy, the government said, and it believes it will strengthen the UK's global leadership on AI governance and trustworthy AI.
The government will publish data from the pilot for the public and experts to engage with, hoping the additional oversight will increase external scrutiny and lead to greater transparency.
"We need democratic standards and good governance for new technologies, such as AI, that will enhance the way we work and benefit society," said Sir Patrick Vallance, chief scientific adviser and national technology adviser at the UK government.
"The launch of this new standard demonstrates this government’s commitment to building public trust and understanding of the application of these technologies, including exploring increased transparency in public sector use of algorithms."
Last year, the Home Office scrapped an AI tool used in the immigration process after it was accused of being racially biased and of creating a "hostile environment" in the visa application process.
Algorithmic bias infamously came to the fore in the private sector when Apple launched its Apple Card in 2019 and a number of users claimed they were unfairly offered lower credit limits than others.
Connor Jones has been at the forefront of global cyber security news coverage for the past few years, breaking developments on major stories such as LockBit’s ransomware attack on Royal Mail International, and many others. He has also made sporadic appearances on the ITPro Podcast discussing topics from home desk setups all the way to hacking systems using prosthetic limbs. He has a master’s degree in Magazine Journalism from the University of Sheffield, and has previously written for the likes of Red Bull Esports and UNILAD tech during his career that started in 2015.