World Economic Forum lambasts AI bias
Ethical questions about artificial intelligence are being raised by “obvious problems” with biased algorithms
A lack of diversity in the tech industry is raising serious questions about the future of artificial intelligence, according to the head of AI and machine learning at the World Economic Forum.
Speaking at an event in Tianjin, China, Kay Firth-Butterfield flagged the issue of bias within AI algorithms and stressed the need to make the industry "much more diverse" in the West.
"There have been some obvious problems with AI algorithms," she told CNBC, mentioning a case that occurred in 2015, when Google's image-recognition software labelled a black man and his friend as gorillas'. According to a report published earlier this year by Wired, Google has yet to properly fix this issue - opting instead to simply block search terms for primates.
"As we've seen more and more of these things crop up, then the ethical debate around artificial intelligence has become much greater," said Firth-Butterfield. She also noted the rollout of General Data Protection Regulation (GDPR) in Europe, claiming this has brought ethical questions about data and technology "to the fore".
The dominance of "white men of a certain age" in building technology was singled out as a root cause of bias creeping into the algorithms behind AI. Training machine-learning systems on racially uneven data sets has previously been noted as a problem, particularly within facial-recognition software.
An experiment undertaken earlier this year at the Massachusetts Institute of Technology (MIT), for example, involved testing three commercially available facial-recognition systems, developed by Microsoft, IBM and the Chinese firm Megvii. The results found that the systems correctly identified the gender of white men 99% of the time, but misidentified the gender of black women as much as 35% of the time.
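To illustrate how such disparities are typically quantified, the minimal sketch below computes classification accuracy broken down by demographic group. The data and group labels are entirely hypothetical and do not come from the MIT experiment; the point is only the per-group comparison.

```python
# Minimal sketch: per-subgroup accuracy for a classifier.
# All data below is hypothetical; it does not reproduce the MIT study.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return classification accuracy broken down by demographic group."""
    correct, total = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += truth == pred  # bool counts as 0 or 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical gender-classifier output (M/F labels):
y_true = ["M", "M", "F", "F", "F", "M", "F", "F"]
y_pred = ["M", "M", "F", "M", "M", "M", "F", "M"]
groups = ["white_men", "white_men", "black_women", "black_women",
          "black_women", "white_men", "black_women", "black_women"]

for group, acc in accuracy_by_group(y_true, y_pred, groups).items():
    print(f"{group}: {acc:.0%} correct")
```

Run on this toy data, the sketch reports 100% accuracy for one group and 40% for the other - the same kind of gap, measured the same way, as the study's headline figures.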
Dr Adrian Weller, programme director for artificial intelligence at The Alan Turing Institute, told IT Pro: "Algorithmic systems are increasingly used in ways that can directly impact our lives, such as in making decisions about loans, hiring or even criminal sentencing. There is an urgent need to ensure that these systems treat all people fairly - they must not discriminate inappropriately against any individual or subgroup.
"This is a particular concern when machine learning methods are used to train systems on past human decisions which may reflect historic prejudice."
Weller noted that a growing body of work is addressing the challenge of making algorithms fair, transparent and ethical. This outlook is similar to that of Firth-Butterfield, who emphasised that the World Economic Forum is trying to ensure AI grows "for the benefit of humanity".
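One common measure in that body of work is demographic parity: the gap in positive-outcome rates between groups. The sketch below is a minimal, hypothetical illustration of that check applied to loan decisions; the decision data is invented for this example and is not drawn from any real system.

```python
# Minimal sketch of one common fairness measure: the demographic
# parity gap, i.e. the difference in positive-outcome rates between
# two groups. Loan-approval data below is entirely hypothetical.

def positive_rate(decisions):
    """Fraction of decisions that are positive (e.g. loan approved)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute gap in approval rates between two groups;
    0.0 means both groups are approved at the same rate."""
    return abs(positive_rate(decisions_a) - positive_rate(decisions_b))

# Hypothetical model decisions (1 = approve, 0 = deny):
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 37.5% approved

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.1%}")  # prints 37.5%
```

Demographic parity is only one of several competing fairness definitions in the literature, and researchers such as Weller note that the definitions can conflict; which one is appropriate depends on the decision being made.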
A lack of human diversity might not be the only factor behind AI bias, however. A recent study by Cardiff University and MIT found that groups of autonomous machines can develop prejudice simply by identifying, copying and learning the behaviour from one another.
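A minimal sketch of that copying mechanism might look as follows. The payoff rule and all numbers are hypothetical assumptions made for this illustration - they do not reproduce the study's actual model - but they show how a trait can spread through a population by imitation of higher-scoring peers alone.

```python
# Minimal sketch: a trait spreading by imitation alone, loosely in the
# spirit of the Cardiff/MIT finding. The payoff rule below is a
# hypothetical assumption, not the study's actual model.
import random

random.seed(0)

# Each agent either cooperates with everyone (False) or only with its
# own group (True, i.e. "prejudiced"); roughly 20% start prejudiced.
agents = [{"prejudiced": random.random() < 0.2, "payoff": 0.0}
          for _ in range(50)]

for _ in range(20):
    # Assumed payoffs: suppose in-group-only play scores slightly
    # higher each round (the mechanism being illustrated, not a result).
    for agent in agents:
        agent["payoff"] = 1.1 if agent["prejudiced"] else 1.0
    # Imitation step: each agent copies a random peer that scored higher.
    for agent in agents:
        peer = random.choice(agents)
        if peer["payoff"] > agent["payoff"]:
            agent["prejudiced"] = peer["prejudiced"]

share = sum(a["prejudiced"] for a in agents) / len(agents)
print(f"Share of prejudiced agents after imitation: {share:.0%}")
```

Under these assumptions the prejudiced strategy spreads through the whole population within a few rounds, with no human data involved at any point - the dynamic the researchers describe.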