Why transparency is key to promoting trust in artificial intelligence
Breaking open the black box with explainable AI is the first step in making the technology fairer for all
Artificial intelligence (AI) is inescapable. In our daily lives we probably encounter it, and its best friend machine learning, far more often than we think. Did you buy something online yesterday, use face login on your smartphone, check Facebook, search for something on Google, or use Google Maps? AI was right there.
When AI is helping us find the most efficient route home, we’re often quite happy to let it do its job. But this technology already does so much more, from helping to decide whether we should be granted a bank loan and supporting the diagnosis of our illnesses, to serving us targeted advertising.
A question of trust
As AI gets more and more embedded in our lives and helps make decisions that are increasingly significant to us, we’re rightly concerned about transparency. When big news stories like the Cambridge Analytica scandal, or the ongoing discussion around inherent biases in facial recognition, hit the headlines, we worry about bias (intentional or otherwise), and our trust in AI takes a hit.
Explainable AI gives us a route to greater trust in AI. It is designed to help us learn more about how AI works in any given situation. So, instead of the AI just giving us an answer to a question, it shows us how it got to the answer. The alternative is the so-called ‘black box’ situation – where an AI uses an unspecified range of information and algorithms to get to an answer, but doesn’t make any of this transparent.
In theory, explainable AI gives us confidence in the conclusions an AI system draws. Dr Terence Tse, Associate Professor of Finance at ESCP Business School, gives the following example: “Imagine you want to obtain a loan and the approval is purely determined by an algorithm. Your loan gets rejected. If the algorithm in question is a black box it’s an issue for all parties. The bank cannot say why this is happening, and you don't know what to do in order to obtain the loan. Having explainable AI will help.”
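For readers who want a feel for what that could look like in practice, the short Python sketch below imitates the loan scenario: it trains a toy model on made-up data and then itemises how each factor pushed one applicant’s decision towards approval or rejection. The feature names, figures and model are purely illustrative assumptions, not how any real lender’s system works, and production systems typically lean on richer techniques such as SHAP or LIME.

```python
# A minimal sketch, on made-up data, of a per-applicant explanation
# for a hypothetical loan-approval model. Feature names, values and
# the model itself are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "credit_history_years"]

# Synthetic training data: approval loosely depends on the three features.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500)) > 0

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# One hypothetical applicant: low income, high debt, short credit history.
applicant = scaler.transform(np.array([[-1.2, 1.5, 0.1]]))

# For a linear model, each feature's contribution to the log-odds
# (ignoring the intercept) is simply coefficient * feature value,
# so a single decision can be itemised factor by factor.
contributions = model.coef_[0] * applicant[0]
for name, value in sorted(zip(feature_names, contributions), key=lambda p: p[1]):
    print(f"{name}: {value:+.2f}")

print("decision:", "approve" if model.predict(applicant)[0] else "reject")
```

With that kind of breakdown, a bank could at least tell a rejected applicant which factors counted most heavily against them, which is exactly the redress Tse describes.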
Shedding light on competence
Explainable AI is a vital aspect of understanding an AI’s competence in coming up with any particular set of outputs. Mark Stefik, Research Fellow and Lead of Explainable AI at PARC, a Xerox company, tells IT Pro: “Typically, when people interact with AIs and the systems do the right thing, then people overestimate the AI’s competence. They assume that the machines think like people, which they do not. They assume that machines have common sense, which they do not.”
In fact, AI does not ‘think’ like humans do at all. When we say an AI ‘thinks’, we are describing a way of working that is really quite different to that of our own brains: AI uses algorithms and machine learning to draw conclusions from the data it is given, or from the insights it generates. In showing how an AI has reached its decision, explainable AI can help uncover biases and, in doing so, not only provide individuals with redress, as in the banking example above, but also help refine the AI system itself.
Oleg Rogynskyy, Founder and CEO of People.ai says: “A lack of explainability on how the machine learning model thinks can result in biases. If there is a bias hidden in the data set a machine learning model is trained on, it will consider the bias a ground truth.
“Explainability techniques can be used to detect and then remove biases and ensure a level of trust between the machines and the user.”
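As a simple illustration of the kind of check Rogynskyy describes, the hypothetical Python sketch below compares a model’s approval rate across two groups defined by a sensitive attribute. The predictions and group labels are invented for the example, and real fairness audits go much further, but a gap like this is often the first signal that a bias in the training data has been treated as ground truth.

```python
# A minimal sketch of one basic bias check: comparing a model's
# approval rate across groups defined by a sensitive attribute.
# Both the predictions and the group labels are invented here.
import numpy as np

rng = np.random.default_rng(1)

group = rng.choice(["group_a", "group_b"], size=1000)

# Deliberately skewed hypothetical outputs so the disparity is visible:
# 1 = approved, 0 = rejected.
approve_prob = np.where(group == "group_a", 0.7, 0.4)
predictions = (rng.random(1000) < approve_prob).astype(int)

for g in np.unique(group):
    rate = predictions[group == g].mean()
    print(f"{g}: approval rate {rate:.1%}")

# A large gap between the two rates suggests the training data (or the
# model built on it) encodes a bias that needs investigating and fixing.
```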
Making explainable AI ubiquitous
As AI takes an increasingly important role in our everyday lives, we are getting more and more concerned about whether we can trust it. As Stefik puts it: “The need for explainable AI increases if we want to use the systems in critical situations, where there are real consequences for good and bad decisions. People want to know when they can trust the systems before they rely on them.”
The industry recognises this need. In a recent IBM survey of 4,500 IT decision makers, 83% of respondents said being able to explain how an AI arrived at a decision is universally important. That figure rose to 92% among those already deploying AI, compared with 75% of those still considering a deployment.
Rogynskyy is unequivocal in his message, saying: “Explainable AI must be prevalent everywhere.” Tse is similarly forthright, adding: “If we want to gain public trust in the deployment of AI, we have to make explainable AI a priority.”
Stefik, however, has reservations, particularly when it comes to how we define terms like ‘trust’ and ‘explainable’, which he argues are nuanced and complex concepts. Nevertheless, he hasn’t written explainable AI off completely, saying: “It is not ready as a complete (or well-defined) approach to making trustworthy systems, but it will be part of the solution.”