Why diversity in AI is critical to building a fairer society
To ensure AI is able to serve humanity as a whole, it needs to be built by humanity as a whole
In 2013, Wisconsin resident Eric Loomis was found guilty of involvement in a drive-by shooting and was sentenced to six years in prison and five years of extended supervision.
The sentence handed down was informed, in part, by an algorithm, something that has led many to question whether such technology belongs in a process that has always been strictly human-led. Of the thousands of cases referred to the US Supreme Court each year, this one would prove to be among the most groundbreaking.
The algorithm, known as Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), acts as a risk assessment tool for gauging the likelihood of reoffending. The United States imprisons more people per capita than any other country, and there is increasing pressure on authorities to lower the prison population. COMPAS is meant to help remedy that by only locking up the criminals deemed most destructive to society.
The program uses a questionnaire to assign each offender a score from 1 to 10, representing how likely they are to commit another crime. Those considered high risk - like Loomis - are more likely to receive harsher sentences. At its core, the program is meant to expedite the court process. Indeed, Equivant (formerly known as Northpointe), the company behind COMPAS, declares on its website: "If you're ready to make your work and life easier with Equivant, request a demo".
The US Supreme Court declined to hear Loomis' case in June 2017
The algorithm COMPAS uses, however, was developed in secret, and it has faced accusations of being unfair and biased against certain demographics. A ProPublica investigation in 2016 found that black defendants who did not go on to reoffend were almost twice as likely as their white counterparts to be wrongly labelled high risk - a disparity that can translate into far harsher sentencing. It's important to remember that defendants have no say in whether this technology is used.
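The disparity ProPublica described boils down to comparing false positive rates: among people who never went on to reoffend, what share in each group was nonetheless scored as high risk? A rough sketch of that kind of audit might look like the following, with the caveat that the column names and the "high risk" cut-off are illustrative assumptions rather than details of the real COMPAS data.

```python
# A minimal sketch of a false-positive-rate audit across demographic groups.
# Column names ("race", "risk_score", "reoffended") and the threshold of 7
# are illustrative assumptions, not taken from the actual COMPAS dataset.
import pandas as pd

def false_positive_rate(group_df: pd.DataFrame, threshold: int = 7) -> float:
    """Share of people who did NOT reoffend but were still labelled high risk."""
    did_not_reoffend = group_df[group_df["reoffended"] == 0]
    if did_not_reoffend.empty:
        return float("nan")
    return float((did_not_reoffend["risk_score"] >= threshold).mean())

def audit_by_group(df: pd.DataFrame, group_col: str = "race") -> dict:
    """Compute the false positive rate separately for each demographic group."""
    return {group: false_positive_rate(sub) for group, sub in df.groupby(group_col)}
```

If one group's rate comes out roughly double another's - the pattern ProPublica reported - the tool is making its worst kind of mistake far more often for that group, however accurate it looks overall.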
Loomis' isn't the only case in which artificial intelligence has been used to support decisions in the justice system - in fact, it's being used here in the UK. Durham Constabulary's HART algorithm, which is meant to inform police officers whether a defendant should be referred to a rehabilitation programme, recently came under scrutiny after it was accused of discriminating against the poor.
AI's prevalence throughout the criminal justice system is indicative both of its increasing pervasiveness throughout society as a whole and of the potential problems it could cause. The issue we're facing, however, is that the technology isn't fully understood.
It's becoming clear that when AI isn't designed with everyone in mind, there are real-life ramifications. Just recently, Amazon abandoned a recruiting tool that had been built by feeding years of past successful CVs into an algorithm. Because most of those CVs came from male applicants, the tool learned to penalise applications from women. Similarly, an MIT study found that three commercial gender-recognition AIs misclassified the gender of darker-skinned women up to 35% of the time.
In the aggressive pursuit of greater efficiency, AI may seem like a silver bullet, but it's clear that a lack of diversity in the development of the technology is starting to have unforeseen effects once it's applied in the real world.
(Lack of) Diversity in AI
It's no secret that the computer science industry suffers from a lack of diversity, especially in top-level positions. However, when we talk about diversity in AI, we don't just mean providing employment opportunities to a wider demographic; it's fundamental if these systems are ever to effectively replace human input.
As AI systems 'learn' from initial human input, they are susceptible to their creators' biases. Machines can only learn from the data set they're given. As end users of the tech, we assume the data being used is representative, but if the examples above show us anything, it's that the data often falls short.
According to Zoe Webster, director of AI and Data Economy at Innovate UK, AI must be able to meet the needs of all types of people, not just a few, if it's to serve a useful purpose in society.
Amazon's AI recruitment tool learned to filter out female applicants because its training data reflected a male-dominated hiring history
"AI, and in part machine learning, depends on the data you give it," says Webster. "If your data is garbage, you'll get garbage out. For example, [in the case of Amazon's HR tool] what came out was a classification system that was heavily biased against women. While that data wasn't garbage, it was biased, and that's a problem."
In essence, if there isn't diversity at the production stage, it's likely that the AI won't receive a diverse enough data set to make decisions - and that's not likely to change anytime soon. According to AI Index's 2018 annual report, men make up 71% of the applicant pool for AI jobs in the US, as well as 80% of AI instructors worldwide.
So, what's keeping women and people of colour out of AI? Webster suggests that societal and cultural expectations may stop women from seeing themselves in the AI industry.
"There's this perception of a geek who spends time holed in their room, and I'm not sure that's appealing for many women," adds Webster.This hacker stereotype highlights another metaphorical thorn in AI's side: a persistent labour shortage. The tech industry is growing so quickly that there are barely enough workers to keep up, let alone establish a diverse work culture. Diversity is a difficult sell when vacancies are going unfilled across the industry.
"One challenge is that teams are rarely large enough to include a representative sample of individuals," says Allison Chaney, board member of the Women in Machine Learning advocacy group. "But if the field more broadly represents the population, then we will become more aware of ways things can go wrong for subpopulations, just as designers are typically now more aware of how colour palettes impact colourblind individuals."
Matt Scherer, an employment lawyer who specialises in technology policy, believes that while a diverse workplace culture is beneficial in any industry, it's especially important in AI.
"The more diverse the developers of an AI system are, the better they will be able to 'teach' that system to operate effectively in a wide variety of settings," says Scherer. "Relatedly, the way that AI systems 'learn' is through data. For that reason, it's also important for AI developers to make an effort to acquire diverse and representative data sets, particularly where an AI system is being designed to interact with or make decisions about human beings."
AI still only works for the few
AI is, arguably, designed to make people's lives easier. But when it only helps a small section of society, it can actively work against everyone else. In a future where AI is a cornerstone of society, biased machines would create a massive opportunity gap for marginalised groups.
A simple example of this in action is the case of a South Korean woman who, in 2015, had her hair "eaten" by her Roomba as she slept on the floor. The machine itself was functioning properly and was perfectly capable of detecting potentially dangerous items, but the data used to build its algorithm never accounted for the fact that in many cultures it's common for people to sleep on the floor. It simply wasn't something the American team behind the robot had considered.
"Having a diverse and inclusive workplace reduces the risk of mishaps like that," explains Scherer, "because the perspectives and life experiences of a wider variety of people will then be part of the design and training process."
Organisations like Black in AI and Women in Machine Learning work to create such workplaces. Both groups hold workshops to help their members improve their professional skills while simultaneously providing a community of support.
Chaney, who has been a member of WiML since she was a junior grad student, says the organisation helped her gain experience while building confidence in her research.
"Conferences can be brutal for junior grad students if you have one or two people who come up to your poster and grill you with difficult technical questions," explains Chaney. "WiML doesn't have that environment, so it gives junior students a more friendly and constructive place to present a poster for the first time - instead of coming away worried that they can't make it in the field, they can go home thinking about their actual research."
"WiML gave me access to senior women and colleagues at my own stage," adds Chaney. "Having a network with both of these groups has been a been a boon for me in many ways - from helping my research directly to getting good advice while making career choices."
How to fix it
People both in and outside the industry are aware that the lack of diversity is a problem, and there's a desire to improve. However, Scherer warns that simply recruiting underrepresented people isn't enough. In order to eliminate the cyclical bias that haunts AI, workplaces need to have a culture of inclusion as well.
"Inclusion means creating a workplace environment and culture where everyone feels comfortable speaking up and providing input," says Scherer. "Seeking diversity without inclusion is a recipe for failure, because even if a company successfully recruits more people from underrepresented groups, a non-inclusive workplace often means that those new workers never truly feel like they are part of the team."
Such an environment is especially important in AI because, as in most other software development, developers work in teams. Programmers, user experts and legal minds collaborate on a product, but if all of these people contribute similar viewpoints, the AI they create will only have a narrow pool of perspectives to draw from. Employees need to feel that their voices can and should be heard.
OpenAI is working to build an inclusive environment for AI development
OpenAI, a nonprofit research company, is one of the many industry groups that has taken steps to create a more inclusive workplace environment. Its OpenAI Scholars Initiative provides stipends and mentorship to individuals from groups underrepresented in AI. It differs from a typical fellowship in that scholars can work from anywhere; the programme recognises that not everyone can afford to pack up, move to San Francisco and put their lives on hold for a few months.
For three months, OpenAI Scholars study deep learning and complete a project. Past projects include music commentary and emotional landscape generators. Scholars have gone on to work, study and teach in machine learning.
Programmes like these mean there's reason to be optimistic - the AI Index report found that WiML workshop attendance increased by 600% in the past year. But that doesn't mean the work is done. COMPAS is still in use across the US, and news of biased AI systems seems to hit the headlines every week.
In order to ensure AI is able to serve humanity as a whole, it needs to be built by humanity as a whole.