The Apple Card was meant to be a modern way of managing money, but it’s offered a lesson in machine learning and algorithms instead.
The Apple Card launched in the US in summer 2019, backed by Goldman Sachs. In November, a thread went viral on Twitter after a customer claimed Apple had offered him a credit limit 20 times higher than his wife's, despite their finances being identical. Customer service told the couple the difference was down to the algorithm, but the customer in question was David Heinemeier Hansson, the creator of Ruby on Rails and founder of Basecamp, and he didn't buy it.
Instead, he took to Twitter. “Apple Card is a sexist program,” he posted. “It does not matter what the intent of individual Apple reps are, it matters what the algorithm they’ve placed their complete faith in does. And what it does is discriminate.” Awkwardly, his story was echoed by Apple cofounder Steve Wozniak, who said his wife faced a similar issue.
The ensuing outcry sparked an investigation by the New York State Department of Financial Services and led Goldman Sachs, which handles credit decisions for the card, to say it would re-evaluate any applications that customers believed were unfair. But the company flatly denied any allegation of bias. “We have not and never will make decisions based on factors like gender,” said Carey Halio, the CEO of Goldman Sachs Bank USA.
But that misunderstands what Goldman Sachs and its partner Apple are being accused of – no one, not even Heinemeier Hansson at his angriest, argued that the system was biased on purpose. Instead, the accusation is that bias is unknowingly built into the algorithm, which even Goldman Sachs’ customer service staff treat as a black box.
Bias in algorithms
Sandra Wachter is a senior research fellow at the Oxford Internet Institute and the Turing Institute. While she doesn’t know exactly what data the system is using – which is rather the problem – she isn’t surprised that this problem has cropped up. “We know that whenever algorithmic systems are being used, that this could potentially lead to biased decision-making – and that’s a known problem that we have to deal with going forward if we use those systems to make important decisions about us. This could be loan decisions, employment, those kinds of things.”
That’s particularly problematic when machine learning comes into play, as we don’t know how it correlates data points to make decisions: you can ask a human loan officer why he or she made a decision, but that’s harder with a machine.
Wachter gives an example involving dog ownership. A credit algorithm could spot that people who own dogs seem to be better at repaying loans; from then on, it starts checking whether applicants have a dog. That isn’t inherently biased on gender, ethnicity, orientation and so on, but it could have hidden connections that we aren’t aware of.
“For example, if it’s in London, where it’s harder to buy a house so more people rent, landlords won’t let you have pets in your flat,” she said. “Automatically, you’re starting to discriminate against people, even though you don’t want to and don’t know about it.” Spotting that isn’t easy, Wachter explains, and we can’t expect programmers or developers to understand where inequalities exist and how not to trip up on them.
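To make that mechanism concrete, here is a minimal sketch in Python using synthetic data – not the Apple Card model, whose features and training data are unknown. It shows how a model trained without any protected attribute can still score two groups differently when a seemingly neutral feature, such as pet ownership, happens to correlate with that attribute.

```python
# Minimal sketch (synthetic data, not the Apple Card model): a credit model
# trained without any protected attribute can still produce biased outcomes
# when an innocuous-looking feature acts as a proxy for one.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (never shown to the model).
gender = rng.integers(0, 2, n)           # 0 = women, 1 = men

# "Owns a dog" is correlated with gender here purely by construction,
# standing in for any hidden link (e.g. renting rules in big cities).
owns_dog = rng.random(n) < np.where(gender == 1, 0.6, 0.3)

# Historical repayment labels that themselves reflect the gender pay gap.
repaid = rng.random(n) < np.where(gender == 1, 0.8, 0.7)

# Train only on the "neutral" feature.
X = owns_dog.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, repaid)

scores = model.predict_proba(X)[:, 1]
print("mean approval score, men:  ", scores[gender == 1].mean())
print("mean approval score, women:", scores[gender == 0].mean())
# A gap appears even though gender was never an input.
```

The point of the toy example is that dropping the protected column doesn’t remove the bias; the model simply routes it through whatever proxy remains in the data.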
Credit where it’s due
It’s worth noting this isn’t a problem hitting only the Apple Card, but a wider issue for the financial system, rooted in credit histories that reflect societal inequalities. Men may be offered more credit because, on average, they earn higher wages – fix the gender pay gap, and that bias will no longer be reflected in financial decision-making algorithms.
“But the problem is AI algorithms are rule-bound – that is, their makers take as given a fixed position, often based on stereotyped thinking rather than recognising there are also many cases where women earn the same as, or even more than men,” said Kathleen Richardson, professor of ethics and culture of robots and AI at De Montfort University. “If the woman was assessed on individual criteria, it would mean these facts could come to light in the issuing of credit.” In this case, Goldman Sachs says the algorithm doesn’t consider gender directly; though, as Wachter notes, gender may still be influencing its decisions indirectly via another data point.
While high-profile tech leaders such as Wozniak and Heinemeier Hansson can use their influence to turn the spotlight on such issues, most of us don’t have the power to challenge such bias. That’s a problem if women can’t get as much credit as men, notes Richardson.
While algorithms pull in data points that are damaging to women, they also ignore facts that might encourage lending to them. Richardson points to studies on the Grameen bank, which offers microloans to women in developing countries. “Researchers found that women were more likely to reinvest their profits in communities and their families than on themselves,” she said. “So, from a social point of view, evidence shows that more women in business is good for society – and less individualistic.” But algorithms only consider whether the loan will be repaid on time, not what good might come from the money in the meantime.
What can be done?
One tool for testing such systems is the counterfactual explanation: in this case, laying out exactly what an application would need for someone to be approved for each level of credit. Should that be handed to would-be customers? Wachter says there are arguments on both sides: too much transparency could be a competitive disadvantage for a firm or let customers game the system, but, given issues such as those faced by Goldman Sachs, it’s clear transparency and trust are required. “I think counterfactual explanations could be a feasible middle ground because it just tells you about your own factors that had an impact on decisions, it doesn’t tell you everything about the source code,” she said.
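As an illustration of the idea – not Wachter’s method or any lender’s actual system – a counterfactual explanation can be as simple as searching for the smallest change to an applicant’s own features that would flip the model’s decision, and reporting only that change. The model, features and search strategy below are hypothetical.

```python
# Minimal sketch of a counterfactual explanation (illustrative only): find the
# smallest change to an applicant's own features that flips the decision, and
# report just that change rather than exposing the model itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [annual income (thousands), existing debt (thousands)]
X_train = np.array([[30, 20], [45, 10], [60, 5], [25, 30], [80, 2], [50, 25]])
y_train = np.array([0, 1, 1, 0, 1, 0])   # 1 = approved, 0 = declined
model = LogisticRegression().fit(X_train, y_train)

def counterfactual(model, applicant, step=1.0, max_steps=200):
    """Greedy search: nudge one feature at a time until the decision flips."""
    best = None
    for feature in range(len(applicant)):
        for direction in (+1, -1):
            candidate = applicant.copy()
            for _ in range(max_steps):
                candidate[feature] += direction * step
                if model.predict(candidate.reshape(1, -1))[0] == 1:
                    change = abs(candidate[feature] - applicant[feature])
                    if best is None or change < best[0]:
                        best = (change, feature, candidate[feature])
                    break
    return best

applicant = np.array([35.0, 22.0])
if model.predict(applicant.reshape(1, -1))[0] == 0:
    result = counterfactual(model, applicant)
    if result:
        change, feature, new_value = result
        names = ["income", "debt"]
        print(f"Declined. You would have been approved if your {names[feature]} "
              f"were {new_value:.0f} instead of {applicant[feature]:.0f}.")
```

The appeal of this kind of explanation, as Wachter suggests, is that it tells an applicant something actionable about their own case without revealing the model’s source code or every factor it weighs.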
Transparency is necessary, because it’s harder to see when you’re being discriminated against in the digital world. If your coworker makes a sexist remark, or you see white colleagues progressing while your career stagnates, you’ll spot it. But if you’re not seeing job ads because of a specific characteristic, you don’t know what you’re missing out on. Wachter is part of a project developing “rights to reasonable inferences” for financial services, coming up with standards to guide companies using algorithmic machine-learning tools.
Indeed, she notes, it’s not as though companies are unaware of these challenges or avoiding them. “The bias problem is really hard – I think everyone is really grappling with it,” she said. “It’s technically, ethically, politically complicated.” In other words, the reason even Apple and Goldman Sachs face such problems with algorithms behaving badly is that there’s no simple solution.