Do smart devices make us less intelligent?
Smart assistants offer instant responses to our questions, but do they create a false sense of certainty? And what happens when they don’t know the answer?
Benjamin Franklin wrote that “nothing can be said to be certain, except death and taxes”. Fast forward to today, and the answers to life’s endless questions are only a few keystrokes, or a shouted request to Alexa, away. Lost? Your smartphone can tell you where you are and get you from A to B. Need to check the weather? Ask Google and you’ll get an instant, local forecast. Want to know what’s happening in the world? Log in to Facebook or Twitter for a personalised stream of news.
Yet, few of us stop to question the truth behind what our devices tell us, or as German psychologist Gerd Gigerenzer puts it: “When the soothsayers work with computer algorithms rather than tarot cards, we take their predictions seriously.”
Some say this illusion of certainty makes us less resilient to life’s uncertainties, while others are embracing uncertainty to make the algorithms behind the answers smarter still.
The illusion of certainty
For Generation Z, who were born into the information age, technology is the first port of call for information. It’s creating a sense of dependence, according to Dominique Thompson, a GP, author and TEDx speaker who specialises in young people’s mental health. “Their first response to any question, be it their homework or something from a TV show, is to Google it.”
Yet many lack the skills to question where the information comes from. “You can go online and search for anything; the illusion is the first thing that pops up is somehow definitive, but people aren’t looking to see who created it; they’re not going under the covers,” says Frank Gillett, vice president and principal analyst at Forrester Research. “It concerns me when my kids say they’ve learned something from the internet or Instagram. My daughter commented that most of her news in the last day or two came from Instagram. That’s not a good source of news. Social media feeds, especially Facebook, are filtering what they show you based on how you act, so it’s a biased response,” he adds.
Social media networks create a filter bubble that reinforces our values and beliefs, giving us a false sense of security. “Technology allows us to reduce uncertainty and control our world,” says Ellen Hendriksen, clinical psychologist and founding host of the podcast Savvy Psychologist. “We can stay immersed in worlds of our own choosing.”
But when the bubble bursts, it causes anxiety and distress, according to Thompson. “You have to be quite confident to venture outside [of that bubble], and I think a lot of young people, in particular, lack that confidence. It might shake them,” she says. “But it’s important to make yourself uncomfortable, to venture out on Twitter a little and see what’s out there. We have to go through the looking glass and realise that there are opposing views.”
That means learning to tolerate uncertainty, even embrace it as a life skill. “Having information at our fingertips leads to a lack of practice tolerating uncertainty,” says Hendriksen. “And when we are faced with big uncertainties in life such as ‘what career path should I take?’ we are less able to deal with that because that muscle is underdeveloped.
“Let’s take COVID-19, we’re learning… there’s no right answer,” she adds. “Do I send my kid to summer camp because he’s been isolated for four months and I’m worried about his social development? Or do I not send him because it would increase the risk of exposure? Being able to tolerate uncertainty and make the best decision we can with the information we have at the time is an important skill to build.”
Intolerance of uncertainty can have a negative impact, says Thompson. She cites differences in the way that junior doctors and consultants handle medical uncertainty. “Take, for example, the case of an elderly lady coming into the hospital with some abdominal pain,” she says. “We perform a load of tests but can’t find what’s wrong, and she feels fine. The patient can go home, but the young doctors won’t discharge her because they don’t know what caused it. They perform endless tests because they can’t live with not knowing.”
Does not compute
Others argue that powerful computer models have raised our expectations – perhaps unreasonably. Conrad Wolfram, CEO of Wolfram Research Europe and author of The Math(s) Fix, explains: “If you asked 50 or 100 years ago, do you think you can predict exactly what’s going to happen in this pandemic…[or] how hot it’s going to be in 50 years, most people would just think it was crazy that one could even attempt to do that. But computers, computation, maths are apparently able to predict anything precisely, or model anything, so people now flip this on its head and are sort of expecting the certainty.”
Our ability to quantify often means we’re looking in the wrong place. “Numbers or quantifying things has assumed importance beyond the ability to judge. If you’re driving a car, the main thing you’ve got to measure is your speed, but you could still be driving very badly. People assume that the quantification is sacrosanct,” says Wolfram.
Brett Frischmann, co-author of Re-Engineering Humanity, agrees. He cites the example of fitness trackers, which have been designed to track steps as a measure of how active we are. “The device tracks certain things because they’re easy or they’re more efficient, or it has the relevant sensor or the proxies are good. And that leads you to think that that’s the thing to track, step taking is the relevant measure of fitness,” he says. “In fact, it’s relying on bad proxies. I may have just moved my arm a bunch of times, and the motion fooled it. And so, I actually didn’t really take 10,000 steps.”
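To see why a proxy like step counting can mislead, consider a minimal, hypothetical sketch of a naive step counter that simply counts acceleration peaks above a threshold. Real trackers use far more sophisticated signal processing, so this is illustrative only, but the proxy problem Frischmann describes is the same: any motion that produces similar peaks, such as waving an arm, gets counted as a step.

```python
# Hypothetical, simplified step counter: counts rising peaks in
# accelerometer magnitude. It cannot tell a walking stride from any
# other motion that produces a similar peak.

def count_steps(samples, threshold=1.2):
    """samples: acceleration magnitudes in g; returns estimated steps."""
    steps = 0
    above = False
    for value in samples:
        if value > threshold and not above:
            steps += 1          # rising edge of a peak counts as one "step"
            above = True
        elif value <= threshold:
            above = False
    return steps

walking = [1.0, 1.4, 1.0, 1.5, 1.0, 1.3, 1.0]    # genuine strides
arm_waving = [1.0, 1.5, 1.0, 1.6, 1.0, 1.4, 1.0]  # no steps taken at all

print(count_steps(walking))     # 3
print(count_steps(arm_waving))  # 3 -- the proxy can't tell the difference
```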
Instead, Wolfram says people and algorithms should work together. “People say we shouldn’t allow people to use a computer to calculate equations before they know how to do it by hand, because maybe the computer will fool you, [but] computers and other automated devices help us progress. The question is how you avoid being misled.”
There’s a need for what Wolfram calls “computational thinking”. “It is a new way of thinking. In a new world where we’re sharing with artificially intelligent machines, we as humans need to find a new way to ask the questions,” he says.
Some of this thinking is evident in Wolfram Alpha, which he describes as a knowledge engine, distinct from a search engine. “If you think about a search engine, what it does is go out and ask if anyone else has a similar question and they try and answer it. You go to a library and ask for books on a subject and the librarian points you in the right direction.”
In contrast, Wolfram Alpha’s aim was to take “systematic knowledge” and make it computable, as Wolfram explains: “It’s more like a research assistant. You give it a precise question and ask it to come up with a precise answer.”
To produce these precise answers, Wolfram Research has bought information resources that sit behind Wolfram Alpha. “We’ve curated a lot from various sources that we’ve often had to clean,” says Wolfram. “It’s not a replacement for search, it just works differently, and it works well when you have something that is somewhat computational but also fuzzy.”
The company’s proprietary Wolfram Language is responsible for computing the answers from these sources. The programming language began life as Mathematica and is built from “commands, thousands of functions and the largest set of built-in algorithms”.
“You’re presented with precisely what you’ve asked for, or it may say ‘I don’t know the answer’, so that there’s a certainty in that lack of certainty,” says Wolfram.
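That behaviour can be illustrated with a minimal sketch that queries Wolfram|Alpha’s public Short Answers API (the endpoint, placeholder app ID and error handling here are assumptions for illustration, not a description of how Wolfram Language computes results internally): a precise question returns a computed answer, while an uninterpretable one returns an explicit failure rather than a guess.

```python
import requests

# Illustrative sketch: ask Wolfram|Alpha's Short Answers API a question.
# It returns plain text when it can compute an answer and a non-200
# status when it cannot -- certainty about the lack of certainty.
APP_ID = "YOUR_APP_ID"  # placeholder; a real key is required

def ask(question: str) -> str:
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": APP_ID, "i": question},
        timeout=10,
    )
    if resp.status_code == 200:
        return resp.text              # a precise, computed answer
    return "I don't know the answer"  # the engine declines rather than guesses

print(ask("distance from Earth to the Moon in km"))
print(ask("what is the meaning of my neighbour's dream"))
```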
An uncertain future
Uncertainty may actually be the key to delivering smarter programs, including AI. US startup Gamalon, for example, is developing a technique for teaching machines to process language that embraces ambiguity, one that can sit behind a chatbot.
MIT Technology Review argues that it “lets a computer hold a more meaningful and coherent conversation by providing a way to deal with… multiple meanings. If a person says or types something ambiguous, the system makes a judgement about what was most likely meant.”
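A toy sketch shows the general idea (this is not Gamalon’s actual technique, which the article does not detail; the intents and probabilities below are invented for illustration): score each candidate reading of an ambiguous input, commit to the most likely one when it clearly dominates, and ask a clarifying question when it does not.

```python
# Illustrative only: a chatbot that ranks possible meanings of an
# ambiguous word and asks for clarification when confidence is low.
INTERPRETATIONS = {
    "book": {"reserve a table": 0.55, "a thing you read": 0.45},
    "bill": {"an invoice": 0.7, "a person's name": 0.3},
}

def respond(word: str, confidence_threshold: float = 0.65) -> str:
    readings = INTERPRETATIONS.get(word, {})
    if not readings:
        return "I don't know that word."
    meaning, prob = max(readings.items(), key=lambda kv: kv[1])
    if prob >= confidence_threshold:
        return f"I'll assume you mean '{meaning}'."
    options = " or ".join(readings)
    return f"Did you mean {options}?"

print(respond("bill"))  # confident enough to commit to a reading
print(respond("book"))  # too ambiguous, so it asks
```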
Applying ambiguity in other areas could make for smarter, safer AI. Carroll Wainwright is a research scientist at the Partnership on AI and specialises in the technical aspects of AI safety. Intelligent cars, he says, are an example of where too much certainty can be dangerous. “There was a high-profile death in one of the Tesla autopilot cases. A truck moved in front of the vehicle, and it was a cloudy day. The sky was white, and the truck had a big white side. The algorithm didn’t distinguish between the sky and the truck, so it ran into the truck. So, you could couch this as a problem with the certainty of the algorithm,” he says.
Wainwright is building an environment that will train AI “agents” to question when there’s a lack of certainty. The example he gives is an intelligent robot sent to fetch coffee from another room. When it finds the door shut, the agent stops and asks how to proceed, allowing you to open it. “In this instance, uncertainty can be incredibly helpful. Essentially, the agent, rather than taking brash actions, will stop and ask you, ‘Should I open the door or should I just smash through it to get you your coffee faster?’”
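As a rough sketch of that pattern (illustrative only, not the training environment Wainwright describes; the names, threshold and callback below are invented), an agent can compare its confidence to a threshold before taking an action it cannot undo, and hand the decision back to a human when it is unsure.

```python
# Illustrative "ask when uncertain" agent: before acting irreversibly,
# it checks its confidence and defers to the human instead of acting brashly.
def fetch_coffee(door_state: str, confidence: float, ask_human) -> str:
    if door_state == "open":
        return "walking through the open door to get the coffee"
    # The door is shut and the agent is unsure what it is allowed to do.
    if confidence < 0.9:
        answer = ask_human("Should I open the door, or smash through it?")
        return f"human said: {answer}"
    return "opening the door"  # only act autonomously when very confident

reply = fetch_coffee(
    door_state="shut",
    confidence=0.4,
    ask_human=lambda q: "open it, please",  # stand-in for a real prompt
)
print(reply)
```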
It’s a strange example, but one that teaches us neither tech nor humans know it all. “We’re entering an era where the value add isn’t so much knowledge-based, knowing a fact, it’s knowing how to interact with AI or computational machinery to get the best decisions,” says Wolfram. “The only way to do that, I think, is for our education to step up to another level so that we can then know what the questions are, know how to manage the AI.”