Don't hit the panic button just because AI hype is high
AI can't replace us because we're emotionally complicated, but don't let those emotions rule it out
According to a recent Microsoft press release, almost half of British companies think that their current business models will cease to exist in the next five years thanks to AI - but 51% of them don't have an AI strategy.
While I could describe that as panic-mongering, I won't. It's more like straightforward marketing: Microsoft is currently promoting its AI Academy, AI development platforms and training courses hard, so a little alarm is simply its bread and butter. But the idea of subtly encouraging panic for economic ends is, of course, as old as civilisation itself.
In his book On Deep History and the Brain, historian Daniel Lord Smail described the way that all social animals - from ants to wolves to bonobos to humans - organise into societies by manipulating the brain chemistry of themselves and their fellows. This they do by a huge variety of means: pheromones; ingesting drugs; performing dances and rituals; inflicting violence; and, for us humans, telling stories. It's recently been discovered that bees and ants create the division of labour that characterises their societies by a remarkably simple mechanism. The queen emits pheromones that alter insulin levels in her "subordinates", which changes their feeding habits and body type.
And stories do indeed modify the brain chemistry of human listeners, because everything we think and say is ultimately a matter of brain chemistry: that's what brains are - electrochemical computers for processing experience of the world. The chemical part of this processing is what we call "emotion", and the most advanced research in cognitive psychology is revealing more and more about how emotion and thought are intertwined and inseparable. Which is why AI, despite all the hype and panic, remains so fundamentally dumb.
All animals have perceptual systems that sample information from their immediate environment. But animals also have emotions, which are like coprocessors that inspect this perceived information to detect threats and opportunities. They attach value to the perceived information - is it edible, or sexy, or dangerous, or funny? - which is something that can't easily be inferred from a mere bitmap.
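To make that coprocessor metaphor concrete, here's a minimal sketch in Python - every name and weighting in it is invented for illustration, not drawn from any real AI system or neuroscience model:

```python
from dataclasses import dataclass

@dataclass
class Percept:
    """A raw observation - in the brain, closer to a bitmap than a fact."""
    kind: str        # e.g. "berry", "snake", "stranger"
    features: dict   # whatever the senses extracted

# Invented appraisal table mapping percept kinds to survival values.
# A real organism learns these weightings; this sketch hard-codes them.
APPRAISALS = {
    "berry":    {"edible": 0.8, "dangerous": 0.1},
    "snake":    {"edible": 0.0, "dangerous": 0.9},
    "stranger": {"dangerous": 0.4, "opportunity": 0.5},
}

def appraise(percept: Percept) -> dict:
    """The 'emotional coprocessor': attach values to raw perception.
    Unknown percepts get a cautious default rather than no value at all."""
    return APPRAISALS.get(percept.kind, {"dangerous": 0.5})

print(appraise(Percept("snake", {})))  # {'edible': 0.0, 'dangerous': 0.9}
```

The point of the toy is that the values live in a separate table from the percept itself: nothing in the "bitmap" says dangerous.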
The leading affective neuroscientist Jaak Panksepp identified seven emotional subsystems in the mammalian brain, each mediated by its own neurochemistry: he called them SEEKING (dopamine), RAGE and FEAR (adrenaline and cortisol), LUST and PLAY (sex hormones), CARE and PANIC (oxytocin, opioids and more).
Neuroscientist Antonio Damasio further proposes that memories get labelled with the chemical emotional state prevailing when they were laid down, so that when recalled they bring with them values: good, bad, sad, funny and so on.
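As a crude illustration of Damasio's proposal - a toy sketch, assuming nothing about how real neural storage works - imagine a memory store where every record is stamped with the emotional state prevailing at encoding, so that recall returns the value along with the content:

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    content: str
    valence: float   # emotional tag at encoding: -1 (bad) .. +1 (good)

@dataclass
class MemoryStore:
    records: list = field(default_factory=list)
    mood: float = 0.0   # the prevailing emotional state

    def encode(self, content: str) -> None:
        # The memory is stamped with the mood prevailing *now*.
        self.records.append(Memory(content, self.mood))

    def recall(self, query: str) -> list:
        # Recall brings the stored value back along with the content.
        return [m for m in self.records if query in m.content]

store = MemoryStore()
store.mood = 0.9
store.encode("first day at the seaside")
store.mood = -0.7
store.encode("first day at the dentist")
for m in store.recall("first day"):
    print(m.content, "felt", "good" if m.valence > 0 else "bad")
```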
AI systems could, and probably eventually will, be made to fake some kinds of emotional response, but to really "feel" they would need to have something at stake. Our brains store a continually updated model of the outside world, plus another of our own body and its current internal state, and process the intersection of these two models to see what is threatening or beckoning to us.
Meanwhile, our memory stores a more or less complete record of our lives to date, along with the values of the things that have happened to us. All our decisions result from integrating these data sources. Providing anything equivalent for an AI system would be staggeringly costly in memory and CPU.
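A toy version of that two-model arrangement (every field name here is hypothetical) shows the shape of the computation, and hints at why scaling it is the hard part:

```python
def assess(world_model: dict, body_model: dict) -> list[str]:
    """Intersect a model of the world with a model of the body's
    current needs to find what is threatening or beckoning."""
    salient = []
    if world_model.get("food_nearby") and body_model.get("hunger", 0) > 0.5:
        salient.append("beckoning: food")
    if world_model.get("predator_nearby"):
        salient.append("threat: predator")
    if body_model.get("fatigue", 0) > 0.8 and world_model.get("shelter_nearby"):
        salient.append("beckoning: rest")
    return salient

print(assess({"food_nearby": True, "predator_nearby": False},
             {"hunger": 0.7, "fatigue": 0.2}))
# ['beckoning: food']
```

Even this toy hints at the cost problem: a faithful world model would have to track indefinitely many facts, and a faithful body model every internal state that could make one of them matter.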
Which isn't to say that AI is useless - far from it. AI systems can excel at the kinds of reasoning at which we are slow or error-prone, precisely because of the emotional content of our own reasoning. Once we stop pretending that they're intelligent in the same way we are, and acknowledge that they can be given skills that complement our own, AIs become tools as essential as spreadsheets. The very name "artificial intelligence" invites this confusion, so perhaps we'd do better to call it "artificial reasoning" or something similar.
We need to stop panicking. If you design an AI that fires people to increase profits, it will. If you design one to kill people, it will. But the same is true of human accountants and soldiers. Lacking emotions, an AI can never have interests or ambitions of its own, so it can never be as good, or as bad, as we can be.