Should sentient AI eventually be given 'human' rights?
Oxford University's Marcus du Sautoy recently spoke about the prospect of self-aware machines
Advances in artificial intelligence may eventually lead to sentient machines being granted 'human' rights, Marcus du Sautoy, Oxford University's Professor for the Public Understanding of Science, has said.
Speaking at the Hay Literary Festival (via The Telegraph), du Sautoy said: "It's getting to a point where we might be able to say this thing has a sense of itself and maybe there is a threshold moment where suddenly this consciousness emerges. One of the things I address in my new book is how can you tell whether my smartphone will ever be conscious.
"The fascinating thing is that consciousness for a decade has been something that nobody has gone anywhere near because we didn't know how to measure it. But we're in a golden age. It's a bit like Galieo with a telescope. We now have a telescope into the brain and it's given us an opportunity to see things that we've never been able to see before. And if we understand these things are having a level of consciousness we might well have to introduce rights. It's an exciting time."
To date, the most common way for scientists to determine how 'intelligent' a computer is has been the Turing Test, which measures a machine's ability to exhibit behaviour indistinguishable from that of a human.
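For readers curious what that looks like in practice, the sketch below is a purely illustrative, much-simplified version of the imitation game behind the Turing Test: a judge compares answers from a machine and a human and tries to spot the machine. The `human_reply` and `machine_reply` functions are hypothetical stand-ins, not real systems.

```python
import random

# Hypothetical stand-ins: in a real Turing Test these would be a live human
# participant and a candidate AI system, not canned responses.
def human_reply(question: str) -> str:
    return "I'd have to think about that for a moment."

def machine_reply(question: str) -> str:
    return "I'd have to think about that for a moment."

def judge(answer_a: str, answer_b: str) -> str:
    # A real judge converses freely before deciding which respondent is human.
    # Here the answers are identical, so the judge can only guess.
    return random.choice(["a", "b"])

def run_imitation_game(questions, rounds: int = 100) -> float:
    """Return how often the judge correctly identifies the machine."""
    correct = 0
    for _ in range(rounds):
        question = random.choice(questions)
        # Randomise which slot holds the machine so position gives nothing away.
        machine_is_a = random.random() < 0.5
        answer_a = machine_reply(question) if machine_is_a else human_reply(question)
        answer_b = human_reply(question) if machine_is_a else machine_reply(question)
        guess = judge(answer_a, answer_b)
        if (guess == "a") == machine_is_a:
            correct += 1
    return correct / rounds

if __name__ == "__main__":
    accuracy = run_imitation_game(["What does rain smell like?"])
    # A score near 50% means the judge cannot tell machine from human,
    # which is the informal criterion for 'passing' the test.
    print(f"Judge identified the machine {accuracy:.0%} of the time")
```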
An emerging alternative, however, measures consciousness and self-awareness; it was developed by scientists studying the brain's neural activity during sleep. Du Sautoy cites the moment babies first recognise themselves in a mirror as an example of this sense of self.
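The article does not name the tool, but measures in this family (such as the perturbational complexity index) typically score how 'rich' or incompressible recorded brain activity is, with repetitive deep-sleep-like activity scoring lower than varied waking activity. The toy sketch below illustrates that idea with a simple Lempel-Ziv-style complexity count on binarised signals; it is an assumed illustration of the general approach, not the researchers' actual method.

```python
import random

def lempel_ziv_complexity(signal: str) -> int:
    """Count phrases in a greedy Lempel-Ziv-style parse of a binary string,
    where each new phrase extends a previously seen one by one symbol.
    More phrases roughly means a less compressible, 'richer' signal."""
    phrases = set()
    i, n = 0, len(signal)
    while i < n:
        j = i + 1
        # Extend the current phrase until it is one we have not seen before.
        while j <= n and signal[i:j] in phrases:
            j += 1
        phrases.add(signal[i:j])
        i = j
    return len(phrases)

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical binarised 'activity' traces of equal length: a repetitive
    # pattern (deep-sleep-like) versus a varied one (wake-like).
    repetitive = "01" * 32
    varied = "".join(random.choice("01") for _ in range(64))
    # The repetitive trace should score lower than the varied one.
    print(lempel_ziv_complexity(repetitive), lempel_ziv_complexity(varied))
```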
Harry Armstrong, a senior researcher for Technology Futures at innovation foundation Nesta, told IT Pro: "Despite recent advances, we are very far away from achieving any kind of self-determining machine 'consciousness'. Even if machine intelligence does happen, what that actually means is completely speculative; how they would think or 'act' is impossible to predict.
"AI is different to human intelligence: it is exceptionally powerful and narrowly focused, but it is terrible at generalising," Armstrong continued. "It is blind to context in a way that can feel quite alien. However, it can also serve to overrule humans who are more flexible, open to compromise and creative solutions. Ultimately, we need better systems to live better together."