Gartner urges CIOs to consider AI ethics
A new report says CIOs must ensure "smart machines" behave ethically in order to build trust in the technology
CIOs must focus on the ethics of "smart machines" - or AI - in business in order to build and maintain trust in the once science-fictional technology, according to a new report from Gartner.
While the world is still far from developing a truly artificially intelligent robot, the analyst house has released a report examining the importance of ethics in what it terms smart machines - whether connected Internet of Things (IoT) devices or autonomous robots.
Frank Buytendijk, research vice president and analyst at Gartner, said: "Clearly, people must trust smart machines if they are to accept and use them.
"The ability to earn trust must be part of any plan to implement artificial intelligence (AI) or smart machines, and will be an important selling point when marketing this technology.
"CIOs must be able to monitor smart machine technology for unintended consequences of public use and respond immediately, embracing unforeseen positive outcomes and countering undesirable ones."
To achieve this, Gartner has identified five levels of ethical programming, numbered zero to four: "Non-Ethical Programming" (level zero, where the manufacturer bears limited ethical responsibility), "Ethical Oversight" (level one, where responsibility rests with the user), "Ethical Programming" (level two, where responsibility is shared between the user, the service provider and the designer), "Evolutionary Ethical Programming" (level three, where tasks begin to be performed autonomously), and "Machine-Developed Ethics" (level four, where machines are self-aware).
It is by level three, "Evolutionary Ethical Programming", that trust in smart machines becomes more important, as user control lessens and the technology's autonomy increases.
The report notes that level four, "Machine-Developed Ethics", at which machines become self-aware, is unlikely to come about in the near future.
"The questions that we should ask ourselves are: How will we ensure these machines stick to their responsibilities? Will we treat smart machines like pets, with owners remaining responsible? Or will we treat them like children, raising them until they are able to take responsibility for themselves?", added Buytendijk.
At the beginning of the year, Stephen Hawking spoke out about the risks of artificial intelligence, signing the Future of Life Institute's open letter warning against the potential threats AI could pose.
The letter reads: "Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do."
In contrast, Microsoft Research chief Eric Horvitz claimed that "doomsday scenarios" surrounding AI are unfounded, saying: "There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences. I fundamentally don't think that's going to happen.
"I think we will be very proactive in terms of how we field AI systems, and that in the end we'll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economic to daily life."
Caroline has been writing about technology for more than a decade, switching between consumer smart home news and reviews and in-depth B2B industry coverage. In addition to her work for IT Pro and Cloud Pro, she has contributed to a number of titles including Expert Reviews, TechRadar, The Week and many more. She is currently the smart home editor across Future Publishing's homes titles.
You can get in touch with Caroline via email at caroline.preece@futurenet.com.