Gartner urges CIOs to consider AI ethics
A new report says CIOs must ensure the ethical behaviour of "smart machines" in order to build trust


CIOs must concentrate on the ethics of "smart machines" - or AI - in business in order to build and maintain trust in a technology long associated with science fiction, according to a report from Gartner.
While the world is still far from developing a truly artificially intelligent robot, the analyst house has released a report examining the importance of ethics in what it terms smart machines - whether connected Internet of Things (IoT) devices or autonomous robots.
Frank Buytendijk, research vice president and analyst at Gartner, said: "Clearly, people must trust smart machines if they are to accept and use them.
"The ability to earn trust must be part of any plan to implement artificial intelligence (AI) or smart machines, and will be an important selling point when marketing this technology.
"CIOs must be able to monitor smart machine technology for unintended consequences of public use and respond immediately, embracing unforeseen positive outcomes and countering undesirable ones."
To achieve this, Gartner has identified five levels of ethical programming:
- Level 0, "Non-Ethical Programming": the manufacturer bears limited ethical responsibility
- Level 1, "Ethical Oversight": responsibility rests with the user
- Level 2, "Ethical Programming": responsibility is shared between the user, the service provider and the designer
- Level 3, "Evolutionary Ethical Programming": tasks begin to be performed autonomously
- Level 4, "Machine-Developed Ethics": machines are self-aware
It is by level three, "Evolutionary Ethical Programming", that trust in smart machines becomes more important, with user control lessening as the technology's autonomy increases.
The report notes that level four, at which machines become self-aware, is unlikely to come about in the near future.
"The questions that we should ask ourselves are: How will we ensure these machines stick to their responsibilities? Will we treat smart machines like pets, with owners remaining responsible? Or will we treat them like children, raising them until they are able to take responsibility for themselves?" added Buytendijk.
At the beginning of the year, Stephen Hawking spoke out about the rise of artificial intelligence, signing the Future of Life Institute's open letter warning of the potential threats AI could pose.
The letter reads: "Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do."
In contrast, Eric Horvitz, head of Microsoft Research, claimed that "doomsday scenarios" surrounding AI are unfounded, saying: "There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences. I fundamentally don't think that's going to happen.
"I think we will be very proactive in terms of how we field AI systems, and that in the end we'll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life."
Caroline has been writing about technology for more than a decade, switching between consumer smart home news and reviews and in-depth B2B industry coverage. In addition to her work for IT Pro and Cloud Pro, she has contributed to a number of titles including Expert Reviews, TechRadar, The Week and many more. She is currently the smart home editor across Future Publishing's homes titles.
You can get in touch with Caroline via email at caroline.preece@futurenet.com.