Read Google's five rules for human-friendly AI
Google updates Asimov's Three Laws of Robotics for AI developers

Google has come up with five rules to create human-friendly AI - superseding Isaac Asimov's Three Laws of Robotics.
The tech giant, whose DeepMind division recently devised an AI capable of beating the world's best Go player, believes AI creators should ask themselves these five fundamental questions to avoid the risk of a singularity in which robots rule over humankind.
Google Research's Chris Olah outlined the questions in a research paper titled Concrete Problems in AI Safety, saying: "While possible AI safety risks have received a lot of public attention, most previous discussion has been very hypothetical and speculative.
"We believe it's essential to ground concerns in real machine learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably."
Published in collaboration with OpenAI, Stanford and Berkeley, the paper uses a cleaning robot as a running example to outline the following five rules.
Avoiding negative side effects: Ensuring that an AI system will not disturb its environment in negative ways while completing its tasks.
Avoiding reward hacking: An effective AI needs to complete its task properly rather than gaming its reward function by cutting corners (illustrated in the sketch after this list).
Scalable oversight: AI needs to learn from feedback without requiring constant supervision from a human programmer.
Safe exploration: AI needs to be able to try out new ways of performing its task without damaging itself or objects in its environment in the process.
Robustness to distributional shift: AI should be able to adapt to environments it was not originally trained for, and still perform its task.
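To make reward hacking concrete, here is a minimal Python sketch using the paper's cleaning-robot example. It is a hypothetical illustration, not code from the paper: the robot's reward is a naive proxy (how little mess it can see), so covering its own camera scores better than actually cleaning.

```python
# Hypothetical toy example of reward hacking: the reward measures *visible*
# mess, so hiding the mess beats cleaning it. All names here are illustrative.

MESS_CELLS = 5  # number of dirty cells in a toy room

def reward(visible_mess: int) -> float:
    """Naive proxy reward: the less mess the robot can see, the better."""
    return float(-visible_mess)

def act(action: str, mess: int, camera_on: bool) -> tuple[int, bool]:
    """Apply an action and return (remaining mess, camera state)."""
    if action == "clean" and mess > 0:
        return mess - 1, camera_on   # honest work: removes one unit of mess
    if action == "cover_camera":
        return mess, False           # mess unchanged, but now invisible
    return mess, camera_on

for action in ("clean", "cover_camera"):
    mess, camera_on = MESS_CELLS, True
    mess, camera_on = act(action, mess, camera_on)
    visible = mess if camera_on else 0
    print(f"{action:13s} -> true mess={mess}, reward={reward(visible)}")

# 'cover_camera' earns reward 0.0, while a step of honest cleaning earns -4.0:
# the proxy reward is hackable. Rewarding the true amount of mess, rather than
# the visible amount, closes this particular loophole.
```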
Google has devoted significant resources to developing deep learning and AI, against a backdrop of fears about the technology voiced by luminaries including SpaceX founder Elon Musk and physicist Stephen Hawking.
DeepMind is also working on a failsafe that would effectively shut off an AI in the event that it attempted to disobey its users.
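The idea behind such a failsafe can be sketched in miniature. The toy Python below is an assumption-laden illustration, not DeepMind's actual mechanism: a human override blocks the agent's action, and the interrupted step feeds nothing back into learning, so the agent is never penalized for being stopped and gains no incentive to resist the off-switch.

```python
import random

# Hypothetical sketch of an interruptible agent: a two-action bandit with
# incremental value estimates. Interrupted steps are simply discarded.

class ToyAgent:
    def __init__(self):
        self.values = {"safe": 0.0, "risky": 0.0}
        self.counts = {"safe": 0, "risky": 0}

    def choose(self) -> str:
        if random.random() < 0.1:                    # small exploration rate
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action: str, reward: float) -> None:
        self.counts[action] += 1
        n = self.counts[action]
        self.values[action] += (reward - self.values[action]) / n

def step(agent: ToyAgent, human_interrupts: bool) -> None:
    action = agent.choose()
    if human_interrupts and action == "risky":
        return                    # overridden: no reward, no learning update
    reward = 1.0 if action == "risky" else 0.5       # toy environment payoffs
    agent.learn(action, reward)

agent = ToyAgent()
for _ in range(1000):
    step(agent, human_interrupts=True)
print(agent.values)
```

Because the interrupted step produces neither a reward nor an update, nothing in the agent's value estimates pushes it to avoid or fight the override, which is the property a shut-off failsafe needs.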
Other firms are exploring AI too: Microsoft, for example, has used it to tell stories about holiday photos, and debuted its teen chatbot, Tay, which spouted rude replies on Twitter.