Stephen Hawking signs open letter against AI "pitfalls"
Elon Musk also signs letter warning AI researchers to make society-friendly robots
Stephen Hawking and tech investor Elon Musk have signed an open letter calling for greater focus on making artificially intelligent robots do just what we tell them to.
They join dozens of scientists, professors and experts who have also signed the Future of Life Institute's (FLI) letter, which comes as leading visionaries warn against AI's potential threat to human jobs.
Hawking warned last month that AI could spell the end for humanity, while Tesla Motors founder Musk has spoken of the potential for "a dangerous outcome" from AI research.
The FLI's letter calls for researchers to create "robust and beneficial" AI, while avoiding any "potential pitfalls".
It reads: "Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.
"We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do."
Other signatories include Google researchers, Oxford and Cambridge professors and three co-founders of DeepMind, a startup bought by Google for $400 million.
DeepMind claims to have created an advanced neural network that allows machines to store short-term memories and learn from them.
The company said this allows machines to operate beyond the initial capabilities of their programming.
SpaceX founder Musk said last October that AI could be our "biggest existential threat", suggesting regulatory oversight may be necessary to ensure robots are developed safely.
He told delegates at the Massachusetts Institute of Technology (MIT) AeroAstro Centennial Symposium: "I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that. So we need to be very careful with artificial intelligence.
"I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish."
However, the open letter also outlined hopes that smart robots could help solve various human crises such as the spread of diseases.
"The eradication of disease and poverty are not unfathomable," it said.
Google chairman Eric Schmidt, by contrast, called concerns over AI "misguided" last month.