Researchers develop a robot that can lie
The digital age of innocence is truly over.
A team at Georgia Tech Research Institute in the US is teaching robots how to deceive other machines and humans.
It may sound like the plot of a movie, but the researchers have a serious aim. Robots are being used increasingly for search and rescue missions.
In a scenario that echoes the Terminator movies, two robots from opposing armies could find themselves on the same mission.
One robot might be clearing mines while the other is sent to prevent it from completing its task. The ability to recognise the situation and then hide would be an essential skill for the first robot.
In the researchers' terms, circumstances for deception had to satisfy two main criteria. First, there must be a sense of conflict between the robots. Second, there must be a benefit in performing the deception.
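In programming terms, that decision rule amounts to a simple two-part test. The sketch below is a minimal illustration only; the function and variable names are hypothetical assumptions, not taken from Arkin's actual software.

```python
# Minimal sketch of the two-criteria test for deception described above.
# All names here are hypothetical illustrations, not Arkin's actual code.

def should_deceive(conflict_detected: bool, expected_benefit: float) -> bool:
    """Deceive only when both criteria hold:
    1) the robot is in conflict with another agent, and
    2) deceiving is expected to pay off.
    """
    BENEFIT_THRESHOLD = 0.0  # assumed: any positive expected gain counts as a benefit
    return conflict_detected and expected_benefit > BENEFIT_THRESHOLD
```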
Ronald Arkin, a professor in the Georgia Tech School of Interactive Computing, and his team developed software for the robots and set up a "playground" for a hi-tech game of hide and seek.
Coloured marker pins were set up pointing to three possible hiding places for the first robot to choose. The markers stopped short of the actual safe areas.
The twisted intelligence incorporated in the programming allowed the lying robot to select a hiding place and then devise a path that would fool the chasing robot.
As the robot moved, it knocked over the coloured pins, leaving a track of its movements. The machine headed directly for a particular hiding place and, when it reached the unmarked ground, quickly changed direction to outfox the chasing robot.
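As a rough illustration of that false-trail idea, the sketch below picks a true hiding place, lays a pin trail toward a decoy location, and only doubles back once it reaches unmarked ground. It is a simplified reconstruction under assumed names, not the team's published algorithm.

```python
import random

# Illustrative reconstruction of the false-trail hide-and-seek behaviour.
# The hiding places, pin layout, and waypoint names are all assumptions.

HIDING_PLACES = ["left", "centre", "right"]

def plan_deceptive_path(true_hiding_place: str) -> list[str]:
    """Return a sequence of waypoints that leaves a misleading pin trail."""
    # Pick any hiding place other than the real one as the decoy.
    decoy = random.choice([p for p in HIDING_PLACES if p != true_hiding_place])
    return [
        f"knock_pins_toward:{decoy}",      # leave a visible track toward the decoy
        "reach_unmarked_ground",           # pass beyond the last marker pins
        f"turn_toward:{true_hiding_place}",  # change direction where no track is left
        f"hide_at:{true_hiding_place}",
    ]

print(plan_deceptive_path("left"))
```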
In 40 trials, the lying robot successfully fooled its pursuer 75 per cent of the time.
Small beginnings, and no full marks for the robot, but Arkin said the trials demonstrated that machines could learn and use deceptive signals.
He said, "The results were also a preliminary indication that the techniques and algorithms described in the paper could be used to successfully produce deceptive behaviour in a robot."
The team envisages building on this research to create more sophisticated robots. In the future, a rescue robot may need to deceive a human in order to calm them down in a dangerous situation.
Arkin said he sees his team's work as an important development, but acknowledges the ethical issues surrounding the concept of deceitful robots.