Robot deception in human-robot interactions – What is your take?

Deception is a ubiquitous behavior among insects, animals, and humans. Most biologists and psychologists agree that deception is a deliberate attempt to send false or inaccurate information in a way that tends to benefit the communicator. Animals use this essential survival behavior to gain an advantage over others. For example, they use camouflage and mimicry to resemble other species or inanimate objects so that they avoid detection by their predators or prey.

The spider genus Portia, which preys primarily on other spiders, deceives its prey by vibrating a web in a way that resembles a small insect getting ensnared. When the web's resident spider comes to investigate the apparent insect, Portia preys on it. Human deception, on the other hand, which routinely occurs in sports such as rugby and soccer, often requires planning and second-guessing beyond what most animals are capable of. Humans have also long used deception in warfare to cloak their intentions and movements; in a military context, deception means any planned measures undertaken to mislead or deceive the enemy.

Robots, too, could potentially gain advantages over adversaries by behaving deceptively, especially in military applications. Understanding animal and human deception is therefore vital to designing robot deception capabilities, not only for military applications but also for human-robot interaction (HRI) in everyday contexts, where robots are treated as socially intelligent agents.

In spite of its ubiquity in nature and its potential benefits, very few studies have been conducted on robot deception to date. One interesting application of robot deception is the camouflage robot developed at Harvard University: a bioinspired soft robot that can automatically change its body color to match its environment. Wagner and Arkin developed algorithms that allow a robot to determine both when and how it should deceive others. Recent work at Georgia Tech explores the role of deception, following Grafen's dishonesty model, in the context of bird mobbing behavior.
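Wagner and Arkin framed the "when" question using interdependence theory: deception is warranted only when a situation involves both conflict (the two agents prefer different outcomes) and dependence (the other agent's choice materially affects the robot's payoff). The following is only a minimal sketch of that test over a toy 2x2 outcome matrix; the matrix encoding, function names, and threshold are illustrative assumptions, not the published implementation.

```python
# Sketch of a "when to deceive" test in the spirit of interdependence theory.
# robot[i][j] is the robot's payoff when it picks action i and the other
# agent picks action j; the other agent's matrix uses the same indexing.

def interdependence(robot):
    """Largest swing in the robot's payoff caused by the other's choice."""
    return max(max(row) - min(row) for row in robot)

def in_conflict(robot, other):
    """True if the two agents prefer different outcome cells."""
    cells = [(i, j) for i in range(len(robot)) for j in range(len(robot[0]))]
    robot_best = max(cells, key=lambda c: robot[c[0]][c[1]])
    other_best = max(cells, key=lambda c: other[c[0]][c[1]])
    return robot_best != other_best

def should_deceive(robot, other, dependence_threshold=0.5):
    """Deception is warranted only under both conflict and dependence."""
    return in_conflict(robot, other) and interdependence(robot) > dependence_threshold
```

In a hide-and-seek game, for instance, the hider escapes only when the seeker searches the wrong side, so the two agents' preferred outcomes conflict and each depends heavily on the other's choice; the test then flags deception as warranted.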

Another study applied squirrels' food-protection behavior to robotic systems and showed how a robot can successfully use this deception algorithm to protect a resource. Terada and Ito demonstrated that a robot can deceive a human, and work at Yale University illustrated a cheating robot in the context of a rock-paper-scissors game.
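The squirrel-inspired behavior works roughly as follows: a robot that normally patrols its true caches switches, once a competitor is detected, to visiting empty cache locations in order to mislead the observer. A minimal sketch of that patrol policy, with illustrative cache names and a hypothetical detection flag rather than the published system:

```python
import random

def next_patrol_target(true_caches, false_caches, competitor_detected):
    """Pick the next cache location to visit.

    While a competitor is watching, deceptively patrol empty (false)
    cache locations to draw the observer away from the real stores;
    otherwise patrol the true caches as usual.
    """
    if competitor_detected:
        return random.choice(false_caches)
    return random.choice(true_caches)
```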

Another interesting piece of work, from the University of Tsukuba, shows that a deceptive robot can improve children's learning efficiency. The robot acted as an instructor but deliberately made mistakes and behaved as if it did not know the answer. When the robot displayed these unknowing or unsure behaviors, the children's learning efficiency significantly increased.
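The underlying mechanism, deliberately erring so that the child steps into the teacher's role, can be sketched as a tutor that gives a wrong answer with some probability. The function name, answer format, and mistake rate below are illustrative assumptions, not the Tsukuba implementation:

```python
import random

def tutor_answer(correct_answer, wrong_answers, mistake_rate=0.3):
    """Answer a quiz question, sometimes erring on purpose.

    With probability mistake_rate, give a wrong answer and act unsure,
    inviting the child to notice and correct the mistake; otherwise
    answer correctly and confidently.
    """
    if random.random() < mistake_rate:
        return random.choice(wrong_answers), "unsure"
    return correct_answer, "confident"
```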

Despite the potential benefits, robot deception is a controversial topic because of its ethical implications, and robot ethics is a rapidly expanding field today. Is deception acceptable? Should a robot be allowed to lie? Some argue that a robot that recognizes when deception is advantageous, and thus achieves better outcomes than robots that do not, might also learn to select the deceptive strategy least likely to be caught.

In an experiment exploring robot deception in multiplayer robotic games, researchers at Carnegie Mellon University found that a robot referee was able to deceive participants by taking advantage of its presumed superior abilities. The robot was designed to act with hidden intentions, imperceptibly balancing how much each player won in the hope that this would make participants play longer and with more interest. From this, the researchers concluded that it would be easy for roboticists to develop machines highly capable of persuasion through deception.
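The exact balancing mechanism is not spelled out above, so the following is only a hypothetical illustration of the idea: a referee that quietly tips close, contested calls toward whichever player is trailing, while calling everything else honestly.

```python
def biased_call(scores, contested_point_owner):
    """Award a contested point, leaning toward the trailing player.

    scores maps player name to current score. If the apparent winner
    of the contested point is comfortably ahead, the referee quietly
    awards the point to the trailing player instead, keeping the game
    close; otherwise it makes the honest call.
    """
    trailing = min(scores, key=scores.get)
    lead = scores[contested_point_owner] - scores[trailing]
    if contested_point_owner != trailing and lead >= 2:
        return trailing               # deceptive call: keep the game close
    return contested_point_owner      # honest call
```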

Whether machines should be given the authority to deceive human beings is a complicated question. Some experts argue that robot deception should be restricted to suitable domains: military robot deception, for example, is acceptable as long as it accords with the Laws of War. For robot deception in everyday human-robot interaction, however, it is currently difficult to say which situations, if any, constitute appropriate use.