Across many application domains, robots are working in human environments, side by side with people. These interactions between robots and their users take many forms: a trained operator supervising several industrial robots, an older adult receiving care from a rehabilitation robot, or a socially assistive robot exercising social, cognitive, and emotional skills.
Because users in Human-Robot Interaction (HRI) vary substantially in their backgrounds, training, and physical and cognitive abilities, all robotic products are expected to be intuitive, easy to use, and responsive to the needs and states of their users. This makes human-robot interaction a key area of research.
Over the past decade, the state of HRI has advanced significantly. Social HRI systems can improve the quality of life for children with autism and for older adults with dementia.
Learning from demonstration has enabled robots to learn new skills by being shown how to perform them by a person. Teleoperation interfaces have been developed to monitor and control larger numbers of robots, adding augmented- and virtual-reality control to more traditional screen-based interfaces. Assistive robot systems, from exoskeletons to rehabilitation robots, have been deployed in the real world. Brain-computer interfaces (BCIs) have allowed people with quadriplegia to feed themselves with a robot arm by thinking about moving it.
Despite this progress, there is a pressing need for much more research and development. This post asks some of the critical questions that highlight the current challenges in HRI systems.
1. How do we develop robots that understand humans?
People have trouble understanding each other’s views, beliefs, intentions, and actions. Machines are much worse at it than people. Much of what drives human behavior is hidden (e.g., histories, hopes, dreams), and the observed behavior may be confusing (e.g., mixed signals, sarcasm). Human behavior is also highly varied and diverse across numerous dimensions, including context, culture, familiarity, fatigue, etc., resulting in unpredictability.
The quest to understand people better in order to interact more effectively spans robotics, machine vision, speech and other signal processing, and machine learning. Robotics offers a unique enabler: embodied social partners that can be physically present around people and collect data from natural interactions. However, such large multi-modal datasets require extensive time and resources to annotate and analyze; there is a great need for practical open-source tools for automated multi-modal human behavior data annotation. Further advances in sensory modalities (e.g., unencumbering wearables) and signal processing, as well as many more datasets, will be critical for progress in this area.
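To make the annotation problem concrete, here is a minimal sketch of one step such tooling might perform: aligning two time-stamped behavior streams (say, speech and gaze events) into fixed-width joint annotation windows. The data format, function names, and the one-second window are illustrative assumptions, not a real tool's API.

```python
# Hypothetical sketch: align two time-stamped behavior streams
# (e.g., speech and gaze events) into fixed annotation windows.

def window_annotations(events, window_s=1.0):
    """Bucket (timestamp_seconds, label) events into fixed-width windows."""
    windows = {}
    for t, label in events:
        idx = int(t // window_s)          # which window this event falls in
        windows.setdefault(idx, []).append(label)
    return windows

def align_streams(speech_events, gaze_events, window_s=1.0):
    """Return per-window joint annotations from two modalities."""
    speech_w = window_annotations(speech_events, window_s)
    gaze_w = window_annotations(gaze_events, window_s)
    joint = {}
    for idx in sorted(set(speech_w) | set(gaze_w)):
        joint[idx] = {
            "speech": speech_w.get(idx, []),
            "gaze": gaze_w.get(idx, []),
        }
    return joint

speech = [(0.2, "hello"), (1.4, "robot")]
gaze = [(0.3, "at-robot"), (1.1, "away")]
print(align_streams(speech, gaze))
# window 0 pairs "hello" with "at-robot"; window 1 pairs "robot" with "away"
```

Real annotation pipelines must additionally handle clock skew between sensors, variable sampling rates, and missing data, which is where much of the engineering effort lies.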
2. How do we develop robot systems that can interact at a variety of timescales?
Some aspects of robot control deal with high-speed interactions with the environment, but HRI is unique in that it necessitates a wide range of temporal dynamics: interactions that happen very quickly (a wink or a twitch of the mouth), interactions that happen very slowly (gradually becoming accustomed to a pattern of behavior), and interactions that change unexpectedly (due to context or intent inaccessible to the robot). To be practical work and social partners, robots must perceive, understand, react, and adapt to such interactions at the right timescales, from immediately repairing an interaction after a missed social cue to reacting when a jointly carried fragile object is almost dropped.
Humans can make fast impressions and snap judgments that are difficult to shift once formed, can be open to adapting to valued collaborators and partners, and can be tolerant, empathetic, and compassionate to vulnerable social partners. Robots will have to fit into appropriate roles in HRI, project realistic expectations, motivate interactions, and manage the inevitable occasional failures. Finally, robots meant for long-term interaction, such as in-home service and companion robots, must utilize new types of machine learning to adapt to each individual user and the dynamics with that user over time.
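As one minimal sketch of what per-user, long-term adaptation could look like, the snippet below tracks a single per-user parameter (a hypothetical preferred interaction pace) and nudges it toward each new observation with an exponential moving average. The scenario, class, and parameter names are illustrative assumptions, not an established HRI method.

```python
# Hypothetical sketch of long-term per-user adaptation: keep a per-user
# estimate (here, preferred seconds between robot utterances) and update
# it with an exponential moving average after each interaction.

class UserModel:
    def __init__(self, initial_pace=1.0, learning_rate=0.2):
        self.pace = initial_pace   # estimated preferred pace (seconds)
        self.lr = learning_rate    # small rate = stable long-term model

    def update(self, observed_pace):
        # Move the estimate a fraction of the way toward the observation.
        self.pace += self.lr * (observed_pace - self.pace)
        return self.pace

model = UserModel()
for observed in [1.5, 1.6, 1.4]:
    model.update(observed)
print(round(model.pace, 3))  # -> 1.24
```

The design choice here, a low learning rate, trades responsiveness for stability: the robot's model of the user drifts slowly rather than overreacting to a single atypical interaction.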
3. How do we foster appropriate levels of trust in robot systems so that systems are used correctly — and not over- or under-trusted?
To trust robots, users must intuitively understand the robot’s capabilities and intent and feel they have appropriate authority and autonomy relative to the robot. To relate to people, robots must appropriately assess what users trust and how much to trust those users, and they must communicate their own level of trust. While some research has examined what factors influence trust in robots, more robust models are needed to enable the development of trust.
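One common way to formalize such a trust model, shown here purely as an illustrative sketch rather than a method the post endorses, is to track trust as a Beta distribution over the robot's observed success rate and update it after each task outcome.

```python
# Hypothetical sketch of a computational trust model: trust in the robot
# is the expected success rate under a Beta distribution, updated with
# each observed task success or failure.

class BetaTrust:
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha   # pseudo-count of successes (prior)
        self.beta = beta     # pseudo-count of failures (prior)

    def observe(self, success):
        # Bayesian update: increment the matching pseudo-count.
        if success:
            self.alpha += 1
        else:
            self.beta += 1

    def trust(self):
        # Mean of the Beta distribution = expected success rate.
        return self.alpha / (self.alpha + self.beta)

t = BetaTrust()
for outcome in [True, True, False, True]:
    t.observe(outcome)
print(round(t.trust(), 3))  # -> 0.667
```

A model like this captures "appropriate" trust in one narrow sense, calibration to observed reliability, but the richer factors the post mentions (intent, authority, communication) are not reducible to a single success rate.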
4. How do we design HRI to facilitate user acceptance?
Acceptance and adoption of robots in human society is part of the age-old broader challenge of societal acceptance of technology. Unlike automation, HRI keeps the human in the process. Therefore, the acceptance and adoption of HRI hinge on its ability to meet users’ expectations and needs. This is a complex challenge, since repeatable and reliable robot behavior, exactly as specified, may become boring, while unexpected robot behavior, even if inaccurate, may be entertaining and preferred in some contexts.