The gap between humans and machines is steadily narrowing. Thanks to the transformative power of technologies such as AI and robotics, every one of us is confronted, in one way or another, with Human Machine Interaction (HMI) through devices like hearing aids, wearables, chatbots, and collaborative robots.
Whether it’s waking up to our digital radio alarm clocks in the morning, traveling to work in a car or train, using a laptop or desktop in our workplace, or communicating through mobile phones with friends and family, it’s safe to say that human and machine interaction is everywhere.
HMI is a field that has made great strides toward understanding and improving our interaction with computer-based technologies. For several decades, HMI researchers have been analyzing and designing sophisticated wearable, wireless, and virtual devices that facilitate better communication and raise the level of automation and control, improving safety, performance, and efficiency in higher cognitive tasks.
In recent years, well-designed human-machine interfaces have become a major source of market value for products and services across application fields such as industry, medicine, transportation, services, the home, and entertainment.
However, the current human-machine interaction model still relies on technically demanding control modes that require explicit programming expertise. In most industrial settings, robots are controlled through a graphical user interface (GUI) and some variation of a joystick or controller. Advances in GUI design have focused on making control of robot behavior as simple as possible, yet even with these simplified user interfaces, inexperienced users struggle with failures in robot interaction.
Thankfully, advances in noninvasive brain-machine interfaces, noninvasive body-machine interfaces, and brain-swarm interfaces can significantly expand the applications of machines, allowing them to work more safely around people and to adapt to changes in their environment or tasks without explicit input from a technician.
Moving forward, these interfaces will allow far more efficient collaboration between non-technical personnel and robots, leading to explosive growth in productivity and scalability. Stepping away from the classical control paradigm is revolutionizing our ability to work alongside robots more intuitively, decreasing the rate of expensive errors and increasing productivity.
1. Noninvasive brain-machine interface (BMI)
A typical direct brain interface produces relatively clean signals from a small number of neurons. These invasive brain implants require considerable medical and surgical expertise to install and operate correctly, not to mention the costs and potential risks to subjects.
An alternative is a set of wearable sensors that measure brain activity through noninvasive electroencephalography (EEG). These sensors, worn as a skullcap, detect and quantify brain signals that can then be translated into commands for controlling machines. The process takes some practice, but it allows users to direct a machine with their thoughts alone, theoretically freeing their hands for other tasks.
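As a concrete illustration, one common way to turn such EEG signals into machine commands is to compare mu-band (8–13 Hz) power over the left and right motor cortex during motor imagery. The sketch below assumes one-second epochs from C3 and C4 electrodes are already available as NumPy arrays; it is a generic example of this decoding idea, not any particular lab's pipeline.

```python
import numpy as np
from scipy.signal import welch

FS = 250            # sampling rate in Hz (typical for consumer EEG caps)
MU_BAND = (8, 13)   # mu rhythm band used in motor-imagery decoding

def band_power(signal, fs=FS, band=MU_BAND):
    """Average power of `signal` inside `band`, via Welch's PSD estimate."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def decode_command(c3, c4, threshold=0.15):
    """Map mu-band lateralization between C3 and C4 to a crude cursor command.

    Imagined right-hand movement suppresses mu power over the left motor
    cortex (C3), and vice versa, so the sign of the normalized difference
    hints at the imagined side. Threshold and mapping are illustrative.
    """
    p3, p4 = band_power(c3), band_power(c4)
    lateralization = (p4 - p3) / (p4 + p3)
    if lateralization > threshold:
        return "MOVE_RIGHT"   # C3 suppressed -> right-hand imagery
    if lateralization < -threshold:
        return "MOVE_LEFT"    # C4 suppressed -> left-hand imagery
    return "HOLD"

# Example with synthetic data standing in for one-second EEG epochs.
rng = np.random.default_rng(0)
c3_epoch = rng.normal(size=FS)
c4_epoch = rng.normal(size=FS)
print(decode_command(c3_epoch, c4_epoch))
```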
In 2019, the Bin He Laboratory at Carnegie Mellon University published studies showing that users wearing these noninvasive sensors can control a robotic arm. The lab developed a training method in which users first learn to move a cursor on a screen through the noninvasive EEG interface. They found that after several sessions, participants could move blocks through 3D space efficiently, with little redundant movement or hovering.
In further research, the He laboratory found that combining spatial attention with motor imagery, using two measures rather than motor imagery alone, could dramatically improve the ability to control a cursor in 3D movement tasks, yielding an average information transfer rate of about 30 bits per minute. These results suggest that while noninvasive brain recordings can allow effective control of robots, adding other modes of control can dramatically improve a human pilot's efficacy.
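The bits-per-minute figure can be read through Wolpaw's standard information-transfer-rate formula. The sketch below is a generic calculation with illustrative accuracy, target-count, and selection-rate values, not parameters reported by the He laboratory.

```python
import math

def wolpaw_itr(n_targets, accuracy, selections_per_minute):
    """Information transfer rate (bits/min) using Wolpaw's formula.

    bits_per_selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
    """
    if accuracy <= 0 or accuracy >= 1:
        raise ValueError("accuracy must lie strictly between 0 and 1")
    p, n = accuracy, n_targets
    bits = math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_minute

# Illustrative numbers only: 4 targets, 90% accuracy, 22 selections/minute
# works out to roughly 30 bits per minute.
print(round(wolpaw_itr(4, 0.90, 22), 1))
```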
2. Noninvasive body-machine interface
Controlling robots through noninvasive EEG requires a significant amount of practice and remains somewhat unintuitive. Historically, this technology was developed to help people who are paralyzed interact with the world, a setting in which speed, scalability, and rapid training matter less and pausing to hesitate or regain one's bearings is acceptable. That slower, more deliberate pace is not necessarily appropriate for industrial applications. By attaching sensors to the parts of the body we normally use to grasp or move, we may eliminate the need for such extensive training and create a more intuitive machine-control interface.
The Micera Laboratory has developed a body-machine interface for the control of airborne drones that allows an operator to rapidly fly a robot through an obstacle course using intuitive body motions. The sensor suite includes kinematic markers and EMG electrodes, and a VR headset lets the operator see through the drone's eyes. With this array of sensors, the movement of specific muscles, particularly combined torso and arm movements, can be turned into precise signals that communicate commands to the robot.
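The exact sensor-to-command mapping used by the Micera group is not spelled out here, but one plausible scheme is a linear mapping from torso lean angles to drone attitude setpoints with a small dead zone. The following is a minimal sketch under those assumptions; the angle names, gains, and DroneSetpoint type are illustrative, not part of any published system.

```python
from dataclasses import dataclass

@dataclass
class DroneSetpoint:
    pitch_deg: float   # nose up/down command
    roll_deg: float    # bank left/right command
    throttle: float    # 0..1, held constant in this sketch

def body_to_drone(torso_pitch_deg, torso_roll_deg,
                  dead_zone_deg=3.0, gain=0.8, max_cmd_deg=25.0):
    """Turn torso lean angles (e.g. from an IMU) into bounded attitude setpoints.

    Leaning forward pitches the drone forward; leaning sideways banks it.
    A small dead zone ignores postural sway, and `gain` scales sensitivity.
    """
    def shape(angle):
        if abs(angle) < dead_zone_deg:
            return 0.0
        sign = 1.0 if angle > 0 else -1.0
        cmd = gain * (angle - sign * dead_zone_deg)
        return max(-max_cmd_deg, min(max_cmd_deg, cmd))

    return DroneSetpoint(pitch_deg=shape(torso_pitch_deg),
                         roll_deg=shape(torso_roll_deg),
                         throttle=0.5)

# Example: operator leans 12 degrees forward and 5 degrees to the right.
print(body_to_drone(12.0, 5.0))
```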
Interestingly, the Micera Laboratory used this sensor suite to let subjects control the flying drone as if they were birds, using their arms and torso to direct its flight. This command structure allowed subjects to control the drone after less than a minute of training, whereas subjects using a joystick needed about 8 minutes of training and still performed roughly 20% worse than the sensor-controlled group.
A laboratory at EPFL has developed an exosuit that allows similar control of a flying drone. This “flyjacket” combines several sensors attached to the suit, a smart glove that detects hand motions, and a VR headset to produce a natural body-machine interface for the drone. The suit is also adjustable to a range of body sizes. The support and sensing provided by the flyjacket yielded dramatically better RMS error and variance than joystick control in both clean and cluttered environments. Neither of these body-machine interface systems is commercially available at this time, but they represent near-commercial-scale work being done by universities.
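For reference, the RMS error used in such comparisons is simply the root-mean-square distance between the flown and desired trajectories. A minimal way to compute it from logged positions, shown here with hypothetical arrays:

```python
import numpy as np

def rms_path_error(actual_xyz, desired_xyz):
    """Root-mean-square distance between flown and desired trajectories.

    Both arguments are (N, 3) arrays of positions sampled at the same times.
    """
    errors = np.linalg.norm(actual_xyz - desired_xyz, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

# Hypothetical logs: a flight that tracks a straight 10 m corridor at 2 m altitude.
desired = np.column_stack([np.linspace(0, 10, 100), np.zeros(100), np.full(100, 2.0)])
actual = desired + np.random.default_rng(1).normal(scale=0.2, size=desired.shape)
print(round(rms_path_error(actual, desired), 3))
```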
While these body-machine interface systems have been investigated primarily for flight and direct teleoperation, the robust and specific data they output could serve a range of applications. Similar suits could be used to control an industrial robot, or to train one to perform a range of context-dependent tasks with high precision. As our ability to incorporate multiple types of input increases, so does the scope of tasks robots can perform.
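As one illustration of how suit data could train an industrial robot, a demonstration could be captured as timestamped waypoints and later replayed through the robot's own motion controller. The sketch below assumes a hypothetical pose stream and waypoint format and names no specific robot API.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Waypoint:
    t: float                 # seconds since start of demonstration
    xyz: tuple               # end-effector position in metres
    gripper_closed: bool     # e.g. from a glove or EMG channel

def record(pose_stream, duration_s=10.0):
    """Collect timestamped waypoints from a body-sensor stream.

    `pose_stream` is any iterable yielding (xyz, gripper_closed) tuples.
    """
    start, demo = time.time(), []
    for xyz, grip in pose_stream:
        now = time.time() - start
        if now > duration_s:
            break
        demo.append(Waypoint(t=now, xyz=xyz, gripper_closed=grip))
    return demo

def save(demo, path="demo.json"):
    """Persist the demonstration so it can be replayed or edited later."""
    with open(path, "w") as f:
        json.dump([asdict(w) for w in demo], f)

# Example with a tiny fake stream standing in for live suit data.
fake_stream = [((0.4, 0.0, 0.3), False), ((0.4, 0.1, 0.3), True)]
save(record(fake_stream, duration_s=1.0))
# Replay would step through the saved waypoints and hand each one to the
# robot's own motion controller, which handles interpolation and safety.
```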
3. Brain swarm interface
Some large-scale tasks, such as agricultural labor, would benefit greatly from coordinated drone-swarm behavior. However, a swarm cannot be managed with traditional controls and would be challenging to operate even with the control systems provided by body-machine interface sensor suites.
The Schwager Group at Stanford University has combined two types of input, eye tracking and an EEG headset, to permit simulated control of a 128-member robot swarm. In this system, the swarm's location was set by detecting eye movement, while the swarm's density was controlled through EEG interpretation. Although the swarm could be steered through physical space with this combined approach, there was significant variance in drone movement even when controlling just three drones.
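The division of labor described above, gaze choosing where the swarm goes and an EEG-derived scalar choosing how tightly it packs, can be sketched as follows. Both signals are assumed to arrive as plain numbers, and the circular formation and parameter ranges are illustrative rather than the Schwager Group's actual controller.

```python
import math

def swarm_targets(gaze_xy, density, n_drones=128,
                  min_radius=1.0, max_radius=10.0, altitude=5.0):
    """Place n_drones on a circle around the operator's gaze point.

    `gaze_xy` is the ground-plane point the operator is looking at, and
    `density` is a 0..1 scalar decoded from EEG: 1 packs the swarm tightly
    around the point, 0 spreads it out to `max_radius`.
    """
    radius = max_radius - density * (max_radius - min_radius)
    cx, cy = gaze_xy
    targets = []
    for k in range(n_drones):
        theta = 2 * math.pi * k / n_drones
        targets.append((cx + radius * math.cos(theta),
                        cy + radius * math.sin(theta),
                        altitude))
    return targets

# Operator looks at (20, 5) and concentrates hard (density ~0.9).
waypoints = swarm_targets((20.0, 5.0), density=0.9)
print(len(waypoints), waypoints[0])
```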
Along the same lines, the Lennox Lab at the University of Manchester has developed a human-swarm interface focused on letting a human operator direct the actions of a robot swarm through a VR interface. They hypothesized that moving the robots as if the operator were an omnipotent virtual giant, using a combination of gestures picked up by a VR headset, would make the considerable complexity of controlling a drone swarm easier to handle.
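In essence, the virtual-giant metaphor applies a single operator-driven transform to every drone at once. A minimal sketch of that idea, assuming the VR system reports a hand-position delta and a pinch/stretch scale factor (both hypothetical inputs, not the Lennox Lab's published gesture set):

```python
import numpy as np

def transform_formation(drone_positions, hand_delta, scale=1.0):
    """Move and scale an entire formation with a single gesture.

    `drone_positions` is an (N, 3) array, `hand_delta` is a 3-vector giving
    how far the operator's hand moved (already mapped to world units), and
    `scale` is the spread factor from a pinch/stretch gesture.
    """
    positions = np.asarray(drone_positions, dtype=float)
    centroid = positions.mean(axis=0)
    # Scale the formation about its centroid, then translate with the hand motion.
    return centroid + scale * (positions - centroid) + np.asarray(hand_delta, dtype=float)

# Three drones nudged 2 m forward and spread out by 50%.
formation = np.array([[0.0, 0.0, 3.0], [1.0, 0.0, 3.0], [0.0, 1.0, 3.0]])
print(transform_formation(formation, hand_delta=[2.0, 0.0, 0.0], scale=1.5))
```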
In a recent paper, the Lennox Lab demonstrated that this interface allows humans, with minimal training, to control the movement of multiple drones at once. As the sensitivity of EEG, machine vision, and control algorithms improves, the control demonstrated by these systems should improve as well. These proof-of-concept studies suggest that combining eye tracking with other human-machine interfaces may permit higher-precision drone control and the effective use of drone swarms for industrial applications.