Since mobile robots constantly move through their environment, they depend heavily on exteroceptive sensors. These sensors handle a central task for the robot: acquiring information about objects in the robot’s immediate vicinity so that it can interpret the state of its surroundings.
They acquire information about the environment from outside the system, such as distance measurements, light intensity, and sound amplitude, which software then processes into meaningful information about the environment.
Exteroceptive sensors generally provide absolute information directly related to the position and orientation of the system, but they often require dedicated infrastructure or prior knowledge about the surroundings.
Almost all exteroceptive sensors suffer from environmental errors caused by motion. For instance, a sonar sensor produces specular reflections on a smooth sheetrock wall at specific angles. As the robot moves, these reflection angles occur at stochastic intervals, resulting in inaccurate range measurements.
Cameras are another example: the robot’s motion causes continuous changes in lighting, glare, and reflections in the camera image, and different materials in the environment can differ greatly in reflectivity. In addition, exteroceptive sensors can be easily disturbed, jammed, or spoofed.
This post will discuss various types of exteroceptive sensors available for mobile robots.
Compasses have been used for many types of navigation for a long time. They utilize the earth’s magnetic field to determine the user’s current orientation. In mobile robotics, two types of compasses are commonly used: Hall effect and flux gate compasses. However, indoor environments often contain objects that either disturb the magnetic field or have magnetic fields of their own, both of which lead to inaccurate compass readings. For these reasons, compasses are rarely used in indoor applications.
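Regardless of the underlying technology, both Hall effect and flux gate compasses ultimately report the components of the local magnetic-field vector, from which a heading follows with basic trigonometry. A minimal sketch, assuming the sensor’s x axis points toward magnetic north and y toward east (the function name and axis convention are illustrative; real devices differ and usually also need tilt compensation and calibration):

```python
import math

def heading_from_magnetometer(mx: float, my: float) -> float:
    """Convert horizontal magnetic-field components (any consistent unit)
    into a heading in degrees, measured clockwise from magnetic north.

    Assumed axis convention: mx = north component, my = east component.
    """
    # atan2 gives the angle of the field vector relative to north;
    # the modulo wraps the result into [0, 360).
    return math.degrees(math.atan2(my, mx)) % 360.0

# Field along +x (north): heading 0; field along +y (east): heading 90.
print(heading_from_magnetometer(1.0, 0.0))
print(heading_from_magnetometer(0.0, 1.0))
```

This is exactly where the indoor problem shows up: a nearby ferromagnetic object perturbs `mx` and `my`, and the computed heading drifts accordingly.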
The most common modern beacon system is the global positioning system (GPS), which is highly effective in outdoor applications. With satellites orbiting the earth, any object with a GPS receiver can localize itself. The object receives signals from multiple satellites and uses them to compute its relative distance to each one. Combining these distances, the object can be localized accurately using geometry. However, GPS is rarely used indoors, as building structures block the satellite signals, resulting in inaccurate measurements.
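The geometric idea behind GPS positioning can be illustrated with a planar toy version: given known anchor positions and measured ranges, the receiver’s position follows from trilateration. This is only a sketch under strong simplifications; real GPS works in three dimensions with pseudoranges and must also solve for the receiver clock bias. All names here are illustrative:

```python
import math

def trilaterate_2d(anchors, dists):
    """Locate a receiver from three known anchor positions and measured
    distances (a planar simplification of GPS positioning).

    anchors: [(x1, y1), (x2, y2), (x3, y3)]; dists: [d1, d2, d3].
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # Subtracting the first range equation from the other two cancels the
    # quadratic terms, leaving a 2x2 linear system A @ [x, y] = b.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    # Cramer's rule for the 2x2 system.
    return ((b1 * a22 - a12 * b2) / det, (a11 * b2 - a21 * b1) / det)

# Receiver at (3, 4) with anchors at three corners of a 10 m square.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [5.0, math.sqrt(65.0), math.sqrt(45.0)]
print(trilaterate_2d(anchors, dists))  # ≈ (3.0, 4.0)
```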
Active-ranging sensors have been the most popular sensors in mobile robotics, owing to their affordability and their ability to measure distances to objects in the robot’s vicinity. Knowing the position of nearby objects is crucial for mobile robot navigation, which is why these sensors are heavily used in obstacle detection and avoidance. They are also used to localize the robot and to model the environment. Their popularity can be expected to remain high; only vision sensors, driven by advances in computer vision, have the potential to replace them.
The most popular sensors of this type rely on time-of-flight (TOF) ranging. TOF ranging sensors emit sound or electromagnetic waves into the environment and measure the time it takes for the reflections to return to the sensor, from which distances to objects are calculated. The most popular TOF sensors are ultrasonic sensors, laser rangefinders, and TOF cameras.
Ultrasonic sensors emit sound waves into their surroundings. The waves are sent at set intervals, and a threshold value is used to detect valid reflections. Ultrasonic sensors are available with many operating ranges, but mobile robotics commonly uses a range of roughly 12 cm to 5 m. A larger operating range requires a different sound-wave frequency, which raises the minimum detection distance too much for close objects.
Ultrasonic sensors have several drawbacks. Because sound propagates in a cone, measurements become less accurate the farther away objects are. In addition, sound-absorbing materials, such as clothing or fur, can be difficult to detect, as they absorb some or all of the sound waves. Lastly, the relatively slow speed of sound inherently limits the sensor’s update rate. For these reasons, mobile robotics commonly favors laser rangefinders instead.
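Both the echo-time-to-distance conversion and the update-rate limit imposed by the speed of sound fall out of the same round-trip relation. A small sketch (constants and function names are illustrative):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def echo_to_range(echo_time_s: float) -> float:
    """Convert a round-trip echo time into a one-way distance in meters."""
    # The wave travels to the object and back, hence the division by 2.
    return SPEED_OF_SOUND * echo_time_s / 2.0

def max_update_rate(max_range_m: float) -> float:
    """Update-rate ceiling in Hz: the sensor must wait long enough for an
    echo from the farthest detectable object before firing again."""
    round_trip_s = 2.0 * max_range_m / SPEED_OF_SOUND
    return 1.0 / round_trip_s

# A 10 ms echo corresponds to about 1.7 m; a 5 m maximum range caps
# the ping rate at roughly 34 Hz.
print(echo_to_range(0.01))
print(max_update_rate(5.0))
```

The second function makes the “speed of sound limits operating speed” point concrete: a lidar with the same 5 m range faces no comparable limit, since light covers the round trip almost instantly.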
A laser rangefinder, often called lidar (light detection and ranging), is a highly popular sensor in mobile robotics due to its high operating speed and range. Lidars measure the environment with laser light, which, thanks to light’s fast propagation, achieves significant improvements over the use of sound. A rotating mechanical mirror sweeps the light beam across the environment in a plane, sometimes even in three dimensions. A receiver then detects the reflections for distance estimation.
Some lidars can measure distances in three dimensions. To do this, the mirror system is usually nodded during operation, or additional lasers are added. Such a lidar can offer a 360-degree horizontal view and a roughly 30-degree vertical view. However, compared to 2D lidars, 3D lidars are usually more expensive, and a 3D scan can take longer.
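A planar lidar typically reports one range measurement per beam angle, and turning a sweep into points usable for mapping is a simple polar-to-Cartesian conversion. A sketch, assuming the scan is described by a start angle and a fixed angular increment (these field names mirror a common convention, e.g. the ROS LaserScan message, but the function itself is illustrative):

```python
import math

def scan_to_points(ranges, angle_min, angle_increment):
    """Convert a planar lidar sweep into (x, y) points in the sensor frame.

    ranges: one distance per beam; angle_min: angle of the first beam in
    radians; angle_increment: angular spacing between consecutive beams.
    """
    points = []
    for i, r in enumerate(ranges):
        if math.isinf(r) or math.isnan(r):
            continue  # this beam produced no valid return
        theta = angle_min + i * angle_increment
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Three beams at -90, 0, and +90 degrees, each returning 2 m.
pts = scan_to_points([2.0, 2.0, 2.0], -math.pi / 2, math.pi / 2)
print(pts)  # ≈ [(0, -2), (2, 0), (0, 2)]
```

A 3D lidar extends the same idea with an elevation angle per laser, producing (x, y, z) points instead.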
Visual perception is the most powerful sense of humankind. Humans use vision for multiple tasks such as localization, motion and distance estimation, and navigation. These tasks are also extremely important in mobile robotics, and it is not surprising that a great effort is put into improving vision-based technology. The most common vision sensors are different types of cameras with similar operating principles. The light that enters the camera forms digital images of the environment. The images are then processed to get meaningful information such as depth, motion, color tracking, feature detection, scene recognition, etc.
CMOS and CCD are the two main sensor technologies for capturing the light entering a camera. Cameras generally favor CMOS due to its lower price and lower power consumption compared to CCD sensors. CCD sensors are still used in applications with high quality requirements.
Compared to the previous sensors, cameras capture far more information about the environment. A great deal can be extracted from a single image, which is why cameras are considered to have great potential for robotic applications. However, they all suffer from a variety of challenges. Images depend on the ambient light and degrade under too little or too much of it, and they can also suffer from jitter, signal gain, blooming, and blurring noise.
The operating principle of a time-of-flight (TOF) camera is similar to that of a lidar. The environment is illuminated with non-visible light, and the reflections are received to form a depth map. TOF cameras have no moving mechanical parts and capture the whole 3D scene at once. A Photonic Mixer Device is used with phase-shift measurements to determine the distance value for each pixel in the captured image. The advantages of TOF cameras are their compact size and the fast creation of depth images. A brief survey suggests that TOF cameras are most popular in unmanned aerial vehicles such as drones, since they can work in low-light environments while providing high-resolution data.
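In a phase-shift TOF camera, the measured phase difference between the emitted and received modulated light maps directly to distance, and the modulation frequency fixes an unambiguous range beyond which measurements wrap around. A sketch of both relations (the function names are illustrative):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def phase_to_depth(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Per-pixel depth from the phase shift of the modulated light:
    d = c * Δφ / (4π f_mod). Valid only inside the unambiguous range."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz: float) -> float:
    """Maximum depth before the phase wraps: c / (2 * f_mod)."""
    return C / (2.0 * mod_freq_hz)

# A 20 MHz modulation gives about 7.5 m of unambiguous range; a phase
# shift of pi then corresponds to half of that, about 3.75 m.
f_mod = 20e6
print(unambiguous_range(f_mod))
print(phase_to_depth(math.pi, f_mod))
```

The trade-off is visible in the formulas: a higher modulation frequency improves depth resolution but shrinks the unambiguous range.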
A camera with a wide field of view (FOV), commonly over 180 degrees, is considered an omnidirectional camera. There are multiple ways to achieve such a wide FOV, such as shaped lenses or mirrors. Cameras using shaped lenses usually achieve a FOV of slightly over 180 degrees, while cameras using parabolic, hyperbolic, or elliptical mirrors can reach 360 degrees in azimuth and 180 degrees in elevation. A full omnidirectional view can also be achieved by combining multiple cameras with overlapping FOVs.
Omnidirectional cameras can be used for mobile robot localization, mapping, and navigation. The wide FOV makes it possible to see in multiple directions at once, which makes place recognition easier than with standard cameras. The ability to track more objects over long periods makes estimating the robot’s motion and building the environment map more accurate.
In addition to TOF cameras, stereo vision can be used to gain depth information about the environment. Stereo vision uses two cameras positioned to capture the same scene from slightly different viewpoints. The disparity between the images is analyzed to form depth information. Stereo vision can be used for multiple applications such as obstacle detection and avoidance, navigation, and simultaneous localization and mapping (SLAM).
Traditionally, the bottleneck has been the computational complexity of these systems, but thanks to technological advances, stereo vision has become more affordable. Commercial systems such as the ZED provide direct access to depth data.
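The disparity-to-depth relation behind stereo vision is compact: for rectified, parallel cameras with focal length f (in pixels) and baseline B (in meters), a point at disparity d pixels lies at depth Z = f·B/d. A sketch with hypothetical camera parameters (the rig values below are made up for illustration):

```python
def disparity_to_depth(disparity_px: float,
                       focal_px: float,
                       baseline_m: float) -> float:
    """Depth of a point from its disparity between rectified left/right
    images: Z = f * B / d (pinhole model, parallel optical axes)."""
    if disparity_px <= 0.0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 12 cm baseline.
# A 20 px disparity then corresponds to a depth of 4.2 m.
print(disparity_to_depth(20.0, 700.0, 0.12))
```

The inverse relation also explains why stereo accuracy degrades with distance: at large depths a whole pixel of disparity covers a large depth interval, so small matching errors translate into large range errors.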