As the name suggests, self-driving cars remove humans from the driver's seat and, in the process, improve road safety by reducing accidents and casualties caused by human error: drunk driving, speeding, distracted driving (often involving mobile phones), and drowsiness.
Taking the steering wheel out of human hands to avoid crashes and save lives isn't the only reason self-driving cars matter today. As Apple CEO Tim Cook put it, self-driving cars are "the mother of all AI projects."
These supercomputers on wheels rely on several key technologies to feed data to the AI. Sensors, cameras, radar, and LiDAR generate data at enormous speed and volume. They work in tandem with state-of-the-art GPS and inertial measurement unit (IMU) sensors to pinpoint the car's location to within a quarter of an inch. Radar-based monitoring kicks in whenever an obstacle comes within about 15 to 30 feet of the vehicle. Together, these sensors keep the car running.
Each of these components plays a crucial role in moving and stopping the vehicle. Artificial intelligence, powered by cutting-edge machine learning, deep learning, and deep neural network algorithms, analyzes and processes the data. These advanced hardware and software components are networked together to make decisions that approximate human decisions.
To understand how the vehicle and the human mind differ, consider the famous invisible-triangle illusion (the Kanizsa triangle), an example of Gestalt perception. We see triangles, but in fact there are none; the human mind fills in shapes that don't really exist. A machine, by default, will not do that. If we want a machine to do it, we must train it by feeding it vast amounts of data in a continuous process that becomes more accurate over time.
Emulating human behavior is what we do in self-driving vehicles. To do that, we need data similar to what our brain needs. Data is an integral part of the whole cycle, flowing from end to end. Just as fuel powers the engine of a traditional car, data powers a self-driving car. Let's take a closer look at how that data is generated and how it flows from end to end to emulate human driving behavior.
The table below shows the difference between a human-driven vehicle and a self-driving vehicle. The machine emulates exact human actions; this is possible only because of data.
| Human-Driven Vehicle | Self-Driving Vehicle |
| --- | --- |
| Data about surroundings gathered through eyes and ears | Data about surroundings gathered through cameras, sensors, radar, and LiDAR |
| Brain computes the data from eyes and ears | Artificial intelligence, enabled by computing power, processes the data |
| Brain signals hands and feet to act on its decision | Computer signals control electronics to act on its decision |
All self-driving or fully autonomous cars rely on in-car data collection and processing to achieve this. The block diagram below, from the Chalmers University of Technology, is the simplest representation of the data flow and processing modules for autonomous driving.
Any self-driving car has five core components:
1. Computer vision is, in essence, the eyes of the car. It uses images captured by the cameras to make sense of the surroundings. Humans can identify and describe what they see or hear within milliseconds. Similarly, by processing vast amounts of data through artificial intelligence and machine learning, the computer learns to interpret an image and identify what it contains. To achieve this, the computer is fed enormous amounts of labeled image data, and the learning is a continuous process.
2. Sensor fusion – how the machine combines data from sensors such as lasers and radar – is the next level of understanding the environment. From camera images alone (computer vision), the machine cannot reliably calculate the distance between two objects. The vehicle's processing system is designed for various scenarios and environments: it gathers data from lasers, radar, and LiDAR and combines it into meaningful information. Data from each technology matters, because some sensors fail or give poor output in specific weather or environmental conditions.
3. Localization determines the car's precise location once the environment is understood through sensor fusion. This is the most important step in deciding when the car should move or stop. Manufacturers use recorded sensor data to build a map of the surroundings. GPS alone is not enough for this kind of localization: centimeter-level precision is needed, which GPS does not provide. In the diagram above, localization belongs to the perception module; it is achieved with mathematical algorithms combined with high-definition map data. Current systems reach centimeter-level accuracy, which is still not always enough. The picture below shows how Waymo (Google's autonomous car) sees the world using data collected through the technologies mentioned above.
4. After building the localization map, the vehicle uses path planning to chart a course and navigate without hitting any static or dynamic obstacle. Information such as road and lane connectivity complements this step, but it demands massive onboard computational power because it is done on the go.
5. The final step is to control the actuation of the car: steering, accelerator, and brake. The computer transmits control data to the controllers via electronic commands. In a fully autonomous car, the physical steering wheel and pedals are not strictly needed, since everything is controlled internally.
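To make component 1 (computer vision) concrete, here is a toy sketch of one of its most basic building blocks: detecting an edge in an image with a convolution. Real pipelines use deep neural networks over camera streams; the tiny hand-written "image" and Sobel-style kernel below are illustrative assumptions, not a real perception stack.

```python
# Toy computer-vision sketch: find a vertical edge in a tiny grayscale
# "image" using a hand-rolled 2D convolution with a Sobel-style kernel.

def convolve(image, kernel):
    """Valid-mode 2D convolution of a grayscale image with a 3x3 kernel."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h - 2):
        row = []
        for x in range(w - 2):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            row.append(acc)
        out.append(row)
    return out

# A dark-to-bright vertical boundary down the middle of a 5x6 image.
image = [
    [0, 0, 0, 255, 255, 255],
    [0, 0, 0, 255, 255, 255],
    [0, 0, 0, 255, 255, 255],
    [0, 0, 0, 255, 255, 255],
    [0, 0, 0, 255, 255, 255],
]

# Sobel kernel that responds to left-to-right brightness changes.
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

edges = convolve(image, sobel_x)
print(edges[0])  # [0, 1020, 1020, 0] -> strong response at the boundary
```

The high values land exactly where brightness changes, which is how a vision system first picks out lane markings and object outlines before deeper networks classify them.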
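Component 2 (sensor fusion) can be sketched with a classic technique: inverse-variance weighting, which combines two noisy measurements into one estimate that trusts the more precise sensor more. The sensor names and noise figures below are made-up assumptions for illustration, not real specifications.

```python
# Minimal sensor-fusion sketch: fuse two noisy distance estimates
# (say, radar and LiDAR) by weighting each with the inverse of its variance.

def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two scalar measurements."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)     # fused estimate is more certain than either input
    return fused, fused_var

# Radar says the obstacle is 20.0 m away but is noisy (variance 4.0);
# LiDAR says 21.0 m and is more precise (variance 1.0).
distance, variance = fuse(20.0, 4.0, 21.0, 1.0)
print(round(distance, 2))  # 20.8 -> the fused value leans toward LiDAR
print(round(variance, 2))  # 0.8  -> better than either sensor alone
```

This is also why the article stresses that every sensor matters: if fog degrades the camera, its variance rises and the fusion automatically leans on radar and LiDAR instead.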
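Component 3 (localization) is often taught with a one-dimensional histogram filter: the car keeps a probability for each cell of a small map, moves, senses a landmark, and sharpens its belief. Real systems fuse HD maps, GPS, and IMU data at centimeter precision; the five-cell map and sensor model below are made-up assumptions.

```python
# Toy localization sketch: a 1-D histogram (Bayes) filter over a cyclic map.

def normalize(belief):
    total = sum(belief)
    return [p / total for p in belief]

def move(belief):
    """Shift belief one cell to the right (cyclic world, perfect motion)."""
    return belief[-1:] + belief[:-1]

def sense(belief, world, measurement, p_hit=0.9, p_miss=0.1):
    """Weight each cell by how well the map there matches the measurement."""
    weighted = [p * (p_hit if cell == measurement else p_miss)
                for p, cell in zip(belief, world)]
    return normalize(weighted)

# A 5-cell map: 'L' marks cells where a landmark is visible.
world = ['L', '.', '.', 'L', 'L']
belief = [0.2] * 5                   # initially the car could be anywhere

belief = sense(belief, world, 'L')   # sensor sees a landmark
belief = move(belief)                # car drives one cell to the right
belief = sense(belief, world, '.')   # sensor now sees open road

best = max(range(5), key=lambda i: belief[i])
print(best)  # 1 -> only cell 1 fits "landmark, then open road"
```

Two updates are enough to disambiguate the position here, which mirrors how repeated sense-move cycles against a prebuilt map let the real car localize far more precisely than GPS alone.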
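Component 4 (path planning) can be sketched as a search over an occupancy grid. Real planners handle lane geometry, vehicle dynamics, and moving obstacles in real time; the breadth-first search below, on a hypothetical grid where 1-cells are static obstacles, only shows the core idea of finding a collision-free route.

```python
# Minimal path-planning sketch: breadth-first search on a tiny occupancy grid.
from collections import deque

def plan(grid, start, goal):
    """Return a shortest obstacle-free path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}          # also serves as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:               # reconstruct the path by walking back
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                        # goal unreachable

# 0 = free road, 1 = obstacle (e.g. a parked car blocking the direct route).
grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]

path = plan(grid, (0, 0), (2, 0))
print(len(path))  # 7 waypoints: the car detours around the blocked row
```

Because BFS explores cells in order of distance, the first path found is a shortest one; production planners use the same search idea with richer costs (lane changes, comfort, traffic rules).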
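Finally, component 5 (control) is classically handled by feedback controllers such as PID, which turn the planner's target into smooth steering commands. The gains, timestep, and one-line vehicle model below are illustrative assumptions, not values from any real vehicle.

```python
# Minimal control sketch: a PID loop steering the car's lateral offset
# (meters from the lane center) toward zero in a toy vehicle model.

def simulate(kp=2.0, ki=0.01, kd=0.5, dt=0.1, steps=200):
    """Run the PID loop and return the final lateral offset."""
    offset = 1.0                      # start 1 m to the right of lane center
    integral = 0.0
    prev_error = -offset
    for _ in range(steps):
        error = -offset               # we want offset == 0
        integral += error * dt
        derivative = (error - prev_error) / dt
        steer = kp * error + ki * integral + kd * derivative
        prev_error = error
        offset += steer * dt          # toy model: steering shifts the offset
    return offset

final = simulate()
print(abs(final) < 0.1)  # True -> the controller has centered the car
```

The proportional term pulls the car toward the lane center, the derivative term damps overshoot, and the integral term cancels any persistent bias, which is exactly the kind of correction the control electronics apply to steering, accelerator, and brake.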