Top six vulnerabilities in robotic systems


Innovation and technology have driven a constant evolution of robotics over the last few decades. Robotic systems are rapidly becoming more prolific, sophisticated, capable, intelligent, and networked, and they are being used for an ever-wider range of tasks.

Initially, robotics was confined to the manufacturing world, but robots can now perform complex work alongside humans, increasing productivity. The more deeply they are integrated into sensitive tasks and their users’ lives and livelihoods, the more desirable they become as targets for attackers.

Like regular computers, robots can be targeted by numerous malicious security attacks. It has been estimated that the robotics and automation industry will grow from $62 billion to $1.2 trillion over the next ten years. Roboticists must therefore consider how these devices can be made secure.

Cyber security breaches in robots harm the robotics industry, causing both financial and reputational damage. A hacked service robot could be used to harm people or made to malfunction deliberately.

The importance of cyber security in robotic systems will only grow as robots become more common, are tasked with more critical missions, and are granted more autonomy. This article examines some of the top vulnerabilities in robotic systems.

Physical Vulnerabilities

Physical vulnerabilities are exploited through physical access to, or contact with, the robot or its controlling devices. Physical access can result in robot components being reprogrammed or tapped into, robots being rendered unavailable, or robots being modified to grant an adversary control. For instance, several physical vulnerabilities exist in cars: diagnostic tools and devices used by mechanics have been exploited to gain full control over a car’s systems. Passive keyless entry systems in cars are particularly vulnerable to relay attacks, which capture signals from a car’s keys and retransmit them to a device close to the car, gaining complete access by impersonating the keys.
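Because a relay attack forwards signals over extra hardware, it adds latency. One common countermeasure is to bound the round-trip time of a challenge–response exchange. The sketch below is a toy illustration of this idea, not a real keyless-entry protocol; the threshold `MAX_RTT_SECONDS` and the functions are hypothetical names chosen for this example.

```python
import os
import time

# Hypothetical RTT bound: relayed signals travel through extra hardware
# and add latency, so slow responses are treated as suspect.
MAX_RTT_SECONDS = 0.002  # 2 ms, an illustrative value

def key_fob_respond(challenge: bytes) -> bytes:
    """Stand-in for the key fob: returns a simple transform of the challenge."""
    return bytes(b ^ 0xFF for b in challenge)

def unlock_if_nearby(respond) -> bool:
    """Issue a random challenge and accept only fast, correct replies."""
    challenge = os.urandom(16)
    start = time.monotonic()
    response = respond(challenge)
    rtt = time.monotonic() - start
    expected = bytes(b ^ 0xFF for b in challenge)
    return response == expected and rtt <= MAX_RTT_SECONDS

print(unlock_if_nearby(key_fob_respond))  # a local call responds within the bound
```

Real deployments use dedicated distance-bounding hardware, since software timing at this resolution is unreliable; the sketch only conveys the principle.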

Sensor Vulnerabilities

Sensors are vulnerable to adversary-manipulated signals. One example is GPS spoofing, in which fake GPS signals feed a GPS sensor false location information. Adversary-generated signals can also interfere with sensors, producing invalid output or none at all. For instance, a car’s automatic braking system can receive false wheel-speed readings from magnetic devices attached to the tires, leading to crashes in simulated experiments.
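One simple defense against spoofed position data is a plausibility check: reject any fix that implies the robot moved faster than it physically can. This is a minimal sketch; the speed ceiling `MAX_SPEED_MPS` and the function name are illustrative assumptions, not part of any standard API.

```python
import math

MAX_SPEED_MPS = 70.0  # illustrative ceiling on the platform's speed

def plausible_fix(prev, new, dt):
    """Reject a GPS fix implying impossible motion since the last fix.

    prev, new: (x, y) positions in metres in a local frame; dt: seconds.
    A spoofed signal that "teleports" the robot is flagged as implausible.
    """
    dist = math.hypot(new[0] - prev[0], new[1] - prev[1])
    return dist <= MAX_SPEED_MPS * dt

print(plausible_fix((0, 0), (50, 0), 1.0))    # 50 m/s in 1 s -> True
print(plausible_fix((0, 0), (5000, 0), 1.0))  # 5 km jump in 1 s -> False
```

In practice such checks are usually combined with dead reckoning from wheel odometry or an IMU, so the robot can cross-check GPS against sensors an attacker cannot spoof remotely.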

Communication Vulnerabilities

Many robots rely on communication with either the user or the outside world, and these communication methods present vulnerabilities, especially when tried-and-true security measures are not followed.

Robots employ many communication methods: ad-hoc networks or the Internet, Bluetooth for short-range connection of personal computing devices and phones, wireless sensors (e.g., Tire Pressure Monitoring System in cars), and RFID for wireless communication.

Long-range communication channels include satellite and digital radio and traffic status channels, where information is mainly received. Other channels are for crash reporting, anti-theft car-tracking, vehicle diagnostics, and user convenience (e.g., GM’s OnStar), which require information transmission. Future robotic systems might have similar long-range monitoring and communication capabilities.

  • Passive Adversary Vulnerabilities – Information about a robot can be passively gathered from its communication channels, for example via packet interception over the local network or the Internet. Search engines could conceivably discover Internet-connected robots, much as was done with webcams. By intercepting traffic on a communication channel, or eavesdropping, adversaries can learn which robots are active, what they are doing, and where they are, or obtain sensitive user information such as audio or video data from sensors or unencrypted usernames and passwords.
  • Active Adversary Vulnerabilities – Many communication vulnerabilities involve a more active adversary who intercepts legitimate network traffic and/or transmits illegitimate traffic. Dropped messages are deleted and never reach their destination. Intercepted messages can be retransmitted later, which is called a replay attack. Illegitimate messages, either constructed from scratch or modified, can be transmitted, which is message spoofing. A masquerade attack involves an adversary imitating an authorized party. A man-in-the-middle attack, or double masquerade, occurs when an adversary intercepts traffic between a robot and its intended recipient, impersonating each party to the other. Communication channels can also be shut down through jamming or Denial of Service (DoS) attacks, which flood a network with messages and prevent legitimate traffic from being handled.
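Replay and spoofing attacks on a command channel can be mitigated with standard primitives: a message authentication code to reject forged or modified messages, plus a monotonic counter to reject replays. The sketch below uses Python's standard `hmac` library; the framing, the pre-shared `SECRET`, and the function names are assumptions made for illustration.

```python
import hashlib
import hmac
import struct

SECRET = b"shared-secret-example"  # illustrative pre-shared key

def make_message(counter: int, payload: bytes) -> bytes:
    """Frame a command with a monotonic counter and an HMAC-SHA256 tag."""
    body = struct.pack(">Q", counter) + payload
    tag = hmac.new(SECRET, body, hashlib.sha256).digest()
    return body + tag

def accept(message: bytes, last_counter: int):
    """Return (ok, counter): reject forged tags and replayed counters."""
    body, tag = message[:-32], message[-32:]
    if not hmac.compare_digest(tag, hmac.new(SECRET, body, hashlib.sha256).digest()):
        return False, last_counter            # spoofed or modified message
    counter = struct.unpack(">Q", body[:8])[0]
    if counter <= last_counter:
        return False, last_counter            # replayed or stale message
    return True, counter

msg = make_message(1, b"MOVE_ARM")
ok1, last = accept(msg, 0)   # fresh message accepted
ok2, _ = accept(msg, last)   # identical replay rejected
print(ok1, ok2)              # True False
```

HMAC alone does not provide confidentiality against a passive eavesdropper; an authenticated-encryption scheme would address both threats at once.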

Software Vulnerabilities

Any software has vulnerabilities arising from poor programming practices or a lack of security consideration during design. For example, the CAN bus in cars, the electronic connection between different components, was designed without a focus on security, and subsequent security protocols have often been poorly implemented and misused. ROS suffers from similar design issues: a ROS node is killed when another (i.e., illegitimate) node with the same name connects to the ROS network, and connected nodes can publish fake messages on a topic without any authentication or validation.
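The name-takeover issue can be contrasted with a safer registration policy: refuse a second registration under an existing name instead of silently killing the original node. The toy registry below illustrates this policy only; it is not the ROS master API, and the class and identifiers are hypothetical.

```python
class NodeRegistry:
    """Toy registry illustrating a safer policy than silent name takeover:
    a new node claiming an already-registered name is rejected, and the
    legitimate node keeps running."""

    def __init__(self):
        self._nodes = {}  # name -> owning node id

    def register(self, name: str, node_id: str) -> bool:
        if name in self._nodes and self._nodes[name] != node_id:
            return False  # reject the impostor; keep the legitimate node
        self._nodes[name] = node_id
        return True

reg = NodeRegistry()
print(reg.register("camera_driver", "pid-1001"))  # True: first claim succeeds
print(reg.register("camera_driver", "pid-9999"))  # False: name already claimed
```

A name check alone is not authentication; ROS 2 addresses the underlying problem with DDS security plugins that authenticate and encrypt node traffic.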

System-level Vulnerabilities

Security vulnerabilities can arise as robotic systems grow more complex, integrating subsystems from multiple manufacturers and designers and multiplying the opportunities for security problems at the interfaces where subsystems interact. This has been true for cars, and it will also be the case as robotic systems become more complicated.

User Vulnerabilities

A robot’s users and environment can affect its security. An adversary could discreetly deploy or modify a robot in chaotic, cluttered environments. Users with special needs, elderly people, and children are likely to be among service robots’ early users and are less likely to have security experience, making them more vulnerable to attacks that would not succeed against more experienced users. The robot–user interaction, for example how a robot and user give each other feedback, is another potential source of vulnerabilities. An adversary could modify this behavior in either direction, causing a user to issue unnecessary or repeated commands, or causing the robot to behave abnormally or dangerously.

The most common ways in which a robotic system can be compromised are:

  • Information disclosure: technical materials, including software images, are available on the manufacturer’s website;
  • Outdated software: custom patches applied by manufacturers to update the software create opportunities for attackers to leverage software vulnerabilities;
  • Default authentication: remote connections enable attackers to compromise devices through null or "admin" default passwords;
  • Poor transport encryption: for example, VPNs rely on shared symmetric keys, or web-based administration is not available over HTTPS;
  • Poor software protection: attackers can manipulate software images (e.g., containing debug information) available on the manufacturer’s website;
  • Security by obscurity: relying on the scarcity of information about a robot’s internals in place of sound security design.
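The default-authentication item above is one of the easiest to address at setup time: refuse to store a password that appears on a denylist of factory defaults. This is a minimal sketch under stated assumptions; the `DEFAULT_PASSWORDS` list, the fixed salt, and the function name are illustrative, and a real device would use a per-device random salt.

```python
import hashlib

# Illustrative denylist of factory defaults an attacker will try first.
DEFAULT_PASSWORDS = {"", "admin", "password", "1234", "root"}

def set_admin_password(candidate: str) -> bytes:
    """Refuse null/default/short passwords before storing a salted hash."""
    if candidate in DEFAULT_PASSWORDS or len(candidate) < 8:
        raise ValueError("default or weak password rejected")
    salt = b"example-salt"  # assumption: real devices use a random per-device salt
    return hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, 100_000)

try:
    set_admin_password("admin")
except ValueError as e:
    print("rejected:", e)

digest = set_admin_password("s7rong-Phrase!")
print(len(digest))  # 32: length of the SHA-256-derived key
```

Forcing a password change on first boot, as many routers now do, applies the same idea at the product level.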