Four reasons to ban lethal autonomous weapon systems (LAWS)

LAWS (lethal autonomous weapon systems), also called “killer robots”, are weapons that use sensors and algorithms to autonomously identify, engage, and destroy a target without human intervention. To date, no such weapons exist. However, some existing weapons can already track incoming missiles and strike those threats autonomously, without any person’s involvement. Some predict that in just a few years, if not sooner, the technology will be advanced enough to use LAWS against people.

Therefore, a growing number of states, as well as the United Nations (UN) Secretary-General, the International Committee of the Red Cross (ICRC), and non-governmental organizations, are appealing to the international community to regulate or ban LAWS due to a host of fundamental ethical, moral, legal, accountability, and security concerns. The demand for a ban on killer robots has firm support from more than 70 countries; over 3,000 experts in robotics and artificial intelligence, including leading scientists such as Stephen Hawking, Elon Musk (Tesla), and Demis Hassabis (Google DeepMind); 116 artificial intelligence and robotics companies; 160 religious leaders; and 20 Nobel Peace Laureates. China was the first permanent member of the UN Security Council to call for a legally binding instrument within the Convention on Certain Conventional Weapons (CCW), similar to the protocol on blinding laser weapons.

But why? Why should we ban LAWS in the first place? What risks do lethal autonomous weapons pose? In this post, we will look at four reasons why lethal autonomous weapon systems (LAWS) should be banned worldwide.

Predictability & reliability

Fully autonomous weapon systems remain a source of fear, mostly because LAWS contain inherent imperfections and can never be entirely predictable or reliable. There is a level of uncertainty in LAWS, especially when technologies such as machine learning guide their decision-making processes, meaning that no one can guarantee desirable outcomes. Moreover, at a technical level, the question arises whether and how ethical standards and international law could be incorporated into the algorithms guiding these weapon systems. Experts argue that the technology must earn a certain level of trust and confidence before it can be used for military purposes. Highly unpredictable weapon systems are unlikely to be fielded if they cannot guarantee a successful outcome. Such weapons could be especially dangerous by nature, particularly in their interaction with other autonomous systems and if they are capable of self-learning.

Arms race and proliferation

Many fear that the development of LAWS could trigger a global arms race, and that nations will be unable to prevent proliferation over time because the technology is relatively cheap and easy to copy. This increases proliferation risks and could enable dictators, non-state armed actors, or terrorists to acquire fully autonomous weapons. Because fully autonomous weapons would react to and interact with one another at speeds beyond human control, they could also lead to accidental and rapid escalation of conflict.

In addition to proliferation, some people raise concerns about the domestic use of these weapons against populations and their use by terrorist groups. In 2014, AI specialist Steve Omohundro warned that “an autonomous weapons arms race is already taking place.” In 2017, the Future of Life Institute organized an open letter cautioning against the start of a potential arms race between the global powers, and in 2018 Elon Musk and other leading researchers signed the “Lethal Autonomous Weapons Pledge,” calling for a global ban on autonomous weapons.

Humanity in conflict: Ethical concerns

Many argue that a machine cannot replace human judgment and should not be allowed to decide over life and death. Making such decisions requires human attributes such as compassion and intuition, which robots do not possess. Allowing robots to decide on matters of human life goes against the principles of human dignity and the right to life; this decision cannot be left to an algorithm. Outsourcing this decision, they argue, would mean outsourcing morality. LAWS may be good at making quick and precise decisions, but they are not as good as human judgment at evaluating context.

Responsibility and accountability

Fully autonomous weapons create an accountability vacuum regarding who is responsible for an unlawful act. If an autonomous weapon carries out a lethal attack, who is responsible: the robot, the developer, or the military commander? As LAWS encompass many nodes in a military chain of responsibility, it may be difficult to pinpoint who is accountable, and there are fears that unclear accountability could lead to impunity. The law is addressed to humans, and legal responsibility and accountability cannot be transferred to a machine.
