
Many business leaders and experts have warned against the use of AI in a military environment. Elon Musk, founder of Tesla and SpaceX, was one of more than 100 AI experts who called on the UN to prevent the development of lethal autonomous weapons. Musk warned that AI could create “an immortal dictator from whom we could not escape” and that the technology could lead to a third world war.
While a recent RAND survey of AI and nuclear security experts suggests that AI will be widely used as an aid to decision-making in command-and-control platforms, we examine the possibility that an algorithm could provide compelling evidence that an incoming nuclear alarm is a false alarm, advocating restraint rather than confrontation.
Decision-makers at various levels of the nuclear command chain face two different forms of stress. The first arises in a crisis from information overload, time shortages, and chaos. The second is more general, deriving from moral dilemmas and the fear of causing enormous loss of life. AI and big-data analysis techniques have been used to address the first form of stress. The current US nuclear early warning system uses a “dual phenomenology” mechanism to speed up threat detection and simplify the information presented to decision-makers. The early warning system uses advanced satellites and radars to confirm and track an enemy missile almost immediately after launch. In a nuclear attack, the various military and administrative personnel in the command chain would be informed gradually as the threat was analyzed, until the president was finally notified. This structure significantly reduces the overload and chaos decision-makers face in a crisis.
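To make the dual-phenomenology idea concrete, here is a minimal sketch, in Python, of the confirmation logic it implies: an alert escalates only when two independent sensor types both register the launch. The report schema, field names, and confidence threshold are invented for illustration and do not describe the actual system.

```python
from dataclasses import dataclass

@dataclass
class SensorReport:
    """A single detection report from one sensor type (hypothetical schema)."""
    sensor_type: str   # e.g. "satellite_ir" or "ground_radar"
    detection: bool    # did this sensor register a launch signature?
    confidence: float  # sensor-assigned confidence in [0, 1]

def dual_phenomenology_check(reports: list[SensorReport],
                             threshold: float = 0.9) -> bool:
    """Escalate an alert only if two independent sensor types both confirm.

    A warning based on a single phenomenon (only satellites, or only radar)
    is not treated as a confirmed attack.
    """
    confirmed_types = {
        r.sensor_type for r in reports
        if r.detection and r.confidence >= threshold
    }
    return {"satellite_ir", "ground_radar"}.issubset(confirmed_types)

# Example: radar confirms, but the satellite track is weak -> no escalation.
reports = [
    SensorReport("ground_radar", True, 0.97),
    SensorReport("satellite_ir", True, 0.62),
]
print(dual_phenomenology_check(reports))  # False
```

The design point is independence: a fault in one phenomenology, such as a radar glitch or a satellite misreading, cannot by itself trigger escalation.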
However, the system also reduces the role of the decision-maker to “just support the sensors’ claims and communications systems that a massive attack is indeed on.” Although the advanced technologies and data-processing techniques in the early warning system reduce the occurrence of false alerts, they do not eliminate the possibility that they will occur. Future AI applications for nuclear command and control should aspire to create an algorithm that might argue that a nuclear war is not happening, even in the face of an overwhelming fear of an imminent attack. In addition to purely technological analysis, such an algorithm could verify the authenticity of an alert from other, more diverse perspectives. Incorporating that element into the nuclear warning process may help address the second form of stress and reassure decision-makers that a chosen course of action is valid and justified.
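To give a rough sense of what “arguing that a nuclear war is not happening” could mean computationally, the following sketch applies Bayes’ rule with invented numbers: when the prior probability of a surprise attack is very low, even an alert from a system with a small false-alarm rate implies only a modest posterior probability that the attack is real. The figures are assumptions for illustration, not estimates of any actual system.

```python
def posterior_attack_probability(prior_attack: float,
                                 likelihood_given_attack: float,
                                 likelihood_given_false_alarm: float) -> float:
    """Bayes' rule: probability of a real attack given that an alert fired."""
    p_evidence = (likelihood_given_attack * prior_attack
                  + likelihood_given_false_alarm * (1.0 - prior_attack))
    return likelihood_given_attack * prior_attack / p_evidence

# Illustrative numbers only: a very low prior for a genuine surprise attack,
# sensors that almost always fire during a real attack, and a small but
# non-zero false-alarm rate.
prior_attack = 1e-5
p_alert_given_attack = 0.99
p_alert_given_false_alarm = 0.001

posterior = posterior_attack_probability(prior_attack,
                                         p_alert_given_attack,
                                         p_alert_given_false_alarm)
print(f"P(real attack | alert) = {posterior:.3f}")  # roughly 0.010
```

The same update could, in principle, fold in the contextual indicators mentioned above, for example the diplomatic climate or the plausibility of the reported attack size, as additional evidence terms.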
Nevertheless, these advances in nuclear control do not tackle the second form of stress, which arises from the fear of nuclear war and the accompanying moral barriers. How can AI reduce it? History reminds us that technological sophistication alone has not prevented accidental confrontations involving atomic weapons. Instead, such confrontations were avoided by individuals who offered alternative explanations for warnings despite what state-of-the-art technology indicated. Working under the most demanding conditions, they trusted their instinct that the evidence of an imminent nuclear attack was misleading. They chose to depart from established protocol because they feared that a mistake would lead to an accidental nuclear war.
The 1983 nuclear false alarm incident is noteworthy as a cautionary tale for AI’s development after the Cold War. In 1983, Soviet military officer Stanislav Petrov received a computer warning that several missiles had been launched by the USA. The warning was false, and Petrov, who passed away last year, has since been credited as the man who saved the world from nuclear devastation.
Fred Iklé once remarked, “if any witness should come here and tell you that a totally reliable and safe launch on warning posture can be designed and implemented that man is a fool.” If so, how close can AI bring us to safe and reliable nuclear control? AI-enabled systems can aspire to reduce some of the mechanical and human errors in nuclear control. Prior false alerts and failures in early warning systems could serve as training data for AI algorithms and as benchmarks for quickly testing the accuracy of a new early warning alert. But speed and precision should not be the sole objectives of integrating AI into military systems; it should also help policymakers exercise judgment and prudence in order to prevent accidental disasters.
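One way to read the benchmarking suggestion is sketched below: documented false-alarm incidents are replayed through a candidate assessment function, and the benchmark score is simply the fraction it would have correctly labelled as false. The feature values and the toy assessment rule are placeholders; building a real benchmark would require far richer case data.

```python
# Minimal benchmarking sketch: replay known false-alarm incidents through a
# candidate alert-assessment function. Feature values below are invented.
historical_false_alarms = [
    # 1979 NORAD training-tape incident (illustrative features only)
    {"id": "1979_training_tape", "independent_confirmations": 0, "track_count": 250},
    # 1983 Soviet satellite glitch reported by Petrov (illustrative features only)
    {"id": "1983_oko_satellite", "independent_confirmations": 0, "track_count": 5},
]

def assess_alert(case: dict) -> str:
    """Toy assessment rule: without independent confirmation, treat as false alarm."""
    return "attack" if case["independent_confirmations"] >= 2 else "false_alarm"

def benchmark(assessor, cases) -> float:
    """Fraction of known false alarms the assessor correctly labels as false."""
    hits = sum(1 for c in cases if assessor(c) == "false_alarm")
    return hits / len(cases)

print(benchmark(assess_alert, historical_false_alarms))  # 1.0 on this toy set
```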