Potential risks of AI in military applications


Artificial intelligence (AI) plays an increasing role in planning and supporting military operations. As a key tool for gathering intelligence and analyzing the enemy’s intelligence, AI could drive a dramatic evolution, perhaps even a transformation, in the character of war.

AI applications, frequently described as tools for jobs that are “dull, dirty, and dangerous,” offer a way to avoid endangering human lives or assigning humans to tasks that do not require human creativity. AI systems can also lower logistics and sensing costs while improving communication and transparency in complex systems.

National intelligence, surveillance, and reconnaissance capabilities have benefited significantly from AI-enabled systems and platforms. AI’s ability to assist in capturing, processing, storing, and analyzing visual and digital data has increased the quantity, quality, and accuracy of data available to decision-makers, who can use it for everything from optimizing equipment maintenance to minimizing civilian harm.

Potential Risks of Military AI

Artificial intelligence (AI) has the potential to change warfare in both positive and negative ways. It’s easy to think of AI technologies as primarily facilitating offensive operations, but they’ll also be useful for defensive operations. Because AI is a general-purpose technology, how it alters the offense-defense balance will vary depending on the specific application and may evolve over time.

The following are some general characteristics of AI and the risks associated with them, but keep in mind that these are only possibilities. Technology does not determine fate, and states can choose whether or not to use AI technology in these ways; the choices they make will determine how these risks manifest. A concerted effort to avoid these dangers may well succeed.

1. Accident Risk

In theory, automation has the potential to improve the precision of warfare and command and control over military forces, reducing civilian casualties and the risk of unintended escalation. Commercial airline autopilots have improved safety, and self-driving cars may follow suit in time. However, the difficulty of developing self-driving cars that are safe and reliable in all weather and driving conditions highlights AI’s current limitations, and driving or commercial flying is far less complex and adversarial than war.

Another issue militaries face is a lack of data on the battlefield environment. To develop self-driving cars that can withstand a wide range of driving conditions, Waymo has driven over 10 million miles on public roads and simulates another 10 million miles of driving in software every day, letting it test its cars across a huge variety of environments. Militaries, by contrast, have very little ground-truth data about wartime conditions against which to evaluate their systems. They can test AI systems in real-world exercises or digital simulations, but they won’t be able to observe actual performance in combat until wartime. Fortunately, wars are rare. That rarity, however, creates a problem for testing autonomous systems: in peacetime, militaries can try to replicate real operational conditions as closely as possible, but they will never be able to fully recreate the chaos and violence of war. Humans are adaptable and can be expected to improvise in wartime based on their prior training.

On the other hand, machine intelligence is not as adaptable as human intelligence. There’s a chance that military AI systems will perform well in training but fail in combat because the environment or operational context is different, even if only slightly. Failures could result in accidents or render military systems ineffective.
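To see how little the environment needs to change for performance to drop, here is a minimal sketch in Python, assuming nothing more than a toy classifier and synthetic two-dimensional “sensor” readings (none of it drawn from any real military system): a model trained under one set of conditions loses accuracy once those conditions drift at deployment.

```python
# Toy illustration with assumed, synthetic data: a classifier trained under
# "peacetime" conditions degrades when the conditions shift at "deployment".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Generate two classes of 2-D 'sensor readings'; `shift` moves the conditions."""
    x0 = rng.normal(loc=[0.0 + shift, 0.0], scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=[2.0 + shift, 2.0], scale=1.0, size=(n, 2))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

# Train on data that matches the conditions the system was developed in ...
X_train, y_train = make_data(2000, shift=0.0)
model = LogisticRegression().fit(X_train, y_train)

# ... then evaluate on matched conditions versus slightly shifted ones.
X_same, y_same = make_data(2000, shift=0.0)
X_drift, y_drift = make_data(2000, shift=2.0)

print(f"accuracy, matched conditions: {model.score(X_same, y_same):.2f}")    # ~0.92
print(f"accuracy, shifted conditions: {model.score(X_drift, y_drift):.2f}")  # ~0.75
```

The particular numbers don’t matter; the pattern does: a system that looks dependable under the conditions it was trained and tested in can degrade quietly when the operating environment changes.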

Accidents involving military AI systems could be disastrous. They have the potential to kill civilians or escalate a conflict unintentionally. Even if humans regained control, an incident in which enemy troops were killed could heighten tensions, inflame public opinion, and make it difficult for national leaders to back down from a crisis. Accidents, as well as vulnerabilities to hacking, could jeopardize crisis stability and complicate international escalation management.

2. Autonomy and Predelegated Authority

Even if AI systems perform flawlessly, nations may have difficulty predicting what actions they would want to take in a crisis. When deploying autonomous systems, humans delegate authority for certain actions to machines in advance. The problem is that leaders might prefer a different course once a real crisis arrives. During the Cuban Missile Crisis, US leaders decided that if the Soviet Union shot down a US reconnaissance plane over Cuba, they would retaliate; when a plane was actually shot down, they changed their minds. This is projection bias, a cognitive flaw in which people fail to accurately predict their own preferences in future situations. The danger is that autonomous systems will perform exactly as programmed but not as human leaders actually want, potentially escalating crises or conflicts.

3. Prediction and Overtrust in Automation

Keeping humans in the loop and limiting AI systems to an advisory role is not a cure-all for these dangers. Automation bias is the tendency for humans to place too much faith in machines. In 2003, human operators were in the loop for two fratricide incidents involving the highly automated US Patriot air and missile defense system, yet they could not prevent the deaths. In one famous psychological experiment, participants followed a robot the wrong way through a smoke-filled building simulating a fire emergency, even after being told the robot was broken.

Putting too much faith in machines could lead to accidents and miscalculations even before a war begins. In the 1980s, the Soviet Union ran Operation RYaN, an intelligence program intended to warn of a surprise nuclear attack by the United States. The program monitored a variety of potential attack indicators, including the amount of blood in blood banks, the location of nuclear weapons and key decision-makers, and the activities of national leaders. If AI systems could provide accurate early warning of a surprise attack, that could be stabilizing: knowing that a surprise attack could not succeed, nations would be less likely to attempt one. But prediction algorithms are only as good as the data used to train them, and for rare events like a surprise attack there simply isn’t enough data to determine what is indicative of an attack. Incorrect data will produce incorrect analysis, and the black-box nature of AI, with its internal reasoning hidden from human users, can obscure these problems. Without enough transparency to understand how an algorithm works, human users may not be able to see that its analysis has gone wrong.
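The rare-event problem can also be put in simple numerical terms. The sketch below uses Bayes’ rule with assumed figures chosen purely for illustration (they are not estimates of any real warning system): even a highly accurate early-warning algorithm produces almost entirely false alarms when the event it watches for almost never occurs.

```python
# Purely illustrative arithmetic with assumed rates (not from any real system):
# Bayes' rule shows that alarms from even an accurate warning system are almost
# all false when the underlying event is extremely rare.
p_attack = 1e-6                 # assumed chance of a surprise attack on a given day
p_alarm_given_attack = 0.99     # assumed true-positive rate of the warning system
p_alarm_given_no_attack = 0.01  # assumed false-positive rate

p_alarm = (p_alarm_given_attack * p_attack
           + p_alarm_given_no_attack * (1 - p_attack))
p_attack_given_alarm = p_alarm_given_attack * p_attack / p_alarm

print(f"P(real attack | alarm) = {p_attack_given_alarm:.6f}")  # roughly 0.0001
```

Under these assumed rates, roughly one alarm in ten thousand corresponds to a real attack, which is exactly the situation in which an opaque model and automation bias are most dangerous.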

4. Nuclear Stability Risks

All of these dangers are particularly serious in the case of nuclear weapons, where accidents, predelegated authority, or overconfidence in automation could have catastrophic consequences. False alarms in nuclear early warning systems, for example, could end in disaster. There were numerous nuclear false alarms and safety lapses with nuclear weapons throughout the Cold War. In one notable incident in 1983, the Soviet early warning satellite system Oko falsely detected the launch of five US intercontinental ballistic missiles against the Soviet Union. The satellites had picked up sunlight reflecting off cloud tops, but the automated system signaled “missile launch” to its human operators. Soviet Lieutenant Colonel Stanislav Petrov judged, correctly, that the system was malfunctioning, yet the complexity and opacity of AI systems could lead human operators to overtrust them during future false alarms. Other aspects of nuclear operations that use AI or automation could also be dangerous. Accidents involving nuclear-armed unmanned aircraft (drones) could result in states losing control of their nuclear payloads or inadvertently signaling escalation to an adversary.