From the A-bomb to the AI bomb: nuclear weapons’ problematic evolution, via France 24

At 2:26 A.M. on June 3, 1980, Zbigniew Brzezinski, US President Jimmy Carter’s famously hawkish national security adviser, received a terrifying phone call: 220 Soviet nuclear missiles were heading for the US. A few minutes later, a second call updated the figure: in reality, 2,200 missiles were flying towards the US.

Eventually, as Brzezinski was about to warn Carter of the impending doom, military officials realised that it was a gargantuan false alarm caused by a malfunctioning automated warning system. The Cold War thus came close to ending in apocalypse because of a single faulty computer chip.

This was long before artificial intelligence (AI) rose to prominence. But the Americans and Soviets had already begun introducing algorithms into their control rooms to make their nuclear deterrents more effective. However, several incidents – most notably that of June 3, 1980 – show the risks of relying on such automation.

[…]

The dark side of AI in nuclear weapons

There is, however, a very dark side to AI. By its nature, it implies delegating decision-making from humans to machines – which carries serious “moral and ethical” implications, noted Page Stoutland, vice-president of the American NGO Nuclear Threat Initiative, which contributed to the SIPRI report.

On this basis, “the guiding principle of respect for human dignity dictates that machines should generally not be making life-or-death decisions”, argued Frank Sauer, a nuclear weapons specialist at the University of Munich, in the SIPRI study. “Countries need to take a clear stance on this” so that robotic hands never end up on the red button.

What’s more, algorithms are created by humans and, as such, can reinforce the prejudices of their creators. In the US, several studies have shown that AI used in the criminal justice system to predict reoffending is “racist”. “It is therefore impossible to exclude a risk of inadvertent escalation or at least of instability if the algorithm misinterprets and misrepresents the reality of the situation,” pointed out Jean-Marc Rickli, a researcher at the Geneva Centre for Security Policy, in the SIPRI report.

Risk of accidental use

Artificial intelligence also risks upsetting the delicate balance between the nuclear powers, warned Michael Horowitz, a defence specialist at the University of Pennsylvania, in the SIPRI study: “An insecure nuclear-armed state would therefore be more likely to automate nuclear early-warning systems, use unmanned nuclear delivery platforms or, due to fear of rapidly losing a conventional war, adopt nuclear launch postures that are more likely to lead to accidental nuclear use or deliberate escalation.” In other words, the US – which holds one of the world’s largest nuclear stockpiles – is likely to be more cautious in adopting AI than a lesser nuclear power such as Pakistan.

In short, artificial intelligence is a double-edged sword when applied to nuclear weapons. In certain respects, it could help to make the world safer. But it needs to be adopted “in a responsible way, and people need to take time to identify the risks associated with AI, as well as pre-emptively solving its problems”, Boulanin concluded.

One sobering comparison might be with the financial services industry. Bankers used the same arguments to introduce AI into their sector – promises of speed and reliability – as those made by its advocates in the nuclear weapons field. Yet the use of AI in trading rooms has contributed to some very unpleasant stock market crashes. And of course, nuclear weapons would give AI far more to play with than mere money.

Read more.

