Podcast: AI and Nuclear Weapons – Trust, Accidents, and New Risks with Paul Scharre and Mike Horowitz
In 1983, Soviet military officer Stanislav Petrov prevented what could have been a devastating nuclear war by trusting his gut instinct that his early-warning system had wrongly detected incoming missiles. In that case, we praise Petrov for choosing human judgment over the automated system in front of him. But what will happen as the AI algorithms deployed in the nuclear sphere become more advanced, accurate, and difficult to understand? Will the next officer in Petrov’s position be more likely to trust the “smart” machine in front of him?
On this month’s podcast, Ariel spoke with Paul Scharre and Mike Horowitz from the Center for a New American Security about the role of automation in the nuclear sphere, and how the proliferation of AI technologies could change nuclear posturing and the effectiveness of deterrence. Paul is a former Pentagon policy official and the author of Army of None: Autonomous Weapons in the Future of War. Mike is a professor of political science at the University of Pennsylvania and the author of The Diffusion of Military Power: Causes and Consequences for International Politics.
Topics discussed in this episode include:
- The sophisticated military robots developed by the Soviets during the Cold War
- How technology shapes human decision-making in war
- “Automation bias” and why having a “human in the loop” is much trickier than it sounds
- The United States’ stance on automation with nuclear weapons
- Why weaker countries might have more incentive to build AI into warfare
- How the US and Russia perceive first-strike capabilities
- “Deep fakes” and other ways AI could sow instability and provoke crisis
- The multipolar nuclear world of the US, Russia, China, India, Pakistan, and North Korea
- The perceived obstacles to reducing nuclear arsenals
Publications discussed in this episode include:
- Treaty on the Prohibition of Nuclear Weapons
- Scott Sagan’s book The Limits of Safety: Organizations, Accidents, and Nuclear Weapons
- Phil Reiner on “deep fakes” and preventing nuclear catastrophe
- RAND Report: How Might Artificial Intelligence Affect the Risk of Nuclear War?
- SIPRI’s grant from the Carnegie Corporation on emerging threats in nuclear stability
You can listen to the podcast above and read the full transcript. Check out our previous podcast episodes on SoundCloud, iTunes, Google Play, and Stitcher.