In this episode, I’m joined by Erin D. Dumbacher, Stanton Nuclear Security Senior Fellow at the Council on Foreign Relations, to discuss her recent Foreign Affairs article “How Deepfakes Could Lead to Doomsday.” We explore how deepfakes and AI hallucinations could infiltrate the systems designed to prevent nuclear catastrophe — and why the story of Stanislav Petrov, a Soviet officer who in 1983 chose not to trust his screen when it told him American nukes were incoming, matters more now than ever.
We also dig into the Pentagon’s rush to integrate AI across the Department of Defense, why AI trained on nuclear threats has almost no real data to learn from, and why the most dangerous scenario isn’t a machine launching a weapon — it’s bad information cascading through a decision-making chain where the president has minutes, not hours, to act.
We discuss:
* Deepfakes of Zelensky and Putin during the Ukraine war — and what could have gone differently
* The Pentagon’s GenAI.mil rollout and the risks of moving too fast
* AI hallucinations vs. social media deepfakes — two distinct threats to nuclear stability
* The data problem: training AI when we’ve only had two nuclear attacks in history
* How misinformation could trigger cascading crises under compressed nuclear timelines
* Reforms to U.S. nuclear launch authority in an era where information can’t be taken at face value
By Ashmita Rajmohan