
This week on Warning Shots, John Sherman, Liron Shapira (Doom Debates), and Michael (Lethal Intelligence) dive into one of the most important AI safety moments yet — the launch of If Anyone Builds It, Everyone Dies, the new book by Eliezer Yudkowsky and Nate Soares.
We discuss why this book could be a turning point in public awareness, what makes its arguments so accessible, and how it could spark both grassroots and political action to prevent catastrophe.
Highlights include:
* Why simplifying AI risk is the hardest and most important task
* How parables and analogies in the book make “doom logic” clear
* What ripple effects one powerful message can create
* The political and grassroots leverage points we need now
* Why media often misses the urgency — and why we can’t
This isn’t just another episode — it’s a call to action. The book launch could be a defining moment for the AI safety movement.
🔗 Links & Resources
🌍 Learn more about AI extinction risk: https://www.safe.ai
📺 Subscribe to our channel for more episodes: https://www.youtube.com/@TheAIRiskNetwork
💬 Follow the hosts:
Liron Shapira (Doom Debates): www.youtube.com/@DoomDebate
Michael (Lethal Intelligence): www.youtube.com/@lethal-intelligence
#AIRisks #AIExtinctionRisk