
Let’s face it: in the long run, there’s either going to be safe AI or no AI. There is no future with powerful unsafe AI and human beings. In this episode of For Humanity, John Sherman speaks with Professor Stuart Russell — one of the world’s foremost AI pioneers and co-author of Artificial Intelligence: A Modern Approach — about the terrifying honesty of today’s AI leaders.
Russell reveals that the CEO of a major AI company told him his best hope for a good future is a “Chernobyl-scale AI disaster.” Yes — one of the people building advanced AI believes only a catastrophic warning shot could wake up the world in time. John and Stuart dive deep into the psychology, politics, and incentives driving this suicidal race toward AGI.
They discuss:
* Why even AI insiders are losing faith in control
* What a “Chernobyl moment” could actually look like
* Why regulation isn’t anti-innovation — it’s survival
* The myth that America is “allergic” to AI rules
* How liability, accountability, and provable safety could still save us
* Whether we can ever truly coexist with a superintelligence
This is one of the most urgent conversations ever hosted on For Humanity. If you care about your kids’ future — or humanity’s — don’t miss this one.
🎙️ About For Humanity: A podcast from the AI Risk Network, hosted by John Sherman, making AI extinction risk a kitchen-table conversation on every street.
📺 Subscribe for weekly conversations with leading scientists, policymakers, and ethicists confronting the AI extinction threat.
#AIRisk #ForHumanity #StuartRussell #AIEthics #AIExtinction #AIGovernance #ArtificialIntelligence #AIDisaster #GuardRailNow
By The AI Risk Network · 4.4 (88 ratings)