


For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
The makers of AI have no idea how to control their technology or why it does what it does. And yet they keep making it faster and stronger. In episode one we introduce the two biggest unsolved problems in AI safety: alignment and interpretability.
This podcast is your wake-up call, and a real-time, unfolding plan of action.
By The AI Risk Network
