
Episode #3: The Interpretability Problem. In this episode, AI Safety researchers including Eliezer Yudkowsky, Max Tegmark, Connor Leahy, and many more discuss how current AI systems are black boxes: no one knows how they work inside.
For Humanity: An AI Safety Podcast is the accessible AI Safety podcast for all humans; no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
By The AI Risk Network
4.4 • 88 ratings