


In the Episode #16 trailer, "AI Risk Denier Down," things get weird.
This show did not have to be like this. Our guest in Episode #16 is Timothy Lee, a computer scientist and journalist who founded and runs understandingai.org. Tim has written about AI risk many times, including these two recent essays:
https://www.understandingai.org/p/why-im-not-afraid-of-superintelligent
https://www.understandingai.org/p/why-im-not-worried-about-ai-taking
Tim was not prepared to discuss this work, which is when things started to go off the rails.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans; no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
By The AI Risk Network