

Who are the most dangerous "doomers" in AI? It's the people bringing the doom threat to the world, not the people calling them out for it.
In Episode #8 TRAILER, host John Sherman points fingers and lays blame. How is it possible we're seriously discussing a zero-humans-on-earth future? Meet the people making it happen, the real doomers.
This podcast is not journalism. But it's not opinion either. This show simply strings together the existing facts and underscores the unthinkable but probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans; no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years.
This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
#samaltman #darioamodei #yannlecun #ai #aisafety
By The AI Risk Network · 4.4 (88 ratings)
