


Please Donate Here To Help Promote This Show
https://www.paypal.com/paypalme/forhumanitypodcast
In episode #26 TRAILER, host John Sherman and Pause AI US Founder Holly Elmore talk about AI risk. They discuss how AI surprised everyone by advancing so fast, what it’s like for employees at OpenAI working on safety, and why it’s so hard for people to imagine what they can’t imagine.
This podcast is not journalism. But it's not opinion either. This show simply strings together the existing facts and underscores the unthinkable, probable outcome: the end of all life on Earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as two years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
TIMESTAMPS:
The surprise of rapid progress in AI (00:00:00) A former OpenAI employee's perspective on the unexpected speed of AI development and its impact on safety.
Concerns about OpenAI's focus on safety (00:01:00) The speaker's decision to start his own company, driven by insufficient focus on safety within OpenAI and a belief that advancing AI technology is inevitable.
Differing perspectives on AI risks (00:01:53) Discussion of the urgency of and approach to AI development, including skepticism and the limits of human imagination in grasping AI risks.
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
