For Humanity: An AI Risk Podcast

Episode #23 TRAILER: “AI Acceleration Debate” | For Humanity: An AI Safety Podcast



Suicide or Salvation? In the episode #23 TRAILER, AI risk realist John Sherman and accelerationist Paul Leszczynski debate AI accelerationism and the existential risks and benefits of AI, questioning the AI safety movement and discussing the idea of AI as humanity's child. They consider whether AI should be aligned with human values, and what the consequences of such alignment might be. Paul suggests that AI safety efforts could inadvertently lead to the very dangers they aim to prevent. The conversation touches on the philosophy of accelerationism and the influence of human conditioning on our understanding of AI.


This podcast is not journalism. But it’s not opinion either. This show simply strings together existing facts and underscores the unthinkable but probable outcome: the end of all life on Earth.


For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.


Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as two years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.


TIMESTAMPS:


Is AI an existential threat to humanity? (00:00:00) Debate on the potential risks of AI and its impact on humanity.


The AI safety movement (00:00:42) Discussion on the perception of AI safety as a religion and the philosophy of accelerationism.


Human conditioning and perspectives on AI (00:02:01) Exploration of how human conditioning shapes perspectives on AI and the concept of AGI as a human creation.


Aligning AI and human values (00:04:24) Debate on the dangers of aligning AI with human ideologies and the potential implications for humanity.


RESOURCES:


Paul’s YouTube Channel: Accel News Network


Best Account on Twitter: AI Notkilleveryoneism Memes 


JOIN THE FIGHT, help Pause AI!!!!


Pause AI


Join the Pause AI Weekly Discord Thursdays at 3pm EST




22 Word Statement from Center for AI Safety


Statement on AI Risk | CAIS






This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

