For Humanity: An AI Risk Podcast

Episode #23 - “AI Acceleration Debate” For Humanity: An AI Safety Podcast



FULL INTERVIEW STARTS AT (00:22:26)




e/acc: Suicide or Salvation? In Episode #23, AI risk realist John Sherman and accelerationist Paul Leszczynski debate AI accelerationism and the existential risks and benefits of AI, question the AI safety movement, and discuss the concept of AI as humanity's child. They talk about whether AI should be aligned with human values and the potential consequences of alignment. Paul holds some wild views, including that AI safety efforts could inadvertently bring about the very dangers they aim to prevent. The conversation also touches on the philosophy of accelerationism and how human conditioning shapes our understanding of AI.


This podcast is not journalism. But it's not opinion either. This show simply strings together the existing facts and underscores the unthinkable but probable outcome: the end of all life on Earth.


For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.


Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within as little as two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.


TIMESTAMPS:


TRAILER (00:00:00)


INTRO (00:05:40)


INTERVIEW: 


Paul Leszczynski Interview (00:22:36) John Sherman interviews AI advocate Paul Leszczynski.


YouTube Channel Motivation (00:24:14) Leszczynski's reasons for running a pro-acceleration YouTube channel.


AI Threat Viewpoint (00:28:24) Leszczynski on AI as an existential threat.


AI Impact Minority Opinion (00:32:23) Leszczynski's take on the minority view of AI's impact.


Tech Regulation Need (00:33:03) Regulatory oversight on tech startups debated.


Post-2008 Financial Regulation (00:34:16) Financial regulation effects and big company influence discussed.


Tech CEOs' Misleading Claims (00:36:31) Tech CEOs' public statement intentions.


Social Media Influence (00:38:09) Social media's advertising effectiveness.


AI Risk Speculation (00:41:32) Potential AI risks and regulatory impact.


AI Safety Movement Integrity (00:43:53) AI safety movement's motives challenged.


AI Alignment: Business or Moral? (00:47:27) AI alignment as business or moral issue.


AI Doomsday Believer Types (00:53:27) Four types of AI doomsday believers.


AI Doomsday Belief Authenticity (00:54:22) Are AI doomsday believers genuine?


Geoffrey Hinton's AI Regret (00:57:24) Hinton's regret over AI work.


AI's Self-Perception (00:58:57) Will AI see itself as part of humanity?


AGI's Conditioning Debate (01:00:22) AGI's training vs. human-like start.


AGI's Independent Decisions (01:11:33) Risks of AGI's autonomous actions.


AGI's View on Humans (01:15:47) AGI's potential post-singularity view of humans.


AI Safety Criticism (01:16:24) Critique of AI safety assumptions.


AI Engineers' Concerns (01:19:15) AI engineers' views on AI's dangers.


AGI's Training Impact (01:31:49) Effect of AGI's training data origin.


AI Development Cap (01:32:34) Theoretical limit of AI intelligence.


Intelligence Types (01:33:39) Intelligence beyond academics.


AGI's National Loyalty (01:40:16) AGI's allegiance to its creator nation.


Tech CEOs' Trustworthiness (01:44:13) Tech CEOs' trust in AI development.


Reflections on Discussion (01:47:12) Thoughts on the AI risk conversation.


Next Guest & Engagement (01:49:50) Introduction of next guest and call to action.


RESOURCES:


Paul’s Nutty YouTube Channel: Accel News Network


Best Account on Twitter: AI Notkilleveryoneism Memes 


JOIN THE FIGHT, help Pause AI!!!!


Pause AI


Join the Pause AI Weekly Discord Thursdays at 3pm EST




22 Word Statement from Center for AI Safety


Statement on AI Risk | CAIS





This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

