For Humanity: An AI Risk Podcast

Episode #25 - “Does The AI Safety Movement Have It All Wrong?” Dr. Émile Torres Interview, For Humanity: An AI Safety Podcast




FULL INTERVIEW STARTS AT (00:08:20)


DONATE HERE TO HELP PROMOTE THIS SHOW


https://www.paypal.com/paypalme/forhumanitypodcast


In episode #25, host John Sherman and Dr. Émile Torres explore humanity's future and the rise of artificial general intelligence (AGI) and machine superintelligence. Dr. Torres lays out their view that the AI safety movement has it all wrong on existential threat, questioning the effectiveness of AI safety research and the true intentions of companies like OpenAI. Dr. Torres supports a full "stop AI" movement, doubting the benefits of pursuing such powerful AI technologies and highlighting the potential for catastrophic outcomes if AI systems become misaligned with human values. The discussion also touches on the urgency of solving the AI control problem to avoid human extinction.


Émile P. Torres is a philosopher whose research focuses on existential threats to civilization and humanity. They have published widely in the popular press and scholarly journals, with articles appearing in the Washington Post, Aeon, Bulletin of the Atomic Scientists, Metaphilosophy, Inquiry, Erkenntnis, and Futures.


This podcast is not journalism. But it's not opinion either. This show simply strings together the existing facts and underscores the unthinkable but probable outcome: the end of all life on Earth.


For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.


Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.


TIMESTAMPS:


**The definition of human extinction and AI Safety Podcast Introduction (00:00:00)**


**Paul Christiano's perspective on AI risks and debate on AI safety (00:03:51)**


**Interview with Dr. Émile Torres on transhumanism, AI safety, and historical perspectives (00:08:17)**


**Challenges to AI safety concerns and the speculative nature of AI arguments (00:29:13)**


**AI's potential catastrophic risks and comparison with climate change (00:47:49)**


**Defining intelligence, AGI, and unintended consequences of AI (00:56:13)**


**Catastrophic Risks of Advanced AI and perspectives on AI Safety (01:06:34)**


**Inconsistencies in AI Predictions and the Threats of Advanced AI (01:15:19)**


**Curiosity in AGI and the ethical implications of building superintelligent systems (01:22:49)**


**Challenges of discussing AI safety and effective tools to convince the public (01:27:26)**


**Tangible harms of AI and hopeful perspectives on the future (01:37:00)**


**Parental instincts and the need for self-sacrifice in AI risk action (01:43:53)**




RESOURCES:


THE TWO MAIN PAPERS ÉMILE DRAWS ON TO MAKE THEIR CASE:


Against the singularity hypothesis by David Thorstad:


https://philpapers.org/archive/THOATS-5.pdf


Challenges to the Omohundro–Bostrom framework for AI motivations by Olle Häggström: https://www.math.chalmers.se/~olleh/ChallengesOBframeworkDeanonymized.pdf


Paul Christiano on Bankless


How We Prevent the AI’s from Killing us with Paul Christiano


Émile Torres's Truthdig articles:


https://www.truthdig.com/author/emile-p-torres/


Émile Torres's book, Human Extinction: A History of the Science and Ethics of Annihilation (Routledge): https://www.amazon.com/Human-Extinction-Annihilation-Routledge-Technology/dp/1032159065


Best Account on Twitter: AI Notkilleveryoneism Memes 


https://twitter.com/AISafetyMemes


JOIN THE FIGHT, help Pause AI!!!!


Pause AI





This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com