For Humanity: An AI Risk Podcast

Episode #27 - “1800 Mile AGI Protest Road Trip” For Humanity: An AI Safety Podcast




Please Donate Here To Help Promote This Show


https://www.paypal.com/paypalme/forhumanitypodcast


In episode #27, host John Sherman interviews Jon Dodd and Rev. Trevor Bingham of the World Pause Coalition about their recent road trip to San Francisco to protest outside the gates of OpenAI headquarters. A group of six people drove 1800 miles to be there. We hear firsthand what happens when OpenAI employees meet AI risk realists.


This podcast is not journalism. But it's not opinion either. It is a long-form public service announcement. The show simply strings together the existing facts and underscores the unthinkable, probable outcome: the end of all life on Earth.


For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans; no tech background required. Our show focuses solely on the threat of human extinction from AI.


Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.


JOIN THE PAUSE AI PROTEST MONDAY MAY 13TH


https://pauseai.info/2024-may


TIMESTAMPS:


The Protest at OpenAI (00:00:00) Discussion of the non-violent protest at OpenAI headquarters and the response from employees.


The Road Trip to Protest (00:09:31) Description of the road trip to San Francisco for a protest at OpenAI, including a video of the protest and interactions with employees.


Formation of the World Pause Coalition (00:15:07) Introduction to the World Pause Coalition and its mission to raise awareness about AI and superintelligence.


Challenges and Goals of Protesting (00:18:31) Exploration of the challenges and goals of protesting AI risks, including education, government pressure, and environmental impact.


The Smaller Countries' Stakes (00:22:53) Highlighting the importance of smaller countries' involvement in AI safety negotiations and protests.


San Francisco Protest (00:25:29) Discussion about the experience and impact of the protest at the OpenAI headquarters in San Francisco.


Interactions with OpenAI Workers (00:26:56) Insights into the interactions with OpenAI employees during the protest, including their responses and concerns.


Different Approaches to Protesting (00:41:33) Exploration of peaceful protesting as the preferred approach, contrasting with more extreme methods used by other groups.


Embrace Safe AI (00:43:47) Discussion about finding a position for the company that aligns with concerns about AI and the need for safe AI.


Suffering Risk (00:48:24) Exploring the concept of suffering risk associated with superintelligence and the potential dangers of AGI.


Religious Leaders' Role (00:52:39) Discussion on the potential role of religious leaders in raising awareness and mobilizing support for AI safety.


Personal Impact of AI Concerns (01:03:52) Reflection on the personal weight of understanding AI risks and maintaining hope for a positive outcome.


Finding Catharsis in Taking Action (01:08:12) How taking action to help feels cathartic and alleviates the weight of the issue.


Weighing the Impact on Future Generations (01:09:18) The heavy burden of concern for future generations and the motivation to act for their benefit.


RESOURCES:


Best Account on Twitter: AI Notkilleveryoneism Memes 


https://twitter.com/AISafetyMemes


JOIN THE FIGHT, help Pause AI!!!!


Pause AI


Join the Pause AI Weekly Discord Thursdays at 2pm EST




https://discord.com/invite/pVMWjddaW7


22 Word Statement from Center for AI Safety


Statement on AI Risk | CAIS


https://www.safe.ai/work/statement-on-ai-risk



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com