For Humanity: An AI Risk Podcast

Episode #28 - “AI Safety Equals Emergency Preparedness”



Please Donate Here To Help Promote For Humanity


https://www.paypal.com/paypalme/forhumanitypodcast


BIG IDEA ALERT: This week’s show has something really big and really new. What if AI Safety didn’t have to carve out a new space in government? What if it could fit into already existing budgets? Emergency Preparedness, in the post-9/11 era, is a massively well-funded area of federal and state government here in the US. There are agencies, organizations, and big budgets already in place to fund the prevention of and recovery from disasters of all kinds: asteroid strikes, pandemics, climate disasters, terrorist attacks, and the list goes on and on.


This week’s guest, AI Policy Researcher Akash Wasil, has had more than 80 meetings with congressional staffers about AI existential risk. In the Episode 28 trailer, he goes over his framing of AI Safety as Emergency Preparedness, the US vs. China race dynamic, and the vibes on Capitol Hill about AI risk. What does Congress think of AI risk?


This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together the existing facts and underscores the unthinkable but probable outcome: the end of all life on Earth.


For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.


Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as two years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.


JOIN THE PAUSE AI PROTEST, MONDAY, MAY 13TH


https://pauseai.info/2024-may


TIMESTAMPS:


The meetings with congressional staffers (00:00:00) Akash discusses his experiences and strategies for engaging with congressional staffers and policymakers regarding AI risks and national security threats.


Understanding AI risks and national security (00:00:14) Akash highlights the interest and enthusiasm among policymakers to learn more about AI risks, particularly in the national security space.


Messaging and communication strategies (00:01:09) Akash emphasizes the importance of making less intuitive threat models understandable and getting the time of day from congressional offices.


Emergency preparedness in AI risk (00:02:45) Akash introduces the concept of emergency preparedness in the context of AI risk and its relevance to government priorities.


Preparedness approach to uncertain events (00:04:17) Akash discusses the preparedness approach to dealing with uncertain events and the significance of having a playbook in place.


Prioritizing AI in national security (00:06:08) Akash explains the strategic prioritization of engaging with key congressional offices focused on AI in the context of national security.


Policymaker concerns and China's competitiveness (00:07:03) Akash addresses the predominant concern among policymakers about China's competitiveness in AI and its impact on national security.


AI development and governance safeguards (00:08:15) Akash emphasizes the need to raise awareness of misalignment and loss-of-control threats from AI research and development, even amid concerns about China's competitiveness.


RESOURCES:


JOIN THE FIGHT, help Pause AI!!!!


Pause AI


Join the Pause AI Weekly Discord Thursdays at 2pm EST




https://discord.com/invite/pVMWjddaW7


22-Word Statement from the Center for AI Safety


Statement on AI Risk | CAIS


https://www.safe.ai/work/statement-on-ai-risk


Best Account on Twitter: AI Notkilleveryoneism Memes 


https://twitter.com/AISafetyMemes





This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com