For Humanity: An AI Risk Podcast

Episode #24 - “YOU can help save the world from AI Doom”



In episode #24, host John Sherman and Nonlinear co-founder Kat Woods discuss the critical need to prioritize AI safety as development of superintelligent AI advances. Kat shares her personal transformation from skeptic to AI safety advocate, and the two explore the idea that AI could pose a near-term threat rather than just a long-term concern.


They also discuss why AI safety deserves priority over other philanthropic endeavors and why talented individuals are needed to work on the issue. Kat highlights ways AI could harm humanity, such as creating super viruses or starting a nuclear war, and they address common misconceptions, including the belief that AI will need humans or that it will be human-like.


Throughout, the conversation emphasizes the urgency of addressing AI safety and the need for greater awareness and action. The speakers highlight the ethical concerns of creating AI that could suffer and the moral responsibility we would have toward such beings, and they stress the importance of funding AI safety research and of better regulation. The episode ends on a hopeful note, with both expressing optimism about the growing awareness of and concern about AI safety.


This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable but probable outcome: the end of all life on Earth.


For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.


Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as two years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and look at what you can do to help save humanity.


TIMESTAMPS:


AI Safety Urgency (00:00:00) Emphasizing the immediate need to focus on AI safety.


Superintelligent AI World (00:00:50) Considering the impact of AI smarter than humans.


AI Safety Charities (00:02:37) The necessity for more AI safety-focused charities.


Personal AI Safety Advocacy Journey (00:10:10) Kat Woods' transformation into an AI safety advocate.


AI Risk Work Encouragement (00:16:03) Urging skilled individuals to tackle AI risks.


AI Safety's Global Impact (00:17:06) AI safety's pivotal role in global challenges.


AI Safety Prioritization Struggles (00:18:02) The difficulty of making AI safety a priority.


Wealthy Individuals and AI Safety (00:19:55) Challenges for the wealthy in focusing on AI safety.


Superintelligent AI Threats (00:23:12) Potential global dangers posed by superintelligent AI.


Limits of Imagining Superintelligent AI (00:28:02) The struggle to fully grasp superintelligent AI's capabilities.


AI Containment Risks (00:32:19) The problem of effectively containing AI.


AI's Human-Like Risks (00:33:53) Risks of AI with human-like qualities.


AI Dangers (00:34:20) Potential ethical and safety risks of AI.


AI Ethical Concerns (00:37:03) Ethical considerations in AI development.


Nonlinear's Role in AI Safety (00:39:41) Nonlinear's contributions to AI safety work.


AI Safety Donations (00:41:53) Guidance on supporting AI safety financially.


Effective Altruism and AI Safety (00:49:43) The relationship between effective altruism and AI safety.


AI Safety Complexity (00:52:12) The intricate nature of AI safety issues.


AI Superintelligence Urgency (00:53:52) The critical timing and power of AI superintelligence.


AI Safety Work Perception (00:56:06) Changing the image of AI safety efforts.


AI Safety and Government Regulation (00:59:23) The potential for regulatory influence on AI safety.


Entertainment's AI Safety Role (01:04:24) How entertainment can promote AI safety awareness.


AI Safety Awareness Progress (01:05:37) Growing recognition and response to AI safety.


AI Safety Advocacy Funding (01:08:06) The importance of financial support for AI safety advocacy.


Effective Altruists and Rationalists Views (01:10:22) The stance of effective altruists and rationalists on AI safety.


AI Risk Marketing (01:11:46) The case for using marketing to highlight AI risks.


RESOURCES:


Nonlinear: https://www.nonlinear.org/


Best Account on Twitter: AI Notkilleveryoneism Memes 


JOIN THE FIGHT, help Pause AI!!!!


Pause AI


Join the Pause AI Weekly Discord Thursdays at 3pm EST




22 Word Statement from Center for AI Safety


Statement on AI Risk | CAIS





This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

For Humanity: An AI Risk Podcast, by The AI Risk Network

Rated 4.4 (8 ratings)

