For Humanity: An AI Risk Podcast

Episode #22 - “Sam Altman: Unelected, Unvetted, Unaccountable”



In Episode #22, host John Sherman critically examines Sam Altman's role as CEO of OpenAI, focusing on the ethical and safety challenges of AI development. The discussion critiques Altman's lack of public accountability and the risks his decisions pose to humanity. Concerns are raised about the governance of AI, the potential for AI to cause harm, and the need for safety measures and regulations. The episode also explores the societal impact of AI, the possibility of AI affecting the physical world, and the importance of public awareness and engagement in AI risk discussions. Overall, the episode emphasizes the urgency of responsible AI development and the crucial role of oversight.


This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable, probable outcome: the end of all life on Earth.


For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans; no tech background required. Our show focuses solely on the threat of human extinction from AI.


Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as two years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and look at what you can do to help save humanity.


RESOURCES:


Vanity Fair Gushes in 2015


Business Insider: Sam Altman’s Act May Be Wearing Thin


Oprah and Maya Angelou


Best Account on Twitter: AI Notkilleveryoneism Memes 


JOIN THE FIGHT, help Pause AI!!!!


Pause AI


Join the Pause AI Weekly Discord Thursdays at 3pm EST




22 Word Statement from Center for AI Safety


Statement on AI Risk | CAIS


Timestamps:


The man who holds the power (00:00:00) Discussion about Sam Altman's power and its implications for humanity.


The safety crisis (00:01:11) Concerns about safety in AI technology and the need for protection against potential risks.


Sam Altman's decisions and vision (00:02:24) Examining Sam Altman's role, decisions, and vision for AI technology and its impact on society.


Sam Altman's actions and accountability (00:04:14) Critique of Sam Altman's actions and accountability regarding the release of AI technology.


Reflections on getting fired (00:11:01) Sam Altman's reflections and emotions after being fired by OpenAI's board.


Silencing of concerns (00:19:25) Discussion about the silencing of individuals concerned about AI safety, particularly Ilya Sutskever.


Relationship with Elon Musk (00:20:08) Sam Altman's sentiments and hopes regarding his relationship with Elon Musk amidst tension and legal matters.


Legal implications of AI technology (00:22:23) Debate over whether training AI on copyrighted works is fair under copyright law, and the legal implications.


The value of data (00:22:32) Sam Altman discusses the compensation for valuable data and its use.


Safety concerns (00:23:41) Discussion on the process for ensuring safety in AI technology.


Broad definition of safety (00:24:24) Exploring the various potential harms and impacts of AI, including technical, societal, and economic aspects.


Lack of trust and control (00:27:09) Sam Altman's admission about the power and control over AGI and the need for governance.


Public apathy towards AI risk (00:31:49) Addressing the common reasons for public inaction regarding AI risk awareness.


Celebration of life (00:34:20) A personal reflection on the beauty of music and family, with a message about the celebration of life.


Conclusion (00:38:25) Closing remarks and a preview of the next episode.






This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
For Humanity: An AI Risk Podcast, by The AI Risk Network

4.4 stars (8 ratings)

