For Humanity: An AI Risk Podcast

Episode #35 “The AI Risk Investigators: Inside Gladstone AI, Part 1”



In Episode #35, host John Sherman talks with Jeremie and Edouard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment by the US government of the reality of AI risk. These are two very important people doing incredibly important work. The full interview lasts more than 2 hours and will be broken into two shows.


Gladstone AI Action Plan


https://www.gladstone.ai/action-plan


TIME MAGAZINE ON THE GLADSTONE REPORT


https://time.com/6898967/ai-extinction-national-security-risks-report/


SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!


https://www.youtube.com/@DoomDebates




Please Donate Here To Help Promote For Humanity


https://www.paypal.com/paypalme/forhumanitypodcast




This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable but probable outcome: the end of all life on Earth.


For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.


Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.


For Humanity Theme Music by Josef Ebner


Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg


Website: https://josef.pictures


RESOURCES:


BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!


https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom


JOIN THE FIGHT, help Pause AI!!!!


Pause AI


Join the Pause AI Weekly Discord Thursdays at 2pm EST




https://discord.com/invite/pVMWjddaW7


22 Word Statement from Center for AI Safety


Statement on AI Risk | CAIS


https://www.safe.ai/work/statement-on-ai-risk


Best Account on Twitter: AI Notkilleveryoneism Memes 


https://twitter.com/AISafetyMemes


TIMESTAMPS:


Sincerity and Sam Altman (00:00:00) Discussion on the perceived sincerity of Sam Altman and his actions, including insights into his character and motivations.


Introduction to Gladstone AI (00:01:14) Introduction to Gladstone AI, its involvement with the US government on AI risk, and the purpose of the podcast episode.


Doom Debates on YouTube (00:02:17) Promotion of the "Doom Debates" YouTube channel and its content, featuring discussions on AI doom and various perspectives on the topic.


YC Experience and Sincerity in Startups (00:08:13) Insight into the Y Combinator (YC) experience and the emphasis on sincerity in startups, with personal experiences and observations shared.


OpenAI and Sincerity (00:11:51) Exploration of sincerity in relation to OpenAI, including evaluations of the company's mission, actions, and the challenges it faces in the AI landscape.


The scaling story (00:21:33) Discussion of the scaling story related to AI capabilities and the impact of increasing data, processing power, and training models.


The call about GPT-3 (00:22:29) Edouard Harris receiving a call about the scaling story and the significance of GPT-3’s capabilities, leading to a decision to focus on AI development.


Transition from Y Combinator (00:24:42) Jeremie and Edouard Harris leaving their previous company and transitioning from Y Combinator to focus on AI development.


Security concerns and exfiltration (00:31:35) Discussion about the security vulnerabilities and potential exfiltration of AI models from top labs, highlighting the inadequacy of security measures.


Government intervention and security (00:38:18) Exploration of the potential for government involvement in providing security assets to protect AI technology from exfiltration and the need for a pause in development until labs are secure.


Resource reallocation for safety and security (00:40:03) Discussion about the need to reallocate resources for safety, security, and alignment technology to ensure the responsible development of AI.


OpenAI's computational resource allocation (00:42:10) Concerns about OpenAI's failure to allocate computational resources for safety and alignment efforts, as well as the departure of a safety-minded board member.




China's Strategic Moves (00:43:07) Discussion on potential aggressive actions by China to prevent a permanent disadvantage in AI technology.


China's Sincerity in AI Safety (00:44:29) Debate on the sincerity of China's commitment to AI safety and the influence of the CCP.


Taiwan Semiconductor Manufacturing Company (TSMC) (00:47:47) Explanation of TSMC's role in fabricating advanced semiconductor chips and its impact on the AI race.


US and China's Power Constraints (00:51:30) Comparison of the constraints faced by the US and China in terms of advanced chips and grid power.


Nuclear Power and Renewable Energy (00:52:23) Discussion on the power sources being pursued by China and the US to address their respective constraints.


Future Scenarios (00:56:20) Exploration of potential outcomes if China overtakes the US in AI technology.





This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

For Humanity: An AI Risk Podcast, by The AI Risk Network

4.4 (8 ratings)


More shows like For Humanity: An AI Risk Podcast

Freakonomics Radio, by Freakonomics Radio + Stitcher

Into the Impossible With Brian Keating, by Big Bang Productions Inc.

The Diary Of A CEO with Steven Bartlett, by DOAC

Practical AI, by Practical AI LLC

Machine Learning Street Talk (MLST), by Machine Learning Street Talk (MLST)

Dwarkesh Podcast, by Dwarkesh Patel

Big Technology Podcast, by Alex Kantrowitz

The Artificial Intelligence Show, by Paul Roetzer and Mike Kaput

Moonshots with Peter Diamandis, by PHD Ventures

This Day in AI Podcast, by Michael Sharkey and Chris Sharkey

The AI Daily Brief: Artificial Intelligence News and Analysis, by Nathaniel Whittemore

AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Gemini, OpenAI, Anthropic, by Jaeden Schafer and Conor Grennan

Training Data, by Sequoia Capital

Doom Debates, by Liron Shapira

The Last Invention, by Longview