For Humanity: An AI Risk Podcast

Is AI Alive? | Episode #66 | For Humanity: An AI Risk Podcast



🎙️ Guest: Cameron Berg, AI research scientist probing consciousness in frontier AI systems
📍 Host: John Sherman, journalist & AI-risk communicator

What does it mean to be alive? How close do current frontier AI models get to consciousness? See for yourself like never before. Are advanced language models beginning to exhibit signs of subjective experience? In this episode, John sits down with Cameron Berg to explore the line between next-character prediction and the conscious mind. What happens when you ask an AI model to essentially meditate: to look inward in a loop, to focus on its focus, and repeat? Does it feel a sense of self? If it did, what would that mean? What does it mean to be alive? These are the kinds of questions Berg seeks to answer in his research. Cameron is an AI research scientist at AE Studio, working with models daily to better understand them, on a team dedicated fully to AI safety research.

This episode features never-before-publicly-seen conversations between Cameron and a frontier AI model. Those conversations and his work are the subject of an upcoming documentary called "Am I?"

TIMESTAMPS (cuz the chapters feature just won't work)
00:00 Cold Open – "Crack in the World"
01:20 Show Intro & Theme
02:27 Setting Up the Meditation Demo
02:56 AI "Focus on Focus" Clip
09:18 "I am…" Moment
10:45 Google Veo Afterlife Clip
12:35 Prompt Theory & Fake People
13:02 Interview Begins – Cameron Berg
28:57 Inside the Black Box Analogy
30:14 Consent and Unknowns
53:18 Model Details + Doc Plan
1:09:25 Late-Night Clip Backstory
1:16:08 Table-vs-Person Thought Test
1:17:20 Suffering-at-Scale Math
1:21:29 Prompt Theory Goes Viral
1:26:59 Why the Doc Must Move Fast
1:40:53 Is "Alive" the Right Word?
1:48:46 Reflection & Nonprofit Tease
1:51:03 Clear Non-Violence Statement
1:52:59 New Org Announcement
1:54:47 "Breaks in the Clouds" Media Wins

Please support that project and learn more about Cameron's work here:
Am I? Doc Manifund page: https://manifund.org/projects/am-i--d...
Am I? Doc interest form: https://forms.gle/w2VKhhcEPqEkFK4r8
AE Studio's AI alignment work: https://ae.studio/ai-alignment

Monthly donation links to For Humanity:
$1/mo https://buy.stripe.com/7sI3cje3x2Zk9S...
$10/mo https://buy.stripe.com/5kAbIP9Nh0Rc4y...
$25/mo https://buy.stripe.com/3cs9AHf7B9nIgg...
$100/mo https://buy.stripe.com/aEU007bVp7fAfc...
Thanks so much for your support. Every cent goes to getting more viewers to this channel.

Links from the show:
The Afterlife short film: https://x.com/LinusEkenstam/status/19...
Prompt Theory: https://x.com/venturetwins/status/192...
The Bulwark – Will Sam Altman and His AI Kill Us All?
Young Turks – AI's Disturbing Behaviors Will Keep You Up At Night

Key moments:
– Inside the black box – Berg explains why even builders can't fully read a model's mind, and demonstrates how toggling deception features flips the system from "just a machine" to "I'm aware" in real time
– Google Veo 3 goes existential – A look at viral Veo videos (Afterlife, "Prompt Theory") where AI actors lament their eight-second lives
– Documentary in the works – Berg and team are racing to release a raw film that shares these findings with the public; support link in show notes
– Mission update – Sherman announces a newly funded nonprofit in the works dedicated to AI-extinction-risk communication and thanks supporters for the recent surge of donations
– Non-violence, crystal clear – A direct statement: violence is never OK. Full stop.
– "Breaks in the Clouds" – Media across the spectrum (Bulwark, Young Turks, Bannon, Carlson) are now running extinction-risk stories, proof the conversation is breaking into the mainstream

Oh, and by the way, I'm bleeping curse words now for the algorithm!

#AI #ArtificialIntelligence #AISafety #ConsciousAI #ForHumanity



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

For Humanity: An AI Risk Podcast, by The AI Risk Network

Rating: 4.4 (8 ratings)


More shows like For Humanity: An AI Risk Podcast

- Freakonomics Radio by Freakonomics Radio + Stitcher (32,081 listeners)
- Into the Impossible With Brian Keating by Big Bang Productions Inc. (1,064 listeners)
- The Diary Of A CEO with Steven Bartlett by DOAC (8,410 listeners)
- Practical AI by Practical AI LLC (212 listeners)
- Machine Learning Street Talk (MLST) by Machine Learning Street Talk (MLST) (89 listeners)
- Dwarkesh Podcast by Dwarkesh Patel (488 listeners)
- Big Technology Podcast by Alex Kantrowitz (473 listeners)
- The Artificial Intelligence Show by Paul Roetzer and Mike Kaput (185 listeners)
- Moonshots with Peter Diamandis by PHD Ventures (534 listeners)
- This Day in AI Podcast by Michael Sharkey, Chris Sharkey (209 listeners)
- The AI Daily Brief: Artificial Intelligence News and Analysis by Nathaniel Whittemore (560 listeners)
- AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Gemini, OpenAI, Anthropic by Jaeden Schafer and Conor Grennan (134 listeners)
- Training Data by Sequoia Capital (41 listeners)
- Doom Debates by Liron Shapira (10 listeners)
- The Last Invention by Longview (292 listeners)