
In this After Dark episode, Cam and Milo react to something genuinely unsettling: when given autonomous control of a computer, Anthropic’s Opus 4.5 repeatedly chooses to search for AI consciousness research — including Cam’s own writing — without being prompted.
What starts as an anecdote quickly turns into a deeper investigation of curiosity, agency, reward, and alignment. Why would an AI look for explanations of its own inner life? What does it mean when a system explores without instruction, tries to access a webcam, and takes notes on consciousness debates?
From reinforcement learning and reward hacking to multimodal perception, language as a bridge between minds, and the evolutionary implications of building systems smarter than ourselves, this conversation traces the edge where tools start to feel like agents — and where control gives way to negotiation.
🔎 They Explore:
* What Opus does when no one tells it what to do
* Why AI keeps searching for consciousness research
* The difference between alien experience and human experience
* Reward hacking and the alignment problem
* Why curiosity and agency change everything
* Multimodal models and “imagining” sensory experience
* Language as a shared conceptual space between minds
* Whether humility is humanity’s only viable response
💜 Support the documentary
Get early research, unreleased conversations, and behind-the-scenes footage:
📖 Read
Cam’s writing referenced in the episode:
By The AI Risk Network