
“The things we can say are limited by the things we can think.”
In this episode, Samraj Matharu speaks with Peter Danenberg, a Palo Alto–based Senior Software Engineer at Google DeepMind who specialises in rapid LLM prototyping.
Peter works at the frontier where large language models move from research to real-world systems. Together, they explore what it really means to think with AI — not to outsource thinking to machines, but to use them as tools that challenge, pressure-test, and refine human judgment.
The conversation goes beyond model performance into philosophy, ethics, and cognition. Peter reflects on why intelligence is not the same as thinking, how critical thinking emerges from moments of crisis, and why philosophy remains the underlying language of reasoning in an age of automation.
They examine our instinct to anthropomorphise AI — questioning whether this is a flaw or an evolutionary feature — and discuss why ethics in LLM development has largely focused on harm reduction rather than human flourishing. The episode also introduces the idea of peirastic AI (from the Greek peira, a trial or test): systems designed not to reassure users, but to test and sharpen their thinking.
This is a long-form, reflective conversation about judgment, responsibility, and the limits of automation — and what still belongs, fundamentally, to humans.
EPISODE HIGHLIGHTS
0:00 ➤ Intro / Guest welcome
4:00 ➤ Peter’s role at DeepMind and rapid LLM prototyping
9:30 ➤ What “thinking with AI” really means
15:00 ➤ Intelligence vs thinking: where people get confused
22:00 ➤ Philosophy as the language of thinking
30:00 ➤ Critical thinking, crisis, and discernment
38:00 ➤ Anthropomorphising AI: bug or feature?
47:00 ➤ Ethics in LLMs and the limits of harm reduction
56:00 ➤ Automation, judgment, and human responsibility
1:05:00 ➤ Peirastic AI: systems that test us
1:15:00 ➤ Interfaces, embodiment, and tactile thinking
1:26:00 ➤ What can’t be automated
1:36:00 ➤ Closing reflections and audience question
🔗 LISTEN, WATCH & CONNECT
🎓 Oxford Programme (20% Off): https://oxsbs.link/ailyceum
🌐 Join the 1K+ Community: https://linktr.ee/theailyceum
💻 Website: https://theailyceum.com
▶️ YouTube: https://www.youtube.com/@The.AI.Lyceum
🎧 Spotify: https://open.spotify.com/show/034vux8EWzb9M5Gn6QDMza
🎧 Apple: https://podcasts.apple.com/us/podcast/the-ai-lyceum/id1837737167
🎧 Amazon: https://music.amazon.com/podcasts/5a67f821-89f8-4b95-b873-2933ab977cd3/the-ai-lyceum
ABOUT THE AI LYCEUM
The AI Lyceum is a global community exploring AI, ethics, creativity, and human potential — hosted by Samraj Matharu, Certified AI Ethicist (Oxford) and Visiting Lecturer at Durham University.
#ai #genai #llm #google #deepmind #aiethics #ethics #philosophy #thinking #criticalthinking #automation #humanjudgment #agenticai #responsibleai #theailyceum