
What's the difference between a large language model and the human brain? And what's wrong with our theories of agency? In this episode, I chat about these questions with Jan Kulveit, who leads the Alignment of Complex Systems research group.
Patreon: patreon.com/axrpodcast
Ko-fi: ko-fi.com/axrpodcast
The transcript: axrp.net/episode/2024/05/30/episode-32-understanding-agency-jan-kulveit.html
Topics we discuss, and timestamps:
0:00:47 - What is active inference?
0:15:14 - Preferences in active inference
0:31:33 - Action vs perception in active inference
0:46:07 - Feedback loops
1:01:32 - Active inference vs LLMs
1:12:04 - Hierarchical agency
1:58:28 - The Alignment of Complex Systems group
Website of the Alignment of Complex Systems group (ACS): acsresearch.org
ACS on X/Twitter: x.com/acsresearchorg
Jan on LessWrong: lesswrong.com/users/jan-kulveit
Predictive Minds: Large Language Models as Atypical Active Inference Agents: arxiv.org/abs/2311.10215
Other works we discuss:
Active Inference: The Free Energy Principle in Mind, Brain, and Behavior: https://www.goodreads.com/en/book/show/58275959
Book Review: Surfing Uncertainty: https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/
The self-unalignment problem: https://www.lesswrong.com/posts/9GyniEBaN3YYTqZXn/the-self-unalignment-problem
Mitigating generative agent social dilemmas (aka language models writing contracts for Minecraft): https://social-dilemmas.github.io/
Episode art by Hamish Doodles: hamishdoodles.com
By Daniel Filan · 4.4 (88 ratings)
