
What's the difference between a large language model and the human brain? And what's wrong with our theories of agency? In this episode, I chat about these questions with Jan Kulveit, who leads the Alignment of Complex Systems research group.
Patreon: patreon.com/axrpodcast
Ko-fi: ko-fi.com/axrpodcast
The transcript: axrp.net/episode/2024/05/30/episode-32-understanding-agency-jan-kulveit.html
Topics we discuss, and timestamps:
0:00:47 - What is active inference?
0:15:14 - Preferences in active inference
0:31:33 - Action vs perception in active inference
0:46:07 - Feedback loops
1:01:32 - Active inference vs LLMs
1:12:04 - Hierarchical agency
1:58:28 - The Alignment of Complex Systems group
Website of the Alignment of Complex Systems group (ACS): acsresearch.org
ACS on X/Twitter: x.com/acsresearchorg
Jan on LessWrong: lesswrong.com/users/jan-kulveit
Predictive Minds: Large Language Models as Atypical Active Inference Agents: arxiv.org/abs/2311.10215
Other works we discuss:
Active Inference: The Free Energy Principle in Mind, Brain, and Behavior: https://www.goodreads.com/en/book/show/58275959
Book Review: Surfing Uncertainty: https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/
The self-unalignment problem: https://www.lesswrong.com/posts/9GyniEBaN3YYTqZXn/the-self-unalignment-problem
Mitigating generative agent social dilemmas (aka language models writing contracts for Minecraft): https://social-dilemmas.github.io/
Episode art by Hamish Doodles: hamishdoodles.com