Transcript of Beren Millidge's Keynote at The Post-AGI Workshop, San Diego, December 2025
The question is: how might human values survive in a very multifarious AI world where there are lots of AIs competing? This is the kind of Moloch world that Scott Alexander talks about. And then I realized that to talk about this, I've got to talk about a whole lot of other things as well—hence the many other musings here. So this is probably going to be quite a fast and somewhat dense talk. Let's get started. It should be fun.
Two Visions of AI Futures
The way I think about AI futures kind of breaks down into two buckets. I call them AI monotheism and AI polytheism.
AI Monotheism
The standard LessWrong/Yudkowsky-style story is: we develop an AI, it undergoes recursive self-improvement, it becomes vastly more intelligent than all the other AIs, and then it gets all the power in the universe. It eats the light cone, and then what we did to align it really matters.
If we align it successfully, we basically create a God that is already aligned to humans; everyone lives a wonderful life, happily ever after. On the other [...]
---
Outline:
(00:49) Two Visions of AI Futures
(01:14) AI Monotheism
(02:13) AI Polytheism
(03:06) Meditations on Moloch
(04:22) Does Malthusianism Really Destroy All Values?
(04:36) The Natural World as Evidence
(05:48) Why Not Uber-Organisms?
(06:51) Frequency-Dependent Selection
(07:50) The Nature of Human Values
(08:06) Values Aren't Arbitrary
(09:03) The Role of Slack
(09:37) Pro-Social Values Emerge from Competition
(10:25) Defection and Cooperation
(11:14) How Human Are Human Values?
(11:42) Universal Drives
(13:19) Cooperation Is Not Unique to Humans
(13:44) Abstract Values and Culture
(15:27) How Values Emerge: RL + Unsupervised Learning
(17:26) Why This Matters for Alignment
(18:25) Conditions for Value Evolution
(18:55) Conditions for Human Values
(22:37) Will AIs Meet These Conditions?
(22:48) Potential Issues
(25:46) Hyper-Competitors or Hyper-Cooperators?
(26:06) The Hyper-Competitor View
(26:35) The Hyper-Cooperator View
(27:10) Why AI Cooperation Could Be Superior
(31:23) The Multicellular Transition
(31:32) Why Empires Don't Grow Forever
(33:04) Removing Coordination Costs
(34:24) Super-Minds
(35:09) Is This Just Recreating the Singleton?
(36:39) Values of the Super-Mind
(38:21) Slime Mold Dynamics
(39:11) Extreme Specialization
(39:58) Physical Limits of Super-Minds
(40:16) Speed of Thought
(41:17) Colonization and Alignment
(42:47) Mind Cancer
(43:32) Implications for Alignment
(44:12) Population Statistics
(44:59) Overlapping Values
(45:52) Integrating Humans
(47:08) Political Philosophy Questions
---