LessWrong (30+ Karma)

“When does competition lead to recognisable values?” by Jan_Kulveit, beren, David Duvenaud, Raymond Douglas



Transcript of Beren Millidge's Keynote at The Post-AGI Workshop, San Diego, December 2025



The question is: how might human values survive in a very multifarious AI world where there are lots of AIs competing? This is the kind of Moloch world that Scott Alexander talks about. And then I realized that to talk about this, I've got to talk about a whole lot of other things as well—hence the many other musings here. So this is probably going to be quite a fast and somewhat dense talk. Let's get started. It should be fun.

Two Visions of AI Futures

The way I think about AI futures kind of breaks down into two buckets. I call them AI monotheism and AI polytheism.

AI Monotheism

The standard LessWrong/Yudkowsky-style story is: we develop an AI, it does recursive self-improvement, it becomes vastly smarter than all the other AIs, and then it gets all the power in the universe. It eats the light cone, and then what we did to align it really matters.

If we align it successfully, we basically create God. God is already aligned to humans, everyone lives a wonderful life, happily ever after. On the other [...]



---

Outline:

(00:49) Two Visions of AI Futures

(01:14) AI Monotheism

(02:13) AI Polytheism

(03:06) Meditations on Moloch

(04:22) Does Malthusianism Really Destroy All Values?

(04:36) The Natural World as Evidence

(05:48) Why Not Uber-Organisms?

(06:51) Frequency-Dependent Selection

(07:50) The Nature of Human Values

(08:06) Values Aren't Arbitrary

(09:03) The Role of Slack

(09:37) Pro-Social Values Emerge from Competition

(10:25) Defection and Cooperation

(11:14) How Human Are Human Values?

(11:42) Universal Drives

(13:19) Cooperation Is Not Unique to Humans

(13:44) Abstract Values and Culture

(15:27) How Values Emerge: RL + Unsupervised Learning

(17:26) Why This Matters for Alignment

(18:25) Conditions for Value Evolution

(18:55) Conditions for Human Values

(22:37) Will AIs Meet These Conditions?

(22:48) Potential Issues

(25:46) Hyper-Competitors or Hyper-Cooperators?

(26:06) The Hyper-Competitor View

(26:35) The Hyper-Cooperator View

(27:10) Why AI Cooperation Could Be Superior

(31:23) The Multicellular Transition

(31:32) Why Empires Don't Grow Forever

(33:04) Removing Coordination Costs

(34:24) Super-Minds

(35:09) Is This Just Recreating the Singleton?

(36:39) Values of the Super-Mind

(38:21) Slime Mold Dynamics

(39:11) Extreme Specialization

(39:58) Physical Limits of Super-Minds

(40:16) Speed of Thought

(41:17) Colonization and Alignment

(42:47) Mind Cancer

(43:32) Implications for Alignment

(44:12) Population Statistics

(44:59) Overlapping Values

(45:52) Integrating Humans

(47:08) Political Philosophy Questions

---

First published:

January 12th, 2026

Source:

https://www.lesswrong.com/posts/LwSRbkecuqLJHdnJ7/when-does-competition-lead-to-recognisable-values

---

Narrated by TYPE III AUDIO.
