LessWrong (30+ Karma)

“6 reasons why ‘alignment-is-hard’ discourse seems alien to human intuitions, and vice-versa” by Steven Byrnes



Tl;dr

AI alignment has a culture clash. On one side, the “technical-alignment-is-hard” / “rational agents” school-of-thought argues that we should expect future powerful AIs to be power-seeking ruthless consequentialists. On the other side, people observe that both humans and LLMs are obviously capable of behaving like, well, not that. The latter group accuses the former of head-in-the-clouds abstract theorizing gone off the rails, while the former accuses the latter of mindlessly assuming that the future will always be the same as the present, rather than trying to understand things. “Alas, the power-seeking ruthless consequentialist AIs are still coming,” sigh the former. “Just you wait.”

As it happens, I’m basically in that “alas, just you wait” camp, expecting ruthless future AIs. But my camp faces a real question: what exactly is it about human brains[1] that allows them to not always act like power-seeking ruthless consequentialists? I find existing explanations in the discourse—e.g. “ah but humans just aren’t smart and reflective enough”, or evolved modularity, or shard theory, etc.—to be wrong, handwavy, or otherwise unsatisfying.

So in this post, I offer my own explanation of why “agent foundations” toy models fail to describe humans, centering around a particular non-“behaviorist” [...]

---

Outline:

(00:13) Tl;dr

(03:35) 0. Background

(03:39) 0.1. Human social instincts and Approval Reward

(07:23) 0.2. Hang on, will future powerful AGI / ASI by default lack Approval Reward altogether?

(10:29) 0.3. Where do self-reflective (meta)preferences come from?

(12:38) 1. The human intuition that it's normal and good for one's goals & values to change over the years

(14:51) 2. The human intuition that ego-syntonic desires come from a fundamentally different place than urges

(17:53) 3. The human intuition that helpfulness, deference, and corrigibility are natural

(19:03) 4. The human intuition that unorthodox consequentialist planning is rare and sus

(23:53) 5. The human intuition that societal norms and institutions are mostly stably self-enforcing

(24:01) 5.1. Detour into Security-Mindset Institution Design

(26:22) 5.2. The load-bearing ingredient in human society is not Security-Mindset Institution Design, but rather good-enough institutions plus almost-universal human innate Approval Reward

(29:26) 5.3. Upshot

(30:49) 6. The human intuition that treating other humans as a resource to be callously manipulated and exploited, just like a car engine or any other complex mechanism in their environment, is a weird anomaly rather than the obvious default

(31:13) 7. Conclusion

The original text contained 12 footnotes, which were omitted from this narration.

---

First published:

December 3rd, 2025

Source:

https://www.lesswrong.com/posts/d4HNRdw6z7Xqbnu5E/6-reasons-why-alignment-is-hard-discourse-seems-alien-to

---

Narrated by TYPE III AUDIO.

