LessWrong (30+ Karma)

“What Is The Alignment Problem?” by johnswentworth



So we want to align future AGIs. Ultimately we’d like to align them to human values, but in the shorter term we might start with other targets, like e.g. corrigibility.

That problem description all makes sense on a hand-wavy intuitive level, but once we get concrete and dig into technical details… wait, what exactly is the goal again? When we say we want to “align AGI”, what does that mean? And what about these “human values” - it's easy to list things which are importantly not human values (like stated preferences, revealed preferences, etc.), but what are we actually talking about? And don’t even get me started on corrigibility!

Turns out, it's surprisingly tricky to explain what exactly “the alignment problem” refers to. And there are good reasons for that! In this post, I’ll give my current best explanation of what the alignment problem is (including a few variants and the [...]

---

Outline:

(01:27) The Difficulty of Specifying Problems

(01:50) Toy Problem 1: Old MacDonald's New Hen

(04:08) Toy Problem 2: Sorting Bleggs and Rubes

(06:55) Generalization to Alignment

(08:54) But What If The Patterns Don't Hold?

(13:06) Alignment of What?

(14:01) Alignment of a Goal or Purpose

(19:47) Alignment of Basic Agents

(23:51) Alignment of General Intelligence

(27:40) How Does All That Relate To Today's AI?

(31:03) Alignment to What?

(32:01) What are a Human's Values?

(36:14) Other targets

(36:43) Paul!Corrigibility

(39:11) Eliezer!Corrigibility

(40:52) Subproblem!Corrigibility

(42:55) Exercise: Do What I Mean (DWIM)

(43:26) Putting It All Together, and Takeaways

The original text contained 10 footnotes which were omitted from this narration.

---

First published:

January 16th, 2025

Source:

https://www.lesswrong.com/posts/dHNKtQ3vTBxTfTPxu/what-is-the-alignment-problem

---

Narrated by TYPE III AUDIO.

