LessWrong (Curated & Popular)

“What is it to solve the alignment problem?” by Joe Carlsmith



People often talk about “solving the alignment problem.” But what is it to do such a thing? I wanted to clarify my thinking about this topic, so I wrote up some notes.

In brief, I’ll say that you’ve solved the alignment problem if you’ve:

  1. avoided a bad form of AI takeover,
  2. built the dangerous kind of superintelligent AI agents,
  3. gained access to the main benefits of superintelligence, and
  4. become able to elicit some significant portion of those benefits from some of the superintelligent AI agents at stake in (2).[1]

The post also discusses what it would take to do this. In particular:

  • I discuss various options for avoiding bad takeover, notably:
    • Avoiding what I call “vulnerability-to-alignment” conditions;
    • Ensuring that AIs don’t try to take over;
    • Preventing such attempts from succeeding;
    • Trying to ensure that AI takeover is somehow OK. (The alignment [...]
---

Outline:

(03:46) 1. Avoiding vs. handling vs. solving the problem

(15:32) 2. A framework for thinking about AI safety goals

(19:33) 3. Avoiding bad takeover

(24:03) 3.1 Avoiding vulnerability-to-alignment conditions

(27:18) 3.2 Ensuring that AI systems don’t try to take over

(32:02) 3.3 Ensuring that takeover efforts don’t succeed

(33:07) 3.4 Ensuring that the takeover in question is somehow OK

(41:55) 3.5 What’s the role of “corrigibility” here?

(42:17) 3.5.1 Some definitions of corrigibility

(50:10) 3.5.2 Is corrigibility necessary for “solving alignment”?

(53:34) 3.5.3 Does ensuring corrigibility raise issues that avoiding takeover does not?

(55:46) 4. Desired elicitation

(01:05:17) 5. The role of verification

(01:09:24) 5.1 Output-focused verification and process-focused verification

(01:16:14) 5.2 Does output-focused verification unlock desired elicitation?

(01:23:00) 5.3 What are our options for process-focused verification?

(01:29:25) 6. Does solving the alignment problem require some very sophisticated philosophical achievement re: our values on reflection?

(01:38:05) 7. Wrapping up

The original text contained 27 footnotes which were omitted from this narration.

The original text contained 3 images which were described by AI.

---

First published:
August 24th, 2024

Source:
https://www.lesswrong.com/posts/AFdvSBNgN2EkAsZZA/what-is-it-to-solve-the-alignment-problem-1

---

Narrated by TYPE III AUDIO.

---


LessWrong (Curated & Popular) by LessWrong

4.8 (12 ratings)


More shows like LessWrong (Curated & Popular)

  • Making Sense with Sam Harris by Sam Harris (26,386 listeners)
  • Conversations with Tyler by Mercatus Center at George Mason University (2,419 listeners)
  • Robert Wright's Nonzero by Nonzero (590 listeners)
  • Future of Life Institute Podcast by Future of Life Institute (107 listeners)
  • The Good Fight by Yascha Mounk (904 listeners)
  • ManifoldOne by Steve Hsu (92 listeners)
  • The Prof G Pod with Scott Galloway by Vox Media Podcast Network (5,492 listeners)
  • Machine Learning Street Talk (MLST) by Machine Learning Street Talk (MLST) (90 listeners)
  • Dwarkesh Podcast by Dwarkesh Patel (76 listeners)
  • Hard Fork by The New York Times (5,470 listeners)
  • Clearer Thinking with Spencer Greenberg by Spencer Greenberg (130 listeners)
  • Complex Systems with Patrick McKenzie (patio11) by Patrick McKenzie (133 listeners)
  • The Marginal Revolution Podcast by Mercatus Center at George Mason University (93 listeners)
  • Statecraft by Santi Ruiz (35 listeners)
  • The Last Invention by Longview (703 listeners)