


We should probably try to understand the failure modes of the alignment schemes that AGI developers are most likely to attempt.
I still think instruction-following AGI is easier and more likely than value-aligned AGI. I've updated downward on the ease of IF alignment, but upward on how likely it is. IF is the de-facto current primary alignment target (see definition immediately below), and it seems likely to remain so until the first real AGIs, if we continue on the current path (e.g., AI 2027).
If this approach is doomed to fail, best to make that clear well before the first AGIs are launched. If it can work, best to analyze its likely failure points before it is tried.
Definition of IF as an alignment target
What I mean by IF as an alignment target is a developer honestly saying "our first AGI will be safe because it will [...]
---
Outline:
(00:54) Definition of IF as an alignment target
(02:08) Why IF is a likely alignment target for early AGI
(05:16) Strengths of IF as an alignment target
(06:28) Problems with IF as an alignment target
(07:13) Problem 1: You can't fetch the coffee if you're dead - or someone changes your goal
(09:49) Problem 2: Defining the Principal(s) and jailbreaking
(12:27) Problem 3: Proliferation of human-controlled ASI
(14:10) Problem 4: Unpredictable effects of mixed training targets
(16:04) Implications: The Pragmatist's Gamble?
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
By LessWrong
