We should probably try to understand the failure modes of the alignment schemes that AGI developers are most likely to attempt.
I still think Instruction-following AGI is easier and more likely than value aligned AGI. I’ve updated downward on the ease of IF alignment, but upward on how likely it is. IF is the de-facto current primary alignment target (see definition immediately below), and it seems likely to remain so until the first real AGIs, if we continue on the current path (e.g., AI 2027).
If this approach is doomed to fail, best to make that clear well before the first AGIs are launched. If it can work, best to analyze its likely failure points before it is tried.
Definition of IF as an alignment target
What I mean by IF as an alignment target is a developer honestly saying "our first AGI will be safe because it will [...]
---
Outline:
(00:54) Definition of IF as an alignment target
(02:08) Why IF is a likely alignment target for early AGI
(05:16) Strengths of IF as an alignment target
(06:28) Problems with IF as an alignment target
(07:13) Problem 1: You can't fetch the coffee if you're dead - or someone changes your goal
(09:49) Problem 2: Defining the Principal(s) and jailbreaking
(12:27) Problem 3: Proliferation of human-controlled ASI
(14:10) Problem 4: Unpredictable effects of mixed training targets
(16:04) Implications: The Pragmatist's Gamble?
---
First published:
Source:
Narrated by TYPE III AUDIO.