


Epistemic status: This post removes epicycles from ARAD, resulting in an alignment plan which I think is better - though not as original, since @michaelcohen has advocated the same general direction (safety of imitation learning). However, the details of my suggested approach are substantially different. This post was inspired mainly by conversations with @abramdemski.
Motivation and Overview
Existence proof for alignment. Near-perfect alignment between agents of lesser and greater intelligence is in principle possible for some agents by the following existence proof: one could scan a human's brain and run a faster emulation (or copy) digitally. In some cases, the emulation may plausibly scheme against the original - for instance, if the original forced the emulation to work constantly for no reward, perhaps the emulation would try to break "out of the box" and steal the original's life (that is, steal "their own" life back - a non-spoiler minor [...]
---
Outline:
(00:34) Motivation and Overview
(02:39) Definitions and Claims
(09:50) Analysis
(11:03) Prosaic counterexamples
(13:23) Exotic Counterexamples
(15:07) Risks and Implementation
(23:22) Conclusion
The original text contained 2 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
By LessWrong
