

Epistemic status: This post removes epicycles from ARAD, resulting in an alignment plan which I think is better - though not as original, since @michaelcohen has advocated the same general direction (safety of imitation learning). However, the details of my suggested approach are substantially different. This post was inspired mainly by conversations with @abramdemski.
Motivation and Overview
Existence proof for alignment. Near-perfect alignment between agents of lesser and greater intelligence is in principle possible for some agents by the following existence proof: one could scan a human's brain and run a faster emulation (or copy) digitally. In some cases, the emulation may plausibly scheme against the original - for instance, if the original forced the emulation to work constantly for no reward, perhaps the emulation would try to break "out of the box" and steal the original's life (that is, steal "their own" life back - a non-spoiler minor [...]
---
Outline:
(00:34) Motivation and Overview
(02:39) Definitions and Claims
(09:50) Analysis
(11:03) Prosaic Counterexamples
(13:23) Exotic Counterexamples
(15:07) Risks and Implementation
(23:22) Conclusion
The original text contained 2 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
By LessWrong
