
TL;DR: Many alignment research proposals share a common motif: figure out how to enter a basin of alignment / corrigibility for human-level models, and then amplify to more powerful regimes while generalizing gracefully. In this post we lay out a research agenda that comes at this problem from a different direction: if we already have ~human-level systems with extremely robust generalization properties, we should just amplify those directly. We’ll call this strategy “Gradient Descent on the Human Brain”.
Introduction. Put one way, the hard part of the alignment problem is figuring out how to solve ontology identification: mapping between an AI's model of the world and a human's model, in order to translate and specify human goals in an alien ontology.
In full generality, in the worst case, this is a pretty difficult problem. But is solving this problem necessary to create safe superintelligences? The assumption that you [...]
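Since the agenda's name leans on the mechanics of gradient descent, here is a minimal reference sketch of that update rule on a toy quadratic loss, written in Python with NumPy. It is purely illustrative and not taken from the post; the loss, target, and learning rate are all made-up stand-ins.

import numpy as np

# Toy illustration only (not from the post): ordinary gradient descent
# on a quadratic loss, stepping parameters along the negative gradient.
def loss(theta, target):
    return float(np.sum((theta - target) ** 2))

def grad(theta, target):
    # Analytic gradient of the quadratic loss above.
    return 2.0 * (theta - target)

theta = np.zeros(3)                  # hypothetical starting parameters
target = np.array([1.0, -2.0, 0.5])  # hypothetical optimum
lr = 0.1                             # learning rate (step size)

for step in range(100):
    theta = theta - lr * grad(theta, target)

print(theta, loss(theta, target))    # theta approaches target; loss approaches 0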
---
Outline:
(01:47) The Setup
(03:38) Aside: Whose brains should we use for this?
(04:07) Potential Directions
(04:10) More sophisticated methods
(04:27) Outreach
(04:46) Alternate optimization methods
(05:01) Appendix
(05:13) Toy examples
(06:57) How to run gradient descent on the human brain (longer version)
(09:25) Neural gradient descent: Organoid edition
(13:33) A more advanced sketch
The original text contained 3 footnotes which were omitted from this narration.
---
Narrated by TYPE III AUDIO.