
As many folks in AI safety have observed, even if well-intentioned actors succeed at intent-aligning highly capable AIs, they’ll still face some high-stakes challenges.[1] Some of these challenges are especially exotic and prone to irreversible, catastrophic mistakes: for example, deciding whether and how to engage in acausal trade.
To deal with these exotic challenges, one meta-policy that sounds nice is: “Make sure AIs get ‘wiser’ before doing anything irreversible in highly unfamiliar domains. Then they’ll be less likely to make catastrophic mistakes.” But what kinds of “wisdom” are relevant here? I’ll mostly set aside wisdom of the form “differentially cultivating capabilities that can help address time-sensitive risks”; there’s a decent amount of prior work on that.
Instead, I’ll offer my framing of another kind of wisdom: “cultivating clearer views on what it even means for a decision to be a ‘catastrophic mistake’”.[2] Consider this toy dialogue:
AlignedBot (AB) [...]
---
Outline:
(06:45) More details on the motivation for thinking about this stuff
(08:08) Some wisdom concepts
(08:26) Meta-philosophy
(12:09) (Meta-)epistemology
(14:48) Ontology
(16:57) Unawareness
(18:47) Bounded cognition and logical non-omniscience
(21:05) Anthropics
(22:29) Decision theory
(25:10) Normative uncertainty
(26:35) Acknowledgments
The original text contained 14 footnotes, which were omitted from this narration.
---
Narrated by TYPE III AUDIO.