


As many folks in AI safety have observed, even if well-intentioned actors succeed at intent-aligning highly capable AIs, they’ll still face some high-stakes challenges.[1] Some of these challenges are especially exotic and could be prone to irreversible, catastrophic mistakes, e.g., deciding whether and how to do acausal trade.
To deal with these exotic challenges, one meta-policy that sounds nice is, “Make sure AIs get ‘wiser’ before doing anything irreversible in highly unfamiliar domains. Then they’ll be less likely to make catastrophic mistakes.” But what kinds of “wisdom” are relevant here? I’ll mostly set aside wisdom of the form “differentially cultivating capabilities that can help address time-sensitive risks”; there’s a decent amount of prior work on that.
Instead, I’ll offer my framing on another kind of wisdom: “cultivating clearer views on what it even means for a decision to be a ‘catastrophic mistake’”.[2] Consider this toy dialogue:
AlignedBot (AB) [...]
---
Outline:
(06:45) More details on the motivation for thinking about this stuff
(08:08) Some wisdom concepts
(08:26) Meta-philosophy
(12:09) (Meta-)epistemology
(14:48) Ontology
(16:57) Unawareness
(18:47) Bounded cognition and logical non-omniscience
(21:05) Anthropics
(22:29) Decision theory
(25:10) Normative uncertainty
(26:35) Acknowledgments
The original text contained 14 footnotes which were omitted from this narration.
---
Narrated by TYPE III AUDIO.
By LessWrong
