


In thinking about the future of AI safety, it sometimes seems helpful to think in terms of “finding a solution to the alignment problem”. At other times, it seems helpful to think in terms of “paying a safety tax”. On the face of it, these two concepts seem hard to reconcile: one of them is about the moment where we have a solution and can stop investing resources into safety; the other presumes something like a constant fraction of resources going into safety, forever.
This is a post for concept geeks. I’ll explain that by allowing safety tax to vary with capability level for a hypothetical dangerous technology, we can represent both of the above dynamics as special cases of a more general class of functions. While I don’t think I’ve got to the point of having these concepts fully crisp, they seem sharp enough that I’ve been finding [...]
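The contrast between the two paradigm cases can be sketched in code. This is an illustrative sketch only (the function names, the `solved_at` threshold, and the 20% rate are my assumptions, not values from the post): a "once-and-done" problem has a safety tax that drops to zero once a solution is found at some capability level, while an "ongoing" problem takes a constant fraction of resources at every capability level.

```python
# Hypothetical safety-tax-as-a-function-of-capability sketch.
# All names and numbers here are illustrative assumptions.

def once_and_done_tax(capability: float, solved_at: float = 5.0, rate: float = 0.2) -> float:
    """Safety tax (fraction of resources) for a once-and-done problem:
    positive below the capability level where the problem is solved,
    zero thereafter."""
    return rate if capability < solved_at else 0.0

def ongoing_tax(capability: float, rate: float = 0.2) -> float:
    """Safety tax for an ongoing problem: a constant fraction of
    resources at every capability level."""
    return rate
```

Both are special cases of a general safety tax function mapping capability level to the fraction of resources spent on safety; the post's more general class interpolates between these shapes.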
---
Outline:
(01:12) Paradigm cases
(01:33) Once-and-done problems
(02:09) Ongoing problems
(02:33) Safety tax functions
(04:02) (Re)parametrizing
(08:57) Safety tax contours
(10:30) Possible extensions to the model
The original text contained 2 footnotes which were omitted from this narration.
The original text contained 7 images which were described by AI.
---
First published:
Source:
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
By LessWrong
