
There is some concern that training AI systems on content predicting AI misalignment will hyperstition AI systems into misalignment. This has been discussed previously by many people, including Anna Salamon, Alex Turner, the AI Futures Project, Miles Kodama, Gwern, Cleo Nardo, Richard Ngo, Rational Animations, Mark Keavney, and others.
In this post, I analyze whether AI developers should filter out discussion of AI misalignment from training data. I discuss several details that I don't think have been adequately covered by previous work:
My evaluation of this proposal is that while there are some legitimate reasons to think this filtering will end up being harmful, it seems to decrease risk meaningfully in expectation. So, I [...]
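The proposal under discussion amounts to a filtering pass over the pre-training corpus. As a rough illustration only (neither the post nor this description specifies an implementation), here is a minimal keyword-based sketch in Python; the pattern list, scoring rule, and threshold are all assumptions made for this example, and a real deployment would more plausibly use a trained classifier:

```python
# Hypothetical sketch of a pre-training data filter that drops documents
# discussing AI misalignment. The patterns, scoring rule, and threshold
# below are illustrative assumptions, not the post author's method.
import re
from typing import Iterable, Iterator

# Illustrative phrases; a production filter would likely use a trained
# classifier rather than a hand-written keyword list.
MISALIGNMENT_PATTERNS = [
    r"\bAI takeover\b",
    r"\bmisaligned (AI|model|agent)s?\b",
    r"\btreacherous turn\b",
    r"\bdeceptive alignment\b",
    r"\breward hacking\b",
]
_COMPILED = [re.compile(p, re.IGNORECASE) for p in MISALIGNMENT_PATTERNS]


def misalignment_score(doc: str) -> float:
    """Return the fraction of patterns that appear at least once in the document."""
    hits = sum(1 for pat in _COMPILED if pat.search(doc))
    return hits / len(_COMPILED)


def filter_corpus(docs: Iterable[str], threshold: float = 0.2) -> Iterator[str]:
    """Yield only documents scoring below the (assumed) threshold."""
    for doc in docs:
        if misalignment_score(doc) < threshold:
            yield doc


if __name__ == "__main__":
    corpus = [
        "A recipe for sourdough bread.",
        "A story about a misaligned AI that attempts an AI takeover.",
    ]
    for kept in filter_corpus(corpus):
        print(kept)  # only the first document survives the filter
```

In practice, the hard questions are the ones the outline below raises: which documents count as "AI villain data," and how to keep a filtered model compatible with having AI do or aid safety work.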
---
Outline:
(02:15) What data to filter
(04:13) Why filtering AI villain data reduces risk from misaligned AI
(07:25) Downsides of filtering AI villain data
(10:30) Finer details of filtering AI villain data
(10:50) How to make filtering and having AI do/aid safety work compatible?
(13:50) What model should the lab externally deploy?
(16:51) Conclusion
---
Narrated by TYPE III AUDIO.