
This post heavily overlaps with “how might we safely pass the buck to AI?” but is written to address a central counterargument raised in the comments, namely “AI will produce sloppy AI alignment research that we don’t know how to evaluate.” I wrote this post in a personal capacity.
The main plan of many AI companies is to automate AI safety research. Both Eliezer Yudkowsky and John Wentworth raise concerns about this plan, which I’ll summarize as “garbage-in, garbage-out.” The concerns go something like this:
Insofar as you want to use AI to make powerful AI safe, it's because you don’t know how to do this task yourself.
So if you train AI to do research you don’t know how to do, it will regurgitate your bad takes and produce slop.
Of course, you have the advantage of grading instead of generating this research. But this advantage [...]
---
Outline:
(06:01) 1. Generalizing to hard tasks
(09:44) 2. Human graders might introduce bias
(11:48) 3. AI agents might still be egregiously misaligned
(12:28) Conclusion
---
Narrated by TYPE III AUDIO.
