
This post heavily overlaps with “how might we safely pass the buck to AI?” but is written to address a central counterargument raised in the comments, namely “AI will produce sloppy AI alignment research that we don’t know how to evaluate.” I wrote this post in a personal capacity.
The main plan of many AI companies is to automate AI safety research. Both Eliezer Yudkowsky and John Wentworth raise concerns about this plan that I’ll summarize as “garbage-in, garbage-out.” The concerns go something like this:
Insofar as you want to use AI to make powerful AI safe, it's because you don’t know how to do this task yourself.
So if you train AI to do research you don’t know how to do, it will regurgitate your bad takes and produce slop.
Of course, you have the advantage of grading instead of generating this research. But this advantage [...]
---
Outline:
(06:01) 1. Generalizing to hard tasks
(09:44) 2. Human graders might introduce bias
(11:48) 3. AI agents might still be egregiously misaligned
(12:28) Conclusion
---
First published:
Source:
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.