
This is a personal post and does not necessarily reflect the opinion of other members of Apollo Research. I think I could have written a better version of this post with more time. However, my main hope is that people with more expertise use this post as a prompt to write better, narrower versions of the respective concrete suggestions.
Thanks to Buck Shlegeris, Joe Carlsmith, Samuel Albanie, Max Nadeau, Ethan Perez, James Lucassen, Jan Leike, Dan Lahav, and many others for chats that informed this post.
Many other people have written about automating AI safety work before. The main point I want to make in this post is simply that "Using AI for AI safety work should already be a priority today, not months or years away." To make this point salient, I try to list a few concrete projects / agendas [...]
---
Outline:
(01:31) We should already think about how to automate AI safety & security work
(03:01) We have to automate some AI safety work eventually
(03:36) Just asking the AI to do alignment research is a bad plan
(05:03) A short, crucial timeframe might be highly influential on the entire trajectory of AI
(05:38) Some things might just take a while to build
(06:48) Gradual increases in capabilities mean different things can be automated at different times
(07:26) The order in which safety techniques are developed might matter a lot
(08:39) High-level comments on preparing for automation
(08:44) Two types of automation
(11:37) Maintaining a lead for defense
(12:49) Build out the safety pipeline as much as possible
(14:36) Prepare research proposals and metrics
(16:50) Build things that scale with compute
(17:36) Specific areas of preparation
(17:53) Evals
(20:32) Red-teaming
(22:24) Monitoring
(24:46) Interpretability
(27:27) Scalable Oversight, Model Organisms & Science of Deep Learning
(29:24) Computer security
(29:33) Conclusion
---
First published:
Source:
Narrated by TYPE III AUDIO.
---