Redwood Research Blog

“Clarifying AI R&D threat models” by Josh Clymer


Subtitle: (There are a few).

A casual reader of one of the many AI company safety frameworks might be confused about why “AI R&D” is listed as a “threat model.” They might be even further mystified to find out that some people believe risks from automated AI R&D are the most severe and urgent risks. What's so concerning about AI writing code?

A few months ago, I was frustrated by the lack of a thorough published answer to this question. So when the Safe AI Forum (the organizers of IDAIS) reached out to me about a workshop convening international experts from China, the US, Canada and Europe to discuss these threat models, I was eager to participate.

I ended up being a rather enthusiastic attendee – pulling an all-nighter to throw together the beginnings of a paper that became the output of the workshop. The next day, my [...]

---

Outline:

(02:06) Autonomous AI R&D is dangerous for two reasons that are often conflated

(02:52) Risks from AI R&D automation

(05:04) Risks from AI R&D acceleration

(07:53) Bare minimum recommendations for managing AI R&D risks

(09:06) Conclusion

---

First published:

April 25th, 2025

Source:

https://redwoodresearch.substack.com/p/clarifying-ai-r-and-d-threat-models

---

Narrated by TYPE III AUDIO.

---

