Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
YouTube Video
Recently, I was interviewed by Henry Sleight and Mikita Balesni about how I select alignment research projects. Below is the slightly cleaned-up transcript of the YouTube video.
Introductions
Henry Sleight: How about you two introduce yourselves?
Ethan Perez: I'm Ethan. I'm a researcher at Anthropic and do a lot of external collaborations with other people, via the Astra Fellowship and SERI MATS. Currently my team is working on adversarial robustness, and we recently did the Sleeper Agents paper. So, basically looking at whether we can use RLHF, adversarial training, or other current state-of-the-art alignment and safety training techniques to train away bad behavior. And we found that in some cases the answer is no: they don't train away hidden goals or backdoor behavior in models. That was a lot of my focus in the past [...]
---
Outline:
(00:14) YouTube Video
(00:29) Introductions
(02:04) How Ethan Selects Research Projects
(02:43) Top-down vs Bottom-up Approach
(05:48) Empirical Feedback and Prototyping
(08:01) Sharing Ideas and Getting Feedback
(08:43) Pivoting Projects Based on Promising Results
(10:08) Top-down Approach and Low-Hanging Fruit
(13:32) Lessons Learned when Choosing Research Projects
(13:45) Reading Prior Work
(17:19) Duplication of Work and Collaborations
(18:24) Deciding to Collaborate or Not
(21:05) Advice for Junior Researchers
(23:45) Pivoting Projects
(29:13) Red Flags for Projects
(31:16) Open Source Infrastructure Needs
(33:41) Tracking Multiple Promising Research Directions and Switching
(37:31) Visibility into Alternatives
(39:00) Final Takeaway
---