
Sometimes, people talk about making AI systems safe by taking examples where they fail and training them to do well on those. But how can we actually do this well, especially when we can't use a computer program to say what a 'failure' is? In this episode, I speak with Daniel Ziegler about his research group's efforts to try doing this with present-day language models, and what they learned.
Listeners beware: this episode contains a spoiler for the Animorphs franchise around minute 41 (in the 'Fanfiction' section of the transcript).
Topics we discuss, and timestamps:
- 00:00:40 - Summary of the paper
- 00:02:23 - Alignment as scalable oversight and catastrophe minimization
- 00:08:06 - Novel contributions
- 00:14:20 - Evaluating adversarial robustness
- 00:20:26 - Adversary construction
- 00:35:14 - The task
- 00:38:23 - Fanfiction
- 00:42:15 - Estimators to reduce labelling burden
- 00:45:39 - Future work
- 00:50:12 - About Redwood Research
The transcript: axrp.net/episode/2022/08/21/episode-17-training-for-very-high-reliability-daniel-ziegler.html
Daniel Ziegler on Google Scholar: scholar.google.com/citations?user=YzfbfDgAAAAJ
Research we discuss:
- Daniel's paper, Adversarial Training for High-Stakes Reliability: arxiv.org/abs/2205.01663
- Low-stakes alignment: alignmentforum.org/posts/TPan9sQFuPP6jgEJo/low-stakes-alignment
- Red Teaming Language Models with Language Models: arxiv.org/abs/2202.03286
- Uncertainty Estimation for Language Reward Models: arxiv.org/abs/2203.07472
- Eliciting Latent Knowledge: docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit