
It's currently possible to (mostly or fully) cheaply reproduce the performance of a model by training another (initially weaker) model to imitate the stronger model's outputs.[1] I'll refer to this as distillation. In the case of RL, distilling the learned capabilities is much, much cheaper than the RL itself (especially if you are distilling back into the original base model). But even for pre-training, distilling is cheaper than the original training.[2]
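As a rough illustration of the kind of distillation described here (not the post's actual setup), the sketch below fine-tunes a student model to match a teacher's next-token distributions. The model names, hyperparameters, and helper function are hypothetical placeholders; another common flavor simply fine-tunes the student on sampled teacher outputs instead of full logit distributions.

```python
# Minimal logit-distillation sketch (PyTorch + Hugging Face transformers).
# Model names and hyperparameters are illustrative assumptions, not the
# post's actual configuration.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher_name = "big-teacher-model"    # hypothetical: the strong (e.g. RL-trained) model
student_name = "small-student-model"  # hypothetical: the weaker / original base model

tokenizer = AutoTokenizer.from_pretrained(student_name)
teacher = AutoModelForCausalLM.from_pretrained(teacher_name).eval()
student = AutoModelForCausalLM.from_pretrained(student_name)
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

def distill_step(batch_texts, temperature=2.0):
    """One gradient step: push the student's next-token distribution
    toward the teacher's on the same inputs (soft-label distillation)."""
    inputs = tokenizer(batch_texts, return_tensors="pt", padding=True)
    with torch.no_grad():
        teacher_logits = teacher(**inputs).logits
    student_logits = student(**inputs).logits
    # KL divergence between temperature-softened distributions.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

Because each step only requires forward passes through the teacher and standard supervised updates to the student, a run like this is far cheaper than redoing the original RL or pre-training, which is the cost asymmetry the post relies on.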
In this post, I'll discuss how we could utilize distillation to potentially remove (or possibly detect) misalignment. I'll also discuss a few other applications.[3] My overall take is that techniques utilizing distillation are mildly to moderately promising and the low cost of distillation might make them surprisingly viable, but it's quite tricky to reason about how effective these techniques are.
Distilling to remove misalignment
I'll assume that we have a powerful model[4] that we're worried [...]
---
Outline:
(01:01) Distilling to remove misalignment
(11:16) Detecting problematic actions using (distillation) training
(15:01) How does distillation interact with neuralese?
(15:57) Other techniques leveraging distillation
(16:01) Distillation as a (non-mechanistic) interpretability technique
(17:13) Distillation for precise capability and knowledge control
The original text contained 8 footnotes which were omitted from this narration.
---
Narrated by TYPE III AUDIO.