
TL;DR: I claim that many reasoning patterns that appear in chains-of-thought are not actually used by the model to come to its answer, and can be more accurately thought of as historical artifacts of training. This can be true even for CoTs that are apparently "faithful" to the true reasons for the model's answer.
Epistemic status: I'm pretty confident that the model described here is more accurate than my previous understanding. However, I wouldn't be very surprised if parts of this post are significantly wrong or misleading. Further experiments would be helpful for validating some of these hypotheses.
Until recently, I assumed that RL training would cause reasoning models to make their chains-of-thought as efficient as possible, so that every token is directly useful to the model. However, I now believe that by default,[1] reasoning models' CoTs will often include many "useless" tokens that don't help the model achieve [...]
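As a rough illustration of why outcome-based RL doesn't prune unhelpful tokens (my own toy sketch, not from the post): in a REINFORCE-style update, every token of a rewarded chain-of-thought has its log-probability scaled by the same sequence-level advantage, so tokens that merely co-occurred with a correct answer are reinforced just as much as the ones that actually produced it.

```python
# Toy sketch (illustrative, not from the article): a REINFORCE-style update
# on a sampled token sequence. Credit assignment is at the sequence level,
# so every token in a rewarded chain-of-thought gets the same push.

import torch

vocab_size, seq_len = 100, 8
logits = torch.zeros(seq_len, vocab_size, requires_grad=True)  # stand-in policy

# Pretend we sampled this CoT and the final answer earned reward 1.0.
sampled_tokens = torch.randint(0, vocab_size, (seq_len,))
reward, baseline = 1.0, 0.3
advantage = reward - baseline

log_probs = torch.log_softmax(logits, dim=-1)
token_log_probs = log_probs[torch.arange(seq_len), sampled_tokens]

# The policy-gradient loss weights every token's log-prob by the same advantage,
# whether or not that token contributed to getting the answer right.
loss = -(advantage * token_log_probs).sum()
loss.backward()

# Gradient at each sampled position is identical: a gradient-descent step raises
# the logit of every sampled token by the same amount.
print(logits.grad[torch.arange(seq_len), sampled_tokens])
```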
---
Outline:
(02:13) RL is dumber than I realized
(03:32) How might vestigial reasoning come about?
(06:28) Experiment: Demonstrating vestigial reasoning
(08:08) Reasoning is reinforced when it correlates with reward
(11:00) One more example: why does process supervision result in longer CoTs?
(15:53) Takeaways
The original text contained 8 footnotes which were omitted from this narration.
---
Narrated by TYPE III AUDIO.