
TL;DR: I claim that many reasoning patterns that appear in chains-of-thought are not actually used by the model to come to its answer, and can be more accurately thought of as historical artifacts of training. This can be true even for CoTs that are apparently "faithful" to the true reasons for the model's answer.
Epistemic status: I'm pretty confident that the model described here is more accurate than my previous understanding. However, I wouldn't be very surprised if parts of this post are significantly wrong or misleading. Further experiments would be helpful for validating some of these hypotheses.
Until recently, I assumed that RL training would cause reasoning models to make their chains-of-thought as efficient as possible, so that every token is directly useful to the model. However, I now believe that by default,[1] reasoning models' CoTs will often include many "useless" tokens that don't help the model achieve [...]
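As a rough illustration of the mechanism the outline expands on under "Reasoning is reinforced when it correlates with reward" (my own sketch, not from the post): with an outcome-only, REINFORCE-style update, every token in a sampled chain-of-thought shares the same scalar advantage, so reasoning patterns that merely co-occur with correct answers get pushed up alongside the tokens doing real work. The function and token strings below are purely illustrative.

```python
# Toy sketch (not from the original post): outcome-only policy-gradient credit
# assignment over a chain-of-thought. Every token in a rewarded trajectory gets
# the same advantage, so "vestigial" tokens that merely co-occur with correct
# answers are reinforced alongside the tokens that actually did the work.

def reinforce_token_weights(cot_tokens, reward, baseline=0.0):
    """Return the per-token weight applied to each token's log-prob gradient.

    With outcome-only RL there is no per-token credit: the weight is the same
    scalar (reward - baseline) for every token, useful or not.
    """
    advantage = reward - baseline
    return [(tok, advantage) for tok in cot_tokens]

# A CoT containing a genuinely load-bearing step and a vestigial flourish.
cot = ["Let's", "double-check", "the", "units", "...", "so", "x", "=", "42"]
for tok, weight in reinforce_token_weights(cot, reward=1.0):
    print(f"{tok!r}: gradient weight {weight:+.1f}")
# Every token, including the boilerplate "double-check" pattern, is pushed up
# equally whenever the final answer happens to be correct.
```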
---
Outline:
(02:13) RL is dumber than I realized
(03:32) How might vestigial reasoning come about?
(06:28) Experiment: Demonstrating vestigial reasoning
(08:08) Reasoning is reinforced when it correlates with reward
(11:00) One more example: why does process supervision result in longer CoTs?
(15:53) Takeaways
The original text contained 8 footnotes which were omitted from this narration.
---
Narrated by TYPE III AUDIO.