


One of my favorite AI papers is “Let’s Think Dot by Dot”, which finds that LLMs can use meaningless filler tokens (like “.”) to improve their performance. I was overestimating the implications until recently[1], and I think other people might be too.
The paper finds that LLMs can be trained to use filler tokens to improve their performance on parallel reasoning tasks[2]. This has been compared to chain of thought, but CoT lets models do more sequential reasoning, which is more powerful[3]. I now think this paper should be taken as evidence against LLMs’ ability to perform long-term reasoning[4] in secret[5].
This means that if a problem can be broken down into sub-problems, but the model isn’t wide enough to process it in one pass, the model can instead parallelize across multiple filler token positions and then combine the results. However, if the problem requires step-by-step thinking and the model isn’t deep enough, filler tokens don’t help. In comparison, Chain of Thought helps in both situations.
My metaphor for this is that filler tokens let a model dynamically widen its layers, while CoT lets it dynamically add layers.
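The two prompt shapes being contrasted can be sketched as follows. This is an illustrative sketch only: the paper trains models on filler-token data (off-the-shelf LLMs do not benefit from fillers without such training), and the helper names and example task here are hypothetical.

```python
# Sketch of the two prompt shapes (illustrative; the paper *trains* models
# on filler-token formats rather than prompting off-the-shelf LLMs).

def filler_prompt(question: str, n_fillers: int = 30) -> str:
    """Width-style compute: meaningless '.' tokens give the model extra
    token positions to parallelize sub-problems across, but carry no
    content the model can read back as intermediate results."""
    return question + " " + ". " * n_fillers + "Answer:"

def cot_prompt(question: str) -> str:
    """Depth-style compute: the model writes intermediate steps, so later
    tokens can condition on earlier *results*, enabling sequential
    reasoning."""
    return question + " Let's think step by step."

# Hypothetical example in the spirit of the paper's 3SUM-style tasks.
q = "Is there a triple in [1, 4, 6, 9] whose sum is 0 mod 10?"
print(filler_prompt(q, n_fillers=5))
print(cot_prompt(q))
```

The filler prompt only adds positions to compute over; the CoT prompt adds positions whose contents feed back into later steps, which is why it helps with depth-limited problems as well.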
The problem
Every layer [...]
The original text contained 6 footnotes which were omitted from this narration.
---
Narrated by TYPE III AUDIO.
---
By LessWrong
