
How Well do LLMs Compress Their Own Chain-of-Thought? A Token Complexity Approach

- The paper studies the tradeoff between chain-of-thought length and model performance.
- It explores prompt-based compression strategies for large language models (LLMs).
- Token complexity measures the minimal number of tokens needed to solve a problem correctly (see the sketch after this list).
- LLMs adapt their response length to problem difficulty.
- Further compression gains require matching each response's token length to the problem's token complexity.
- Concise prompting can reduce response length while maintaining accuracy.
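As a rough illustration of the token-complexity idea above, the sketch below estimates a problem's token complexity as the smallest token count among responses that still answer correctly. This is a minimal sketch assuming per-response correctness labels are available; the `Response` fields and `token_complexity` helper are hypothetical names, not the paper's implementation.

```python
# Hypothetical sketch: estimate a problem's "token complexity" as the
# shortest chain-of-thought (in tokens) that still yields a correct answer.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Response:
    text: str
    num_tokens: int   # length of the chain-of-thought in tokens
    correct: bool     # whether the final answer was right

def token_complexity(responses: list[Response]) -> Optional[int]:
    """Return the minimal token count among correct responses,
    or None if no sampled response solved the problem."""
    correct_lengths = [r.num_tokens for r in responses if r.correct]
    return min(correct_lengths) if correct_lengths else None

# Example: responses of varying length sampled for the same problem.
samples = [
    Response("long chain-of-thought ...", num_tokens=480, correct=True),
    Response("terse reasoning ...", num_tokens=95, correct=True),
    Response("too short ...", num_tokens=40, correct=False),
]
print(token_complexity(samples))  # -> 95
```

Under this framing, a compression strategy is near-optimal when its responses land close to each problem's token complexity rather than using a fixed length for all problems.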