


LLMs generate text painfully slowly, one low-information token at a time. Researchers just figured out how to compress 4 tokens into smart vectors and cut costs by 44%—with full code and proofs! Meanwhile, OpenAI drops product ads, not papers.
Sponsors
This episode is brought to you by Statistical Horizons
By Francesco Gadaleta · 4.2 (7,272 ratings)
