
The paper addresses the failure of Transformer-based Large Language Models (LLMs) to generalize to sequences longer than those seen during training. The authors propose a solution called "LM-Infinite", which combines an attention mask with a distance limit and allows LLMs to generate fluent text and carry out downstream tasks on longer contexts. The approach is computationally efficient and shows consistent fluency and generation quality. The abstract does not explicitly state whether any quality must be sacrificed, and its hypothesis that middle tokens cause the failure is not proven there.
https://arxiv.org/abs/2308.16137
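A minimal sketch of the general idea described above: restrict each token's attention to a few leading tokens plus a recent window, and cap relative distances at the window size. The function names and the n_global/window values here are illustrative assumptions, not the paper's exact implementation.

    import numpy as np

    def lambda_shaped_mask(seq_len: int, n_global: int = 4, window: int = 2048) -> np.ndarray:
        # Boolean mask (True = may attend): causal attention restricted to the
        # first `n_global` tokens plus a sliding window of recent tokens.
        i = np.arange(seq_len)[:, None]   # query positions
        j = np.arange(seq_len)[None, :]   # key positions
        causal = j <= i
        leading = j < n_global            # earliest tokens stay visible everywhere
        local = (i - j) < window          # tokens within the recent window
        return causal & (leading | local)

    def capped_distances(seq_len: int, window: int = 2048) -> np.ndarray:
        # Relative distances clipped to `window`, so positions farther apart than
        # anything seen in training reuse the largest in-distribution distance.
        i = np.arange(seq_len)[:, None]
        j = np.arange(seq_len)[None, :]
        return np.clip(i - j, 0, window)

For example, lambda_shaped_mask(8, n_global=2, window=3) shows the mask's shape on a toy sequence: every position attends to the first two tokens and to its three most recent predecessors.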
YouTube: https://www.youtube.com/@ArxivPapers
PODCASTS:
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
By Igor Melnyk
