The paper presents a series of long-context large language models (LLMs) that achieve effective context windows of up to 32,768 tokens. The models are built through continual pretraining and show consistent improvements on a range of language modeling tasks and research benchmarks. The paper also analyzes the key components and design choices behind the models.
https://arxiv.org/abs//2309.16039
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
By Igor Melnyk
33 ratings