
This paper compares retrieval augmentation and long-context-window extension as methods for improving the performance of large language models (LLMs) on downstream tasks. The study finds that retrieval augmentation with a 4K context window can match the performance of a finetuned LLM with a 16K context window while using much less computation at inference. Retrieval also significantly improves LLM performance regardless of context window size. The best model, a retrieval-augmented LLM with a 32K context window, outperforms the other models evaluated on long-context tasks.
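To make the setup concrete, here is a minimal Python sketch of retrieval augmentation as described above: split a long document into chunks, rank the chunks against the question, and pack only the top-ranked ones into a prompt that fits a 4K-token window. This is an illustrative toy, not the paper's implementation; the study uses learned dense retrievers and large finetuned LLMs, whereas this sketch scores chunks with plain term-frequency cosine similarity, and long_document and some_llm are placeholder names.

import math
from collections import Counter

def chunk(text, size=300):
    # Split a long document into fixed-size word chunks.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def tf(text):
    # Term-frequency vector; a crude stand-in for a learned dense embedding.
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[w] * b[w] for w in a.keys() & b.keys())
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def build_prompt(question, document, k=5, budget_words=3000):
    # Rank chunks by similarity to the question and keep the top-k that
    # fit the word budget, a rough proxy for a 4K-token context window.
    q = tf(question)
    ranked = sorted(chunk(document), key=lambda c: cosine(q, tf(c)), reverse=True)
    kept, used = [], 0
    for c in ranked[:k]:
        n = len(c.split())
        if used + n > budget_words:
            break
        kept.append(c)
        used += n
    return "Context:\n" + "\n\n".join(kept) + f"\n\nQuestion: {question}\nAnswer:"

# Usage (placeholders): any 4K-context model can consume the packed prompt.
# prompt = build_prompt("What does the study conclude?", long_document)
# answer = some_llm.generate(prompt)

The point of the sketch is the budget logic: retrieval lets a short-window model see only the most relevant slices of a document that would otherwise overflow its context.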
https://arxiv.org/abs/2310.03025
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
By Igor Melnyk