

In this paper, the authors propose in-context pretraining for language models, where models are pretrained on sequences of related documents to encourage reasoning across document boundaries. They introduce algorithms for finding related documents and constructing coherent input contexts, and show that in-context pretraining improves performance on various tasks.
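To make the idea concrete, below is a minimal sketch (not the paper's released code) of how related documents might be chained into pretraining contexts: documents are assumed to be pre-embedded, the nearest unvisited neighbor of the last document is appended greedily, and a new context starts once a token budget is exhausted. Function names and the greedy strategy here are illustrative simplifications of the paper's retrieval-and-traversal approach.

```python
import numpy as np

def build_contexts(doc_embeddings, doc_token_counts, max_tokens=8192):
    """Greedily chain similar documents into pretraining contexts.

    Simplified illustration: starting from an arbitrary document,
    repeatedly append its most similar unvisited neighbor (cosine
    similarity) and cut a new context when the token budget runs out.
    The paper uses approximate nearest-neighbor retrieval and a
    graph-traversal formulation; this sketch only mimics the intent.
    """
    emb = doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
    unvisited = set(range(len(emb)))
    contexts, current, budget = [], [], max_tokens

    cur = unvisited.pop()                      # arbitrary starting document
    current.append(cur)
    budget -= doc_token_counts[cur]

    while unvisited:
        # nearest unvisited neighbor of the last document in the chain
        cand = np.array(sorted(unvisited))
        sims = emb[cand] @ emb[cur]
        nxt = int(cand[np.argmax(sims)])
        unvisited.remove(nxt)

        if doc_token_counts[nxt] > budget:     # context full: start a new one
            contexts.append(current)
            current, budget = [], max_tokens

        current.append(nxt)
        budget -= doc_token_counts[nxt]
        cur = nxt

    if current:
        contexts.append(current)
    return contexts                            # lists of document indices

# Toy usage with random "embeddings" and per-document token counts.
rng = np.random.default_rng(0)
print(build_contexts(rng.normal(size=(6, 32)), [3000, 2500, 4000, 1000, 2000, 3500]))
```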
https://arxiv.org/abs//2310.10638
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
By Igor Melnyk
