
The paper proposes a method for accelerating large-scale pre-training using model-based data selection policies. By selecting training data with a learned scoring model instead of sampling uniformly, the method reduces the computation needed to reach the same performance as uniformly trained models. The approach is shown to be effective across datasets and tasks, and also yields improvements in multimodal transfer and across pretraining regimes.
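For a sense of how a model-based selection policy can work in practice, here is a minimal sketch of online batch selection with a learnability-style score (learner loss minus reference-model loss). The function names, the cross-entropy setup, and the keep fraction are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch: score a large candidate "super-batch" and keep only
# the most informative examples for the gradient update.
import torch
import torch.nn.functional as F

def select_batch(learner, reference, inputs, labels, keep_fraction=0.25):
    """Return the subset of (inputs, labels) the learner gains most from."""
    with torch.no_grad():
        # Per-example loss under the current learner (no reduction).
        learner_loss = F.cross_entropy(learner(inputs), labels, reduction="none")
        # Per-example loss under a small, cheap reference model.
        reference_loss = F.cross_entropy(reference(inputs), labels, reduction="none")
    # Learnability score: hard for the learner but easy for the reference,
    # which favors informative examples over noisy or already-learned ones.
    scores = learner_loss - reference_loss
    k = max(1, int(keep_fraction * inputs.shape[0]))
    idx = torch.topk(scores, k).indices
    return inputs[idx], labels[idx]
```

Training then runs the usual update on the selected subset only, which is where the compute saving comes from.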
https://arxiv.org/abs/2312.05328
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
By Igor Melnyk