


Investigating whether small language models can improve large models by pruning pretraining datasets based on perplexity, leading to better downstream task performance and fewer pretraining steps.
https://arxiv.org/abs/2405.20541
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
By Igor Melnyk
5 · 33 ratings
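
As a rough illustration of the episode's topic (not the paper's exact recipe), here is a minimal Python sketch of perplexity-based data pruning with a small reference model: score each document's perplexity under a small LM, then keep a subset by score before pretraining the large model. The model name ("gpt2" as a stand-in small model), keep_fraction, and the "low"/"middle"/"high" selection strategies are illustrative assumptions, not values from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in small reference model (assumption; the paper's reference models differ).
MODEL_NAME = "gpt2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    """Per-token perplexity of `text` under the small reference model."""
    ids = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024).input_ids
    loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
    return torch.exp(loss).item()

def prune_by_perplexity(docs, keep_fraction=0.5, keep="middle"):
    """Score docs with the small model, then keep a perplexity-selected subset."""
    scored = sorted(docs, key=perplexity)          # ascending perplexity
    n = int(len(scored) * keep_fraction)
    if keep == "low":                              # keep the most predictable docs
        return scored[:n]
    if keep == "high":                             # keep the most surprising docs
        return scored[-n:]
    mid = len(scored) // 2                         # keep docs around the median
    return scored[mid - n // 2 : mid + (n - n // 2)]
```

Usage would be as simple as `pruned = prune_by_perplexity(corpus, keep_fraction=0.3)`, after which the large model is pretrained only on `pruned`. Which selection strategy works best is an empirical question the paper investigates.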
