


The paper explores the impact of parameter sparsity on the scaling behavior of Transformers trained on massive datasets. It identifies a scaling law describing the relationship between weight sparsity, the number of non-zero parameters, and the amount of training data. The findings provide insight into the optimal sparsity level for improving computational efficiency.
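
As a rough illustration of what such a law can look like, here is a minimal Python sketch of a sparsity-aware scaling law in the spirit of the paper: loss modeled as a function of weight sparsity S, non-zero parameter count N, and training tokens D. The functional form and every coefficient below are illustrative assumptions, not the paper's fitted equation or values.

```python
# Minimal sketch of a sparsity-aware scaling law in the spirit of the paper.
# ASSUMPTION: the functional form and all coefficients are illustrative
# placeholders, not the paper's fitted equation or values.

def sparse_scaling_loss(S, N, D,
                        a_S=2.0, b_S=0.5, c_S=1.0,  # hypothetical sparsity/capacity terms
                        b_N=0.3,                    # hypothetical non-zero-parameter exponent
                        a_D=1e10, b_D=0.3,          # hypothetical data-scaling terms
                        c=1.7):                     # hypothetical irreducible loss
    """Predicted loss given weight sparsity S in [0, 1), non-zero
    parameter count N, and training tokens D."""
    capacity_term = (a_S * (1.0 - S) ** b_S + c_S) * N ** (-b_N)
    data_term = (a_D / D) ** b_D
    return capacity_term + data_term + c

# Example: a dense model vs. a 75%-sparse model with the same number of
# non-zero parameters and the same training budget.
N_nonzero, D_tokens = 1e9, 2e10
print(sparse_scaling_loss(0.00, N_nonzero, D_tokens))
print(sparse_scaling_loss(0.75, N_nonzero, D_tokens))
```

Under a form like this, the sparsity level trades off against the non-zero parameter count and data budget, which is the kind of relationship the episode discusses.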
https://arxiv.org/abs/2309.08520
YouTube: https://www.youtube.com/@ArxivPapers
PODCASTS:
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
By Igor Melnyk
