


An analysis of the scaling properties of Mixture of Experts (MoE) models introduces a new hyperparameter, granularity, and uses it to find training configurations under which MoE models are more computationally efficient than dense Transformers.
https://arxiv.org/abs/2402.07871
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
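
A minimal sketch of the granularity idea summarized above, assuming the definition implied by the linked abstract: at granularity G each expert is G times smaller than the dense feed-forward layer while G times more experts are activated per token, so active compute per token stays roughly constant and only the routing becomes finer-grained. Function and parameter names are illustrative, not taken from the paper's code.

def moe_granularity_config(d_model: int, d_ff: int, num_experts: int, top_k: int, granularity: int):
    """Return expert sizes and counts for a fine-grained MoE layer (illustrative only)."""
    expert_d_ff = d_ff // granularity          # each expert shrinks by a factor G
    total_experts = num_experts * granularity  # G times more experts overall
    active_experts = top_k * granularity       # G times more experts routed per token
    params_per_expert = 2 * d_model * expert_d_ff       # up- and down-projection weights
    return {
        "expert_d_ff": expert_d_ff,
        "total_experts": total_experts,
        "active_experts_per_token": active_experts,
        "total_expert_params": total_experts * params_per_expert,
        "active_params_per_token": active_experts * params_per_expert,  # independent of G
    }

if __name__ == "__main__":
    # Varying granularity changes routing resolution, not active parameters per token.
    for g in (1, 2, 4, 8):
        print(g, moe_granularity_config(d_model=1024, d_ff=4096, num_experts=8, top_k=2, granularity=g))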
By Igor Melnyk
