

This paper critiques the use of a single exponential moving average (EMA) of past gradients in momentum-based optimizers: one EMA cannot both weight recent gradients heavily and keep much older gradients relevant. It proposes AdEMAMix, which mixes a fast EMA with a slow one (sketched below), yielding faster convergence and less forgetting of training data.
https://arxiv.org/abs/2409.03137
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
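
A minimal NumPy sketch of the kind of update the summary describes: an Adam-style step whose momentum term mixes a fast EMA and a slow EMA of the gradients. The hyperparameter names (beta1, beta3, alpha) and the exact mixing rule are illustrative assumptions based on the paper's general description, not code taken from it.

```python
import numpy as np

def ademamix_step(theta, grad, state, lr=1e-3, beta1=0.9, beta2=0.999,
                  beta3=0.9999, alpha=5.0, eps=1e-8):
    """One AdEMAMix-style update (illustrative sketch, not reference code).

    Keeps two EMAs of the gradient: a fast one (beta1), as in Adam, and a
    slow one (beta3) that retains information from much older gradients.
    Both are mixed in the numerator of an Adam-like step.
    """
    state["t"] += 1
    t = state["t"]

    # Fast EMA of gradients (standard Adam momentum) with bias correction.
    state["m1"] = beta1 * state["m1"] + (1 - beta1) * grad
    m1_hat = state["m1"] / (1 - beta1 ** t)

    # Slow EMA of gradients; keeps old gradients relevant for many more steps.
    state["m2"] = beta3 * state["m2"] + (1 - beta3) * grad

    # Second-moment EMA, as in Adam, with bias correction.
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2
    v_hat = state["v"] / (1 - beta2 ** t)

    # Mix the two momentum terms; alpha weights the slow EMA.
    return theta - lr * (m1_hat + alpha * state["m2"]) / (np.sqrt(v_hat) + eps)

# Toy usage on f(theta) = 0.5 * ||theta||^2, whose gradient is theta.
theta = np.ones(3)
state = {"t": 0, "m1": np.zeros(3), "m2": np.zeros(3), "v": np.zeros(3)}
for _ in range(1000):
    theta = ademamix_step(theta, theta, state)
print(theta)  # ends up close to zero
```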
By Igor Melnyk
