

This paper introduces a scheduling strategy for pipeline parallelism in distributed training that achieves zero pipeline bubbles, the idle time that limits existing schedules, by splitting the backward computation into separate input-gradient and weight-gradient passes. The authors also develop an algorithm that automatically finds optimal schedules and introduce a technique to bypass synchronizations during the optimizer step. Experimental evaluations show significant throughput improvements over baseline methods, and the implementation is open sourced.
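For context, the standard 1F1B pipeline schedule leaves every stage idle for part of each training step; the sketch below gives the textbook back-of-the-envelope estimate of that idle ("bubble") fraction, which is what the paper's schedule aims to drive to zero. This is an illustrative sketch only, not code from the paper's released implementation, and the function name and example numbers are made up.

```python
# Minimal sketch: estimate the idle ("bubble") fraction of a classic 1F1B
# pipeline schedule. Not from the paper's open-sourced code; the function
# name and the example stage/microbatch counts are illustrative assumptions.

def one_f_one_b_bubble_fraction(num_stages: int, num_microbatches: int) -> float:
    """Fraction of a training step spent idle under a 1F1B schedule.

    With p pipeline stages and m microbatches, each stage does m units of
    work over roughly m + p - 1 units of wall-clock time, so the idle
    (bubble) fraction is (p - 1) / (m + p - 1).
    """
    p, m = num_stages, num_microbatches
    return (p - 1) / (m + p - 1)


if __name__ == "__main__":
    # Example: 8 pipeline stages, 32 microbatches -> roughly 18% idle time.
    print(f"1F1B bubble fraction: {one_f_one_b_bubble_fraction(8, 32):.2%}")
    # The paper's schedule targets a near-zero bubble fraction by scheduling
    # the input-gradient and weight-gradient parts of the backward pass
    # independently.
```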
https://arxiv.org/abs/2401.10241
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
By Igor Melnyk
33 ratings
