
Deeper transformer language models tend to generalize better for compositional tasks, even when the total number of parameters is kept constant. The benefits of depth for generalization cannot be solely attributed to better performance on language modeling or in-distribution data.
https://arxiv.org/abs/2310.19956
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
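As a rough illustration of what "keeping the total number of parameters constant" means here (this sketch is not from the paper; it uses the standard ~12·d² non-embedding parameters per transformer block and ignores embeddings, biases, and layer norms), the snippet below shows how model width must shrink as depth grows under a fixed parameter budget:

```python
# Hedged sketch: trade depth for width while holding a transformer's
# non-embedding parameter count roughly fixed.
# Assumption: a standard block costs ~4*d^2 (attention) + 8*d^2 (MLP) = 12*d^2.

def params_per_layer(d_model: int) -> int:
    """Approximate parameter count of one transformer block of width d_model."""
    return 12 * d_model * d_model

def width_for_budget(total_params: int, n_layers: int) -> int:
    """Width that spends roughly `total_params` across `n_layers` layers."""
    return int((total_params / (12 * n_layers)) ** 0.5)

if __name__ == "__main__":
    budget = 124_000_000  # hypothetical non-embedding parameter budget
    for depth in (6, 12, 24, 48):
        width = width_for_budget(budget, depth)
        total = depth * params_per_layer(width)
        print(f"depth={depth:3d}  width~{width:5d}  params~{total:,}")
```

Under this kind of iso-parameter comparison, the deeper (and correspondingly narrower) configurations are the ones reported to generalize better on compositional tasks.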
By Igor Melnyk · 5 (33 ratings)