

Deeper transformer language models tend to generalize better for compositional tasks, even when the total number of parameters is kept constant. The benefits of depth for generalization cannot be solely attributed to better performance on language modeling or in-distribution data.
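For intuition on the "constant parameter count" comparison, here is a minimal sketch (not the paper's exact construction) of how width must shrink as depth grows, assuming the standard approximation of ~12 * d_model^2 non-embedding parameters per layer with a 4x MLP expansion:

import math

def layer_params(d_model: int) -> int:
    """Approximate non-embedding parameters in one standard transformer layer."""
    attn = 4 * d_model * d_model   # Q, K, V, and output projections
    mlp = 8 * d_model * d_model    # two linear maps with a 4x hidden expansion
    return attn + mlp

def width_for_budget(depth: int, budget: int) -> int:
    """Pick d_model so that depth * layer_params(d_model) ~= budget."""
    return round(math.sqrt(budget / (12 * depth)))

budget = 12 * 24 * 1024**2  # e.g., the budget of a 24-layer, d_model=1024 model
for depth in (6, 12, 24, 48):
    d = width_for_budget(depth, budget)
    print(f"depth={depth:2d}  d_model={d:4d}  params~{depth * layer_params(d):,}")

Doubling depth at a fixed budget shrinks width only by a factor of sqrt(2), which is why depth can be traded against width over a wide range in such experiments.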
https://arxiv.org/abs/2310.19956
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
By Igor Melnyk
