

Increasing Transformer model size doesn't always improve performance. A theoretical framework using associative memories and Hopfield networks explains memorization and performance dynamics in transformer-based language models.
https://arxiv.org/abs/2405.08707
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
By Igor Melnyk
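
For background on the core idea, here is a minimal sketch of a classical binary Hopfield network acting as an associative memory: patterns are stored with a Hebbian rule and retrieved from corrupted cues as fixed points of a sign-update dynamic. This is generic textbook code in Python/NumPy, not the paper's framework (the paper analyzes transformer layers via related associative-memory models), and all names below are illustrative.

import numpy as np

# Classical Hopfield associative memory: store patterns with the Hebbian
# outer-product rule, then retrieve them from noisy probes. A textbook
# illustration of "memorization as attractors", not the paper's model.

rng = np.random.default_rng(0)

def store(patterns):
    # W = (1/n) * sum_mu x_mu x_mu^T, with self-connections zeroed.
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, steps=50):
    # Synchronous sign updates until a fixed point or the step limit.
    state = probe.copy()
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1.0
        if np.array_equal(new, state):
            break
        state = new
    return state

n, num_patterns = 64, 5  # load 5/64 is well under the ~0.14*n capacity limit
patterns = rng.choice([-1.0, 1.0], size=(num_patterns, n))
W = store(patterns)

# Flip 15% of one stored pattern's bits, then recover it from memory.
probe = patterns[0].copy()
flip = rng.choice(n, size=int(0.15 * n), replace=False)
probe[flip] *= -1
print("recovered:", np.array_equal(recall(W, probe), patterns[0]))

In this analogy, storing more patterns than the network's capacity degrades retrieval rather than improving it, which mirrors the episode's point that simply scaling up a transformer does not guarantee better memorization or performance.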
