
This paper studies grokking in deep learning, linking delayed generalization to a numerical failure mode it calls Softmax Collapse, and proposes new activation functions and training algorithms that enable grokking without regularization (a minimal numerical sketch of the collapse follows the episode links below).
https://arxiv.org/abs/2501.04697
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
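As context for the episode, the core failure mode named in the abstract can be illustrated in a few lines. The sketch below is an illustrative assumption based on that description, not the paper's code: once a network's correct-class logits grow large enough, float32 softmax saturates to exactly 1.0, the cross-entropy gradient underflows to zero, and training stalls before generalization.

```python
# Minimal sketch of the "Softmax Collapse" idea (assumption: plain NumPy,
# float32, one-hot cross-entropy; mirrors the abstract's description only).
import numpy as np

def softmax(z):
    z = z - z.max()          # standard max-subtraction for numerical stability
    e = np.exp(z)
    return e / e.sum()

y = np.array([1.0, 0.0, 0.0], dtype=np.float32)   # one-hot target
for scale in (1.0, 10.0, 1000.0):
    # Same (already correct) prediction, at growing logit scale.
    logits = scale * np.array([2.0, 1.0, 0.5], dtype=np.float32)
    p = softmax(logits)
    grad = p - y             # gradient of cross-entropy w.r.t. the logits
    print(f"scale={scale:6.0f}  p={p}  max|grad|={np.abs(grad).max():.3e}")

# At scale=1000 the off-class exponentials underflow in float32, p becomes
# exactly [1, 0, 0], and the gradient is exactly zero: learning stops even
# though the loss is not yet minimized in exact arithmetic.
```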
By Igor Melnyk
