


In this paper we propose to study generalization of neural networks on small algorithmically generated datasets. In this setting, questions about data efficiency, memorization, generalization, and speed of learning can be studied in great detail. In some situations we show that neural networks learn through a process of “grokking” a pattern in the data, improving generalization performance from random chance level to perfect generalization, and that this improvement in generalization can happen well past the point of overfitting.
2022: Alethea Power, Yuri Burda, Harrison Edwards, I. Babuschkin, Vedant Misra
https://arxiv.org/pdf/2201.02177v1.pdf
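
The paper's "small algorithmically generated datasets" are tables of binary operations over a small finite set. Below is an illustrative sketch (not the authors' code) of building one such dataset, the table of (a + b) mod p, and splitting it into train/validation sets; the modulus, train fraction, and use of PyTorch tensors here are hypothetical choices for illustration.

```python
# Sketch: enumerate the full operation table (a, b) -> (a + b) mod p,
# then split it into train/validation subsets, as in the paper's setup.
import itertools
import random

import torch

def modular_addition_dataset(p: int = 97, train_frac: float = 0.5, seed: int = 0):
    """Enumerate all (a, b) pairs for the operation (a + b) mod p and split them."""
    pairs = list(itertools.product(range(p), range(p)))
    random.Random(seed).shuffle(pairs)
    n_train = int(train_frac * len(pairs))

    def to_tensors(subset):
        x = torch.tensor(subset)         # shape (N, 2): the two operands
        y = (x[:, 0] + x[:, 1]) % p      # shape (N,): the answer token
        return x, y

    return to_tensors(pairs[:n_train]), to_tensors(pairs[n_train:])

(train_x, train_y), (val_x, val_y) = modular_addition_dataset()
print(train_x.shape, val_x.shape)  # torch.Size([4704, 2]) torch.Size([4705, 2])
```

Training a small network on the training split while tracking accuracy on the held-out half of the table is where the grokking phenomenon appears: training accuracy saturates early, and only much later, well past the point of overfitting, does validation accuracy jump from chance level to near-perfect generalization.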