Papers Read on AI

Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets

In this paper we propose to study generalization of neural networks on small algorithmically generated datasets. In this setting, questions about data efficiency, memorization, generalization, and speed of learning can be studied in great detail. In some situations we show that neural networks learn through a process of “grokking” a pattern in the data, improving generalization performance from random chance level to perfect generalization, and that this improvement in generalization can happen well past the point of overfitting.

2022: Alethea Power, Yuri Burda, Harrison Edwards, I. Babuschkin, Vedant Misra

https://arxiv.org/pdf/2201.02177v1.pdf
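The "small algorithmically generated datasets" in the abstract are full tables of a binary operation, with a fraction of the table held out for validation. A minimal sketch of such a dataset, using modular addition as the operation (the modulus, split fraction, and function name here are illustrative choices, not the paper's exact configuration):

```python
import itertools
import random

def make_modular_addition_dataset(p=97, train_frac=0.5, seed=0):
    """Enumerate every equation a + b = c (mod p) and split the full
    table into train/validation subsets. p and train_frac are example
    values, not taken from the paper."""
    pairs = [((a, b), (a + b) % p)
             for a, b in itertools.product(range(p), repeat=2)]
    rng = random.Random(seed)
    rng.shuffle(pairs)
    n_train = int(train_frac * len(pairs))
    return pairs[:n_train], pairs[n_train:]

train, val = make_modular_addition_dataset()
print(len(train), len(val))  # 4704 4705 (the full 97x97 table, split in half)
```

Because the operation table is finite and fully enumerable, "perfect generalization" has a precise meaning here: the model answers every held-out equation correctly.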

Papers Read on AI, by Rob

Rating: 3.7 (3 ratings)