

The paper proposes that the superior performance of Transformers in deep learning stems from an architectural bias towards mesa-optimization, a learned optimization process that runs within the forward pass. The authors reverse-engineer trained Transformers, show that the learned optimization algorithm can be repurposed for few-shot tasks, and propose a new self-attention layer that improves performance.
By Igor Melnyk
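To make the mesa-optimization idea concrete, below is a minimal numpy sketch (not code from the paper or the episode) of a standard construction in this line of work: a single linear self-attention layer, reading in-context (x, y) pairs, computes the same prediction as one gradient-descent step on an in-context least-squares objective. The variable names, the toy regression task, and the step size `eta` are illustrative assumptions.

```python
# Sketch: one linear self-attention layer == one gradient-descent step
# on an in-context least-squares loss (illustrative, assumed setup).
import numpy as np

rng = np.random.default_rng(0)

# In-context regression task: context tokens carry (x_i, y_i) pairs with y_i = W_true x_i.
d_in, d_out, n_ctx = 4, 2, 32
W_true = rng.normal(size=(d_out, d_in))
X = rng.normal(size=(n_ctx, d_in))      # context inputs
Y = X @ W_true.T                        # context targets
x_q = rng.normal(size=(d_in,))          # query input
eta = 1.0 / n_ctx                       # assumed step size

# (1) Mesa-optimization view: one GD step on L(W) = 0.5 * sum_i ||W x_i - y_i||^2,
#     starting from W = 0.
grad_at_zero = -(Y.T @ X)               # dL/dW evaluated at W = 0
W_one_step = -eta * grad_at_zero        # W after a single gradient step
pred_gd = W_one_step @ x_q

# (2) Linear self-attention view: keys = x_i, values = y_i, query = x_q, no softmax.
#     The attention output is eta * sum_i y_i (x_i . x_q).
attn_scores = X @ x_q                   # key-query dot products
pred_attn = eta * (Y.T @ attn_scores)   # score-weighted sum of values

# Both views produce the same prediction for the query token, which is the
# sense in which the forward pass implements an optimization step.
assert np.allclose(pred_gd, pred_attn)
print("one-step GD prediction:   ", pred_gd)
print("linear-attention prediction:", pred_attn)
```

Under these assumptions, the equivalence holds because the one-step weight matrix eta * Y.T @ X applied to x_q is exactly the score-weighted sum of values that linear attention computes; the paper's proposed self-attention layer goes further by solving the in-context objective more directly.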
