AI Illuminated

nGPT: Normalized Transformer with Representation Learning on the Hypersphere


[00:00] Introduction

[00:30] Consistent unit norm normalization in nGPT

[01:08] Mathematical mechanism behind faster convergence

[01:52] Elimination of weight decay in nGPT

[02:21] Role of learnable eigen learning rates in optimization

[03:04] Discussion on training speedup vs. per-step computation time

[03:46] Condition number differences between GPT and nGPT

[04:18] Ablation studies on scaling factors

[04:53] nGPT's relationship to Riemannian optimization

[05:27] Future research

[06:02] Takeaways for practitioners


Authors: Ilya Loshchilov, Cheng-Ping Hsieh, Simeng Sun, Boris Ginsburg

Affiliations: NVIDIA

Abstract: We propose a novel neural network architecture, the normalized Transformer (nGPT) with representation learning on the hypersphere. In nGPT, all vectors forming the embeddings, MLP, attention matrices and hidden states are unit norm normalized. The input stream of tokens travels on the surface of a hypersphere, with each layer contributing a displacement towards the target output predictions. These displacements are defined by the MLP and attention blocks, whose vector components also reside on the same hypersphere. Experiments show that nGPT learns much faster, reducing the number of training steps required to achieve the same accuracy by a factor of 4 to 20, depending on the sequence length.
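The abstract describes each layer as contributing a displacement on the hypersphere, with hidden states renormalized to unit norm after every update. Below is a minimal sketch of that hidden-state update in PyTorch, assuming a standard attention/MLP block interface; the parameter names alpha_attn and alpha_mlp (for the learnable per-dimension step sizes the paper calls "eigen learning rates") and their initial values are illustrative assumptions, and the sketch omits details such as causal masking and the normalization of the weight matrices themselves.

import torch
import torch.nn as nn

def l2_normalize(x, dim=-1, eps=1e-8):
    # Project vectors back onto the unit hypersphere.
    return x / (x.norm(dim=dim, keepdim=True) + eps)

class NormalizedBlock(nn.Module):
    # Illustrative only: a single nGPT-style layer where the attention and MLP
    # outputs are normalized and blended into the hidden state via learnable
    # per-dimension step sizes ("eigen learning rates"), then renormalized.
    def __init__(self, d_model, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.SiLU(),
            nn.Linear(4 * d_model, d_model),
        )
        # Hypothetical parameter names; the initial value is a guess, not the paper's setting.
        self.alpha_attn = nn.Parameter(torch.full((d_model,), 0.05))
        self.alpha_mlp = nn.Parameter(torch.full((d_model,), 0.05))

    def forward(self, h):
        # h: (batch, seq, d_model), assumed to already lie on the unit hypersphere.
        h_attn = l2_normalize(self.attn(h, h, h, need_weights=False)[0])
        h = l2_normalize(h + self.alpha_attn * (h_attn - h))
        h_mlp = l2_normalize(self.mlp(h))
        h = l2_normalize(h + self.alpha_mlp * (h_mlp - h))
        return h

if __name__ == "__main__":
    x = l2_normalize(torch.randn(2, 16, 64))
    block = NormalizedBlock(64)
    y = block(x)
    print(y.norm(dim=-1))  # each token vector stays (approximately) unit norm

The key design point the episode highlights is that renormalizing after every displacement keeps all hidden states on the hypersphere, which removes the need for weight decay and changes the conditioning of the optimization problem.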





AI Illuminated, by The AI Illuminators