Best AI papers explained

Beyond the Transformer: Titans, MIRAS, and the Future of Infinite Context


We explore Google's Titans and the MIRAS framework, a new paradigm in sequence modeling that replaces static context compression with active test-time learning. We discuss how Titans uses a deep neural memory module that updates its parameters on the fly via a gradient-based "surprise" metric, prioritizing unexpected information for long-term storage. We then cover the theoretical MIRAS blueprint, which unifies sequence models through the choice of attentional bias and retention gate, and the robust new architectures it yields: Moneta, Yaad, and Memora. Finally, we discuss how these models scale to context windows beyond 2 million tokens, outperforming GPT-4 and Mamba on complex long-context reasoning tasks.
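To make the mechanism concrete, here is a minimal sketch (not from the paper or the episode) of a Titans-style test-time memory write in PyTorch. The memory is a single linear map standing in for a deep MLP, and the names and hyperparameters (lr, momentum, forget) are illustrative assumptions: the "surprise" is the gradient of an associative recall loss, accumulated with momentum and dampened by a retention (forgetting) gate.

    # Minimal sketch, assuming a linear memory and illustrative hyperparameters;
    # not the authors' implementation.
    import torch

    class NeuralMemory(torch.nn.Module):
        def __init__(self, dim: int):
            super().__init__()
            # Titans uses a deep MLP memory; one linear map keeps the sketch small.
            self.mem = torch.nn.Linear(dim, dim, bias=False)

        def forward(self, key: torch.Tensor) -> torch.Tensor:
            return self.mem(key)

    def test_time_update(memory, key, value, state, lr=0.1, momentum=0.9, forget=0.01):
        """One memory write: 'surprise' = gradient of the associative recall loss."""
        pred = memory(key)
        loss = ((pred - value) ** 2).mean()                  # recall loss on (key, value)
        grads = torch.autograd.grad(loss, list(memory.parameters()))
        with torch.no_grad():
            for p, g, s in zip(memory.parameters(), grads, state):
                s.mul_(momentum).add_(g, alpha=-lr)          # momentum over past surprise
                p.mul_(1 - forget).add_(s)                   # retention gate, then write
        return loss.item()

    dim = 16
    memory = NeuralMemory(dim)
    state = [torch.zeros_like(p) for p in memory.parameters()]
    k, v = torch.randn(dim), torch.randn(dim)
    for step in range(5):
        # The loss tends to shrink as the (k, v) pair is memorized.
        print(test_time_update(memory, k, v, state))

In MIRAS terms, the squared-error recall loss here is one choice of attentional bias and the decay factor one choice of retention gate; architectures like Moneta, Yaad, and Memora arise from swapping in different objectives and gates for these two components.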

By Enoch H. Kang