

We explore Google's Titans and the MIRAS framework, a new paradigm in sequence modeling that replaces static context compression with active test-time learning. We discuss how Titans use deep neural memory modules that update their parameters on the fly via a gradient-based "surprise metric," prioritizing unexpected information for long-term storage. We cover the theoretical MIRAS blueprint, which unifies sequence models through attentional bias and retention gates and yields robust new architectures like Moneta, Yaad, and Memora. We discuss how these models scale effectively to context windows exceeding 2 million tokens, outperforming GPT-4 and Mamba on complex long-context reasoning tasks.
By Enoch H. Kang
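
To make the "surprise metric" concrete, here is a minimal numerical sketch rather than the papers' actual implementation: it assumes a plain linear memory (a single matrix W) standing in for Titans' deep MLP memory, and the names and constants (update_memory, eta, theta, alpha) are illustrative. The idea is that the gradient of a reconstruction loss measures how unexpected the current token is; momentum accumulates past surprise, and a decay term plays the role of a retention (forgetting) gate.

import numpy as np

# Sketch of a Titans-style test-time memory update with a linear memory
# M(k) = W @ k. In the papers the memory is a deep MLP and the gates are
# data-dependent; fixed scalars are used here purely for illustration.
rng = np.random.default_rng(0)
d = 16                                # key/value dimension (illustrative)
W = np.zeros((d, d))                  # memory parameters, updated at test time
S = np.zeros_like(W)                  # running "past surprise" (momentum)
eta, theta, alpha = 0.9, 0.01, 0.01   # momentum, step size, forgetting gate

def update_memory(W, S, k, v):
    # Prediction error for the current (key, value) pair: a large error
    # means the pair is "surprising" and gets written in more strongly.
    err = W @ k - v
    grad = 2.0 * np.outer(err, k)     # gradient of ||W k - v||^2 w.r.t. W
    S = eta * S - theta * grad        # momentum over past surprise
    W = (1.0 - alpha) * W + S         # decay old memories, add the new write
    return W, S

# Stream of random (key, value) pairs standing in for projected tokens.
for _ in range(500):
    k, v = rng.standard_normal(d), rng.standard_normal(d)
    W, S = update_memory(W, S, k, v)

v_hat = W @ k                         # read the last value back from memory

Roughly speaking, Titans deepen the memory and predict these gates per token, while MIRAS additionally varies the loss (attentional bias) and the retention term, which is how variants such as Moneta, Yaad, and Memora arise.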