
What if the biggest bottleneck in AI agents wasn't reasoning power, but memory management?
In this episode, we explore a fascinating new framework called MIA, the Memory Intelligence Agent, which reimagines how AI research agents store, compress, and reuse their past experiences. Instead of hoarding every search trace into an ever-growing context window, MIA separates memory into a Manager, a Planner, and an Executor, each with a distinct role. The result: a 7-billion parameter model that outperforms GPT-4o on complex research tasks, and even boosts GPT-5.4 performance by up to 9%. We unpack why "keeping everything" is a trap, and how forgetting strategically might be the real key to smarter AI.
Inspired by the work of Jingyang Qiao, Weicheng Meng, Yu Cheng, and colleagues at East China Normal University, this episode was created using Google's NotebookLM.
Read the original paper here: https://arxiv.org/pdf/2604.04503
By Anlie Arnaudy, Daniel Herbera and Guillaume Fournier