

Summary: Current AI systems possess superhuman memory in two forms: parametric knowledge from training, and context windows holding hundreds of pages. Yet no pathway connects the two; everything learned in-context vanishes when the conversation ends, a computational form of anterograde amnesia. Recent research suggests weight-based continual learning may be closer than commonly assumed. If these techniques scale, and no other major obstacle emerges, the path to AGI may be shorter than expected, with serious implications for timelines and for technical alignment research that assumes frozen weights.
Intro
Ask researchers what's missing on the path to AGI, and continual learning frequently tops the list. It is the first reason Dwarkesh Patel gave for having longer AGI timelines than many at frontier labs. The ability to learn from experience, to accumulate knowledge over time, is how humans are able to perform virtually all their intellectual feats, and yet current AI systems, for all their impressive capabilities, simply cannot do it.
The Paradox of AI Memory: Superhuman Memory, Twice Over
What makes this puzzling is that large language models already possess memory capabilities far beyond human reach, in two distinct ways.
First, parametric memory: the knowledge encoded in billions of weights during training. [...]
---
Outline:
(00:46) Intro
(01:16) The Paradox of AI Memory: Superhuman Memory, Twice Over
(04:14) The Scaffolding Approach
(06:45) Is This Enough?
(08:14) Weight-based continual learning
(08:45) Titans
(14:02) Nested Learning / Hope
(18:13) Experimental Results
(22:51) Near-Term Applications
(26:10) Timelines implications
(27:41) Safety implications
(29:48) Conclusion
The original text contained 4 footnotes which were omitted from this narration.
---
Narrated by TYPE III AUDIO.
