Explore the synergy between long context models and Retrieval Augmented Generation (RAG) in this episode of Release Notes. Join Google DeepMind's Nikolay Savinov as he discusses the importance of large context windows, how they enable AI agents, and what's next in the field.
Chapters:
0:52 Introduction & defining tokens
5:27 Context window importance
9:53 RAG vs. Long Context
14:19 Scaling beyond 2 million tokens
18:41 Long context improvements since 1.5 Pro release
23:26 Difficulty of attending to the whole context
28:37 Evaluating long context: beyond needle-in-a-haystack
33:41 Integrating long context research
34:57 Reasoning and long outputs
40:54 Tips for using long context
48:51 The future of long context: near-perfect recall and cost reduction
54:42 The role of infrastructure
56:15 Long-context and agents
By Google AI