
Much of the scientific process involves searching. But rather than continue to rely on the luck of discovery, Google DeepMind has engineered a more efficient AI agent that mines complex spaces to facilitate scientific breakthroughs. Sarah Guo speaks with Pushmeet Kohli, VP of Science and Strategic Initiatives, and research scientist Matej Balog at Google DeepMind about AlphaEvolve, an autonomous coding agent they developed that finds new algorithms through evolutionary search. Pushmeet and Matej discuss how AlphaEvolve tackles the open problem of matrix multiplication efficiency, how scaling and iteration shape its problem solving, and whether this means we have reached self-improving AI. Together, they also explore the implications AlphaEvolve has for other sciences beyond mathematics and computer science.
Sign up for new podcasts every week. Email feedback to [email protected]
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @pushmeet | @matejbalog
Chapters:
00:00 Pushmeet Kohli and Matej Balog Introduction
00:48 Origin of AlphaEvolve
02:31 AlphaEvolve’s Progression from AlphaGo and AlphaTensor
08:02 The Open Problem of Matrix Multiplication Efficiency
11:18 How AlphaEvolve Evolves Code
14:43 Scaling and Predicting Iterations
16:52 Implications for Coding Agents
19:42 Overcoming Limits of Automated Evaluators
25:21 Are We At Self-Improving AI?
28:10 Effects on Scientific Discovery and Mathematics
31:50 Role of Human Scientists with AlphaEvolve
38:30 Making AlphaEvolve Broadly Accessible
40:18 Applying AlphaEvolve Within Google
41:39 Conclusion