


I'm cross-posting my guest post on Epoch's Gradient Updates newsletter, in which I describe some new research from my team at UChicago's XLab — roughly, the algorithmic improvements that most improve capabilities at scale are the ones that require the most compute to find and validate.
This week's issue is a guest post by Henry Josephson, who is a research manager at UChicago's XLab and an AI governance intern at Google DeepMind.
In the AI 2027 scenario, the authors predict a fast takeoff of AI systems recursively self-improving until we have superintelligence in just a few years.
Could this really happen? Whether it's possible may depend on whether a software intelligence explosion — a series of rapid algorithmic advances that lead to greater AI capabilities — occurs.
A key crux in the debate about the possibility of a software intelligence explosion comes down to whether key algorithmic improvements scale [...]
---
Outline:
(01:53) Are the best algorithmic improvements compute-dependent?
(07:48) Can Capabilities Advance With Frozen Compute? DeepSeek-V3
(08:55) What This Means for AI Progress
(12:50) Limitations
(14:10) Conclusion
The original text contained 7 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
