


Scaling inference
With the release of OpenAI's o1 and o3 models, it seems likely that we are now contending with a new scaling paradigm: spending more compute on model inference at run-time reliably improves model performance. As shown below, o1's AIME accuracy increases at a constant rate with the logarithm of test-time compute (OpenAI, 2024).
OpenAI's o3 model continues this trend with record-breaking performance, scoring:
According to OpenAI, the bulk of model performance improvement in the o-series of models comes from increasing the length of chain-of-thought (and possibly further techniques like "tree-of-thought") and improving the chain-of-thought [...]
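The log-linear relationship described in the excerpt (accuracy rising at a constant rate per order of magnitude of test-time compute) can be sketched with a small least-squares fit. The data points below are illustrative placeholders, not OpenAI's actual o1 results.

```python
import math

# Hypothetical (compute, accuracy) pairs following a log-linear trend,
# accuracy = a + b * log10(compute). NOT OpenAI's real benchmark data.
points = [(1e2, 0.30), (1e3, 0.45), (1e4, 0.60), (1e5, 0.75)]

# Ordinary least-squares fit of accuracy against log10(compute).
xs = [math.log10(c) for c, _ in points]
ys = [acc for _, acc in points]
n = len(points)
mx = sum(xs) / n
my = sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# Under this trend, each 10x increase in test-time compute adds a
# constant increment `b` to accuracy.
print(f"accuracy ≈ {a:.2f} + {b:.2f} * log10(compute)")
```

With these made-up points the fit recovers a slope of 0.15 per decade of compute, which is the shape of the curve the excerpt describes: linear in log(compute), not in compute itself.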
---
Outline:
(00:05) Scaling inference
(02:45) AI safety implications
(03:58) AGI timelines
(04:50) Deployment overhang
(06:06) Chain-of-thought oversight
(07:40) AI security
(09:05) Interpretability
(10:01) More RL?
(11:27) Export controls
(11:54) Conclusion
The original text contained 2 footnotes which were omitted from this narration.
The original text contained 7 images which were described by AI.
---
First published:
Source:
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
By LessWrong
