
Scaling inference
With the release of OpenAI's o1 and o3 models, it seems likely that we are now contending with a new scaling paradigm: spending more compute on model inference at run-time reliably improves model performance. As shown below, o1's AIME accuracy increases at a constant rate with the logarithm of test-time compute (OpenAI, 2024).
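Stated compactly, the plotted relationship is roughly log-linear (a sketch of the reported trend; the intercept a and slope b are illustrative assumptions, not values OpenAI published):

    accuracy(C) ≈ a + b · log C

where C is the amount of test-time compute spent per problem.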
OpenAI's o3 model continues this trend with record-breaking performance across benchmarks.
According to OpenAI, the bulk of the performance improvement in the o-series models comes from increasing the length of the chain-of-thought (and possibly further techniques like "tree-of-thought") and improving the chain-of-thought [...]
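To make the inference-scaling idea concrete, here is a minimal, self-contained Python sketch of best-of-n chain-of-thought sampling, one simple way to trade extra inference compute for accuracy. The generate_chain function and its confidence score are hypothetical stand-ins; OpenAI has not published the o-series' actual sampling or selection mechanism.

    # Best-of-n sampling: spend more inference compute by drawing more
    # independent chains of thought and keeping the highest-scoring one.
    import random

    def generate_chain(question: str, seed: int) -> tuple[str, float]:
        # Hypothetical stand-in for sampling one chain of thought plus a
        # final answer from a model; returns (answer, confidence score).
        rng = random.Random(seed)
        answer = f"candidate-{rng.randint(0, 3)}"
        confidence = rng.random()
        return answer, confidence

    def best_of_n(question: str, n: int) -> str:
        # Sample n independent chains and keep the best-scoring answer.
        # Accuracy tends to rise as n (i.e., inference compute) grows,
        # which is the qualitative trend the article describes.
        candidates = [generate_chain(question, seed) for seed in range(n)]
        best_answer, _ = max(candidates, key=lambda c: c[1])
        return best_answer

    if __name__ == "__main__":
        for n in (1, 4, 16, 64):  # each step quadruples inference compute
            print(n, best_of_n("What is 12 * 13?", n))

In this toy setup, doubling n doubles the inference compute spent per question, mirroring the compute axis of OpenAI's AIME plot.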
---
Outline:
(00:05) Scaling inference
(02:45) AI safety implications
(03:58) AGI timelines
(04:50) Deployment overhang
(06:06) Chain-of-thought oversight
(07:40) AI security
(09:05) Interpretability
(10:01) More RL?
(11:27) Export controls
(11:54) Conclusion
The original text contained 2 footnotes which were omitted from this narration.
The original text contained 7 images which were described by AI.
---
Narrated by TYPE III AUDIO.