
In this episode of State of AI, we dissect one of the most provocative new findings in AI research — Scaling Laws Are Unreliable for Downstream Tasks by Nicholas Lourie, Michael Y. Hu, and Kyunghyun Cho of NYU. This study delivers a reality check to one of deep learning’s core assumptions: that increasing model size, data, and compute always leads to better downstream performance.
The paper’s meta-analysis across 46 tasks reveals that predictable, linear scaling holds in only 39% of them; the majority show irregular, noisy, or even inverse scaling, where larger models perform worse.
We explore:
⚖️ Why downstream scaling laws often break, even when pretraining scales perfectly.
🧩 How dataset choice, validation corpus, and task formulation can flip scaling trends.
🔄 Why some models show “breakthrough scaling” — sudden jumps in capability after long plateaus.
🧠 What this means for the future of AI forecasting, model evaluation, and cost-efficient research.
🧪 The implications for reproducibility and why scaling may be investigator-specific.
If you’ve ever heard “just make it bigger” offered as the answer to AI progress, this episode will challenge that belief.
📊 Keywords: AI scaling laws, NYU AI research, Kyunghyun Cho, deep learning limits, downstream tasks, inverse scaling, emergent abilities, AI reproducibility, model evaluation, State of AI podcast.
By Ali Mehedi