
In this episode, we examine the discovery of scaling laws in neural networks and why they fundamentally reshaped modern AI development. We explain how performance improves predictably—not through clever architectural tricks, but by systematically scaling data, model size, and compute.
We break down how loss behaves as a function of parameters, data, and compute; why these relationships follow power laws; and how this predictability transformed model design from trial and error into principled engineering. We also explore the economic, engineering, and societal consequences of scaling—and where its limits may lie.
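For listeners who want the shape of the relationship before pressing play, one commonly cited formulation (following Kaplan et al., 2020, with all constants left symbolic) writes loss as a power law in parameter count N, dataset size D, and compute C:
L(N) ≈ (N_c / N)^α_N,   L(D) ≈ (D_c / D)^α_D,   L(C) ≈ (C_c / C)^α_C
Here N_c, D_c, C_c and the exponents α are empirically fitted constants, lower loss is better, and each relationship holds when the other two factors are not the bottleneck.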
This episode covers:
• What scaling laws are and why they overturned decades of ML intuition
• Loss as a performance metric and why it matters
• Parameter scaling and diminishing returns
• Data scaling, data-limited vs model-limited regimes
• Optimal balance between model size and dataset size
• Compute scaling and why “better trained” beats “bigger”
• Optimal allocation under a fixed compute budget
• Predicting large-model performance from small experiments (see the sketch after this list)
• Why architecture matters less than scale (within limits)
• Scaling beyond language: vision, time series, reinforcement learning
• Inference scaling, pruning, sparsity, and deployment trade-offs
• The limits of single-metric optimization and values pluralism
• Why breaking scaling laws may define the next era of AI
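As a minimal, hypothetical illustration of the "predicting large-model performance from small experiments" point above: fit a power law to losses measured on small models, then extrapolate it to a much larger parameter count. The parameter counts and losses below are invented placeholders for the sketch, not measurements discussed in the episode.

import numpy as np

# Invented small-scale results: (parameter count, validation loss).
params = np.array([1e7, 3e7, 1e8, 3e8, 1e9])
losses = np.array([4.20, 3.90, 3.60, 3.35, 3.15])

# Fit log(loss) = intercept + slope * log(params); a power law is a straight
# line in log-log space, so the slope gives the (negative) exponent.
slope, intercept = np.polyfit(np.log(params), np.log(losses), 1)
alpha = -slope

# Extrapolate the fitted curve to a 100-billion-parameter model.
predicted = np.exp(intercept) * (1e11) ** slope
print(f"fitted exponent alpha ≈ {alpha:.3f}")
print(f"extrapolated loss at 1e11 parameters ≈ {predicted:.2f}")

In practice such a fit would include an irreducible-loss term and be validated against held-out model sizes, but the straight-line-in-log-log-space idea is the core of the prediction technique the episode describes.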
This episode is part of the Adapticx AI Podcast. Listen via the link provided or search “Adapticx” on Apple Podcasts, Spotify, Amazon Music, or most podcast platforms.
Sources and Further Reading
Additional references and extended material are available at:
https://adapticx.co.uk
By Adapticx Technologies Ltd