Jonathan Bryce, the new CNCF executive director, argues that inference—not model training—will define the next decade of computing. Speaking at KubeCon North America 2025, he emphasized that while the industry obsesses over massive LLM training runs, the real opportunity lies in efficiently serving these models at scale. Cloud-native infrastructure, he says, is uniquely suited to this shift because inference requires real-time deployment, security, scaling, and observability—strengths of the CNCF ecosystem.
Bryce believes Kubernetes is already central to modern inference stacks, with projects like Ray, KServe, and emerging GPU-oriented tooling enabling teams to deploy and operationalize models. To bring consistency to this fast-moving space, the CNCF launched a Kubernetes AI Conformance Program, ensuring environments support GPU workloads and Dynamic Resource Allocation. With AI agents poised to multiply inference demand by executing parallel, multi-step tasks, efficiency becomes essential. Bryce predicts that smaller, task-specific models and cloud-native routing optimizations will drive major performance gains. Ultimately, he sees CNCF technologies forming the foundation for what he calls “the biggest workload mankind will ever have.”
Learn more from The New Stack about inference:
Confronting AI’s Next Big Challenge: Inference Compute
Deep Infra Is Building an AI Inference Cloud for Developers
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
By The New Stack