What happens when the world's most powerful AI systems are measured by the same yardstick?
In this episode of Tech Talks Daily, I spoke with David Kanter, Founder and Executive Director of MLCommons, the organization behind MLPerf, the industry's most recognized benchmark for AI performance. As AI continues to outpace Moore's Law, businesses and governments alike are asking the same question: how do we know what "good" AI performance really looks like? That's exactly the challenge MLCommons set out to address.
David shares the story of how a simple suggestion at a Stanford meeting led him from analyst to the architect of a global benchmarking initiative. He explains how MLPerf benchmarks are helping enterprises and policymakers make informed decisions about AI systems, and why transparency, neutrality, and open collaboration are central to the mission.
We explore what's really driving AI's explosive growth. It's not just about chips. Smarter software, algorithmic breakthroughs, and increasingly scalable system designs are all contributing to performance improvements far beyond what Moore's Law predicted.
But AI's rapid progress comes with a cost. Power consumption is quickly becoming one of the biggest challenges in the industry. David explains how MLCommons is helping address this with MLPerf Power and why infrastructure innovations like low-precision computation, advanced cooling, and even proximity to power generation are gaining traction.
We also talk about the decision by some major vendors not to participate in MLPerf. David offers perspective on what that means for buyers and why benchmark transparency should be part of any enterprise AI procurement conversation.
Beyond the data center, MLCommons is now benchmarking AI performance on consumer hardware through MLPerf Client and is working on domain-specific efforts such as MLPerf Automotive. As AI shows up in smartphones, vehicles, and smart devices, the need for clear, fair, and relevant performance measurement is only growing.
So how do we measure AI that is everywhere? What should buyers demand from vendors? And how can the industry ensure that AI systems are fast, efficient, and accountable? Let's find out.
By Neil C. Hughes