
What happens when the world's most powerful AI systems are measured by the same yardstick?
In this episode of Tech Talks Daily, I spoke with David Kanter, Founder and Executive Director of MLCommons, the organization behind MLPerf, the industry's most recognized benchmark for AI performance. As AI continues to outpace Moore’s Law, businesses and governments alike are asking the same question: how do we know what “good” AI performance really looks like? That’s exactly the challenge MLCommons set out to address.
David shares the story of how a simple suggestion at a Stanford meeting led him from analyst to the architect of a global benchmarking initiative. He explains how MLPerf benchmarks are helping enterprises and policymakers make informed decisions about AI systems, and why transparency, neutrality, and open collaboration are central to the mission.
We explore what’s really driving AI’s explosive growth. It’s not just about chips. Smarter software, algorithmic breakthroughs, and increasingly scalable system designs are all contributing to performance improvements far beyond what Moore’s Law predicted.
But AI’s rapid progress comes with a cost. Power consumption is quickly becoming one of the biggest challenges in the industry. David explains how MLCommons is helping address this with MLPerf Power and why infrastructure innovations like low-precision computation, advanced cooling, and even proximity to power generation are gaining traction.
We also talk about the decision by some major vendors not to participate in MLPerf. David offers perspective on what that means for buyers and why benchmark transparency should be part of any enterprise AI procurement conversation.
Beyond the data center, MLCommons is now benchmarking AI performance on consumer hardware through MLPerf Client and is working on domain-specific efforts such as MLPerf Automotive. As AI shows up in smartphones, vehicles, and smart devices, the need for clear, fair, and relevant performance measurement is only growing.
So how do we measure AI that is everywhere? What should buyers demand from vendors? And how can the industry ensure that AI systems are fast, efficient, and accountable? Let’s find out.