Today we’re joined by Peter Mattson, General Chair at MLPerf, a Staff Engineer at Google, and President of MLCommons.
In our conversation with Peter, we discuss MLCommons and MLPerf: the former an open engineering group with the goal of accelerating machine learning innovation, and the latter a suite of standardized machine learning benchmarks used to measure things like model training speed and inference throughput.
We explore the target user for the MLPerf benchmarks, the need for benchmarks in the ethics, bias, and fairness space, and how MLCommons is approaching this through the "People's Speech" datasets. We also walk through MLCommons' best practices for getting a model into production, why that process is so difficult, and how MLCube can make it easier for researchers and developers (a minimal sketch of the MLCube workflow follows below).
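For listeners curious what the MLCube workflow Peter describes looks like in practice, here is a minimal sketch of driving it from Python. It assumes the `mlcube` CLI is installed and the working directory contains an `mlcube.yaml` defining `download` and `train` tasks, following MLCube's public MNIST tutorial; the task names here are illustrative, not taken from the episode.

```python
import subprocess

# Sketch only: assumes the `mlcube` CLI is on PATH and the current
# directory holds an mlcube.yaml with "download" and "train" tasks
# (as in MLCube's MNIST tutorial). Task names are illustrative.
def run_task(task: str, platform: str = "docker") -> None:
    """Run a single MLCube task inside a container."""
    subprocess.run(
        ["mlcube", "run", f"--task={task}", f"--platform={platform}"],
        check=True,  # raise if the containerized task fails
    )

if __name__ == "__main__":
    run_task("download")  # fetch the dataset into the workspace
    run_task("train")     # train the model inside the container
```

The appeal Peter highlights is that the same two commands run unchanged on a laptop, a shared cluster, or the cloud, since the container and the task interface, not the host environment, define how the model executes.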
The complete show notes page for this episode can be found at twimlai.com/go/434.