
MLOps Coffee Sessions #76 with Mohamed Elgendy, Build a Culture of ML Testing and Model Quality.
Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
// Abstract
Machine learning engineers and data scientists spend most of their time testing and validating their models’ performance. But as machine learning products become more integral to our daily lives, the importance of rigorously testing model behavior will only increase.
Current ML evaluation techniques are falling short in their attempts to describe the full picture of model performance. Evaluating ML models by only using global metrics (like accuracy or F1 score) produces a low-resolution picture of a model’s performance and fails to describe the model's performance across types of cases, attributes, and scenarios.
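(A quick illustration of that point, with made-up numbers and a hypothetical "daytime"/"night" scenario tag rather than anything from the episode: the global accuracy looks healthy while the night slice is clearly failing.)

# Sketch: a single global metric can hide a failing slice.
# The labels, predictions, and scenario tags are fabricated for illustration.
import numpy as np
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)
scenario = np.array(["daytime"] * 400 + ["night"] * 100)

# Pretend the model is right ~95% of the time in daytime but only ~60% at night.
correct = np.where(scenario == "daytime",
                   rng.random(500) < 0.95,
                   rng.random(500) < 0.60)
y_pred = np.where(correct, y_true, 1 - y_true)

print("global accuracy:", accuracy_score(y_true, y_pred))   # ~0.88, looks fine
for name in ("daytime", "night"):
    mask = scenario == name
    print(name, "accuracy:", accuracy_score(y_true[mask], y_pred[mask]))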
It is rapidly becoming vital for ML teams to have a full understanding of when and how their models fail and to track these cases across different model versions to be able to identify regression. We’ve seen great results from teams implementing unit and functional testing techniques in their model testing. In this talk, we’ll cover why systematic unit testing is important and how to effectively test ML system behavior.
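(For a concrete flavor of what unit testing model behavior can look like, here is a minimal pytest-style sketch; the toy sentiment model and test cases are hypothetical stand-ins, not Kolena's API.)

# Sketch: behavioral unit tests for a model, written with pytest.
# TinySentimentModel and the cases below are hypothetical, for illustration only.
import pytest

class TinySentimentModel:
    """Toy stand-in for the real model under test."""
    def predict(self, text: str) -> str:
        negative_words = ("bad", "awful", "terrible")
        return "negative" if any(w in text.lower() for w in negative_words) else "positive"

@pytest.fixture
def model():
    return TinySentimentModel()

def test_obvious_negative(model):
    # Minimum-functionality check: an unambiguous case must not regress.
    assert model.predict("This was a terrible experience") == "negative"

def test_typo_invariance(model):
    # Invariance check: a small typo should not flip the prediction.
    assert model.predict("The service was awful") == model.predict("The service was awfull")

@pytest.mark.parametrize("text,expected", [
    ("not bad at all", "positive"),   # negation trips the toy model; the test surfaces this failure mode
])
def test_known_failure_cases(model, text, expected):
    # Regression check: keep tracking previously identified failure cases across model versions.
    assert model.predict(text) == expected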
// Bio
Mohamed is the Co-founder & CEO of Kolena and the author of the book “Deep Learning for Vision Systems”. Previously, he built and managed AI/ML organizations at Amazon, Twilio, Rakuten, and Synapse. Mohamed regularly speaks at AI conferences like Amazon's DevCon, O'Reilly's AI conference, and Google's I/O.
--------------- ✌️Connect With Us ✌️ -------------
Join our Slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletter, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Adam on LinkedIn: https://www.linkedin.com/in/aesroka/
Connect with Mohamed on LinkedIn: https://www.linkedin.com/in/moelgendy/
Timestamps:
[00:00] Takeaways
[04:41] Why do ML Testing?
[08:41] Kolena's main goal
[09:41] How ML testing differs from other testing
[13:12] Importance of a knowledge base in the organization
[17:53] Computational cost issues from testing
[20:48] Convincing people to do more testing
[23:13] Testing resources recommendations
[25:15] How to get good at testing
[28:19] Dealing with ML regulations
[30:57] Identifying failure modes
[38:57] Test-centric development for production ML
[40:53] Identifying scenarios
[43:37] Computer vision examples versus structured data
[46:10] "Deep Learning for Vision Systems" by Mohamed Elgendy
[49:36] Wrap up