AI Engineering Podcast

Build Better Machine Learning Models With Confidence By Adding Validation With Deepchecks



Summary
Machine learning has the potential to transform industries and revolutionize business capabilities, but only if the models are reliable and robust. Because of the fundamentally probabilistic nature of machine learning techniques, it can be challenging to test and validate the generated models. The team at Deepchecks understands the widespread need to easily and repeatably check and verify the outputs of machine learning models, and the complexity involved in making that a reality. In this episode Shir Chorev and Philip Tannor explain how they are addressing the problem with their open source deepchecks library and how you can start using it today to build trust in your machine learning applications.
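To give a flavor of the kind of validation discussed in the episode, here is a minimal, hand-rolled sketch (plain Python, not the deepchecks API) of one classic check that validation suites automate: detecting train/test sample leakage, where identical rows appear in both splits and silently inflate evaluation scores.

```python
# Hand-rolled sketch of a train/test leakage check (illustrative only,
# not the deepchecks API): identical rows in both splits inflate scores.

def leakage_ratio(train_rows, test_rows):
    """Fraction of test rows that also appear verbatim in the train set."""
    train_set = {tuple(row) for row in train_rows}
    if not test_rows:
        return 0.0
    leaked = sum(1 for row in test_rows if tuple(row) in train_set)
    return leaked / len(test_rows)

train = [[1.0, "a"], [2.0, "b"], [3.0, "c"]]
test = [[2.0, "b"], [4.0, "d"]]  # one of the two test rows leaked from train
print(leakage_ratio(train, test))  # 0.5
```

A real validation library bundles dozens of checks like this (leakage, drift, label imbalance, performance degradation) into ready-made suites so they don't have to be rewritten for every project.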
Announcements
  • Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
  • Do you wish you could use artificial intelligence to drive your business the way Big Tech does, but don’t have a money printer? Graft is a cloud-native platform that aims to make the AI of the 1% accessible to the 99%. Wield the most advanced techniques for unlocking the value of data, including text, images, video, audio, and graphs. No machine learning skills required, no team to hire, and no infrastructure to build or maintain. For more information on Graft or to schedule a demo, visit themachinelearningpodcast.com/graft today and tell them Tobias sent you.
  • Predibase is a low-code ML platform without low-code limits. Built on top of our open source foundations of Ludwig and Horovod, our platform allows you to train state-of-the-art ML and deep learning models on your datasets at scale. Our platform works on text, images, tabular, audio and multi-modal data using our novel compositional model architecture. We allow users to operationalize models on top of the modern data stack, through REST and PQL – an extension of SQL that puts predictive power in the hands of data practitioners. Go to themachinelearningpodcast.com/predibase today to learn more and try it out!
  • Data powers machine learning, but poor data quality is the largest impediment to effective ML today. Galileo is a collaborative data bench for data scientists building Natural Language Processing (NLP) models to programmatically inspect, fix, and track their data across the ML workflow (pre-training, post-training, and post-production) – no more Excel sheets or ad hoc Python scripts. Get meaningful gains in your model performance fast and dramatically reduce data labeling and procurement costs, while seeing 10x faster ML iterations. Galileo is offering listeners a free 30-day trial and a 30% discount on the product thereafter. This offer is available until Aug 31, so go to themachinelearningpodcast.com/galileo and request a demo today!
  • Your host is Tobias Macey and today I’m interviewing Shir Chorev and Philip Tannor about Deepchecks, a Python package for comprehensively validating your machine learning models and data with minimal effort.
Interview
  • Introduction
  • How did you get involved in machine learning?
  • Can you describe what Deepchecks is and the story behind it?
  • Who is the target audience for the project? 
    • What are the biggest challenges that these users face in bringing ML models from concept to production, and how does Deepchecks address those problems?
  • In the absence of Deepchecks, how are practitioners solving the problems of model validation and comparison across iterations?
    • What are some of the other tools in this ecosystem, and what are the differentiating features of Deepchecks?
  • What are some examples of the kinds of tests that are useful for understanding the "correctness" of models? 
    • What are the methods by which ML engineers/data scientists/domain experts can define what "correctness" means in a given model or subject area?
  • In software engineering the categories of tests are tiered as unit -> integration -> end-to-end. What are the relevant categories of tests that need to be built for validating the behavior of machine learning models?
  • How do model monitoring utilities overlap with the kinds of tests that you are building with deepchecks?
  • Can you describe how the Deepchecks package is implemented?
    • How have the design and goals of the project changed or evolved from when you started working on it?
    • What are the assumptions that you have built up from your own experiences that have been challenged by your early users and design partners?
  • Can you describe the workflow for an individual or team using Deepchecks as part of their model training and deployment lifecycle?
  • Test engineering is a deep discipline in its own right. How have you approached the user experience and API design to reduce the overhead for ML practitioners to adopt good practices?
  • What are the interfaces available for creating reusable tests and composing test suites together?
  • What are the additional services/capabilities that you are providing in your commercial offering? 
    • How are you managing the governance and sustainability of the OSS project and balancing that against the needs/priorities of the business?
  • What are the most interesting, innovative, or unexpected ways that you have seen Deepchecks used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on Deepchecks?
  • When is Deepchecks the wrong choice?
  • What do you have planned for the future of Deepchecks?
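The unit -> integration -> end-to-end question above can be made concrete with a sketch (my own framing for illustration, not a taxonomy from the episode or the deepchecks library): data-integrity checks play the role of unit tests, train/test consistency checks resemble integration tests, and evaluating the trained model against a performance bar is the end-to-end layer.

```python
# Sketch of tiered ML checks, loosely mirroring unit -> integration -> e2e.
# All function names here are illustrative, not part of any library.

def check_no_missing(rows):
    """'Unit' level: every row is complete (no None values)."""
    return all(all(v is not None for v in row) for row in rows)

def check_same_schema(train_rows, test_rows):
    """'Integration' level: train and test rows share the same width."""
    widths = {len(r) for r in train_rows} | {len(r) for r in test_rows}
    return len(widths) == 1

def check_accuracy(model, rows, labels, threshold=0.8):
    """'End-to-end' level: the trained model clears a minimum accuracy bar."""
    correct = sum(1 for row, y in zip(rows, labels) if model(row) == y)
    return correct / len(labels) >= threshold

majority_model = lambda row: 1  # trivial stand-in for a trained classifier
rows = [[0.1], [0.2], [0.3], [0.4]]
labels = [1, 1, 1, 0]
print(check_no_missing(rows),
      check_same_schema(rows, [[0.5]]),
      check_accuracy(majority_model, rows, labels, threshold=0.7))
# True True True
```

The value of a dedicated validation library is that these tiers come pre-built and tuned, instead of being reinvented as one-off assertions in each project.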
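On the question of reusable tests and composed suites, one way to picture the general pattern (a hypothetical sketch of the concept, not the deepchecks interfaces) is that a reusable check is a named callable over the data, and a suite is a composition of checks whose pass/fail results are collected together:

```python
# Hypothetical sketch of composable checks and suites (not the deepchecks API).

class Check:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn  # callable(data) -> bool

class Suite:
    def __init__(self, *checks):
        self.checks = list(checks)

    def add(self, check):
        self.checks.append(check)
        return self  # return self so suites can be built up fluently

    def run(self, data):
        """Run every check and collect pass/fail results by name."""
        return {c.name: c.fn(data) for c in self.checks}

non_empty = Check("non_empty", lambda rows: len(rows) > 0)
no_nulls = Check("no_nulls", lambda rows: all(None not in r for r in rows))

suite = Suite(non_empty).add(no_nulls)
print(suite.run([[1, 2], [3, None]]))  # {'non_empty': True, 'no_nulls': False}
```

The appeal of this shape is that individual checks stay small and testable, while teams share and version entire suites as a unit.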
Contact Info
  • Shir 
    • LinkedIn
    • shir22 on GitHub
  • Philip 
    • LinkedIn
    • @philiptannor on Twitter
Parting Question
  • From your perspective, what is the biggest barrier to adoption of machine learning today?
Closing Announcements
  • Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
  • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
  • If you’ve learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
  • To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Links
  • Deepchecks
  • Random Forest
  • Talpiot Program
  • SHAP
    • Podcast.__init__ Episode
  • Airflow
  • Great Expectations
    • Data Engineering Podcast Episode
The intro and outro music is from Hitman’s Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0

AI Engineering Podcast, by Tobias Macey (4.3 stars, 6 ratings)
