MLOps Coffee Sessions #93 with Krishnaram Kenthapadi, Model Monitoring in Practice: Top Trends, co-hosted by Mihail Eric.
Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
// Abstract
We first motivate the need for ML model monitoring, as part of a broader AI model governance and responsible AI framework, and provide a roadmap for thinking about model monitoring in practice.
We then present findings and insights on model monitoring in practice based on interviews with various ML practitioners spanning domains such as financial services, healthcare, hiring, online retail, computational advertising, and conversational assistants.
// Bio
Krishnaram Kenthapadi is the Chief Scientist of Fiddler AI, an enterprise startup building a responsible AI and ML monitoring platform. Previously, he was a Principal Scientist at Amazon AWS AI, where he led the fairness, explainability, privacy, and model understanding initiatives in the Amazon AI platform. Prior to joining Amazon, he led similar efforts on the LinkedIn AI team and served as LinkedIn's representative on Microsoft's AI and Ethics in Engineering and Research (AETHER) Advisory Board. Before that, he was a Researcher at Microsoft Research Silicon Valley Lab. Krishnaram received his Ph.D. in Computer Science from Stanford University in 2006. He serves regularly on the program committees of KDD, WWW, WSDM, and related conferences, and co-chaired the 2014 ACM Symposium on Computing for Development. His work has been recognized through awards at NAACL, WWW, SODA, CIKM, the ICML AutoML workshop, and Microsoft's AI/ML conference (MLADS). He has published 50+ papers with 4500+ citations, and has filed 150+ patents (70 granted). He has presented tutorials on privacy, fairness, explainable AI, and responsible AI at forums such as KDD '18 and '19, WSDM '19, WWW '19–'21, FAccT '20 and '21, AAAI '20 and '21, and ICML '21.
// MLOps Jobs board
jobs.mlops.community
// Related Links
Website: https://cs.stanford.edu/people/kngk/
https://sites.google.com/view/ResponsibleAITutorial
https://sites.google.com/view/explainable-ai-tutorial
https://sites.google.com/view/fairness-tutorial
https://sites.google.com/view/privacy-tutorial
--------------- ✌️Connect With Us ✌️ -------------
Join our Slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Mihail on LinkedIn: https://www.linkedin.com/in/mihaileric/
Connect with Krishnaram on LinkedIn: https://www.linkedin.com/in/krishnaramkenthapadi
Timestamps:
[00:00] Introduction to Krishnaram Kenthapadi
[02:22] Takeaways
[04:55] Thank you, Fiddler AI, for sponsoring this episode!
[05:15] Struggles in Explainable AI
[08:30] Explainable AI prominence
[09:56] Importance of a password manager and actual security
[14:27] Role of Education in Explainable AI systems
[18:52] Highly regulated domains in other sectors
[21:12] First machine learning wins
[23:36] Model monitoring
[25:35] Interests in ML monitoring and Explainability
[29:57] Giving non-technical stakeholders a voice
[33:54] Advice to ML practitioners to address organizational concerns
[38:49] Ethically sourced datasets
[42:15] Crowd-sourced labor
[46:29] Tension in practice
[50:09] Wrap up