
Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
While machine learning is spreading like wildfire, very little attention has been paid to the ways that it can go wrong when moving from development to production. Even when models work perfectly, they can be attacked and/or degrade quickly if the data changes. Having a well-understood MLOps process is necessary for ML security!
Using Kubeflow, we demonstrated the common ways machine learning workflows go wrong and how to mitigate them with MLOps pipelines that provide reproducibility, validation, versioning/tracking, and safe, compliant deployment. We also talked about the direction of MLOps as an industry and how we can use it to move faster, with less risk, than ever before.
David leads Open Source Machine Learning Strategy at Azure. This means he spends most of his time helping humans convince machines to be smarter. He is only moderately successful at this. Previously, he led product management for Kubernetes on behalf of Google, launched Google Kubernetes Engine, and co-founded the Kubeflow project. He has also worked at Microsoft, Amazon, and Chef and co-founded three startups. When not spending too much time in the service of electrons, he can be found on a mountain (on skis), traveling the world (via restaurants), or participating in kid activities, of which there are far more than he remembers there being when he was that age.
Join our Slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with David on LinkedIn: https://www.linkedin.com/in/aronchick/