Gradient Dissent: Conversations on AI

Robert Nishihara — The State of Distributed Computing in ML

11.13.2020 - By Lukas Biewald


The story of Ray, and what led Robert to go from reinforcement learning researcher to creating open-source tools for machine learning and beyond

Robert is currently working on Ray, a high-performance distributed execution framework for AI applications. He studied mathematics at Harvard. He’s broadly interested in applied math, machine learning, and optimization, and was a member of the Statistical AI Lab, the AMPLab/RISELab, and the Berkeley AI Research Lab at UC Berkeley.

robertnishihara.com

https://anyscale.com/

https://github.com/ray-project/ray

https://twitter.com/robertnishihara

https://www.linkedin.com/in/robert-nishihara-b6465444/

Topics covered:

0:00 sneak peek + intro

1:09 what is Ray?

3:07 Spark and Ray

5:48 reinforcement learning

8:15 non-ML use cases of Ray

10:00 RL in the real world and common uses of Ray

13:49 Python in ML

16:38 from grad school to ML tools company

20:40 pulling product requirements in surprising directions

23:25 how to manage a large open source community

27:05 Ray Tune

29:35 where do you see bottlenecks in production?

31:39 An underrated aspect of Machine Learning

Visit our podcasts homepage for transcripts and more episodes!

www.wandb.com/podcast

Get our podcast on Apple, Spotify, and Google!

Apple Podcasts: https://bit.ly/2WdrUvI

Spotify: https://bit.ly/2SqtadF

Google: http://tiny.cc/GD_Google

Subscribe to our YouTube channel for videos of these podcasts and more machine learning-related videos:

https://www.youtube.com/c/WeightsBiases

We started Weights and Biases to build tools for machine learning practitioners because we care a lot about the impact that machine learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they're working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:

http://tiny.cc/wb-salon

Join our community of ML practitioners, where we host AMAs, share interesting projects, and meet other people working in deep learning:

http://bit.ly/wb-slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.

https://app.wandb.ai/gallery
