
Tricks to Fine Tuning // MLOps Podcast #318 with Prithviraj Ammanabrolu, Research Scientist at Databricks. Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
// Abstract
Prithviraj Ammanabrolu drops by to break down Tao fine-tuning—a clever way to train models without labeled data. Using reinforcement learning and synthetic data, Tao teaches models to evaluate and improve themselves. Raj explains how this works, where it shines (think small models punching above their weight), and why it could be a game-changer for efficient deployment.
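For a rough feel of the loop described above, here is a minimal sketch of label-free tuning: sample candidate responses to unlabeled prompts, score them with a reward model, and keep the best as synthetic training pairs. This is an illustration of the general idea only, not Databricks' actual TAO implementation; all function names and helpers here are hypothetical stubs.

```python
# Sketch of a label-free tuning loop (illustrative, not Databricks' API):
# 1) sample candidates for each unlabeled prompt,
# 2) score them with a reward model (no human labels),
# 3) keep the highest-scoring response as a synthetic target.
import random


def generate_candidates(prompt: str, n: int = 4) -> list[str]:
    """Stub for sampling n responses from the current model."""
    return [f"candidate {i} for: {prompt}" for i in range(n)]


def reward(prompt: str, response: str) -> float:
    """Stub reward model; in practice a learned scorer judges quality."""
    return random.random()


def build_training_pairs(prompts: list[str]) -> list[tuple[str, str]]:
    """Select the best candidate per prompt as a synthetic label."""
    pairs = []
    for p in prompts:
        candidates = generate_candidates(p)
        best = max(candidates, key=lambda r: reward(p, r))
        pairs.append((p, best))
    return pairs


if __name__ == "__main__":
    unlabeled = ["Summarize this support ticket", "Classify this log line"]
    for prompt, target in build_training_pairs(unlabeled):
        print(prompt, "->", target)  # pairs you would feed to fine-tuning
```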
// Bio
Raj is an Assistant Professor of Computer Science at the University of California, San Diego, leading the PEARLS Lab in the Department of Computer Science and Engineering (CSE). He is also a Research Scientist at Mosaic AI, Databricks, where his team is actively recruiting research scientists and engineers with expertise in reinforcement learning and distributed systems.
Previously, he was part of the Mosaic team at the Allen Institute for AI. He earned his PhD in Computer Science from the School of Interactive Computing at Georgia Tech, advised by Professor Mark Riedl in the Entertainment Intelligence Lab.
// Related Links
Website: https://www.databricks.com/
~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community [https://go.mlops.community/slack]
Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
Sign up for the next meetup: [https://go.mlops.community/register]
MLOps Swag/Merch: [https://shop.mlops.community/]
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Raj on LinkedIn: /rajammanabrolu
Timestamps:
[00:00] Raj's preferred coffee
[00:36] Takeaways
[01:02] Tao Naming Decision
[04:19] No Labels Machine Learning
[08:09] Tao and TAO breakdown
[13:20] Reward Model Fine-Tuning
[18:15] Training vs Inference Compute
[22:32] Retraining and Model Drift
[29:06] Prompt Tuning vs Fine-Tuning
[34:32] Small Model Optimization Strategies
[37:10] Small Model Potential
[43:08] Fine-tuning Model Differences
[46:02] Mistral Model Freedom
[53:46] Wrap up