MLOps.community

Tricks to Fine Tuning // Prithviraj Ammanabrolu // #318



Tricks to Fine Tuning // MLOps Podcast #318 with Prithviraj Ammanabrolu, Research Scientist at Databricks.


Join the Community: https://go.mlops.community/YTJoinIn

Get the newsletter: https://go.mlops.community/YTNewsletter


// Abstract

Prithviraj Ammanabrolu drops by to break down TAO fine-tuning, a clever way to tune models without labeled data. Using reinforcement learning and synthetic data, TAO teaches models to evaluate and improve their own outputs. Raj explains how it works, where it shines (think small models punching above their weight), and why it could be a game-changer for efficient deployment.
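For a rough sense of the kind of loop described above, here is a minimal, hypothetical sketch: candidates are sampled for unlabeled prompts, scored by a reward model, and the best ones become synthetic training data. This is not Databricks' TAO implementation; the class and method names (StubModel, StubRewardModel, generate, score, finetune) and the best-of-n selection are placeholders for illustration only.

```python
import random

class StubModel:
    """Toy stand-in for a language model (illustrative only)."""
    def generate(self, prompt):
        return prompt + " -> answer_" + str(random.randint(0, 9))
    def finetune(self, pairs):
        print(f"fine-tuning on {len(pairs)} synthetic pairs")

class StubRewardModel:
    """Toy stand-in for a learned reward / judge model."""
    def score(self, prompt, completion):
        return random.random()

def self_improvement_round(model, reward_model, prompts, n_candidates=4):
    """Sample candidates per prompt, keep the highest-scoring one, fine-tune."""
    pairs = []
    for prompt in prompts:                                  # unlabeled prompts only
        candidates = [model.generate(prompt) for _ in range(n_candidates)]
        scores = [reward_model.score(prompt, c) for c in candidates]
        best = candidates[scores.index(max(scores))]        # best-of-n selection
        pairs.append((prompt, best))                        # synthetic training pair
    model.finetune(pairs)
    return model

if __name__ == "__main__":
    self_improvement_round(StubModel(), StubRewardModel(),
                           ["Summarize the quarterly report.", "Write a SQL query."])
```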


// Bio

Raj is an Assistant Professor of Computer Science at the University of California, San Diego, leading the PEARLS Lab in the Department of Computer Science and Engineering (CSE). He is also a Research Scientist at Mosaic AI, Databricks, where his team is actively recruiting research scientists and engineers with expertise in reinforcement learning and distributed systems.

Previously, he was part of the Mosaic team at the Allen Institute for AI. He earned his PhD in Computer Science from the School of Interactive Computing at Georgia Tech, advised by Professor Mark Riedl in the Entertainment Intelligence Lab.


// Related Links

Website: https://www.databricks.com/


~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~

Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore

Join our Slack community [https://go.mlops.community/slack]

Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)

Sign up for the next meetup: [https://go.mlops.community/register]

MLOps Swag/Merch: [https://shop.mlops.community/]

Connect with Demetrios on LinkedIn: /dpbrinkm

Connect with Raj on LinkedIn: /rajammanabrolu


Timestamps:

[00:00] Raj's preferred coffee

[00:36] Takeaways

[01:02] Tao Naming Decision

[04:19] No Labels Machine Learning

[08:09] Tao and TAO Breakdown

[13:20] Reward Model Fine-Tuning

[18:15] Training vs Inference Compute

[22:32] Retraining and Model Drift

[29:06] Prompt Tuning vs Fine-Tuning

[34:32] Small Model Optimization Strategies

[37:10] Small Model Potential

[43:08] Fine-Tuning Model Differences

[46:02] Mistral Model Freedom

[53:46] Wrap up
