MLOps.community

All About Evaluating LLM Applications // Shahul Es // #179

MLOps Coffee Sessions #179 with Shahul Es, All About Evaluating LLM Applications.


// Abstract

Shahul Es is renowned for his expertise in the evaluation space and is the creator of the Ragas project. Shahul dives deep into the world of evaluation for open-source models, sharing insights on debugging, troubleshooting, and the challenges of benchmarks. From the importance of custom data distributions to the role of fine-tuning in enhancing model performance, this episode is packed with valuable information for anyone interested in language models and AI.


// Bio

Shahul is a data science professional with 6+ years of expertise across data domains from structured data and NLP to audio processing. He is also a Kaggle Grandmaster and a code owner of the Open-Assistant initiative, which released some of the best open-source alternatives to ChatGPT.


// MLOps Jobs board

jobs.mlops.community

// MLOps Swag/Merch

https://mlops-community.myshopify.com/


// Related Links

All about evaluating Large language models blog: https://explodinggradients.com/all-about-evaluating-large-language-models

Ragas: https://github.com/explodinggradients/ragas


--------------- ✌️Connect With Us ✌️ -------------

Join our Slack community: https://go.mlops.community/slack

Follow us on Twitter: @mlopscommunity

Sign up for the next meetup: https://go.mlops.community/register

Catch all episodes, blogs, newsletters, and more: https://mlops.community/


Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/

Connect with Shahul on LinkedIn: https://www.linkedin.com/in/shahules/


Timestamps:

[00:00] Shahul's preferred coffee

[00:20] Takeaways

[01:46] Please like, share, and subscribe to our MLOps channels!

[02:07] Shahul's definition of Evaluation

[03:27] Evaluation metrics and Benchmarks

[05:46] Gamed leaderboards

[10:13] Best at summarizing long text open-source models

[11:12] Benchmarks

[14:20] Recommending the evaluation process

[17:43] LLMs for other LLMs

[20:40] Debugging failed evaluation models

[24:25] Prompt injection

[27:32] Alignment

[32:45] Open Assist

[35:51] Garbage in, garbage out

[37:00] Ragas

[42:52] Valuable use case besides OpenAI

[45:11] Fine-tuning LLMs

[49:07] Connect with Shahul if you need help with Ragas @Shahules786 on Twitter

[49:58] Wrap up


MLOps.community by Demetrios

4.6 (23 ratings)


More shows like MLOps.community

This Week in Startups by Jason Calacanis (1,296 listeners)

The Changelog: Software Development, Open Source by Changelog Media (288 listeners)

The a16z Show by Andreessen Horowitz (1,105 listeners)

Software Engineering Daily by Software Engineering Daily (626 listeners)

Talk Python To Me by Michael Kennedy (583 listeners)

Super Data Science: ML & AI Podcast with Jon Krohn by Jon Krohn (306 listeners)

NVIDIA AI Podcast by NVIDIA (343 listeners)

Practical AI by Practical AI LLC (212 listeners)

Dwarkesh Podcast by Dwarkesh Patel (551 listeners)

Big Technology Podcast by Alex Kantrowitz (512 listeners)

No Priors: Artificial Intelligence | Technology | Startups by Conviction (150 listeners)

Latent Space: The AI Engineer Podcast by Latent.Space (101 listeners)

This Day in AI Podcast by Michael Sharkey, Chris Sharkey (228 listeners)

The AI Daily Brief: Artificial Intelligence News and Analysis by Nathaniel Whittemore (688 listeners)

AI + a16z by a16z (34 listeners)