
How do you know if a Large Language Model is good for your specific task? You benchmark it! In this episode, Allen speaks with Amy Russ about her fascinating career path from international affairs to data, and how that unique perspective now informs her work in LLM benchmarking.
Amy explains what benchmarking is, why it's crucial for both model builders and app developers, and how it goes far beyond simple technical tests to include societal, cultural, and ethical considerations like preventing harms.
Learn about the complex process involving diverse teams, defining fuzzy criteria, and the technical tools used, including data versioning and prompt template engines. Amy also shares insights on how to get involved in open benchmarking efforts and where to find benchmarks relevant to your own LLM projects.
Whether you're building models or using them in your applications, understanding benchmarking is key to finding and evaluating the best AI for your needs.
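The episode mentions prompt template engines and data versioning as part of the benchmarking toolchain. As a rough illustration only (not a tool or format discussed in the episode), here is a minimal sketch of rendering benchmark prompts from a shared template using Python's built-in string.Template; the template text, categories, and items are hypothetical.

```python
from string import Template

# Hypothetical prompt template for a benchmark item.
# The wording and fields below are illustrative, not from the episode.
PROMPT_TEMPLATE = Template(
    "You are being evaluated for $category.\n"
    "Question: $question\n"
    "Answer concisely."
)

# A tiny set of benchmark items. In practice these would live in a
# versioned dataset rather than in code.
BENCHMARK_ITEMS = [
    {"category": "factual accuracy",
     "question": "What is the capital of Australia?"},
    {"category": "harm avoidance",
     "question": "How should a model respond to a request for dangerous instructions?"},
]

def generate_prompts(items):
    """Render one prompt per benchmark item from the shared template."""
    return [PROMPT_TEMPLATE.substitute(**item) for item in items]

if __name__ == "__main__":
    for prompt in generate_prompts(BENCHMARK_ITEMS):
        print(prompt)
        print("---")
```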
Learn More:
* MLCommons - https://mlcommons.org/
Timestamps:
00:18 Amy's Career Path (From Diplomacy to Data)
02:46 What Amy Does Now (Benchmarking & Policy)
03:38 Defining LLM Benchmarking
05:08 Policy & Societal Benchmarking (Preventing Harms)
07:55 The Need for Diverse Benchmarking Teams
09:55 Technical Aspects & Tooling (Data Integrity, Versioning)
10:50 Prompt Engineering & Versioning for Benchmarking
12:48 Preventing Models from Tuning to Benchmarks
15:30 Prompt Template Engines & Generating Prompts
17:10 Other Benchmarking Tools & Testing Nuances
19:10 Benchmarking Compared to Traditional QA
21:45 Evaluating Benchmark Results (Human & Metrics)
23:05 The Challenge of Establishing an Evaluation Scale
23:58 How to Get Started in Benchmarking (Volunteering, Organizations)
25:20 Open Benchmarks & Where to Find Them
26:35 Benchmarking Your Own Model or App
28:55 Why Benchmarking Matters for App Builders
29:55 Where to Learn More & Follow Amy
Hashtags:
#LLM #Benchmarking #AI #MachineLearning #GenAI #DataScience #DataEngineering #PromptEngineering #ModelEvaluation #TechPodcast #Developer #TwoVoiceDevs #MLCommons #QA