
[This has been lightly edited from the original post, eliminating some introductory material that LW readers won't need. Thanks to Stefan Schubert for suggesting I repost here. TL;DR for readers already familiar with the METR paper "Measuring AI Ability to Complete Long Tasks": this post highlights some gaps between the measurements used in the paper and real-world work – gaps which are discussed in the paper, but have often been overlooked in subsequent discussion.]
It's difficult to measure progress in AI, despite the slew of benchmark scores that accompany each new AI model.
Benchmark scores don't provide much perspective, because we keep having to change measurement systems. Almost as soon as a benchmark is introduced, it becomes saturated – models learn to ace the test. So someone introduces a more difficult benchmark, whose scores aren't comparable to the old one. There's nothing to draw a long-term trend line on.
[...]
---
Outline:
(01:47) We're Gonna Need a Harder Test
(03:23) Grading AIs on a Consistent Curve
(06:37) How Applicable to the Real World are These Results?
(13:50) What the METR Study Tells Us About AGI Timelines
(16:14) Recent Models Have Been Ahead of the Curve
(18:20) We're Running Out Of Artificial Tasks
---
Narrated by TYPE III AUDIO.