
[This has been lightly edited from the original post, eliminating some introductory material that LW readers won't need. Thanks to Stefan Schubert for suggesting I repost here. TL;DR for readers already familiar with the METR paper "Measuring AI Ability to Complete Long Tasks": this post highlights some gaps between the measurements used in the paper and real-world work – gaps which are discussed in the paper, but have often been overlooked in subsequent discussion.]
It's difficult to measure progress in AI, despite the slew of benchmark scores that accompany each new AI model.
Benchmark scores don’t provide much perspective, because we keep having to change measurement systems. Almost as soon as a benchmark is introduced, it becomes saturated – models learn to ace the test. So someone introduces a more difficult benchmark, whose scores aren’t comparable to the old one. There's nothing to draw a long-term trend line on.
[...]
---
Outline:
(01:47) We're Gonna Need a Harder Test
(03:23) Grading AIs on a Consistent Curve
(06:37) How Applicable to the Real World are These Results?
(13:50) What the METR Study Tells Us About AGI Timelines
(16:14) Recent Models Have Been Ahead of the Curve
(18:20) We're Running Out Of Artificial Tasks
---
Narrated by TYPE III AUDIO.