
[This has been lightly edited from the original post, eliminating some introductory material that LW readers won't need. Thanks to Stefan Schubert for suggesting I repost here. TL;DR for readers already familiar with the METR Measuring AI Ability to Complete Long Tasks paper: this post highlights some gaps between the measurements used in the paper and real-world work – gaps which are discussed in the paper, but have often been overlooked in subsequent discussion.]
It's difficult to measure progress in AI, despite the slew of benchmark scores that accompany each new AI model.
Benchmark scores don’t provide much perspective, because we keep having to change measurement systems. Almost as soon as a benchmark is introduced, it becomes saturated – models learn to ace the test. So someone introduces a more difficult benchmark, whose scores aren’t comparable to the old one. There's nothing to draw a long-term trend line on.
[...]
---
Outline:
(01:47) We're Gonna Need a Harder Test
(03:23) Grading AIs on a Consistent Curve
(06:37) How Applicable to the Real World are These Results?
(13:50) What the METR Study Tells Us About AGI Timelines
(16:14) Recent Models Have Been Ahead of the Curve
(18:20) We're Running Out Of Artificial Tasks
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---