
How do we really measure what AI can do—not just how well it performs on a test? In this episode of The Deep Dive, we explore an eye-opening new study that rethinks the way we evaluate AI progress. Forget percent scores and standardized benchmarks—this research introduces a new and surprisingly intuitive concept: the “time horizon” of AI. It asks a simple but powerful question: how long a real-world task can an AI model complete with at least a 50% chance of success?
We break down the study’s fascinating findings, including how researchers used over 800 recordings of human professionals tackling tasks that ranged from quick coding fixes to 30-hour software projects. Then they put top AI models—like Claude 3.7 and others—to the test. The result? A striking discovery: AI's ability to complete longer tasks has been doubling every 7 months since 2019.
We explore what this means for fields like software development, where AI might be able to complete a month’s worth of work in a single task within just five years. But it's not just about task length—it's about complexity. The researchers introduce “messiness factors” to capture how AI handles the kind of unpredictable, real-world challenges humans deal with daily. And surprisingly, AI is improving on those, too.
Tune in as we unpack what this means for the future of work, creativity, and the role of human expertise in an AI-accelerated world. Will AI soon operate like a junior employee? Can it tackle dynamic, messy tasks with confidence? And what might this all mean for your job five years from now?
If you’re curious about what “AI progress” really looks like—and how fast it's moving—this episode is your roadmap.
Read more: https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/