
Summary: I built a simple back-of-the-envelope model of AI agent economics that combines Ord's half-life analysis of agent reliability with real inference costs. The core idea is that an agent's cost per successful outcome scales exponentially with task length, while human cost scales linearly. This creates a sharp viability boundary that cost reductions alone cannot meaningfully shift. The only parameter that matters much is the agent's half-life (its reliability horizon), and extending it is precisely what requires the continual-learning breakthrough that I think is essential for AGI-level agents and that some place 5-20 years away. I think this has underappreciated implications for the $2T+ AI infrastructure investment thesis.
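To make the exponential-versus-linear comparison concrete, here is a minimal sketch of that arithmetic. The half-life, inference rate, and wage below are illustrative assumptions, not the post's actual parameters:

```python
def agent_cost_per_success(task_hours: float, half_life_hours: float,
                           agent_rate: float) -> float:
    """Expected agent cost per successful completion.

    With a constant per-step failure rate, success probability is
    p = 0.5 ** (task_hours / half_life_hours). Retrying until success
    takes 1 / p attempts in expectation (geometric distribution), so
    expected cost grows exponentially in task length.
    """
    p_success = 0.5 ** (task_hours / half_life_hours)
    return agent_rate * task_hours / p_success

def human_cost(task_hours: float, hourly_wage: float) -> float:
    """Human cost scales linearly with task length."""
    return hourly_wage * task_hours

# Placeholder parameters (illustrative assumptions, not the post's numbers):
HALF_LIFE = 2.5   # agent's 50% time horizon, in hours
AGENT_RATE = 5.0  # inference cost per attempted task-hour, in dollars
WAGE = 75.0       # human hourly wage, in dollars

for t in [1, 2, 4, 8, 16]:
    print(f"{t:>2}h task: agent ${agent_cost_per_success(t, HALF_LIFE, AGENT_RATE):>10,.2f}"
          f" vs human ${human_cost(t, WAGE):>8,.2f}")
```

Under these toy numbers the agent wins easily on short tasks and loses badly on long ones, and halving the agent's per-hour rate only nudges the crossover point, because the 1/p term dominates.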
The setup
Toby Ord's "Half-Life" analysis (2025) showed that AI agent success rates decay exponentially with task length, in a pattern analogous to radioactive decay: if an agent completes a 1-hour task with 50% probability, it completes a 2-hour task with roughly 25% probability and a 4-hour task with about 6%. The mechanism is a roughly constant per-step failure probability; because longer tasks chain more steps, overall success decays exponentially.
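For concreteness, a constant per-step failure rate gives a success probability of p(t) = 0.5^(t/h) for a task of length t and half-life h. A minimal sketch reproducing the numbers above:

```python
# Success probability under a constant per-step failure rate:
# p(t) = 0.5 ** (t / half_life), i.e., exponential decay in task length.
def success_probability(task_hours: float, half_life_hours: float) -> float:
    return 0.5 ** (task_hours / half_life_hours)

for t in [1, 2, 4]:
    print(f"{t}h task: {success_probability(t, half_life_hours=1.0):.1%}")
# 1h task: 50.0%
# 2h task: 25.0%
# 4h task: 6.2%
```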
METR's 2025 data showed the 50% time horizon for the best agents was roughly 2.5-5 hours (model-dependent) and had been doubling every ~7 months [...]
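Taking that trend at face value, extrapolating the horizon forward is simple compound growth. The sketch below treats the ~7-month doubling time and a 2.5-hour starting horizon as rough assumptions, not METR's exact fit:

```python
# Extrapolate the 50% time horizon under a fixed doubling time.
# Starting horizon and doubling time are rough figures quoted above,
# used here only for illustration.
def projected_horizon(start_hours: float, months_ahead: float,
                      doubling_months: float = 7.0) -> float:
    return start_hours * 2 ** (months_ahead / doubling_months)

for months in [0, 12, 24, 36]:
    print(f"+{months:>2} months: {projected_horizon(2.5, months):.1f}h horizon")
```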
---
Outline:
(00:57) The setup
(02:04) The model
(03:26) Results: base case
(05:01) Finding 1: cost reductions cannot beat the exponential
(06:24) Finding 2: the half-life is the whole game
(08:02) Finding 3: task decomposition helps but has limits
(09:33) What this means for the investment thesis
(11:38) Interactive model
(11:57) Caveats and limitations
---
Narrated by TYPE III AUDIO.