Hey, it’s Marek.
I’m recording this from Prague, just before a public lecture on AI in which I’m going to claim that the Astronomical Clock was the OG digital minion.
On the way to Czechia, ChatGPT offered me a sightseeing itinerary. It was 90% accurate, which sounds great until you imagine chaining it with a ticket-booking agent (also 90% accurate) and a reservation system (also 90% accurate).
0.9 × 0.9 × 0.9 = 0.729 ≈ 0.73
Suddenly, my “smart” day in Prague is only 73% likely to work. Add a few more agents to handle transport, restaurant bookings, and museum timings, and I’d be wandering the Charles Bridge (or rather, sipping a Pilsner at Lokál) and wondering why nothing went to plan.
So I called a friend instead. One human. 100% reliable at showing me his city. Plus, he took me on a running tour around town, something ChatGPT could never do!
In this week’s audio edition, I read the most recent post from The Economy of Algorithms and explore why AI agent chains fail where copilots succeed. It’s about compound failure: the evil twin of compound interest that nobody in Silicon Valley wants to discuss.
The math is brutal: string together 10 agents at 90% reliability each, and you get a 35% end-to-end success rate. Your AI workflow is no longer a tool but a lottery ticket.
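If you want to check the 10-90-35 arithmetic yourself, here’s a back-of-the-napkin Python sketch. The one assumption (mine, not from the post) is that each agent succeeds independently with the same probability, which real chains won’t match exactly:

```python
# Compound failure in an agent chain: independent steps multiply,
# so end-to-end reliability decays exponentially with chain length.

def chain_success(per_step_reliability: float, steps: int) -> float:
    """End-to-end success probability of a chain of independent agents."""
    return per_step_reliability ** steps

for n in (1, 3, 5, 10):
    print(f"chain of {n:>2}: {chain_success(0.9, n):.0%} end-to-end")

# Output:
# chain of  1: 90% end-to-end
# chain of  3: 73% end-to-end
# chain of  5: 59% end-to-end
# chain of 10: 35% end-to-end
```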
Listen for:
* Why your occasionally high colleague is a perfect AI metaphor
* The 10-90-35 rule that should terrify CTOs
* Why copilots are thriving while autonomous agents collect dust
* How to break the compound failure chain (hint: it involves actual humans; there’s a toy sketch of this just below)
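As a teaser for that last point, here’s a toy simulation (my illustration, not code from the post) of what a human checkpoint does to a 10-step chain. I generously model the human as 100% reliable, like my friend in Prague:

```python
# Toy model: a human checkpoint catches a failed step before it compounds,
# so the chain's success rate stops multiplying toward zero.
import random

def run_chain(steps: int, reliability: float, human_checkpoint: bool) -> bool:
    """Simulate one end-to-end run of an agent chain."""
    for _ in range(steps):
        ok = random.random() < reliability
        if not ok and human_checkpoint:
            ok = True  # the (assumed perfect) human fixes the failed step
        if not ok:
            return False
    return True

trials = 100_000
for checkpoint in (False, True):
    wins = sum(run_chain(10, 0.9, checkpoint) for _ in range(trials))
    mode = "with human checkpoints" if checkpoint else "fully autonomous"
    print(f"{mode}: ~{wins / trials:.0%} of 10-step chains succeed")

# fully autonomous: ~35% of 10-step chains succeed
# with human checkpoints: ~100% (a perfect human is doing the heavy lifting)
```

A real reviewer isn’t perfect either, of course; the point is that a checkpoint resets the multiplication instead of letting errors compound.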
Stay curious,
Marek
P.S. The Czechs celebrate Jára Cimrman, patron saint of “almosts.” He missed the North Pole by seven meters and almost invented the Internet. Agent chains feel the same: impressive on paper, but when it counts, they stop just short. At least right now.
P.P.S. The Pilsner at Lokál was glorious.