

This week the crew flips the script on their usual AI conversations and gets honest about what AI adoption actually looks like inside larger tech orgs — not the Twitter highlight reel.
Matt kicks things off with a tweet from Dax (of OpenCode / formerly SST) arguing that companies are talking about their teams as if they were operating at peak efficiency and merely bottlenecked by typing speed.
"everyone's talking about their teams like they were at the peak of efficiency and bottlenecked by ability to produce code"
The reality Dax describes:
Matt admits a lot of this hits uncomfortably close to home at HubSpot.
Dillon shares a stat from a recent all-hands at his company: ~$500K/year in AI spend across roughly 200 engineers. It didn't sound like much in the room, but the more he sits with it, the weirder it feels — especially since leadership seems totally fine with it. The crew riffs on the bizarre new world where engineers may soon need to negotiate not just salary but a personal AI token budget as part of their comp package.
Dillon makes a confession: he's mostly using AI because it's free and he's being asked to. If the company pulled the plug tomorrow, he'd happily go back to free models and probably be fine.
Dillon drops the spicy take of the episode: AI right now is basically a really fancy search engine that copies and pastes code for you — it's just stealing the internet's content and wrapping it in a bow. Scott mostly agrees, framing it as a streamlined command-line search that returns blurbs instead of links.
Matt pushes back. Tab completion? Sure, that framing fits. But agents like Claude Code in plan mode are doing something more — decomposing problems into small enough sub-problems that the "search engine" framing starts to break down. Scott concedes that plan mode and the conversational back-and-forth ("give me three ways to solve this and tell me which is strongest") is genuinely valuable in a way no search engine can replicate.
The conversation lands on the real cost of all this: engineers shipping Claude's output without understanding it, leaving in the "I did a thing" comments, and quietly building up tech debt that the few engineers who still care will eventually have to clean up. Scott talks about his own discipline of never shipping code he doesn't understand well enough to defend in a sev review.
The Super Bowl MVP was wrong. It should've been the kicker — who, per Dillon and Matt, basically kept his team in the game. This unlocks some deep lore: Dillon himself was a kicker in high school. Suddenly his entire personality makes sense.
By Matt Hamlin, Dillon Curry & Scott Kaye