

The engineers most resistant to AI coding tools are not the junior ones. Chris Kelly, Head of Product at Augment Code, has watched senior engineers, people with 20 years of experience shipping production systems, be the last to adopt. The reason is not fear of job loss: they were trained to build deterministic systems, where A plus B always equals C, and a non-deterministic model that occasionally writes wrong code breaks a contract they have never had to question.
The quality gap is not a model problem; it is a context problem. Most teams point agents at a codebase without giving them the same linters, test suites, and tooling a human engineer relies on, then wonder why the output does not hold up in production. Chris breaks down the exact daily workflow he runs, in which he writes almost no individual lines of code himself; how semantic retrieval changes what an agent actually understands about your codebase compared with basic file search; and why, for non-technical leaders hoping to compress dev timelines, the bottleneck was never the coding to begin with.
Topics Discussed:
Why senior engineers are last to adopt AI coding tools
Giving agents linters and test suites to close the production quality gap
Semantic codebase retrieval vs. grepping as a context strategy
Chris's continuous code review workflow replacing individual code writing
Why coding was never the long part and what actually compresses with AI
Skill atrophy risk for engineers skipping hands-on coding experience
Code review as the highest-leverage engineering skill to hire for now
By Cadre AI