
Harrison Chase, co-founder and CEO of LangChain, joins the MAD Podcast to explain why everything in AI is getting rebuilt. As agents evolve from simple prompt-based systems into software that can plan, use tools, write code, manage files, and remember things over time, the real frontier is shifting from the model itself to the stack around the model. In this conversation, we go deep on harnesses, subagents, filesystems, sandboxes, observability, memory, and the new infrastructure required to make AI agents actually work in the real world.
(00:00) Intro - meet Harrison Chase
(01:32) What changed in agents over the last year
(03:57) Why coding agents are ahead
(06:26) Do models commoditize the framework layer?
(08:27) Harnesses, in plain English
(10:11) Why system prompts matter so much
(13:11) The upside — and downside — of subagents
(15:31) Why a useful agent needs a filesystem
(18:13) The core primitives of modern agents
(19:12) Skills: the new primitive
(20:19) What context compaction actually means
(23:02) How memory works in agents
(25:16) One mega-agent or many specialized agents?
(27:46) Has MCP won?
(29:38) Why agents need sandboxes
(32:35) How sandboxes help with security
(33:32) How Harrison Chase started LangChain
(37:24) LangChain vs LangGraph vs Deep Agents
(40:17) Why observability matters more for agents
(41:48) Evals, no-code, and continuous improvement
(44:41) What LangChain is building next
(45:29) Where the real moat in AI lives
By Matt Turck