

This episode features Dianne Na Penn, a senior product leader at Anthropic, discussing the launch of Claude Opus 4.5 and the evolution of frontier AI models. The conversation explores how Anthropic approaches model development—balancing ambitious capability roadmaps with user feedback, making strategic bets on areas like agentic coding and computer use while deliberately avoiding others like image generation. Dianne shares insights on the shifting nature of AI evaluation (moving beyond saturated benchmarks like SWE-bench toward more open-ended measures), the evolution of scaffolding from "training wheels" to intelligence amplifiers, and why she believes we're closer to transformative long-running AI than most people think. She also discusses Anthropic's distinctive culture of authenticity, the underappreciated benefits of model alignment for producing independent-thinking AI, and why the real bottleneck to AI agents isn't model capability anymore but product innovation.
(0:00) Intro
(0:57) Starting the Work on Opus 4.5
(2:04) Model Capabilities and Surprises
(5:59) Computer Use and Practical Applications
(7:21) Pricing and Positioning
(10:02) Customer Feedback and Early Access
(16:44) The Reality of Enterprise Agents
(18:47) Future of AI and Long-Running Intelligence
(28:06) Anthropic's Culture and Decision Making
(30:31) Key Decisions and Fun Moments
(33:45) Quickfire
With your co-hosts:
@jacobeffron
- Partner at Redpoint, Former PM Flatiron Health
@patrickachase
- Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia
- Former COO GitHub, Founder Bitnami (acq'd by VMware)
@jordan_segall
- Partner at Redpoint
By Redpoint Ventures
