
Anthropic's co-founder and chief scientist Jared Kaplan discusses AI's rapid evolution, the shorter-than-expected timeline to human-level AI, and how Claude's "thinking time" feature represents a new frontier in AI reasoning capabilities.
In this episode you'll hear:
Our new show
This was originally recorded for "Friday with Azeem Azhar", a new show that airs every Friday at 9am PT / 12pm ET on Exponential View. You can tune in through my Substack linked below. The format is experimental and we'd love your feedback, so feel free to comment or email your thoughts to our team at [email protected].
Timestamps:
(00:00) Episode trailer
(01:27) Jared's updated prediction for reaching human-level intelligence
(08:12) What will limit scaling laws?
(11:13) How long will we wait between model generations?
(16:27) Why test-time scaling is a big deal
(21:59) There’s no reason why DeepSeek can’t be competitive algorithmically
(25:31) Has Anthropic changed their approach to safety vs speed?
(30:08) Managing the paradoxes of AI progress
(32:21) Can interpretability and monitoring really keep AI safe?
(39:43) Are model incentives misaligned with public interests?
(42:36) How should we prepare for electricity-level impact?
(51:15) What Jared is most excited about in the next 12 months
Jared's links:
Azeem's links:
Produced by supermix.io
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.