

At a time when users are being asked to wait unthinkable seconds for AI products to generate art and answers, speed is what will win the battle heating up in AI computing. At least according to today’s guest, Tuhin Srivastava, the CEO and co-founder of Baseten, which gives customers scalable AI infrastructure, starting with inference. In this episode of No Priors, Sarah, Elad, and Tuhin discuss why efficient-code solutions are more desirable than no-code, the most surprising use cases for Baseten, and why all of their jobs are very defensible from AI.
Show Links:
Sign up for new podcasts every week. Email feedback to [email protected]
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @tuhinone
Show Notes:
(0:00) Introduction
(1:19) Capabilities of efficient-code-enabled development
(4:11) Differences in training and inference workloads
(6:12) AI product acceleration
(8:48) Leading on inference benchmarks at Baseten
(12:08) Optimizations for different types of models
(16:11) Internal vs open source models
(19:01) Timeline for enterprise scale
(21:53) Rethinking investment in compute spend
(27:50) Defensibility in AI industries
(31:30) Hardware and the chip shortage
(35:47) Speed is the way to win in this industry
(38:26) Wrap
By Conviction
4.4 • 119 ratings
