At a time when users are being asked to wait unthinkable seconds for AI products to generate art and answers, speed is what will win the battle heating up in AI computing. At least according to today’s guest, Tuhin Srivastava, the CEO and co-founder of Baseten, which gives customers scalable AI infrastructure, starting with inference. In this episode of No Priors, Sarah, Elad, and Tuhin discuss why efficient-code solutions are more desirable than no-code ones, the most surprising use cases for Baseten, and why all of their jobs are highly defensible from AI.
Show Links:
Sign up for new podcasts every week. Email feedback to [email protected]
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @tuhinone
Show Notes:
(0:00) Introduction
(1:19) Capabilities enabled by efficient-code development
(4:11) Differences between training and inference workloads
(6:12) AI product acceleration
(8:48) Leading on inference benchmarks at Baseten
(12:08) Optimizations for different types of models
(16:11) Internal vs open source models
(19:01) Timeline for enterprise scale
(21:53) Rethinking investment in compute spend
(27:50) Defensibility in AI industries
(31:30) Hardware and the chip shortage
(35:47) Speed is the way to win in this industry
(38:26) Wrap
By Conviction
4.3 · 124 ratings