Inference by Turing Post

When Will We Train Once and Learn Forever? Insights from Dev Rishi, CEO and co-founder @Predibase



What it actually takes to build models that improve over time. In this episode, I sit down with Devvret Rishi, CEO and co-founder of Predibase, to talk about the shift from static models to continuous learning loops, the rise of reinforcement fine-tuning (RFT), and why the real future of enterprise AI isn’t chatty generalists – it’s focused, specialized agents that get the job done.


We cover:

The real meaning behind "train once, learn forever"

How RFT works (and why it might replace traditional fine-tuning)

What makes inference so hard in production

Open-source model gaps – and why evaluation is still mostly vibes

Dev’s take on agentic workflows, intelligent inference, and the road ahead

If you're building with LLMs, this conversation is packed with hard-earned insights from someone who's doing the work – and shipping real systems. Dev is a super structured thinker! I really enjoyed this conversation.

Did you like the video? You know what to do:


📌 Subscribe for more deep dives with the minds shaping AI.

Leave a comment if you have something to say.

Like it if you liked it.

That’s it.

Oh yep, one more thing: Thank you for watching and sharing this video. We truly appreciate you.


Guest:

Devvret Rishi, co-founder and CEO at Predibase

https://predibase.com/

If you don’t see a transcript, subscribe to receive our edited conversation as a newsletter: https://www.turingpost.com/subscribe


Chapters:

00:00 - Intro

00:07 - When Will We Train Once and Learn Forever?

01:04 - Reinforcement Fine-Tuning (RFT): What It Is and Why It Matters

03:37 - Continuous Feedback Loops in Production

04:38 - What's Blocking Companies From Adopting Feedback Loops?

05:40 - Upcoming Features at Predibase

06:11 - Agentic Workflows: Definition and Challenges

08:08 - Lessons From Google Assistant and Agent Design

08:27 - Balancing Product and Research in a Fast-Moving Space

10:18 - Pivoting After the ChatGPT Moment

12:53 - The Rise of Narrow AI Use Cases

14:53 - Strategic Planning in a Shifting Landscape

16:51 - Why Inference Gets Hard at Scale

20:06 - Intelligent Inference: The Next Evolution

20:41 - Gaps in the Open Source AI Stack

22:06 - How Companies Actually Evaluate LLMs

23:48 - Open Source vs. Closed Source Reasoning

25:03 - Dev’s Perspective on AGI

26:55 - Hype vs. Real Value in AI

30:25 - How Startups Are Redefining AI Development

30:39 - Book That Shaped Dev’s Thinking

31:53 - Is Predibase a Happy Organization?

32:25 - Closing Thoughts


Turing Post is a newsletter about AI's past, present, and future. Publisher Ksenia Semenova explores how intelligent systems are built – and how they’re changing how we think, work, and live.

Sign up for Turing Post: https://www.turingpost.com


FOLLOW US

Devvret and Predibase:

https://devinthedetail.substack.com/

https://www.linkedin.com/company/predibase/

Ksenia and Turing Post:

https://x.com/TheTuringPost

https://www.linkedin.com/in/ksenia-se

https://huggingface.co/Kseniase
