

What if the real breakthrough in AI isn't the model itself, but the data that gives it knowledge? In this episode of Tech Talks Daily, I sit down with Edo Liberty, founder and Chief Scientist of Pinecone, to unpack how vector databases have quietly become the backbone of modern AI infrastructure.
We explore why retrieval-augmented generation (RAG) works so effectively out of the box, and why fine-tuning large models often adds complexity without real-world value. Edo shares how Pinecone's research revealed that different models—from OpenAI to Anthropic—require differently structured context to perform well, a discovery that's reshaping how enterprises think about AI implementation.
As the former Director of Research at Yahoo and AWS, Edo offers a grounded perspective on where the real innovation is happening. He explains how the shift from traditional data structures to vector representations is redefining how machines "know" and retrieve information, creating smarter, context-aware systems.
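For readers curious what this retrieval pattern looks like in practice, here is a minimal, self-contained sketch of the idea discussed in the episode: documents are embedded as vectors, a query is embedded the same way, the nearest documents are retrieved, and that context is assembled into a prompt for a language model. The hashed bag-of-words embedding, the toy in-memory index, and the prompt template are illustrative placeholders, not Pinecone's actual implementation or API.

```python
import hashlib
import math
from collections import Counter

DIM = 256  # toy embedding dimensionality

def embed(text: str) -> list[float]:
    """Toy embedding: hash each word into a fixed-size vector.
    Stands in for a real embedding model purely for illustration."""
    vec = [0.0] * DIM
    for word, count in Counter(text.lower().split()).items():
        idx = int(hashlib.md5(word.encode()).hexdigest(), 16) % DIM
        vec[idx] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalised, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# "Index": documents stored alongside their vectors (a vector database in miniature).
docs = [
    "Vector databases store embeddings and support nearest-neighbour search.",
    "Fine-tuning adjusts a model's weights on domain-specific data.",
    "Retrieval-augmented generation supplies relevant documents as context.",
]
index = [(doc, embed(doc)) for doc in docs]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents whose vectors are closest to the query vector."""
    qv = embed(query)
    ranked = sorted(index, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

query = "How does RAG give a model knowledge?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this assembled prompt would then be sent to the LLM of your choice
```

The point of the sketch is the division of labour: the knowledge lives in the retrievable data, and the model only has to reason over whatever context is handed to it, which is why the structure of that context matters so much from model to model.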
We also touch on his recent transition to Chief Scientist, his excitement for returning to hands-on research, and why he believes the convergence of AI and data represents the defining technological shift of our lifetime.
So, what does it mean for developers, business leaders, and anyone building with AI when knowledge becomes an accessible layer of infrastructure? Can we build systems that truly "know" as humans do?
Join the conversation, and after listening, I'd love to hear your thoughts—do you think the future of AI lies in the models or in the data that feeds them?
By Neil C. Hughes