


Accurate, customizable search is one of the most immediate AI use cases for companies and general users. Today on No Priors, Elad and Sarah are joined by Pinecone CEO Edo Liberty to talk about how RAG architecture is improving semantic search and making LLMs more accessible. By using a RAG model, Pinecone makes it possible for companies to vectorize their data and query it for the most accurate responses.
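The vectorize-and-query flow described above can be sketched with a toy in-memory index. This is illustrative only, not Pinecone's API: a bag-of-words counter stands in for a real embedding model, and `retrieve` is a hypothetical helper.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a learned embedding model: a bag-of-words vector.
    # Real RAG pipelines use a neural embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    qv = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Pinecone is a vector database for semantic search",
    "Serverless options cut the cost of large indexes",
]
context = retrieve("what is a vector database", docs, k=1)
# The retrieved context is then prepended to the LLM prompt,
# grounding the model's answer in the company's own data.
prompt = f"Context: {context[0]}\nQuestion: what is a vector database"
```

The point of the flow is that the LLM never sees the whole corpus, only the top-k retrieved passages, which is what keeps responses both accurate and cheap to produce.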
In this episode, they talk about how Pinecone’s Canopy product is making search more accurate by using larger data sets in a way that is more efficient and cost-effective, which was almost impossible before there were serverless options. They also get into how RAG architecture increases accuracy across the board, how these models can improve “operational sanity” in their customers’ datasets, and hybrid search models that combine keywords and embeddings.
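Hybrid search of the kind mentioned above is commonly implemented as a weighted blend of a lexical (keyword) score and an embedding similarity score. A minimal sketch, assuming both scores are pre-normalized to [0, 1]; the `alpha` parameter here is illustrative, not a Pinecone API parameter:

```python
def hybrid_score(keyword_score: float, vector_score: float, alpha: float = 0.5) -> float:
    """Blend lexical and semantic relevance into one ranking score.

    alpha = 1.0 -> pure vector (semantic) search
    alpha = 0.0 -> pure keyword search
    """
    return alpha * vector_score + (1.0 - alpha) * keyword_score

# Hypothetical candidates with (keyword_score, vector_score) pairs.
candidates = {
    "doc_a": (0.9, 0.2),  # strong keyword match, weak semantic match
    "doc_b": (0.3, 0.8),  # weak keyword match, strong semantic match
}
# Leaning toward semantic relevance (alpha = 0.7) ranks doc_b first.
best = max(candidates, key=lambda d: hybrid_score(*candidates[d], alpha=0.7))
```

Tuning `alpha` lets the same index serve exact-match queries (product codes, names) and fuzzy natural-language queries without maintaining two separate systems.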
Sign up for new podcasts every week. Email feedback to [email protected]
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @EdoLiberty
Show Notes:
(0:00) Introduction to Edo and Pinecone
(2:01) Use cases for Pinecone and RAG models
(6:02) Corporate internal uses for semantic search
(10:13) Removing the limits of RAG with Canopy
(14:02) Hybrid search
(16:51) Why keep Pinecone closed source
(22:29) Infinite context
(23:11) Embeddings and data leakage
(25:35) Fine tuning the data set
(27:33) What’s next for Pinecone
(28:58) Separating reasoning and knowledge in AI
By Conviction
