Accurate, customizable search is one of the most immediate AI use cases for companies and general users. Today on No Priors, Elad and Sarah are joined by Pinecone CEO Edo Liberty to talk about how RAG architecture is improving semantic search and making LLMs more accessible. By using a RAG model, Pinecone makes it possible for companies to vectorize their data and query it for the most accurate responses.
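The "vectorize and query" idea can be sketched in a few lines. This is a toy illustration of vector retrieval for RAG, not Pinecone's actual API; the document names and embedding values are invented, and real systems use a learned embedding model and a vector database rather than hand-made vectors.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# "Documents" already embedded as vectors (hypothetical values).
corpus = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "api reference": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=2):
    # Rank documents by similarity to the query embedding; the top-k
    # texts are what a RAG pipeline passes to the LLM as grounding context.
    ranked = sorted(corpus.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

print(retrieve([0.85, 0.15, 0.05], k=1))  # → ['refund policy']
```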
In this episode, they talk about how Pinecone’s Canopy product makes search more accurate by using larger data sets more efficiently and cost-effectively, something that was nearly impossible before serverless options existed. They also get into how RAG architecture increases accuracy across the board, how these models can bring “operational sanity” to their customers’ datasets, and hybrid search models that combine keywords and embeddings.
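The hybrid search idea mentioned above, combining keyword matching with embedding similarity, can be sketched as a simple weighted blend. The scoring, weights, and sample documents here are illustrative assumptions, not how Pinecone actually implements sparse-dense search.

```python
import math

docs = [
    {"text": "how to reset your password", "vec": [0.9, 0.1]},
    {"text": "billing and invoices", "vec": [0.2, 0.8]},
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def keyword_score(query, text):
    # Fraction of query terms that appear in the document (crude sparse signal).
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q)

def hybrid_search(query, query_vec, alpha=0.5):
    # alpha blends the dense (semantic) and sparse (keyword) signals.
    scored = [
        (alpha * cosine(query_vec, d["vec"])
         + (1 - alpha) * keyword_score(query, d["text"]), d["text"])
        for d in docs
    ]
    return max(scored)[1]

print(hybrid_search("reset password", [0.85, 0.2]))  # → how to reset your password
```

Keyword matching catches exact terms (product names, error codes) that embeddings can blur, while embeddings catch paraphrases that keywords miss; blending the two is the usual motivation for hybrid search.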
Sign up for new podcasts every week. Email feedback to [email protected]
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @EdoLiberty
Show Notes:
(0:00) Introduction to Edo and Pinecone
(2:01) Use cases for Pinecone and RAG models
(6:02) Corporate internal uses for semantic search
(10:13) Removing the limits of RAG with Canopy
(14:02) Hybrid search
(16:51) Why keep Pinecone closed source
(22:29) Infinite context
(23:11) Embeddings and data leakage
(25:35) Fine tuning the data set
(27:33) What’s next for Pinecone
(28:58) Separating reasoning and knowledge in AI