
Hey everyone! Thank you so much for watching the 112th episode of the Weaviate Podcast! This is another super exciting one, diving into the release of the Vertex AI RAG Engine, its integration with Weaviate, and thoughts on the future of connecting AI systems with knowledge sources!

The podcast begins by reflecting on Bob's experience speaking at Google in 2016 on Knowledge Graphs! This transitions into discussing the evolution of perspectives on knowledge representation, covering the semantic web, ontologies, search indexes, and data warehouses. That leads into how much knowledge is encoded in the prompts themselves, and the resurrection of rule-based systems with LLMs!

The podcast then turns to the modern consensus in RAG pipeline engineering. Lewis suggests that parsing during data ingestion is the biggest bottleneck and the lowest-hanging fruit to fix. Bob presents the re-indexing problem and how embedding models complicate it further!

Discussing the state of knowledge representation systems inspired me to ask Bob more about his vision for Generative Feedback Loops and controlling databases with LLMs. How open-ended will this be? We then discuss the role that Agentic Architectures and Compound AI Systems are playing in the state of AI. What is the right way to connect prompts with other prompts, external tools, and agents?

The podcast concludes by discussing a really interesting emerging pattern in the deployment of RAG systems: whereas the first generation of RAG systems was typically user-facing, such as customer support chatbots, the next generation is more API-based. The launch of the Vertex AI RAG Engine quickly shows you how to use RAG Engine as a tool for a Gemini Agent!