RAG isn't just another AI buzzword; it's the architectural foundation that determines whether enterprise AI delivers value or burns budget. Eva Nahari, former Chief Product Officer at Vectara and four-year venture investor, explains why separating data from models matters more than the models themselves, and why 90% of AI implementations fail at the execution layer, not the technology layer.
The standard approach, dumping an 80-page PDF into a custom GPT, fails because accuracy requires proper data architecture, not better prompts. RAG addresses this by feeding models precise context rather than expecting them to ingest everything at once. But implementation creates new problems: multiple teams build isolated RAG systems across the same enterprise, creating governance nightmares when those hobby projects need to scale. The companies succeeding aren't the ones with the best AI talent; they're the ones that treated data management seriously before the AI hype arrived.
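The retrieve-then-generate pattern described above can be sketched in a few lines. This is a minimal illustration, not Vectara's implementation: the `retrieve` and `build_prompt` functions, the toy corpus, and the keyword-overlap scoring (a stand-in for real vector-similarity search) are all assumptions made for the example.

```python
def retrieve(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Rank chunks by naive word overlap with the query (a stand-in for
    vector-similarity search in a real RAG system) and keep the top_k."""
    q_words = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Hand the model only the retrieved context, never the whole corpus."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

# Toy corpus standing in for an 80-page PDF split into chunks.
chunks = [
    "Refund requests must be filed within 30 days of purchase.",
    "The office cafeteria serves lunch from noon to two.",
    "Refunds are issued to the original payment method.",
]
prompt = build_prompt("How do refunds work?", retrieve("How do refunds work?", chunks))
```

The point of the sketch is the separation the episode emphasizes: the data layer (chunking, retrieval, access control) is independent of the model, so it can be governed, audited, and swapped without retraining anything.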
Topics Discussed:
RAG architecture separating data from models for compliance traceability
Retrieval quality as the primary bottleneck before generation accuracy
RAG sprawl problem from independent team implementations across enterprises
Real-time governance systems using guardian agents for multi-step workflows
Intent logging requirements for auditing agentic decision paths
Agent-in-the-loop pattern replacing human-in-the-loop for workflow efficiency
Documentation quality emerging as critical AI infrastructure investment
MCP standard adoption for cross-system data retrieval and access control
By Cadre AI