Fill out this short listener survey to help us improve the show: https://forms.gle/bbcRiPTRwKoG2tJx8
In this episode, Simon Eskildsen, co-founder and CEO of TurboPuffer, lays out a compelling vision for how AI-native infrastructure needs to evolve in an era where every application wants to connect massive amounts of context to large language models. He breaks down why traditional databases and even large context windows fall short, especially at scale, and why object-storage-native search is the inevitable next step. Drawing on his experience at Shopify and Readwise, Simon introduces the SCRAP framework to explain the limits of context stuffing and makes a clear case for why scale, cost, recall, ACLs, and performance drive the need for smarter retrieval systems. From practical lessons in building highly reliable infrastructure to hard technical problems in vector indexing, this conversation distills the future of AI infra into first principles, with clarity and depth.
[0:00] Intro
[0:49] The Evolution of AI Context Windows
[2:32] Challenges in AI Data Integration
[3:56] SCRAP: Scale, Cost, Recall, ACLs, and Performance
[9:21] The Rise of Object Storage
[16:47] TurboPuffer Use Cases
[22:32] Challenges in Vector Search
[27:02] Challenges in Query Planning and Data Filtering
[27:53] Focusing on Core Problems and Simplicity
[28:28] Customer Feedback and Future Directions
[29:11] Reliability and Simplicity in Design
[30:39] Evaluating Embedding Models and Search Performance
[32:17] The Role of Vectors in Search Engines
[34:16] Balancing Focus and Expansion
[35:57] AI Infrastructure and Market Trends
[38:36] The Future of Memory in AI
[43:01] Table Stakes for AI in SaaS Applications
[45:55] Multimodal Data and Market Observations
[46:57] Quickfire
With your co-hosts:
@jacobeffron
- Partner at Redpoint, Former PM Flatiron Health
@patrickachase
- Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia
- Former COO GitHub, Founder Bitnami (acq’d by VMware)
@jordan_segall
- Partner at Redpoint