
James Dooley is joined by Sergey Lucktinov to explain how large language models retrieve information during AI searches. They break down the full retrieval pipeline, from metadata-only filtering to light skimming and deep page parsing. The discussion clarifies when LLMs rely on meta titles and descriptions, when pages are never opened, how schema markup is interpreted, and how knowledge vault answers bypass search entirely. This episode gives SEOs and marketers a clear framework for optimising content to survive each LLM retrieval stage.
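The staged pipeline the episode describes can be sketched in code. This is a purely illustrative toy, not the guests' method: all function names, field names, and thresholds (the vault lookup, the `skim_chars` cutoff) are assumptions introduced for the example.

```python
# Hypothetical sketch of the retrieval stages discussed in the episode:
# knowledge-vault bypass, metadata-only filtering, light skim, deep parse.
# All names and thresholds here are illustrative assumptions.

def retrieve(query, pages, knowledge_vault, skim_chars=500):
    # Stage 0: a cached "knowledge vault" answer bypasses search entirely.
    if query in knowledge_vault:
        return {"source": "vault", "answer": knowledge_vault[query]}

    # Stage 1: metadata-only filtering. Pages whose meta title or
    # description do not match are never opened at all.
    candidates = [
        p for p in pages
        if query in p["title"].lower() or query in p["description"].lower()
    ]

    # Stage 2: light skim. Read only the opening of each candidate page.
    skimmed = [p for p in candidates if query in p["body"][:skim_chars].lower()]

    # Stage 3: deep parse. Fully read the pages that survived skimming.
    return {"source": "search", "answer": [p["body"] for p in skimmed]}


pages = [
    {"title": "LLM retrieval explained", "description": "how AI search works",
     "body": "llm retrieval uses staged filtering before deep parsing."},
    {"title": "Gardening tips", "description": "soil and seeds",
     "body": "nothing about search here."},
]

print(retrieve("llm", pages, knowledge_vault={}))
```

The practical takeaway mirrors the episode's framing: content must pass the metadata gate before its body is ever read, so meta titles and descriptions decide whether a page enters the later stages at all.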
By James Dooley