


AI forgets everything past its context window. And when it forgets, it starts making stuff up. I built SOKK, a career assistant that dropped hallucination from 24% to under 3% using dual-gate verification. Then I built UOCE, a context engine that reduced token usage by 67-95%. This episode breaks down the architecture, the research backing it, and what I'd do differently. If you work with LLMs, this might change how you think about memory.
By Steve Oak