


What happens when millions of children start talking to LLMs every day and no one knows whether it's safe?
In this episode, Laurent Jolie sits down with Stéphie Herlin, co-founder and Research & Product Lead at KORA, the first independent, non-profit, open-source benchmark measuring how safe LLMs are for children. Before KORA, Stéphie spent 8+ years as a government economist, then moved into education as a policy analyst and teacher — spending nearly five years in a French public classroom with 6- to 10-year-olds while retraining in neuroscience, developmental science and pedagogy.
We talk about:
- why education hasn't had its scientific revolution yet, and what precision education could look like
- Stéphie's earlier ed-tech project Brio, and the tension between engagement-first investors and outcomes-first science
- how KORA works: generating conversations between synthetic child profiles and real LLMs, judged against a taxonomy of 25 risks across 8 categories, co-built with ~30 experts
- the first results: average safety ~44%, ranging from 13% to 78%, with some models regressing over time
- why educational integrity is the industry's biggest blind spot (about a third of US kids use LLMs every day)
- a simple tip for parents: telling the model your child is a child improves safety by ~10 percentage points across every model tested
- why LLMs still hallucinate ~20% of the time
- how any ed-tech builder can run KORA on their own conversational product today, for free
Links mentioned
KORA (the benchmark)
People mentioned
By Svenia Busson & Laurent Jolie