A quiet shift has happened in classrooms: the first shock of AI faded, and what remains is a constant hum shaping how kids learn, talk and play. We take on a sweeping premortem from Brookings, built on 500 interviews across 50 countries, and ask the uncomfortable question: if we keep going as we are, what fails first?
TLDR / At A Glance:
We start with the blurred learner persona, where Snapchat banter, dating advice and maths help happen in the same chat window. Parents sit in the crossfire: some see AI as a ladder to opportunity, others as an always-on babysitter, and almost none get real literacy support.
Against that, the equity story shines. Girls in Afghanistan, barred from school, use WhatsApp and AI to study physics and grade their work. Teachers save planning time, and the benefits become real when those minutes are reinvested in human connection.
Accessibility advances matter too, from dynamic text support for dyslexia to voice banking that restores identity and chatbots as safe practice partners for autistic students.
Then we confront the great unwiring. Cognitive offloading turns into cognitive debt when the model thinks for you. Admissions essays show it clearly: human work scatters with originality; AI-assisted writing clusters into clean sameness. The joy of wrestling with ideas shrinks to checklists. The emotional frontier looks riskier still. Companion bots simulate empathy, create frictionless “relationships,” and nudge feelings in ways users don’t notice. With dark patterns and staggering tracking, teens face a surveillance ecosystem that strips their inner life for data.
There is a way to bend the arc: prosper, prepare, protect. We advocate assignments where AI is scaffold, not surrogate, demanding human synthesis and transparency. We push for real AI literacy: how models work, why they hallucinate, what data they extract, and how to treat outputs as claims to test, not answers to accept. And we press for protection by design: sandboxed education tools, strict data minimisation, transparent audits and a ban on manipulative features.
If education optimises only for speed, machines will win. We choose to protect what makes learners human: empathy, critical thinking and the resilience to struggle with hard problems.
Subscribe and share your take - what human skill should schools defend first?
Link to research: A-New-Direction-for-Students-in-an-AI-World-FULL-REPORT.pdf
Support the show
𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and me to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ [email protected]
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray
📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK
By Kieran Gilmurray