Are we building educational technology faster than we can prove it actually works?
In this episode of Education Futures, Svenia Busson speaks with Natalia Kucirkova, professor at the University of Stavanger and co-founder of the International Centre for EdTech Impact.
Natalia has spent years researching how children learn, and how digital tools can (or cannot) support that process. Her work sits at the intersection of learning science, AI, and impact evaluation.
As AI tools rapidly enter classrooms, one question becomes critical:
Where is the evidence that these tools actually improve learning?
In this conversation, we explore:
• Why engagement is not the same as learning
• The risk of deploying AI tools without evidence of impact
• How EdTech companies can collaborate with researchers to design better products
• Why we need to slow down before scaling untested technologies
• The difference between efficacy, effectiveness, and real-world impact
• Why traditional evaluation methods (like RCTs) need to evolve in the age of AI
• How teachers and schools can make more informed, evidence-based choices
We also discuss concrete tools and initiatives aiming to bring more transparency to the field:
Natalia's International Centre for EdTech Impact: https://foreduimpact.org/
AI safety benchmark Natalia has contributed to: https://korabench.ai/
EdTech certification Natalia is affiliated with and recommends: https://eduevidence.org/
Other certifications:
https://www.1edtech.org/
https://edtechimpact.com/
https://iste.org/edtech-product-selection
https://www.edtechtulna.org/
Natalia also explains how EdTech evaluation works in practice — from early-stage testing (A/B testing, rapid cycles) to large-scale studies like randomized controlled trials (RCTs).
The takeaway:
In education, good intentions and high engagement are not enough. If we want technology to truly support learning, we need to measure it well.
By Svenia Busson & Laurent Jolie