


Welcome to our first episode of 2026. In this heavy-hitting season opener, hosts Dan and Ray are joined by Dr. Mark Bassett, Academic Lead for AI at Charles Sturt University and a "superhero" of AI activism. Mark is an ally in our long-standing mantra on the podcast, as we know you've grown tired of hearing just Dan and Ray say "AI detectors don't work".
Dr. Bassett breaks down his landmark paper, "Heads We Win, Tails You Lose: AI Detectors in Education", which we describe (hopefully) as the final 'silver nail in the coffin' for detection software. We move past the surface-level "they don't work" argument and dive into the legal, ethical, and systemic risks universities face by relying on "black box" algorithms. Mark compares current AI detection to using a deck of tarot cards to determine a student's future, arguing that these tools have no place in a fair academic integrity process.
We also explore the S.E.C.U.R.E. framework, a tool-agnostic approach to integrating AI into education safely. If you're an educator, student, or leader wondering how to move from suspicion to capability-building, this is the blueprint you've been waiting for.
Links
The Research Paper: Heads We Win, Tails You Lose: AI Detectors in Education
The Framework: The SECURE Framework for AI Integration
Find Mark Bassett online via his website and LinkedIn
Referenced Study: University of Reading's "Turing Test" paper on AI in Exams
By Dan Bowen and Ray Fleming