
What first comes to mind when you hear "AI and ethics"?
For Mark, it's a conversation with his teenage son about driverless cars choosing who to hurt in an accident. For Stephan, it's data privacy and the question of whether we really have a choice about what we share. For Niko, it's the haunting question: when AI makes the decision, who's responsible?
Niko anchors a conversation that quickly moves from sci-fi thought experiments to the uncomfortable reality—ethical AI decisions are happening every few minutes in our lives, and we're barely prepared. Joining him are Mark (reflecting on how fast this snuck up on us) and Stephan (bringing systems thinking about data, privacy, and the gap between what organizations should do and what governments are actually doing).
From Philosophy to Practice
The Consent Illusion
Starting Conversations Without Creating Paralysis
Who's Actually Accountable?
When Niko asks for one takeaway, Mark channels Mark Twain: "It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so. My question to you is, what do you know about AI and ethics?"
Stephan reflects: "AI is reflecting the best and the worst of our own humanity, forcing us to decide which version of ourselves we want to encode into the future."
Niko's closing: "Ethics is a socio-political responsibility"—not compliance theater, not corporate governance alone, but something we carry as parents, neighbors, humans.
This episode doesn't provide answers—it surfaces the questions practitioners should be sitting with. Not the distant sci-fi dilemmas, but the ethical decisions happening in your organization right now, every few minutes, while you're too busy to notice.
By Stephan Neck, Niko Kaintantzis, Ali Hajou, Mark Richards