The Tinker Table

Episode 3: When AI Gets It Wrong



When artificial intelligence systems fail, the consequences aren’t always small—or hypothetical. In this episode of The Tinker Table, we dive into what happens after the error: Who’s accountable? Who’s harmed? And what do these failures tell us about the systems we’ve built?


We explore real-world case studies like:


The wrongful arrest of Robert Williams in Detroit after a false facial recognition match

The racially biased risk scores of COMPAS, a recidivism risk-assessment algorithm used in U.S. courts

How predictive policing tools reinforce historical over-policing in marginalized communities

We also tackle AI hallucinations (false but believable outputs from tools like ChatGPT and Bing's Sydney) and the serious trust issues that result, from fake legal citations to wrongful plagiarism flags.


Finally, we examine the dangers of black-box algorithms—opaque decision-making systems that offer no clarity, no appeal, and no path to accountability.


📌 This episode is your reminder that AI is only as fair, accurate, and just as the humans who design it. We don’t just need smarter machines—we need ethically designed ones.


🔍 Sources & Further Reading:


Facial recognition misidentification

Machine bias

Predictive policing

AI hallucinations


🎧 Tune in to learn why we need more than innovation—we need accountability.


The Tinker Table, by Hannah Lloyd