The Pluralsight Podcast

AI Ethics, Bias, and Responsible Innovation | Kesha Williams

What happens when the data you feed an AI system is already broken — and no one stops to ask why?

In this episode of The Pluralsight Podcast, Kesha Williams — AI ethicist, AWS Hero, and 30-year tech veteran — makes the case that building powerful AI systems isn't enough. Building responsible ones is the only real standard that matters.

Kesha traces her focus on AI ethics back to a single project: a crime prediction model that exposed how easily biased data can corrupt a machine learning system before a single line of code is written. From there, she breaks down the three types of bias teams face — data, algorithmic, and interpretation — why interpretation bias is the one most teams are still getting wrong, and what model drift means for organizations that think their work is done once a model ships.

We also get into AI governance in the age of agents, why the ability to roll back an AI action may be the most underrated capability in any AI stack, and what an AI Center of Excellence actually looks like in practice.

If you're building AI systems — or leading teams that do — this conversation is a practical and honest look at where things go wrong, and what it actually takes to get them right.

Chapters:

00:00:33 — Introduction: Kesha Williams, AWS AI Hero

00:01:05 — Kesha's 30-year journey and spotting emerging tech early

00:02:51 — The moment that changed everything: building a crime prediction model

00:04:18 — Pre-crime, Minority Report, and bias hiding in UK stop-and-search data

00:05:44 — The Clear News AI case study: how bias shapes what a nation reads

00:07:57 — The three types of bias — and why interpretation bias is now the hardest

00:09:16 — Role play: interpretation bias and the home loan example

00:11:53 — Red flags: why skipping model retraining silently reintroduces bias

00:13:21 — Favorite tools: SageMaker Clarify, AI Fairness 360, and Fairlearn

00:14:22 — SHAP and LIME: making model decisions explainable

00:15:28 — Agentic AI governance: visibility, guardrails, and rollback

00:18:09 — Accountability and the case for an AI Center of Excellence

00:20:53 — Skills engineers need to prioritize: prompt engineering and LLM literacy

00:22:37 — The mindset of learners who thrive: curiosity and innovation

00:24:32 — No-code platforms, citizen developers, and guardrails

00:25:28 — Where to find Kesha: LinkedIn and Pluralsight

Want more insights on AI, security, and cloud? Subscribe to our newsletters: https://plrsg.ht/3MZ78ya

Follow Pluralsight on LinkedIn: https://www.linkedin.com/company/pluralsight/

Connect with Kesha Williams on LinkedIn: https://www.linkedin.com/in/keshaewilliams/

Questions or comments? [email protected]

www.pluralsight.com


The Pluralsight Podcast, by Josh Burkhead