


One of the scariest parts of AI? 😰
Who (or what) gets left out.
As a result, LLM outputs skew heavily toward the perspectives and content most common in their training data and among the people who supervise them.
Which is almost always a terrible thing.
So, who gets written out of the AI future? And how do we fix it?
Join us to find out.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion on LinkedIn: Share your thoughts on this episode and connect with other AI leaders.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: [email protected]
Connect with Jordan on LinkedIn
Timestamps:
00:00 "AI Reliance and Ethical Risks"
03:37 "Inclusion in AI Conversations"
06:32 "Shared Responsibility for AI Change"
11:03 "AI Bias Against Black Hairstyles"
15:02 "Growing Businesses with Generative AI"
16:21 "For Us, By Us"
20:07 Preventing AI Echo Chambers
25:14 "Rethinking Leadership and AI Use"
26:46 "Everyday AI Wrap-Up"
Keywords:
AI bias, large language models, marginalized voices in AI, representation in AI, diversity in AI, AI and identity, technology and power, algorithmic bias, training data bias, cultural competence in AI, AI exclusion, social media moderation algorithms, biased AI moderation, racial bias in AI, gender bias in AI, queer representation in AI, trans representation in technology, working class and AI, age bias in AI, responsible AI use, AI content creation, AI slop, human in the loop, human-centered AI, ethical AI, trust in AI, AI and creativity, AI echo chambers, personalization in AI models, AI-generated content, voice amplification in technology, AI-powered surveillance, inverse surveillance, AI leadership, tech activism, AI for social good, AI media trust, challenge in AI adoption, AI community guidelines, inclusion in technology, future of AI representation, multi-agent orchestration, responsible AI auditing, traini
Send Everyday AI and Jordan a text message. (We can't reply unless you leave contact info.)
Start Here ▶️
Not sure where to start when it comes to AI? Start with our Start Here Series. You can listen to the first drop -- Episode 691 -- or join our Inner Circle community for free access to all episodes: StartHereSeries.com
