
Why are AI models so biased? Whether it's ChatGPT or an AI image generator, today's models often carry certain biases and tendencies. Nick Schmidt, Founder & CTO of SolasAI & BLDS, LLC, joins us to discuss how to understand and fix biased AI.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Nick and Jordan questions about AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: [email protected]
Connect with Jordan on LinkedIn
Timestamps:
[00:01:20] Daily AI news
[00:04:00] About Nick and Solas AI
[00:07:14] Algorithm misuse can lead to discrimination
[00:11:54] 3-step burden shifting process to address discrimination
[00:14:18] Internet usage leads to biased data collection
[00:17:30] AI bias, accessibility, and user control insights
[00:22:59] Algorithm fairness through regulations
[00:26:16] Algorithmic decisioning and human biases
[00:27:32] How to address biases in AI models?
Topics Covered in This Episode:
1. Prevalence of Bias in AI Models
2. Detection and Mitigation of Bias in Algorithms
3. Practical Solutions for Addressing Bias in AI
Keywords:
AI bias, discrimination, image generators, language models, input data, burden shifting process, biased information, societal biases, fairness, exclusion, collective punishment, biased AI, practical advice, best practices, everyday users, legal framework, AI news, smart devices, NVIDIA, animated films, detection, mitigation, discriminatory outcomes, generative AI, model development, algorithmic decision-making, dynamic models, reinforcement, algorithmic fairness, Solas AI, newsletter, daily AI
Send Everyday AI and Jordan a text message. (We can't reply unless you leave contact info.)
Try Google Veo 3 today! Sign up at gemini.google to get started.