In-Ear Insights from Trust Insights

In-Ear Insights: Responsible AI Part 2, Managing Bias



In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris tackle the issue of bias in AI, particularly managing bias in large language models. Discover real-life examples of how bias manifests and the potential consequences for businesses, including reputational damage and the reinforcement of harmful stereotypes. You will learn about a critical framework for responsible AI development and deployment, emphasizing the importance of accountability, fairness, and transparency. Tune in to gain actionable strategies for mitigating bias in AI and promoting ethical practices within your organization.

Watch the video here:

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

https://traffic.libsyn.com/inearinsights/tipodcast-responsible-ai-part-2-bias.mp3

Download the MP3 audio here.

  • Need help with your company’s data and analytics? Let us know!
  • Join our free Slack group for marketers interested in analytics!

    Machine-Generated Transcript

    What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.

    Christopher S. Penn – 00:00

    In this week’s In-Ear Insights, this is part two of our series on responsible AI. This time, we’re going to talk about one of the biggest problems of generative AI, and one of the biggest risks for our companies: bias. Bias in the way AI systems and models work, and bias in how they produce outputs. And, Katie, just very recently, you were working on some content with Kelsey for our website using generative models, and it spit out some stuff that was fairly sexist, at least from what I heard. So, do you want to talk about what happened there?

    Katie Robbert – 00:39

    Yeah. It’s no secret that we use large language models to assist with our writing. We’re a small company, but we’ve done a lot of work to create custom large language models based on our previous writing and our data, so we’re not just pulling something generic from any old open model. What happened, though, is that because it was supposed to be writing as me, Katie Robbert, someone with a feminine name, the language model recognized Katie as a woman. And one of the things that I, Katie, do in real life in my writing is give anecdotes and examples to really help the content resonate with people.

    Katie Robbert – 01:27

    And so what the large language model did was it made decisions on its own about including anecdotes and examples, and everything it included was about fashion, shoe shopping, closets, going to the mall, keeping up on the latest clothing trends, which anyone who knows me knows that I basically dress one step above homeless. And so those are not things that are important to me, nor are they examples that I have ever given in my writing. I’ve not talked about cleaning out my closet to fit in the latest designer fashions.

    It’s kind of disappointing that the large language model, despite having probably years’ worth of examples of my writing with no examples of those specific anecdotes, made those choices and assumptions about me because I have a feminine name, that those would be things…
