In-Ear Insights from Trust Insights

In-Ear Insights: Responsible AI Part 3, Data Privacy



In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the importance of data privacy when using generative AI for marketing. You’ll discover the risks of blindly trusting AI vendors and the critical role of transparency in building trust with your audience. Learn why a robust AI responsibility plan is about more than legal compliance: it protects your brand reputation and supports the long-term success of your marketing efforts. You’ll also get actionable steps for evaluating AI vendors and establishing clear accountability within your organization to mitigate the risks associated with data privacy.

Watch the video here:

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

https://traffic.libsyn.com/inearinsights/tipodcast-responsible-ai-part-3-data-privacy.mp3

Download the MP3 audio here.

  • Need help with your company’s data and analytics? Let us know!
  • Join our free Slack group for marketers interested in analytics!

    Machine-Generated Transcript

    What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.

    Christopher Penn – 00:00

    In this week’s In-Ear Insights, this is part three of our series on responsible AI. Today, we want to talk about one that a lot of people have questions on. This is all about keeping our data safe, protecting sensitive information, and building trust. One of the things that I’ve said for a while is transparency is the currency of trust. The more you can show people what’s happening to them, to their data, etcetera, and how you’re using it, the more easily you can build trust. So, Katie, when you think about the use of generative AI and the questions people have about what’s going on with my data, what have you seen and heard?

    Katie Robbert – 00:49

    I mean, the things I’ve seen. For those who don’t know, I actually started my career in a highly regulated space where, at one point, I could recite all 128 encrypted characters of HIPAA because data privacy was such an important part of the work that we did. And so you have HIPAA, you have COPPA, you have CCPA, you have GDPR—you have all of these regulations meant to protect someone’s personal information on the internet, because it’s ours. We get to choose what we do with it. The challenge that I’ve seen in general is that companies are so hungry to get their hands on that information so that they can target individuals down to their specific behaviors and purchasing patterns. Companies that do that are just irresponsible.

    Katie Robbert – 01:52

    When you talk about using generative AI, there is a lack of education and understanding about how to use AI responsibly and still protect that data. So let’s say you work at a company that does actually deal with personally identifiable information. Let’s say that is what you do and you’re looking to use generative AI. Unfortunately, mistakes happen, and people will sometimes take that PII and put it into an open model—meaning something where I pull up Google Gemini and then I just start giving it my data. That’s an
