She Said Privacy/He Said Security

Real AI Risks No One Wants To Talk About And What Companies Can Do About Them



Anne Bradley is the Chief Customer Officer at Luminos, where she helps in-house legal, tech, and data science teams use the Luminos platform to manage automated AI risk, compliance, and approval processes, statistical testing, and legal documentation. Anne also serves on the Board of Directors of the Future of Privacy Forum, a nonprofit that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies.

In this episode…

AI is being integrated into everyday business functions, from diagnosing cancer to translating conversations and powering customer service chatbots and autonomous vehicles. While these tools deliver value, they also bring privacy, security, and ethical risks. As organizations rush to adopt AI tools, they often do so before performing risk assessments, establishing governance, and implementing privacy and security guardrails. Without safeguards and internal processes in place, companies may not fully understand how the tools function, what data they collect, or the risks they carry. So, how can companies efficiently assess and manage AI risk as they race to deploy new tools?

Managing AI risk requires governance and the ability to test AI tools before deploying them. That’s why companies like Luminos provide a platform that helps organizations manage and automate AI risk, compliance, and approval processes, model testing, and legal documentation. The platform allows teams to check for toxicity, hallucinations, and AI bias, even when an organization uses high-risk tools like customer-facing chatbots. Embedding practical controls, such as pre-deployment testing and early vendor risk assessment, can also help organizations implement AI tools safely and ethically.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Anne Bradley, Chief Customer Officer at Luminos, about how companies can assess and mitigate AI risk. Anne explains the impact of deepfakes on public trust and the need for a regulatory framework to reduce harm. She shares why AI governance, AI use-case risk assessments, and statistical tools are essential for helping companies monitor outputs, reduce unintended consequences, and make informed decisions about high-risk AI deployments. Anne also highlights why it’s important for legal and compliance teams to understand the business objectives driving an AI tool request before evaluating its risk.


She Said Privacy/He Said Security by Jodi and Justin Daniels

4.8

12 ratings

