She Said Privacy/He Said Security

Real AI Risks No One Wants To Talk About And What Companies Can Do About Them



Anne Bradley is the Chief Customer Officer at Luminos. Anne helps in-house legal, tech, and data science teams use the Luminos platform to manage and automate AI risk, compliance, and approval processes, statistical testing, and legal documentation. Anne also serves on the Board of Directors of the Future of Privacy Forum, a nonprofit that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies.

In this episode…

AI is being integrated into everyday business functions, from diagnosing cancer to translating conversations and powering customer service chatbots and autonomous vehicles. While these tools deliver value, they also bring privacy, security, and ethical risks. Yet organizations often adopt AI tools before performing risk assessments, establishing governance, or implementing privacy and security guardrails. Without safeguards and internal processes in place, companies may not fully understand how the tools function, what data they collect, or the risks they carry. So, how can companies efficiently assess and manage AI risk as they rush to deploy new tools?

Managing AI risk requires governance and the ability to test AI tools before deploying them. That's why companies like Luminos provide a platform that helps organizations manage and automate AI risk, compliance, and approval processes, model testing, and legal documentation. The platform allows teams to check for toxicity, hallucinations, and AI bias, even when an organization uses high-risk tools like customer-facing chatbots. Embedding practical controls, like pre-deployment testing and assessing vendor risk early, can also help organizations implement AI tools safely and ethically.
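To make the idea of pre-deployment statistical testing concrete, here is a minimal sketch of one common fairness check, the demographic parity gap. This is an illustrative example only, not Luminos' actual methodology; the function name, sample data, and threshold are all hypothetical.

```python
# Hypothetical pre-deployment fairness check: compare a model's
# positive-outcome rates across two groups (demographic parity gap).
# All data and the tolerance threshold below are illustrative.

def demographic_parity_gap(outcomes_a, outcomes_b):
    """Absolute difference in positive-outcome rates between two groups.

    Each argument is a list of binary model decisions (1 = positive outcome).
    """
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return abs(rate_a - rate_b)

# Illustrative binary decisions (1 = approved) for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6/8 approved -> 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3/8 approved -> 0.375

gap = demographic_parity_gap(group_a, group_b)
THRESHOLD = 0.2  # tolerance a review team might set before sign-off
print(f"parity gap: {gap:.3f}, within tolerance: {gap <= THRESHOLD}")
# Here the gap is 0.375, exceeding the 0.2 threshold, so the tool
# would be flagged for review rather than approved for deployment.
```

In practice, a governance platform would run many such statistical tests (bias, toxicity, hallucination rates) against representative data before an AI tool is approved; this sketch shows only the simplest possible form of one of them.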

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Anne Bradley, Chief Customer Officer at Luminos, about how companies can assess and mitigate AI risk. Anne explains the impact of deepfakes on public trust and the need for a regulatory framework to reduce harm. She shares why AI governance, AI use-case risk assessments, and statistical tools are essential for helping companies monitor outputs, reduce unintended consequences, and make informed decisions about high-risk AI deployments. Anne also highlights why it's important for legal and compliance teams to understand the business objectives driving an AI tool request before evaluating its risk.


She Said Privacy/He Said Security by Jodi and Justin Daniels

4.8 (12 ratings)


More shows like She Said Privacy/He Said Security

  • Security Now (Audio) by TWiT (1,982 listeners)
  • Global News Podcast by BBC World Service (7,671 listeners)
  • TED Radio Hour by NPR (21,912 listeners)
  • Pivot by New York Magazine (9,270 listeners)
  • The Privacy Advisor Podcast by Jedidiah Bracy, IAPP Editorial Director (65 listeners)
  • The Daily by The New York Times (110,845 listeners)
  • Darknet Diaries by Jack Rhysider (7,912 listeners)
  • CISO Series Podcast by David Spark, Mike Johnson, and Andy Ellis (190 listeners)
  • Serious Privacy by Dr. K Royal, Paul Breitbarth & Ralph O'Brien (23 listeners)
  • Privacy Please by Cameron Ivey (28 listeners)
  • Surveillance Report by Techlore & The New Oil (95 listeners)
  • Hard Fork by The New York Times (5,448 listeners)
  • The Dershow by Alan Dershowitz | Kast Media (2,081 listeners)
  • Masters of Privacy by Sergio Maldonado (6 listeners)
  • "The Data Diva" Talks Privacy Podcast by Debbie Reynolds (16 listeners)