In 2023, Samsung engineers unintentionally shared highly sensitive internal data with an external AI system while performing everyday engineering tasks. There was no breach, no malicious intent, and no vendor misconduct. The risk emerged the moment confidential information left the organization’s control.
This episode examines why relying on “trust us” assurances from AI providers is not a compliance strategy. When organizations use external, API-based AI systems, they often extend trust beyond their security perimeter, auditability, and governance frameworks.
The lesson is clear: compliance is not about intent or promises. It is about architecture, control, and where your data actually lives.
By David William Silva