


At 4:47 p.m., someone pastes a customer escalation into an AI assistant and asks it to rewrite the tone. The reply is perfect. It also includes a private note from the internal thread. No breach. No attacker. Just a new workflow that doesn't know what should stay inside.
This episode breaks down how to secure AI tools in the workplace by treating them like any other system that handles sensitive information and influences decisions. It covers the three patterns where AI quietly breaks: sensitive data flowing in through normal use, assistants being steered by hidden instructions inside the documents they read (prompt injection), and over-connected AI with too much autonomy and too little friction. The episode references NIST's AI Risk Management Framework, OWASP's Generative AI Security Project and LLM Top 10, and practitioners like Rob T. Lee and Chris Cochran for ongoing, grounded guidance. The starter kit covers four moves, in order: creating an approved AI lane with company identity and strong authentication, putting guardrails around sensitive data, limiting connectors and permissions with a human in the loop, and making usage observable through logging and adversarial testing.
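If you want something concrete to poke at, here's a minimal Python sketch of the "guardrails" and "make it observable" moves combined: redact a few sensitive patterns before a prompt leaves your approved lane, and log what got caught. The pattern set and the guarded_prompt helper are illustrative placeholders, not anything prescribed in the episode; a real deployment would lean on your organization's own DLP rules.

```python
# Illustrative sketch only: hypothetical patterns and helper names,
# not a vetted DLP tool and not anything from the episode itself.
import logging
import re

logging.basicConfig(level=logging.INFO, format="%(name)s: %(message)s")
log = logging.getLogger("ai-guardrail")

# A few stand-in patterns for data that should stay inside the company.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def guarded_prompt(text: str) -> str:
    """Redact sensitive patterns before text leaves the approved AI lane,
    and log each redaction so usage stays observable."""
    for label, pattern in PATTERNS.items():
        text, hits = pattern.subn(f"[REDACTED-{label.upper()}]", text)
        if hits:
            log.info("redacted %d %s value(s) from an outbound prompt", hits, label)
    return text

if __name__ == "__main__":
    escalation = "Ping jane@corp.example - her key sk-abc123def456ghi789 leaked."
    print(guarded_prompt(escalation))
```

The point isn't the regexes; it's the shape: one chokepoint where every outbound prompt gets checked and every catch gets logged, so "who pasted what" stops being a mystery.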
Whether you're rolling out AI tools to your team or trying to secure what people are already using, Plaintext with Rich provides the baseline.
Is there a topic/term you want me to discuss next? Text me!!
YouTube more your speed? → https://links.sith2.com/YouTube
Apple Podcasts your usual stop? → https://links.sith2.com/Apple
Neither of those? Spotify’s over here → https://links.sith2.com/Spotify
Prefer reading quietly at your own pace? → https://links.sith2.com/Blog
Join us in The Cyber Sanctuary (no robes required) → https://links.sith2.com/Discord
Follow the human behind the microphone → https://links.sith2.com/linkedin
Need another way to reach me? That’s here → https://linktr.ee/rich.greene
By Rich Greene