
AI security is getting marketed like it requires a brand-new playbook—new frameworks, new job titles, new spend.
In this episode of ClearTech Loop, Jo Peterson sits down with Zach Lewis (CIO/CISO and author of Locked Up) for a practical reset: AI doesn’t erase the fundamentals—it punishes you faster when you ignore them. Zach breaks down where GenAI is already helping security programs today (especially tabletop exercises and prioritization), what it actually takes to embed security and privacy into AI models without slowing innovation (data classification, access controls, segmentation, documentation, least privilege), and why adoption fails when leaders treat AI like a tool rollout instead of a behavior change.
The takeaway is simple and actionable: get the foundations right, use AI to reduce friction where it matters, and build a culture where AI augments people rather than creating fear.
Subscribe to ClearTech Loop on LinkedIn:
https://www.linkedin.com/newsletters/7346174860760416256/
Key Quotes
“Strong AI security… starts with doing the basics well.” — Zach Lewis
“One of the best use cases I found for it was tabletop exercises.” — Zach Lewis
“You had a billion alerts… and you’re like, which one’s important?” — Jo Peterson
Three Big Ideas from This Episode
1) AI security is data discipline + access discipline
Before you talk tools, talk foundations: classify data before it touches a model, segment critical workloads, gate access by role and sensitivity, document prompts/sources/model versions, and enforce least privilege with ongoing testing and validation.
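To make that concrete, here is a minimal sketch of the kind of gate this idea implies, written for illustration only: every name in it (ROLE_CLEARANCE, PromptRecord, guard_prompt, the model version string) is hypothetical and not something discussed in the episode. It checks a record's classification against the caller's role before anything reaches a model, and keeps a documentation trail of the prompt, data source, and model version.

# Hypothetical sketch: "classify before it touches a model" + least-privilege gating.
# All names here are illustrative, not from the episode.
from dataclasses import dataclass
from datetime import datetime, timezone

# Which data sensitivity levels each role may send to a model (least privilege).
ROLE_CLEARANCE = {
    "analyst": {"public", "internal"},
    "security_lead": {"public", "internal", "confidential"},
}

@dataclass
class PromptRecord:
    """Documentation trail: prompt, data source, model version, timestamp."""
    prompt: str
    source: str
    model_version: str
    submitted_at: str

def guard_prompt(role: str, classification: str, prompt: str,
                 source: str, model_version: str) -> PromptRecord:
    """Reject prompts the caller's role is not cleared to send; otherwise log them."""
    allowed = ROLE_CLEARANCE.get(role, set())
    if classification not in allowed:
        raise PermissionError(f"role {role!r} may not submit {classification!r} data")
    return PromptRecord(prompt, source, model_version,
                        datetime.now(timezone.utc).isoformat())

# Example: an analyst may send internal data; confidential data would be blocked.
record = guard_prompt("analyst", "internal", "Summarize yesterday's alerts",
                      source="siem-export", model_version="example-model-2025-01")
print(record)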
2) GenAI can make readiness more real (tabletop exercises)
Instead of running the same scripted scenario every year, GenAI can generate realistic incidents, inject curveballs, and help teams identify missed actions—turning tabletops into a real maturity-building loop.
3) Adoption is a leadership problem, not a platform problem
AI initiatives stall when people are afraid or unsupported. Training, shared use cases, and visible wins (time saved, friction removed) create a safe environment where AI augments work rather than threatening it.
📘 Zach’s book: Locked Up: Cybersecurity Threat Mitigation Lessons from a Real-World LockBit Ransomware Response https://www.amazon.com/Locked-Cybersecurity-Mitigation-Real-World-Ransomware/dp/1394357044
Resources Mentioned
🎧 Listen in the Buzzsprout player
▶ Watch on YouTube: https://www.youtube.com/@ClearTechResearch/playlist
📰 Subscribe to the Newsletter:
https://www.linkedin.com/newsletters/7346174860760416256/
By ClearTech Research / Jo Peterson