Inside the AI Deepfake Threat
What if the voice confirming your wire transfer wasn't actually your client? Ben Colman, founder and CEO of Reality Defender, joins host John Richards to unpack one of the fastest-growing attack surfaces in cybersecurity: AI-generated deepfakes. Once the exclusive domain of Hollywood studios and nation-state actors, real-time voice and video impersonation is now accessible to anyone with a laptop—and fraudsters are scaling up fast.
From Specialized Hardware to Your Home Computer
Ben traces the evolution from the specialized machinery required six years ago to today's world where anyone can clone a voice with less than five seconds of audio—locally, for free, using open-source models. He walks through the modern fraud landscape, from grandparent scams and bank account takeovers to an eye-opening story about fake job applicants that will make any recruiting team rethink its screening process.
Reality Defender's approach is built for how organizations actually work—plugging directly into call centers, video conferencing platforms, and identity verification tools through a simple API, rather than asking teams to adopt yet another standalone product. Their probabilistic detection models scan in real time across thousands of indicators, all without storing or comparing against any biometric data.
John and Ben also get into the emerging frontier of agentic AI—what happens when you need to authenticate an AI voice agent rather than a human—and how smart permission gates can define exactly what those agents are and aren't allowed to do.
Questions We Answer in This Episode
Key Takeaways
The deepfake threat isn't coming—it's already here, hitting call centers, recruiting pipelines, and financial institutions every day. Whether you're a developer looking to integrate detection into your stack or a security leader trying to get ahead of the next wave, this conversation is an essential listen.
Resources
By TruStory FM