
Have you noticed how every week brings a new headline about AI-driven fraud, yet it still feels hard to tell what is real risk and what is noise?
In this Tech Talks Daily episode, I'm joined by Tommy Nicholas, CEO of Alloy, for a candid conversation that cuts through the fear-driven commentary and gets into what fraud teams are actually dealing with right now.
We start with a simple but important distinction that gets blurred all the time. Tommy separates classic "fraud," where institutions take the hit, from "scams," where individuals are manipulated into handing over money or access. That framing changes how you think about solutions, accountability, and where AI is making things worse.
Tommy also shares why he believes fraud losses are often massively underreported. It is not because people are trying to hide the truth; it is because organizations rarely have a single, clean view of losses across every product line and channel.
Add messy labeling and split ownership across teams, and reporting becomes a best-effort estimate rather than an objective number. That reality matters if you're building board-level narratives, budgets, or risk models on top of survey data.
From there, we talk about what organizations are getting right. Tommy argues there is no magical "undetectable" attack that forces teams to give up, but there is a very real breakdown happening in old fallbacks, especially human review of images and video.
The bigger shift he sees is banks and fintechs finally pushing for consistent tooling across every channel (web, mobile, branch, call center, support tickets), because fraud does not respect internal org charts.
We then get into why Alloy's AI Assistant is an interesting signal for where agentic AI is heading in regulated work. Tommy explains that agents are only useful when they have rigorous context, strong sources of truth, and clear workflows.
Otherwise they guess, and "looks good" is not the same as "safe to run in production." He also lays out where agents can genuinely outperform humans, like scaling investigations during sudden surges, while keeping processes auditable and repeatable.
We close by looking ahead at agentic commerce, and why Tommy thinks the breakthrough will arrive through weird, emergent behavior rather than a neat protocol rollout.
When you listen back, do you think the next big leap in fraud prevention will come from better models, better data, or better operational discipline, and what would you bet on if your own customers were the ones on the line?
By Neil C. Hughes
