

Can AI actually help us stop scammers by "thinking" like them? Join Emily Taylor and the Oxford Information Labs team as they reveal how they’re using advanced AI to analyse 70 million scam signals without losing the "human touch."
In this deep dive, we explore the methodology behind our latest study into scam and fraud victims, supported by Google.org. You’ll learn how "Prompt Engineering" and "Human Review" work together to process massive datasets that would take a human lifetime to read. We move beyond "victim blaming" terminology to look at how scammers target specific groups—from the "neurospicy" community to the bereaved—and discuss why AI is an essential, ethical tool for protecting researchers from traumatising content.
By Oxford Information Labs