
Recent years have seen a boom in biometric security systems—identification measures based on a person's individual biology—from unlocking smartphones to automating border controls. As this technology becomes more prevalent, some cybersecurity researchers are worried about how secure biometric data is—and about the risk of spoofs. If generative AI becomes so powerful and easy to use that deepfake audio and video could hack into our security systems, what can be done?
Bruce Schneier, a security technologist at Harvard University and the author of “A Hacker’s Mind”, explores the cybersecurity risks associated with biometrics, and Matthias Marx, a security researcher, discusses the consequences of bad actors obtaining personal data. If artificial intelligence could overcome security systems, human implants may be used as authentication, according to Katina Michael, a professor at Arizona State University. Plus, Joseph Lindley, a design academic at Lancaster University, proposes how security systems can be better designed to avoid vulnerabilities. To think about practical solutions, Scott Shapiro, professor at Yale Law School and author of “Fancy Bear Goes Phishing”, puts generative AI into the wider context of cybersecurity. Finally, Tim Cross, The Economist’s deputy science editor, weighs up the real-world implications of our thought experiment. Kenneth Cukier hosts.
Learn more about detecting deepfakes at economist.com/detecting-deepfakes-pod, or listen to all of our generative AI coverage at economist.com/AI-pods.
For full access to The Economist’s print, digital and audio editions subscribe at economist.com/podcastoffer and sign up for our weekly science newsletter at economist.com/simplyscience.
Hosted on Acast. See acast.com/privacy for more information.