
Recent years have seen a boom in biometric security systems—identification measures based on a person's individual biology—from unlocking smartphones to automating border controls. As this technology becomes more prevalent, some cybersecurity researchers worry about how secure biometric data really is, and about the risk of spoofs. If generative AI becomes so powerful and easy to use that deepfake audio and video can break into our security systems, what can be done?
Bruce Schneier, a security technologist at Harvard University and the author of “A Hacker’s Mind”, explores the cybersecurity risks associated with biometrics, and Matthias Marx, a security researcher, discusses the consequences of bad actors obtaining personal data. If artificial intelligence could overcome security systems, human implants may be used as authentication, according to Katina Michael, a professor at Arizona State University. Plus, Joseph Lindley, a design academic at Lancaster University, proposes how security systems can be better designed to avoid vulnerabilities. To think about practical solutions, Scott Shapiro, professor at Yale Law School and author of “Fancy Bear Goes Phishing”, puts generative AI into the wider context of cybersecurity. Finally, Tim Cross, The Economist’s deputy science editor, weighs up the real-world implications of our thought experiment. Kenneth Cukier hosts.
Learn more about detecting deepfakes at economist.com/detecting-deepfakes-pod, or listen to all of our generative AI coverage at economist.com/AI-pods.
For full access to The Economist’s print, digital and audio editions subscribe at economist.com/podcastoffer and sign up for our weekly science newsletter at economist.com/simplyscience.
Hosted on Acast. See acast.com/privacy for more information.
By The Economist · 4.8 (582 ratings)
