
Nicholas Boucher is a PhD student at the University of Cambridge, where his research focuses on security, spanning topics such as homomorphic encryption, voting systems, and adversarial machine learning. He is the lead author of a fascinating new paper – “Bad Characters: Imperceptible NLP Attacks” – which provides a taxonomy of attacks against text-based NLP models based on Unicode and other encoding systems.
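The core idea behind these attacks is easy to see for yourself: Unicode includes invisible characters and look-alike glyphs that leave text visually unchanged for a human reader while altering the code points a model actually receives. Below is a minimal, hypothetical Python sketch of that effect (not the authors' code), using a zero-width space and a Cyrillic homoglyph as illustrative perturbations:

```python
# Two strings that render (nearly) identically but differ at the code-point level.

visible = "bank transfer"

# U+200B ZERO WIDTH SPACE is an invisible character; inserting it leaves the
# rendered text looking the same while changing the underlying sequence.
perturbed = "bank trans\u200bfer"

print(visible == perturbed)            # False: the strings are not equal
print(len(visible), len(perturbed))    # 13 vs. 14 code points
print(perturbed.encode("utf-8"))       # extra bytes e2 80 8b for U+200B

# A homoglyph substitution works the same way: Cyrillic 'а' (U+0430) looks
# like Latin 'a' (U+0061) but is a different code point.
homoglyph = "b\u0430nk transfer"
print(visible == homoglyph)            # False
```

Because tokenizers and embedding lookups operate on code points or bytes rather than rendered glyphs, perturbations like these can shift a model's output without a human noticing anything amiss in the text.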
Download a FREE copy of our recent NLP Industry Survey Results: https://gradientflow.com/2021nlpsurvey/
Subscribe: Apple • Android • Spotify • Stitcher • Google • AntennaPod • RSS.
Detailed show notes can be found on The Data Exchange web site.
Subscribe to The Gradient Flow Newsletter.
By Ben Lorica
