

Nicholas Boucher is a PhD student at the University of Cambridge, where his research focuses on security, spanning topics such as homomorphic encryption, voting systems, and adversarial machine learning. He is the lead author of a fascinating new paper, “Bad Characters: Imperceptible NLP Attacks”, which provides a taxonomy of attacks against text-based NLP models based on Unicode and other encoding systems.
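To give a rough flavor of the kind of Unicode-based manipulation the paper catalogs, here is a minimal Python sketch, not taken from the paper itself, showing how an invisible zero-width space can change the string a model processes while the rendered text looks unchanged to a human reader. The example sentence and the choice of perturbation are illustrative assumptions.

```python
# A minimal sketch (illustrative, not the authors' attack code) of one class of
# "imperceptible" perturbation: inserting an invisible Unicode character changes
# the underlying string while the rendered text looks identical to a reader.

ZWSP = "\u200b"  # zero-width space: renders as nothing in most fonts

original = "deposit funds into the account"

# Hypothetical perturbation: splice the invisible character inside a key token,
# so a tokenizer no longer sees the word "deposit" intact.
perturbed = original.replace("deposit", "dep" + ZWSP + "osit")

print(original == perturbed)           # False: the underlying strings differ
print(len(original), len(perturbed))   # perturbed is one code point longer
print(perturbed)                       # visually indistinguishable when rendered
```

The same idea extends to the other perturbation classes the paper discusses, such as homoglyph substitutions and reordering control characters, all of which alter the encoded input without a visible change to the displayed text.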
Download a FREE copy of our recent NLP Industry Survey Results: https://gradientflow.com/2021nlpsurvey/
Subscribe: Apple • Android • Spotify • Stitcher • Google • AntennaPod • RSS.
Detailed show notes can be found on The Data Exchange web site.
Subscribe to The Gradient Flow Newsletter.
By Ben Lorica