Nicholas Boucher is a PhD candidate at Cambridge University whose research focuses on security, including topics like homomorphic encryption, voting systems, and adversarial machine learning. He is the lead author of a fascinating new paper – “Bad Characters: Imperceptible NLP Attacks” – which provides a taxonomy of attacks against text-based NLP models that exploit Unicode and other text encoding systems.
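To give a flavor of the general idea (this is a minimal sketch, not an example taken from the paper): two strings can render identically on screen while differing at the code-point level, so a human reviewer sees the same text but a tokenizer or NLP model receives different input.

```python
# Minimal sketch of an "imperceptible" text perturbation using Unicode.
# Not drawn from the paper; just an illustration of the general mechanism.

original = "attack"

# Swap the Latin 'a' (U+0061) for the visually identical Cyrillic 'а' (U+0430)
# and insert a zero-width space (U+200B), which has no visible rendering.
perturbed = "\u0430tt\u200back"

print(original, perturbed)               # usually look identical when rendered
print(original == perturbed)             # False: the underlying code points differ
print([hex(ord(c)) for c in perturbed])  # reveals the hidden substitutions
```

Because the perturbed string compares unequal to the original, it can map to different tokens downstream even though nothing looks amiss to a human reader.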
Download a FREE copy of our recent NLP Industry Survey Results: https://gradientflow.com/2021nlpsurvey/
Subscribe: Apple • Android • Spotify • Stitcher • Google • AntennaPod • RSS.
Detailed show notes can be found on The Data Exchange web site.
Subscribe to The Gradient Flow Newsletter.