
AI is widely lauded as a way of reducing the burden on human online content moderators. However, to understand whether AI could, and should, replace human moderators, we need to understand its strengths and limitations. In this episode our hosts speak to the researchers Paul Röttger and Bertie Vidgen to discuss how they are attempting to tackle online hate speech, in particular through their work on HateCheck - a suite of tests for hate speech detection models.
By The Alan Turing Institute