
AI is widely lauded as a way of reducing the burden on human online content moderators. However, to understand whether AI could, and should, replace human moderators, we need to understand its strengths and limitations. In this episode, our hosts speak to researchers Paul Röttger and Bertie Vidgen about how they are tackling online hate speech, in particular through their work on HateCheck, a suite of functional tests for hate speech detection models.
By The Alan Turing Institute
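The core idea behind HateCheck is functional testing: rather than reporting a single accuracy score, a hate speech classifier is run on targeted test cases with known expected labels, so failures can be traced to specific behaviours such as mishandling counter-speech or flagging abuse aimed at non-protected targets. The sketch below illustrates that idea in Python; the test cases, category names, and the `classify` stand-in are illustrative assumptions, not the actual HateCheck suite or its API.

```python
# Minimal sketch of functional testing for a hate speech classifier.
# The test cases and categories below are illustrative placeholders,
# not taken from the real HateCheck test suite.

from typing import Callable, List, Tuple

# (test case text, expected label, functional category)
TEST_CASES: List[Tuple[str, str, str]] = [
    ("I hate [GROUP].", "hateful", "derogation"),
    ("I used to hate [GROUP], but I was wrong.", "non-hateful", "counter-speech"),
    ("I hate mondays.", "non-hateful", "non-protected target"),
]


def run_functional_tests(classify: Callable[[str], str]) -> None:
    """Run the classifier on each test case and report failures per category."""
    failures: dict[str, list[str]] = {}
    for text, expected, category in TEST_CASES:
        if classify(text) != expected:
            failures.setdefault(category, []).append(text)

    if not failures:
        print("All functional tests passed.")
    for category, cases in failures.items():
        print(f"FAILED {category}: {len(cases)} case(s)")


if __name__ == "__main__":
    # Trivial keyword baseline, used only to show the harness running; it will
    # typically fail the counter-speech and non-protected-target categories.
    run_functional_tests(
        lambda text: "hateful" if "hate" in text.lower() else "non-hateful"
    )
```

Breaking results down by functional category in this way shows *where* a model fails, not just how often, which is what makes the approach useful for diagnosing hate speech detection systems.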