

AI is widely lauded as a way of reducing the burden on human online content moderators. However, to understand whether AI could, and should, replace human moderators, we need to understand its strengths and limitations. In this episode, our hosts speak to the researchers Paul Röttger and Bertie Vidgen about how they are attempting to tackle online hate speech, in particular through their work on HateCheck, a suite of tests for hate speech detection models.
By The Alan Turing Institute
