
In a report released December 20, 2023, the Stanford Internet Observatory said it had detected more than 1,000 instances of verified child sexual abuse imagery in a major dataset used to train generative AI systems such as Stable Diffusion 1.5.
This troubling discovery builds on prior research into the “dubious curation” of large-scale datasets used to train AI systems, and raises concerns that such content may have contributed to the ability of AI image generators to produce realistic counterfeit images of child sexual exploitation, as well as other harmful and biased material.
Justin Hendrix spoke with the report’s author, Stanford Internet Observatory Chief Technologist David Thiel.