In a report released December 20, 2023, the Stanford Internet Observatory said it had detected more than 1,000 instances of verified child sexual abuse imagery in a significant dataset used to train generative AI systems such as Stable Diffusion 1.5.
This troubling discovery builds on prior research into the “dubious curation” of large-scale datasets used to train AI systems, and raises concerns that such content may have contributed to the capability of AI image generators to produce realistic counterfeit images of child sexual exploitation, in addition to other harmful and biased material.
Justin Hendrix spoke with the report’s author, Stanford Internet Observatory Chief Technologist David Thiel.
By Tech Policy Press