
In a report released December 20, 2023, the Stanford Internet Observatory said it had detected more than 1,000 instances of verified child sexual abuse imagery in a significant dataset utilized for training generative AI systems such as Stable Diffusion 1.5.
This troubling discovery builds on prior research into the “dubious curation” of large-scale datasets used to train AI systems, and raises concerns that such content may have contributed to the capability of AI image generators to produce realistic counterfeit images of child sexual exploitation, in addition to other harmful and biased material.
Justin Hendrix spoke with the report’s author, Stanford Internet Observatory Chief Technologist David Thiel.