In a report released December 20, 2023, the Stanford Internet Observatory said it had detected more than 1,000 instances of verified child sexual abuse imagery in a significant dataset used to train generative AI systems such as Stable Diffusion 1.5.
This troubling discovery builds on prior research into the “dubious curation” of large-scale datasets used to train AI systems, and raises concerns that such content may have contributed to the ability of AI image generators to produce realistic counterfeit images of child sexual exploitation, in addition to other harmful and biased material.
Justin Hendrix spoke with the report’s author, Stanford Internet Observatory Chief Technologist David Thiel.