Current statistics paint a bleak picture for American children: roughly one in four kids will experience some form of abuse or neglect during their lifetime. But what if we could predict the likelihood of abuse before it happens? What if we could do it at birth?
Social scientists and computer programmers are hoping to do just that.
New predictive risk models, which promise to estimate the likelihood of abuse or neglect before it occurs, are being deployed in public child protective service agencies around the country. But poorly implemented algorithms have real-world consequences for real people.
When used in child welfare cases, these algorithms consider factors like a family's interactions with police or the welfare system. Many of these data points, however, are proxies for race or poverty. For example, people are more likely to call the police on a Black family and to give a white family the benefit of the doubt. That police interaction then becomes data the algorithm weighs when determining risk.
Again, the data are biased because they come from biased people, and sometimes the data are outright racist. A computer doesn't know the difference between a racist complaint and a legitimate one. Both are data, and in a computer's eyes, both are equally useful.
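To make the proxy problem concrete, here is a minimal, purely hypothetical sketch in Python. It is not any agency's actual model; the feature names (police_contacts, welfare_enrollment, prior_hotline_calls), the weights, and the flag threshold are all invented for illustration. It simply shows the point above: once a report becomes a row of data, a bias-motivated call and a legitimate one look identical to the model, and both raise the score.

```python
# Hypothetical illustration only: features, weights, and threshold are invented,
# not drawn from any real child-welfare risk model.
from dataclasses import dataclass

@dataclass
class FamilyRecord:
    police_contacts: int      # prior police interactions, however they originated
    welfare_enrollment: bool  # public-benefits involvement, a common poverty proxy
    prior_hotline_calls: int  # prior calls to a child-abuse hotline

def toy_risk_score(record: FamilyRecord) -> float:
    """Weighted sum of proxy features; the weights are made up for this sketch."""
    score = 0.4 * record.police_contacts
    score += 0.3 * (1 if record.welfare_enrollment else 0)
    score += 0.5 * record.prior_hotline_calls
    return score

# Two families with identical underlying behavior, but one has drawn more calls
# from neighbors. The model cannot tell why those calls happened.
family_a = FamilyRecord(police_contacts=0, welfare_enrollment=True, prior_hotline_calls=0)
family_b = FamilyRecord(police_contacts=2, welfare_enrollment=True, prior_hotline_calls=1)

for name, fam in [("A", family_a), ("B", family_b)]:
    score = toy_risk_score(fam)
    flagged = score >= 1.0  # arbitrary threshold for the sketch
    print(f"Family {name}: score={score:.1f}, flagged={flagged}")
```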
This episode is a production of the Department of Motion Pictures and Stories of Change, a partnership of the Sundance Institute and the Skoll Foundation, with support from IFP and Quinnipiac University. Our editor is John Dankosky. Our mixers are Ben Kruse and Henry Bellingham. Our producers are Elizabeth Lodge Stepp and Michael Gottwald. Executive Produced by Josh Penn. Research by Kate Osborn. Fact checking by Jacqueline Rabe Thomas. Additional reporting by Colleen Shaddox. Special Thanks to Emily Jampel.