
Selection Bias, Confirmation Bias and the Feedback Loop of Predictive Policing Algorithms, the Black Box Problem of Proprietary Algorithms and Lack of Accountability
A discussion with Kristian Lum and William Isaac on how machine learning algorithms work, and how seemingly neutral police data can perpetuate systemic and institutional prejudice, producing predictive systems that forecast police enforcement rather than future crime. We explore the creation and conclusions of their Oakland case study on bias in police data sets, and how selection bias can produce confirmation bias and a feedback loop that leads to over-policing of communities already overexposed to police activity. We also discuss the lack of transparency and accountability in current proprietary predictive models, and best practices for input data and for implementing predictive systems in future police work.
For More Info: http://thegravity.fm/#/episode/22
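
The feedback loop described in the episode can be illustrated with a toy simulation. This is a minimal sketch of the general mechanism, not Lum and Isaac's actual model: two districts have identical true crime rates, patrols are allocated in proportion to past recorded incidents, and crime is only recorded where patrols go, so an early random imbalance compounds over time.

```python
import random

random.seed(0)

# Hypothetical toy model of the selection-bias feedback loop:
# two districts with identical true crime rates. Crime is recorded
# only where police patrol, and tomorrow's patrols follow yesterday's
# records, so recorded crime reflects enforcement, not crime.

TRUE_CRIME_RATE = 0.5   # identical in both districts
DAYS = 200
PATROLS_PER_DAY = 10

recorded = [1, 1]       # start with a flat prior over the two districts

for day in range(DAYS):
    total = sum(recorded)  # allocation based on start-of-day records
    for _ in range(PATROLS_PER_DAY):
        # send each patrol to a district in proportion to past records
        district = 0 if random.random() < recorded[0] / total else 1
        # a crime is recorded only if it happens where a patrol is looking
        if random.random() < TRUE_CRIME_RATE:
            recorded[district] += 1

share = recorded[0] / sum(recorded)
print(f"district 0 share of recorded crime: {share:.2f}")
```

Even though both districts have the same true crime rate, the recorded share can drift well away from the even split of 0.50: records attract patrols, and patrols generate records. A model trained on such data learns where police have looked rather than where crime occurs, which is the feedback loop the episode examines.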