


Earlier this month, three major tech companies publicly distanced themselves from the facial recognition tools used by police: IBM said they would stop all such research, while Amazon and Microsoft said they would push pause on any plans to give facial recognition technology to domestic law enforcement. And just this week, the city of Boston banned facial surveillance technology entirely.
Why? Facial recognition algorithms built by companies like Amazon have been found to misidentify people of color, especially women of color, at higher rates—meaning when police use facial recognition to identify suspects who are not white, they are more likely to arrest the wrong person.
CEOs are calling for national laws to govern this technology, or for programming fixes that remove racial biases and other inequities from their code. But others want to ban it entirely—and completely re-envision how AI is developed and used in communities.
In this SciFri Extra, we continue a conversation between producer Christie Taylor, Deborah Raji from NYU’s AI Now Institute, and Princeton University’s Ruha Benjamin about how to pragmatically move forward to build artificial intelligence technology that takes racial justice into account—whether you’re an AI researcher, a tech company, or a policymaker.
Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.
By Science Friday and WNYC Studios
