


Earlier this month, three major tech companies publicly distanced themselves from the facial recognition tools used by police: IBM said it would stop all such research, while Amazon and Microsoft said they would pause plans to provide facial recognition technology to domestic law enforcement. And just this week, the city of Boston banned facial surveillance technology entirely.
Why? Facial recognition algorithms built by companies like Amazon have been found to misidentify people of color, especially women of color, at higher rates—meaning when police use facial recognition to identify suspects who are not white, they are more likely to arrest the wrong person.
CEOs are calling for national laws to govern this technology, or for programming solutions that remove racial biases and other inequities from their code. But others want to ban it entirely, and to completely re-envision how AI is developed and used in communities.
In this SciFri Extra, we continue a conversation between producer Christie Taylor, Deborah Raji from NYU’s AI Now Institute, and Princeton University’s Ruha Benjamin about how to pragmatically move forward to build artificial intelligence technology that takes racial justice into account—whether you’re an AI researcher, a tech company, or a policymaker.
Subscribe to this podcast. Follow our show on Instagram, TikTok, Facebook, and Bluesky @scifri and sign up for our newsletters. Got a science question that’s keeping you up at night? Call us: 877-4-SCIFRI
By Science Friday and WNYC Studios · 4.4 (6,020 ratings)
