Journalists and academics seem convinced that artificial intelligence is often biased against women and racial minorities. If Washington’s new facial recognition law is a guide, legislators see the same problem. But is it true? It’s not hard to find patterns in AI decisions that have a disparate impact on protected groups. Is this bias? And if so, whose?
Do we assume the worst about decisions with a disparate impact, applying a kind of misanthropomorphism to the machine, or can we objectively analyze the factors behind those decisions? And if bias boils down to failing to produce proportionate results for each protected class, is the only remedy a “proportionate result” constraint on AI processing, effectively imposing racial, ethnic, and gender quotas on every corner of life touched by AI?
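The notion of a “proportionate result” has a concrete regulatory analogue. As a rough illustration (not drawn from the episode itself), the sketch below applies the EEOC’s four-fifths rule, a common statistical test for disparate impact, to hypothetical selection rates; the function names and numbers are invented for the example.

```python
# Illustrative sketch only: the EEOC's four-fifths (80%) rule compares
# selection rates across groups. A group's rate below 80% of the highest
# group's rate is commonly treated as evidence of disparate impact.
# All data below is hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who received a favorable decision."""
    return selected / applicants

def four_fifths_test(rates: dict[str, float]) -> bool:
    """Return True if every group's rate is at least 80% of the highest rate."""
    highest = max(rates.values())
    return all(rate >= 0.8 * highest for rate in rates.values())

# Hypothetical outcomes from an AI screening model.
rates = {
    "group_a": selection_rate(selected=50, applicants=100),  # 0.50
    "group_b": selection_rate(selected=30, applicants=100),  # 0.30
}

# 0.30 / 0.50 = 0.60, which is below 0.80, so this model would be flagged.
print(four_fifths_test(rates))  # False
```

Note that a test like this flags statistical disproportion without saying anything about its cause, which is precisely the gap the panelists debate.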
Featuring:
- Stewart Baker, Partner, Steptoe & Johnson LLP
- Curt Levey, President, Committee for Justice
- Nicholas Weaver, Researcher, International Computer Science Institute and Lecturer, UC Berkeley
Visit our website – www.RegProject.org – to learn more, view all of our content, and connect with us on social media.