If the foundation of your house is made of sand, it will eventually fall. And if the foundation of a legal case is built on questionable evidence, the case may collapse as well.
That is what is playing out right now in Cleveland, where a murder case may be falling to pieces because it was built, in part, on AI-driven facial recognition technology.
We are all increasingly under surveillance from both public and private security cameras, whether on lamp posts, in convenience stores, or mounted on our neighbors' front doors. Our faces are scanned, tracked, and recorded practically everywhere we go, and we have no way to stop it.
Then add in all the pictures we post to social media, and take a moment to consider that those images are being stored in a database somewhere, perhaps by a company that makes money selling them to law enforcement, such as Clearview AI.
That company advertises cutting-edge technology that helps law enforcement investigate crimes and enhance public safety. But it has also faced, and in 2022 settled, a major lawsuit filed by the ACLU alleging that it violated privacy laws by collecting images of people without their consent.
Today, we're focusing on another ACLU legal action related to that facial recognition software, but this one is not about privacy.
Instead, it is about the accuracy and reliability of the software when it is applied to suspects with dark skin, how Cleveland police used it to build a case against murder suspect Qeyeon Tolbert, and how they hid their use of it from the judge who granted them the warrant for the search and seizure of what authorities believe was the murder weapon.
Once the defense found out and objected, the judge hearing the case excluded the gun from evidence, dealing what could be a death blow to the prosecution's case. But it is not over yet, and that's where our guest today, attorney Nathan "Nate" Freed Wessler, comes in.
He is the deputy director of the ACLU's Speech, Privacy, and Technology Project, and he entered the case as a "friend of the court," filing an amicus brief on behalf of the defense arguing that facial recognition technology is unreliable, and that police using it without informing either the judge or the defense violates the defendant's civil rights.
But while we are using the facts of this case as a jumping-off point, the real focus of our conversation is the technology itself: how it is used, why its accuracy has been questioned on grounds of racial bias, and why some police departments don't use it.
At this point, the appeals judge in Cleveland hasn't yet ruled on whether the evidence against 23-year-old Qeyeon Tolbert can be used at trial, but the questions raised by the facts of the case are worth considering outside the context of that particular trial.
What if your picture somehow ended up in a police "six-pack" (photo lineup) shown to a witness in a criminal case? What if you were nowhere near the crime scene, and it was all a big mistake, but somehow you landed in jail anyway?
Would it matter to you that the technology used to build the case might be as full of holes as Swiss cheese? Would it matter that police might have broken the law to get it? Would you be willing to plead guilty to something you didn't do, in hopes of a lighter sentence, just to avoid the possibility of decades in prison?
On the other hand, what if the technology could be improved? What if police were better trained to use it, and policymakers adopted "best practice" rules that made this powerful, potentially very useful technology available in a way that protects both the public interest and the civil rights of the accused?
Find out more. Listen now.