Artificial Intelligence Is Putting Innocent People at Risk of Being Incarcerated

There are at least seven confirmed cases of misidentification due to facial recognition technology, six of which involve Black people who have been wrongfully accused.

02.14.24 By Alyxaundria Sanford

Image: (Kathleen Crosby/Innocence Project)

Robert Williams thought the call his wife received, telling her that he needed to turn himself in to the police, was a prank. But when the Michigan resident pulled into his driveway, the Detroit police officer who had been waiting outside his home pulled up behind him, got out of the car, and placed him under arrest. He was detained for 30 hours.

Mr. Williams’ encounter with the police that day in January 2020 was the first documented case of wrongful arrest due to the use of facial recognition technology (FRT). He was accused of stealing thousands of dollars worth of Shinola watches. Grainy surveillance footage provided to law enforcement was run through facial recognition software and matched to an expired driver’s license photo of Mr. Williams.

There are at least seven confirmed cases of misidentification due to the use of facial recognition technology, six of which involve Black people who have been wrongfully accused: Nijeer Parks, Porcha Woodruff, Michael Oliver, Randall Reid, Alonzo Sawyer, and Robert Williams.

There has been concern that FRT and other artificial intelligence (AI) technologies will exacerbate racial inequities in policing and the criminal legal system. Research shows that facial recognition software is significantly less reliable for people of color, especially Black and Asian people, because algorithms struggle to distinguish facial features on darker skin tones. Another study concluded that disproportionate arrests of Black people by law enforcement agencies using FRT may be the result of “the lack of Black faces in the algorithms’ training data sets, a belief that these programs are infallible and a tendency of officers’ own biases to magnify these issues.”

What is particularly worrying is that law enforcement’s adoption and use of AI, such as FRT, echoes previous misapplications of forensic science, including bite mark analysis, hair comparison, and arson investigation, that have led to numerous wrongful convictions.

“The technology that was just supposed to be for investigation is now being proffered at trial as direct evidence of guilt. Often without ever having been subject to any kind of scrutiny,” said Chris Fabricant, Innocence Project’s director of strategic litigation and author of Junk Science and the American Criminal Justice System.

“Corporations are making claims about the abilities of these techniques that are only supported by self-funded literature,” said Mr. Fabricant. “Politicians and law enforcement that spend [a lot of money] acquiring them, then are encouraged to tout their efficacy and the use of this technology.” 

Image: (Kathleen Crosby/Innocence Project)

For decades, DNA has been essential in proving the innocence of people wrongfully convicted as a result of faulty forensic methods. Indeed, half of all DNA exonerations were the result of false or misleading forensic evidence. And of the 375 DNA exonerations in the United States between 1989 and 2020, 60% of the people freed were Black, according to Innocence Project data.

Still, not all cases lend themselves to DNA exonerations, especially those that involve police use of AI. For this reason, the Innocence Project is proactively pursuing pretrial litigation and policy advocacy to prevent the use of unreliable AI technology, and the misuse of even potentially reliable AI technology, before the damage is done.

“Many of these cases … are just as susceptible to the same risks and factors that we’ve seen produce wrongful convictions in the past,” said Mitha Nandagopalan, a staff attorney in Innocence Project’s strategic litigation department.

Mx. Nandagopalan is leading the Innocence Project’s latest endeavor to counter the potentially damaging effects of AI in policing, particularly in communities of color. 

New York City Councilman Dr. Yusef Salaam on Oct. 29, 2019 in New York. (Larry Busacca/AP Images for the Innocence Project)

“What is often seen in poorer neighborhoods or primarily [communities of color] is surveillance that is imposed by state and municipal actors, sometimes in line with the wishes of that community and sometimes not. It’s the people who live there that are the target in their own neighborhoods,” said Mx. Nandagopalan.

“In wealthier neighborhoods, whiter neighborhoods, I think you often see surveillance that is being bought and invited by residents.” These technologies include Ring doorbell cameras and homeowners’ associations contracting with license plate reader companies such as Flock Safety.

The Neighborhood Project is a multidisciplinary effort to understand how surveillance technologies, like FRT, impact a community and may contribute to wrongful convictions. The project will focus on a particular location and partner with community members and local organizations to challenge and prevent the use of unreliable and untested technologies.

“Ultimately, what we want is for the people who would be most impacted by the surveillance technology to have a say in whether, and how it’s getting used in their communities,” said Mx. Nandagopalan.

Last year, the Biden administration issued an executive order to set standards and manage the risks of AI, including a standard to develop “tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.” However, there are no federal policies currently in place to regulate the use of AI in policing.

In the meantime, there are ways for concerned community members to influence and encourage local leaders to regulate the use of these technologies by local law enforcement and other agencies. 

“These are great reasons to go to your local city council or town council meetings. That’s where these [tech presentations] happen. On the very local level, those representatives are the people who are voting whether to use tax dollars or public money to fund this stuff,” said Amanda Wallwin, one of the Innocence Project’s state policy advocates.

“If you can be there, if you’re in the room, you can make such a difference.”

 

Watch “Digital Dilemmas: Exploring the Intersection of Technology, Race, and Wrongful Conviction” from Just Data, the Innocence Project’s annual virtual gathering dedicated to promoting practical research to advance the innocence movement.

 
