
A Forensic Without Certainty: Facial Recognition and Wrongful Arrests

Facial recognition technology (FRT) is increasingly used by law enforcement agencies when a crime is captured on video and investigators need to identify a suspect. Officers upload a still image from surveillance footage into a system that compares it against large databases of images, including driver’s license photos, mugshots, and even publicly available images from social media. The technology measures facial characteristics, converts those measurements into a biometric “faceprint,” and ultimately produces a list of possible matches that investigators can use to develop suspects.

Although this technology is often presented as an efficient and objective investigative tool, its growing use in criminal investigations has raised serious concerns. Researchers have warned that facial recognition is “likely an unreliable source of identity evidence” and may increase the number of innocent people who are investigated or arrested (Georgetown Law Center, 2022). These concerns are not merely theoretical; they are reflected in a growing number of real-world cases in which reliance on facial recognition results has led directly to wrongful arrests.

The risks become even more pronounced when police rely on extremely large facial databases. Studies have shown that database size, the quality of the image used in the search, and the race of the individual being identified can all significantly increase the likelihood of false matches (Georgetown Law Center, 2022). Even when these systems report relatively low error rates, false positives still occur, and the consequences can be severe. Civil rights advocates have accordingly warned that facial recognition technology “puts innocent people at risk of wrongful arrest,” particularly when officers treat algorithmic results as reliable evidence rather than preliminary leads (Wessler, 2024).

Another major concern involves racial bias. Numerous studies have demonstrated that facial recognition systems misidentify Black individuals and other people of color at higher rates than white individuals, raising the concern that the technology may not only reflect existing inequalities within the criminal justice system but actively worsen them (Georgetown Law Center, 2022).

These risks are not hypothetical, as the case of Robert Williams in Michigan demonstrates. In January 2020, Williams was arrested at his home after facial recognition technology identified him as a suspect in a robbery, even though investigators later confirmed that he was not the individual in the surveillance footage. Williams nonetheless spent approximately thirty hours in police custody, and the arrest, which took place in front of his wife and young children, illustrates the very real human consequences that can result from technological error (Morioka, 2024).

Williams’s case was not an isolated incident. Multiple documented wrongful arrests across the United States have resulted from incorrect facial recognition matches, most of them involving Black individuals, further reinforcing concerns about both the reliability and the equity of the technology (Wessler, 2024). And while police departments often describe facial recognition results as merely “investigative leads,” in practice officers may treat those results as conclusive evidence, which invites confirmation bias and narrows the scope of the investigation.

This issue is further compounded by automation bias, the tendency to place disproportionate trust in computer-generated outputs. Once investigators believe they have identified a suspect through facial recognition, they may unintentionally prioritize evidence that confirms the identification while overlooking information that contradicts it.

In response to these concerns, policymakers and advocacy groups have begun to push for stronger restrictions. More than twenty jurisdictions across the United States have already adopted limits or outright bans on police use of facial recognition technology, reflecting a growing recognition that its risks cannot be ignored (Gross, 2025).

The Williams case ultimately led to significant policy reform: following a settlement agreement, the Detroit Police Department implemented new safeguards that prohibit arrests based solely on facial recognition results and require additional investigative verification before officers act (Morioka, 2024).

Facial recognition technology will likely continue to play a role in criminal investigations as artificial intelligence tools become more widely integrated into law enforcement practice. But the growing number of wrongful arrests demonstrates that the technology is not yet reliable enough to be used without strict safeguards. Without stronger oversight, its continued use risks undermining due process protections and increasing the likelihood that innocent individuals will be drawn into the criminal justice system.

References

Georgetown Law Center on Privacy & Technology. (2022). A forensic without the science: Face recognition in U.S. criminal investigations. https://www.law.georgetown.edu/privacy-technology-center/publications/a-forensic-without-the-science-face-recognition-in-u-s-criminal-investigations/

Gross, P. (2025, February 4). Facial recognition in policing is getting state-by-state guardrails. Stateline. https://stateline.org/2025/02/04/facial-recognition-in-policing-is-getting-state-by-state-guardrails/

Morioka, S. (2024). Flawed facial recognition technology leads to wrongful arrest and historic settlement. University of Michigan Law Quadrangle. https://quadrangle.michigan.law.umich.edu/issues/winter-2024-2025/flawed-facial-recognition-technology-leads-wrongful-arrest-and-historic

Wessler, N. F. (2024, April 30). Police say a simple warning will prevent face recognition wrongful arrests. That’s just not true. ACLU. https://www.aclu.org/news/privacy-technology/police-say-a-simple-warning-will-prevent-face-recognition-wrongful-arrests-thats-just-not-true
