AI Use in the Criminal Justice and Immigration Systems Requires Strong Safeguards

As AI use spreads, policymakers must ensure these tools are transparent, accountable, and used only in ways that reduce harm.
Erica Bryant, Associate Director of Writing
Mar 18, 2026

Artificial intelligence (AI) errors and hallucinations are well documented. Yet despite these risks, AI use is spreading widely in the criminal justice system, where the stakes are far too high for risky, experimental tools. When AI models get things wrong in criminal investigations, innocent people can end up behind bars. To guide responsible adoption, the Vera Institute of Justice has developed five AI accountability principles outlining how AI should be deployed safely, transparently, and equitably.

Robert Williams, for example, was arrested and jailed for a crime he did not commit after facial recognition technology wrongly matched grainy surveillance footage from a robbery with his old driver’s license photo. Williams, who was handcuffed in front of his young daughters and forced to spend the night on the floor of a filthy jail cell, was released only after an officer admitted that “the computer must have gotten it wrong.” He has since campaigned against the irresponsible use of AI in criminal justice, saying that Detroit police should not have been able to obtain an arrest warrant based on “an out-of-focus image of a large Black man in a baseball cap that a faulty algorithm had determined was me.”

These risks are far too great to justify deploying AI tools in criminal justice proceedings without strong safeguards and clear accountability principles. Though the Trump administration has loosened AI regulations, lawmakers and law enforcement agencies still have a duty to ensure that AI use in the criminal justice and immigration systems is fair, accountable, and transparent. “In the criminal justice field, unchecked AI can destroy people’s lives,” said Karen Tan, Vera’s director of innovation and strategy. “Without proper audits and human oversight, there are serious questions about whether it should be used at all.”

To protect people and communities from harm, AI must be deployed only in ways that shrink the footprint of incarceration, are guided by clear objectives, include rigorous human oversight, center community-defined safety principles, and are fully disclosed to the public. The reasons for caution are well documented and include:

  • widespread evidence that facial recognition technology produces higher rates of false positive matches for Asian, Black, and Native American faces compared to white faces;
  • racial bias in predictive risk assessment and policing tools trained on crime data shaped by systemic racism, which reinforces the historic overpolicing of marginalized communities and magnifies racial inequities;
  • license plate reader errors that falsely identify cars as stolen, leading to innocent people being stopped, arrested at gunpoint, and harmed; and
  • lack of transparency in how AI systems make bail, sentencing, and parole recommendations due to opaque “black box” algorithms, making it difficult for attorneys to scrutinize and challenge unfair decisions. 

Given these risks, criminal justice and immigration agencies should avoid using AI in ways that expand the reach of those systems, heighten public surveillance, or increase police deployment. “Unchecked AI entrenches existing biases in the system,” said Tan. “This makes the justice system more unjust at scale.” 

Instead, AI should be used only to shrink the footprint of mass incarceration and reduce unnecessary criminalization. Before implementing any AI system, stakeholders must weigh whether the risks (biased or incorrect outcomes, user errors, and lack of transparency) are worth the potential gains. They must also ensure that their infrastructure can support the technology by confirming data quality, training users thoroughly, and providing for ongoing maintenance. Finally, agencies should always disclose their use of AI and the safeguards against error they have put in place, so the public can monitor its use, address bias or mistakes, and hold stakeholders accountable.

Critically, AI should serve as a tool to aid, not replace, human judgment. Every AI system must have a designated human entity that is responsible and accountable for its use and outputs, performs regular checks for errors and bias, and ensures adherence to data ethics. “Detroit police were supposed to treat face recognition matches as an investigative lead, not as the only proof they need to charge someone with a crime,” Williams wrote in a letter to the California Assembly’s public safety committee, which was considering the use of the technology by police. “They should have collected corroborating evidence such as an eyewitness identification, cell phone location data or a fingerprint.”

When used in the criminal justice and immigration systems, AI adoption must center the people and communities that will be affected. As the use of AI spreads, safeguards are critical to prevent harm to innocent people and to ensure that the technology serves the public good rather than undermining it. Agencies introducing AI should adopt accountability and oversight principles that ensure AI tools are deployed to shrink mass incarceration rather than expand it.