This is the first part in a four-part series Recidiviz has written to summarize some of the biggest issues we see with the use of technology in the criminal justice space and to create a “playbook” of specific recommendations that practitioners can use in tech procurement, integration, and deployment. As always, we welcome thoughts and feedback.
In 2008, the Los Angeles Police Department (LAPD) began working on a project that was full of promise — to predict crime using data and analytics. Various agencies at the Department of Justice jumped on board, as did academics at UCLA. Within a few years, the maker of the resulting software, PredPol (now Geolitica), had become a leading vendor of predictive policing technology in the U.S.
But the gleam of this new technology soon faded. Several studies found that the use of historical data in these systems may have led to discriminatory policing and reinforced racial inequalities. In 2019, LAPD’s own internal audit concluded that there was insufficient data to determine whether PredPol actually achieved its purpose — to reduce crime. By 2020, the LAPD had stopped using PredPol entirely — and the Fourth Circuit Court of Appeals had delivered a blow to the constitutionality of these technologies altogether.
Despite the blow, enthusiasm continues to grow for incorporating technology into the criminal justice ecosystem. And rightfully so, as new technology has helped to decrease failure-to-appear rates in court, automate criminal record clearance, and improve policy decisions through modeling. But at the end of the day, tech is a hammer — helpful when used in the right way, good at creating a mess when it’s not.
The majority of the Recidiviz team comes from tech: we are a team of data scientists, software engineers, product managers, and designers. And while we’ve seen tech be a game-changer, many of us have also seen its downsides. In this post, we’ll explore where things can go wrong in criminal justice work when technology is inaccurate, biased, or used improperly. We’ll cover why it pays to be cautious in applying new tools to old problems, and how to take the community, the justice-involved, and the agencies that serve both into consideration when looking at the evolving tech landscape.
Technology is increasingly used at every stage of the justice system: to detect gunshots; to dispatch squad cars to far-flung neighborhoods; to decide whether defendants will be eligible for community service or diversion programs; and to monitor the location of parolees and probationers to make sure they don’t miss curfew. These tools have the potential to improve the efficacy of the justice system, but they can also cause harm to already-vulnerable communities.
In the following table, we highlight some real-world examples of how these technologies have led to negative consequences:
These challenges can become uniquely troubling in the context of criminal justice. In the consumer context, even the most egregious privacy-abusing app can be uninstalled by its users; by contrast, the criminal justice system is compulsory for the people under its care. What’s more, those of us who help select or deploy new criminal justice technology rarely have direct experience with the justice system ourselves, due to CJIS regulations on who can work with this kind of data — making it less likely that the tech ecosystem working in this area can fully empathize with the people it impacts.
In the table below, we highlight some of the ways these risks intersect with new uses of technology in the criminal justice system. It’s not exhaustive, and most risks compound (e.g., predictive policing systems can be biased due to their training data, which directly reinforces historical bias and indirectly undermines the accuracy of the tools).
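To see how that compounding can play out, consider a minimal toy simulation in Python (our own sketch, with made-up numbers — not PredPol’s or any vendor’s actual model). Two districts share the same underlying crime rate, but District A starts with a larger arrest record because it was patrolled more heavily in the past. A naive “send patrols where past arrests are” rule then widens the recorded gap year after year:

```python
import random

random.seed(0)

# Hypothetical setup: two districts with the SAME true crime rate, but
# District A starts with a larger arrest record from heavier past patrolling.
TRUE_RATE = 0.05
recorded = {"A": 120, "B": 100}

for year in range(1, 11):
    # Naive "predictive" rule: most patrols go where the record looks worst.
    hot = max(recorded, key=recorded.get)
    patrols = {d: (80 if d == hot else 20) for d in recorded}
    for district, n_patrols in patrols.items():
        # New arrests scale with patrol presence (10 encounters per patrol),
        # not with any real difference in crime between the districts.
        new_arrests = sum(random.random() < TRUE_RATE
                          for _ in range(n_patrols * 10))
        recorded[district] += new_arrests
    share_a = 100 * recorded["A"] / sum(recorded.values())
    print(f"Year {year}: District A holds {share_a:.0f}% of recorded arrests")
```

Even though the true rates never differ, District A’s share of recorded arrests climbs steadily toward the 80% patrol allocation: the record ends up measuring where officers were sent, not where crime occurred, and any model retrained on that record inherits the distortion.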
It’s apparent from the table above that not all harms are equal — an unnecessary arrest is not the same as a leaked email address. When evaluating new technology, the following considerations can be helpful in putting the pitfalls in context.
Another consideration is the negative impact on criminal justice agencies themselves. Agencies have faced diminished public trust, lawsuits, and significant new compliance regimes after adopting tools that unintentionally create biased outcomes or lack transparency. When agencies can’t explain what led to a certain result, trust in both the tool and the process can erode. Clear documentation and processes for data access are crucial — a good example is the recent push for free and timely release of body camera footage in California, which gained enough public support to pass into law in 2019.
Finally, it’s easy for technology to create reservoirs where biases can propagate. Tools that rely on historical data are used at every step of the criminal justice system, from predictive policing and surveillance enabled by facial recognition, to sentencing and release-date calculations.
Carefully weighing community harms against community benefits can help agencies decide when technology is the right tool for the job, and when it’s just adding risk. When technology does make sense to help solve a problem, evaluating potential edge cases and establishing safeguards to protect against these risks can be a small investment that saves lives, time, and resources for the agency and the community in the long run.
We believe there is a useful, even crucial, role for certain types of technologies to play in the criminal justice system. But indiscriminate use of technology can also cause harm to people and weaken public trust in the system. This isn’t new — polygraph machines, social media-based surveillance, and other ill-fitting technology have come and gone before. What is new is the increased enthusiasm around information technology, and its very real limitations for some applications.
Over the next few blog posts, we’ll dig into a few of the structural issues that have made technology hit-and-miss in this domain, scrutinize the types of artificial intelligence used across different criminal justice technologies, and synthesize a “playbook” that we hope will be helpful to practitioners navigating this increasingly complex space.
We would like to thank Clementine Jacoby, Samantha Harvell, Mayuka Sarukkai, and Jason Tashea for their helpful comments and contributions.
Recidiviz is proud to be a GLG Social Impact Fellow. We would like to thank GLG for, among other things, connecting us to some of the experts who provided valuable insights showcased in this post.