THERE IS A stretch of highway through the Ozark Mountains where being data-driven is a hazard.
Heading from Springfield, Missouri, to Clarksville, Arkansas, navigation apps recommend Arkansas Highway 43. While this can be the fastest route, the GPS’s algorithm does not concern itself with factors important to truckers carrying a heavy load, such as the 43’s 1,300-foot elevation drop over four miles with two sharp turns. The road once hosted few 18-wheelers, but the last two and a half years have seen a noticeable increase in truck traffic—and wrecks. Locals who have watched accidents increase think it is only a matter of time before someone is seriously hurt, or worse.
Truckers familiar with the region know that Highway 7 is a safer route. However, the algorithm creating the route recommendation does not. Lacking broader insight, the GPS only considers factors programmed to be important. Ultimately, the algorithm paints an incomplete or distorted picture that can cause unsuspecting drivers to lose control of their vehicles.
Algorithms pervade our lives today, from music recommendations to credit scores to now, bail and sentencing decisions. But there is little oversight and transparency regarding how they work. Nowhere is this lack of oversight more stark than in the criminal justice system. Without proper safeguards, these tools risk eroding the rule of law and diminishing individual rights.
Currently, courts and corrections departments around the US use algorithms to determine a defendant’s “risk”, which ranges from the probability that an individual will commit another crime to the likelihood a defendant will appear for his or her court date. These algorithmic outputs inform decisions about bail, sentencing, and parole. Each tool aspires to improve on the accuracy of human decision-making, allowing for a better allocation of finite resources.
Typically, government agencies do not write their own algorithms; they buy them from private businesses. This often means the algorithm is proprietary or “black boxed”, meaning only the owners, and to a limited degree the purchaser, can see how the software makes decisions. Currently, there is no federal law that sets standards or requires the inspection of these tools, the way the FDA does with new drugs.
This lack of transparency has real consequences. In the case of Wisconsin v. Loomis, defendant Eric Loomis was found guilty for his role in a drive-by shooting. During intake, Loomis answered a series of questions that were then entered into Compas, a risk-assessment tool developed by a privately held company and used by the Wisconsin Department of Corrections. The trial judge gave Loomis a long sentence partially because of the “high risk” score the defendant received from this black-box risk-assessment tool. Loomis challenged his sentence because he was not allowed to assess the algorithm. Last summer, the state supreme court ruled against Loomis, reasoning that knowledge of the algorithm’s output was a sufficient level of transparency.
By keeping the algorithm hidden, Loomis leaves these tools unchecked. This is a worrisome precedent as risk assessments evolve from algorithms that are possible to assess, like Compas, to opaque neural networks. Neural networks, deep-learning algorithms meant to act like the human brain, cannot be transparent by their very nature. Rather than being explicitly programmed, a neural network creates connections on its own. This process is hidden and always changing, which risks limiting a judge’s ability to render a fully informed decision and defense counsel’s ability to zealously defend their clients.
Consider a scenario in which the defense attorney calls a developer of a neural-network-based risk assessment tool to the witness stand to challenge the “high risk” score that could affect her client’s sentence. On the stand, the engineer could tell the court how the neural network was designed, what inputs were entered, and what outputs were created in a specific case. However, the engineer could not explain the software’s decision-making process.
With these facts, or lack thereof, how does a judge weigh the validity of a risk-assessment tool if she cannot understand its decision-making process? How could an appeals court know if the tool decided that socioeconomic factors, a constitutionally dubious input, determined a defendant’s risk to society? Following the reasoning in Loomis, the court would have no choice but to abdicate a part of its responsibility to a hidden decision-making process.
Already, basic machine-learning techniques are being used in the justice system. The not-far-off role of AI in our courts creates two potential paths for the criminal justice and legal communities: Either blindly allow the march of technology to go forward, or create a moratorium on the use of opaque AI in criminal justice risk assessment until there are processes and procedures in place that allow for a meaningful examination of these tools.
The legal community has never fully discussed the implications of algorithmic risk assessments. Now, after these tools have proliferated, attorneys and judges are grappling with their lack of oversight and their impact.
To hit pause and create a preventative moratorium would allow courts time to create rules governing how AI risk assessments should be examined during trial. It would give policy makers a window to create standards and a mechanism for oversight. Finally, it would allow educational and advocacy organizations time to teach attorneys how to handle these novel tools in court. These steps would reinforce the rule of law and protect individual rights.
Echoing Kranzberg’s first law of technology, these algorithms are neither good nor bad, but they are certainly not neutral. To accept AI in our courts without a plan is to defer to machines in a way that should make any advocate of judicial or prosecutorial discretion uncomfortable.
Unlike those truckers in Arkansas, we know what is around the bend. We cannot let unchecked algorithms blindly drive the criminal justice system off a cliff.