Kieran Newcomb

This summer I received a Summer Undergraduate Research Fellowship (SURF) from the Hamel Center for Undergraduate Research. As a philosophy major, my project—which I will turn into my senior honors thesis over this next school year—is on the use of artificial intelligence (AI) in judicial deliberation.  

I developed this topic at the intersection of a personal interest, the development of artificial intelligence, and my desired career in the legal field. Technological innovation is happening so rapidly that there were a vast number of topics I could have chosen, but since I plan to be an attorney, I figured I would research something directly applicable to my career.

Because of the nature of this research, I was able to do nearly my entire summer fellowship from home while meeting with my advisor, Professor Nick Smith, via Zoom. I used the UNH library’s online database to compile a long list of legal research articles and spent the summer working through them, taking notes, and discussing them with Professor Smith.  

I read entire issues of scholarly journals like The Judges' Journal and Criminology & Public Policy in order to learn how criminological statistics, even without AI, are used to estimate a given defendant's chances of recidivism, the tendency of a criminal to reoffend. Judges use this information to help them come to a decision about sentencing. For example, if the predictive algorithm says that a defendant's chance of committing more crimes is quite high, a judge might use this to sentence them to a longer prison term. If the program suggests that the defendant is unlikely to commit future crimes, the judge might sentence them to probation or parole, or even dismiss the case entirely. The fact that judges in all fifty states formally use this information in their deliberations came as quite a surprise!
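To make the role these risk scores play a little more concrete, here is a purely hypothetical sketch in Python. The thresholds, categories, and the sentencing_guidance function are invented for illustration; they do not describe any real jurisdiction's tool or policy, and in practice the score is only one input a judge weighs alongside everything else in the case.

```python
# Purely illustrative: invented thresholds and categories, not any real
# jurisdiction's policy or risk-assessment tool.

def sentencing_guidance(risk_score: float) -> str:
    """Map a recidivism risk score (0.0 to 1.0) to a hypothetical guidance band
    that a judge might weigh alongside the rest of the case."""
    if risk_score >= 0.7:
        return "consider a longer prison term"
    elif risk_score >= 0.3:
        return "consider probation or supervised release"
    else:
        return "consider a minimal or non-custodial outcome"

# Example: a defendant scored at 0.15 falls in the lowest band.
print(sentencing_guidance(0.15))  # consider a minimal or non-custodial outcome
```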

I then looked at how the implementation of AI technology will transform this process. While there are myriad newspaper and magazine editorials describing how judges might hand over the decision-making reins to machines, I focused mostly on scholarly articles that describe the particulars of how AI will be incorporated into these statistical programs. The process is much less dramatic than much of the media coverage makes it seem. As our technology improves, scientists develop predictive programs that can handle more complex variables. For example, whereas an old program might provide an assortment of graphs with “thresholds of predicted criminality” based on two-variable relationships (age and neighborhood, or level of education and number of close friends with no criminal record), new programs are able to provide a single, complex graph with all relevant variables. AI technology then enables these programs to modify their own algorithms as they learn from their mistakes and as more data becomes available, which increases their predictive power.
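For readers curious what that shift looks like in code, below is a toy sketch, with invented data and stand-in feature names, of the contrast described above: a fixed two-variable threshold rule versus a single model over many variables that adjusts its own weights as new outcome data arrives. It uses scikit-learn's SGDClassifier only as a convenient example of incremental learning; it is not any actual risk-assessment product.

```python
# Toy illustration only: invented data and feature names, not any real
# court risk-assessment tool. It contrasts a fixed two-variable threshold
# rule with a many-variable model that updates as new outcome data arrives.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(seed=0)
n_features = 8  # stand-ins for age, education, prior offenses, etc.
true_weights = rng.normal(size=n_features)  # synthetic "ground truth"

def make_batch(n_cases):
    """Generate a synthetic batch of cases and reoffense outcomes."""
    X = rng.normal(size=(n_cases, n_features))
    y = (X @ true_weights + rng.normal(size=n_cases) > 0).astype(int)
    return X, y

# Old-style approach: a fixed threshold over just two standardized scores.
def two_variable_rule(score_a, score_b, threshold=1.0):
    """Flag 'high risk' whenever the sum of two scores crosses a fixed threshold."""
    return (score_a + score_b) > threshold

# Newer-style approach: a logistic model over all features, trained
# incrementally so its weights change as more data becomes available.
model = SGDClassifier(loss="log_loss", random_state=0)
X0, y0 = make_batch(500)
model.partial_fit(X0, y0, classes=[0, 1])

X1, y1 = make_batch(200)   # a later batch of cases with known outcomes
model.partial_fit(X1, y1)  # the model adjusts itself to the new data

# One risk probability per case, rather than a pile of two-variable charts.
risk = model.predict_proba(X1)[:, 1]
print(risk[:5])
```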

The studies I read suggest that these programs will almost certainly become more accurate at predicting future behavior than humans are, inviting the possibility of eliminating human bias from an incredibly high-stakes endeavor. Of course, the data that these statistical programs learn from, namely previous judicial decisions, is already riddled with human bias. This is one of the major objections to the use of AI in the legal field.

My SURF project will culminate in a philosophical paper exploring the opposing sides of my research question and explaining complex statistical modeling in terms that are easily accessible to the layperson. The first question I will address is whether we should be doing this in the first place. AI will make this process more accurate and efficient, but should we be using statistical programming in judicial decisions at all? Is there something wrong with reducing humans to their biopsychosocial data? Should we allow technology to play such a critical role in deciding the fates of humans? (Cue the 2002 hit movie Minority Report.) My paper will begin with an exploration of the ethical landscape of how judges use predictive coding, with and without AI technology, to make sentencing decisions. I want to make clear the benefits and drawbacks of both sides of this debate, since people tend to have very strong reactions to using AI in such high-stakes decisions. I will then present my own argument: that judges should use predictive coding and AI technology to inform their sentencing decisions. When I turn this into my senior thesis, I will defend that argument before the philosophy department.

Regardless of whether people agree with my argument, one of my goals is to get UNH students, and New Hampshire citizens more broadly, discussing whether we want these tools used in our state's criminal justice system. These AI programs are already used by judges in six states, so it's only a matter of time before they make their way to New Hampshire. Professor Smith and I will work to publish papers from this research in both academic journals and state newspapers or magazines.

One of the really cool things I've done is present my preliminary research at the Undergraduate Research Conference (URC) this past spring and to several of Professor Smith's classes. Now that I've done much more research, I will give fully developed presentations at the Northern New England Philosophical Association Conference at Dartmouth College this fall, at the URC next spring, and hopefully at a national conference.