By Daniel Yoonsik Choi
There are many reasons to worry about judicial discretion, and developments in the psychology of judgment and decision-making cast doubt on the idea that sentencing is an art. For example, studies of judicial behaviour suggest you might receive a harsher sentence from a judge simply because your case is heard later in the day. Could algorithms be better than judges? Perhaps in one respect: impartiality.
Impartiality is often associated with a disinterested, impersonal point of view or observer that is hypothetically free of all subjective biases. Among the earliest proponents of this view are David Hume (1740) and Adam Smith (1759). One dimension of impartiality is being impersonal, meaning dispassionate or indifferent. The good judge is impartial insofar as they are neither swayed by emotions nor influenced by personal considerations: an angry judge should not deliver a harsher sentence to a defendant, nor should a judge deliver a more lenient sentence because the judge and the defendant both enjoy jazz music.
Another related concept held as a virtue for a judge is “neutrality.” Thomas Nagel (with the help of Derek Parfit) can help us understand neutrality through the distinction between agent-relative and agent-neutral reasons. The basic idea is that a reason for action is agent-relative if it makes essential reference to a particular person, and agent-neutral if it does not. If I were a judge, I would act on agent-relative reasons if I delivered a harsher sentence because the defendant made me angry (since my anger is a reason for me but for nobody else). To act agent-neutrally, by contrast, is to act on reasons that make no essential reference to any particular agent. The relation between the two concepts is that neutrality is a necessary condition for impartiality, but neutrality on its own denotes the narrower idea of non-specificity.
Algorithms can be perfectly neutral because they are not subject to emotions or other physiological limits. Vincent Chiao suggests that algorithms can be used in sentencing to combat concerns about judicial arbitrariness and bias. The result would be greater justice, as sentences would move a bit closer to the ideal of proportionality. That is, even if the algorithm is not perfect, it would do better than judges, especially with respect to racial bias. John Hogarth attempted something like this in the 1970s and 1980s, and it largely failed because judges trusted their own discretion and intuitions over the algorithms. While there are legitimate concerns with introducing novel technologies into the courtroom, technophobia should not be an impediment to a more just legal system.
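To make the neutrality point concrete, here is a minimal sketch of a rule-based sentencing guideline. It is not Hogarth's system, Chiao's proposal, or any real tool; the factors and weights are invented purely for illustration. The point it shows is structural: a deterministic rule makes essential reference only to the case facts, so identical facts yield identical recommendations, with no room for anger, fatigue, or the time of day.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CaseFacts:
    """Hypothetical, simplified case features; a real guideline would use many more."""
    offence_severity: int   # 1 (minor) to 10 (most serious)
    prior_convictions: int  # count of relevant prior convictions
    guilty_plea: bool       # early guilty plea mitigates

def recommended_range_months(case: CaseFacts) -> tuple[int, int]:
    """Deterministic, agent-neutral rule: identical facts -> identical range.

    Nothing here references who the judge is, how they feel, or when the
    case is heard -- only the case facts themselves.
    """
    base = case.offence_severity * 6            # 6 months per severity point (invented weight)
    base += min(case.prior_convictions, 5) * 3  # capped aggravation for priors (invented)
    if case.guilty_plea:
        base = int(base * 0.75)                 # one-quarter plea discount (assumed)
    return (max(0, base - 6), base + 6)         # symmetric 12-month recommended range

# Two identical cases get identical recommendations -- no afternoon harshness.
a = recommended_range_months(CaseFacts(offence_severity=5, prior_convictions=2, guilty_plea=True))
b = recommended_range_months(CaseFacts(offence_severity=5, prior_convictions=2, guilty_plea=True))
assert a == b  # (21, 33) both times
```

Of course, neutrality of this kind is only as good as the chosen factors and weights; an algorithm can be perfectly consistent while consistently encoding a biased rule.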
Still, the worries about taking the human element out of judgments have some substance. Leaving aside issues around implementation, one may wonder how impartial reasoning squares with theories of punishment. In morality, after all, impartial reasoning is not always appropriate. William Godwin (1793) imagines a scenario in which one must choose to save either a chambermaid or Fénelon (the Archbishop of Cambrai) from a fire. From an impartial standpoint, the clear outcome would be saving Fénelon, since his works benefit thousands; even if the chambermaid were one's own wife or mother, impartiality would still demand choosing the archbishop. This seems like a morally repugnant result.
While there are a number of issues around implementing algorithms to assist the judiciary, there is clear potential for addressing access-to-justice issues. For example, predictable sentencing outcomes could level the playing field in negotiations between the Crown and the accused, increase efficiency for judges, and assist lawyers in building a case. Professor Benjamin Alarie is already involved in a company whose “AI-powered platforms accurately predict court outcomes and enable you to find relevant cases faster than ever before.” With virtual hearings at the Supreme Court of Canada, I am optimistic about the next steps in operationalizing legal technology.