Henry Taylor speaks with ICA4 Fellow Michael Livermore, the Edward F. Howrey Professor of Law at the University of Virginia School of Law. Livermore is also the Director of the Program in Law, Communities and the Environment (PLACE), an interdisciplinary program based at UVA Law that examines the intersection of legal, environmental, and social concerns.
The podcast begins with Livermore describing his specialty areas, how he incorporates his interest in artificial intelligence and machine learning into his legal research, and how those interests fit within the legal field (0:50 – 2:41). This leads to a conversation about potential uses of AI in the legal system, with Livermore offering an extended example of machine learning algorithms in criminal justice. He explains that AI and machine learning may help in assessing flight risk, potentially reducing the likelihood that individuals are detained unnecessarily, while also warning of the dangers of relying too heavily on these kinds of algorithms (2:45 – 16:03). Continuing the flight-risk hypothetical, Taylor introduces the complication of causal factors, and Livermore points out that relying on certain causal factors, such as those tied to bio-determinism, could be highly problematic (16:09 – 20:03).

The conversation then turns to the use of artificial intelligence in making legal decisions, with Livermore explaining why there is a strong argument in favor of AI while also acknowledging that there are practical reasons for leaving legal decision-making power with humans (20:12 – 28:40). Picking up on one of the justifications Livermore offers against full legal automation, Taylor asks him to expand on why, exactly, we believe that individuals have a right to an explanation for a judicial decision (28:45 – 34:06). This leads to a discussion of AlphaGo, Move 37, and whether machines that are capable of learning are also capable of giving reasons for their decisions; Livermore compares this to the demands placed on contemporary artists to explain their work, or on chess masters who play moves instinctively (34:12 – 46:42).

The conversation then shifts to indeterminacy in the law and natural language processing. Livermore states that AI has advanced to the point that an algorithm can account for some linguistic indeterminacy, but that errors are inevitable, and we must decide how many we are willing to live with. As an illustration, Taylor raises the case of the man who sued his parents for giving birth to him, which Livermore describes as a perfect example of where a machine may fail. This, in turn, leads to a discussion of jury nullification, the controversial practice in which a jury refuses to convict a defendant who would otherwise be found guilty because the jurors believe the law in question is unjust (46:50 – 1:00:14). The conversation ends with a discussion of whether artificial intelligence could be programmed to respond with empathy, and of the significance of empathy in the legal system more generally (1:00:25 – 1:07:00).
The Intercontinental Academia (ICA) is a global network of future research leaders sponsored by the University-Based Institutes of Advanced Studies. The ICA4 explores the complementarities between artificial intelligence and neuroscience/cognitive science.