P2P

winter23

Peer to Peer: ILTA's Quarterly Magazine

Issue link: https://epubs.iltanet.org/i/1515316


encourage the possibility of automation. Further, law firms (particularly large ones) may be able to draw on masses of prior work product that could be used to tune general-purpose LLMs to the particular needs of the firm.

Challenges in Applying LLMs in Legal Work

The combination of the enticing potential of Gen AI in document creation and the fact that conversational interfaces make experimentation with Gen AI available to a broad audience has led some commentators to predict very rapid changes in the practice of law. Despite the excitement, however, Gen AI is still a software technology and faces all the usual obstacles of integrating new technologies into legal work. A few moments spent remembering your firm's last integration of a significant new piece of software is a useful antidote to social media posts claiming that utopia or doom for attorneys is around the corner.

Predictions that Gen AI improvements to search, question answering, and summarization will lead to radical changes in the efficiency of legal work are likely overblown. The arguably more impactful transition from hardcopy law books to sophisticated search engines for case law occurred within the lifetimes of many attorneys still practicing, with no clear impact on the cost of legal work to clients. The need to bring legal expertise and experience to bear in interpreting and applying the output of any information access system, and the fact that any sufficiently compelling technology will be used by both sides in our adversarial legal system, limit the net economic impact of any information access technology.

The potential impact of Gen AI is larger in document creation, but here, too, the path is not trivial. Gen AI's ability to synthesize language brings with it the ability to create language disconnected from reality, a phenomenon referred to as hallucination or confabulation.
The now-notorious submissions by hapless attorneys, including ChatGPT-generated fictitious legal citations, are an extreme but useful cautionary tale. Less obviously, LLMs trained on vast corpora of text incorporate both societal biases about people and groups and lay misconceptions about the law. Not only will attorneys need to verify any factual information in LLM-produced text, they will also need to develop the framing and emphasis of those facts, understand how to apply the law to those facts, develop persuasive themes and arguments, and avoid biases (not to mention potential copyright infringement) that may creep in via LLMs. These aspects of professional legal practice will not be displaced by AI any time soon.

It is also easy to underestimate the cost of cleaning up an initial draft created by an LLM. A lesson learned early in the history of machine translation is that the manual work to clean up an "almost ready" machine-generated draft can be greater than the work needed to write good copy from scratch. This danger is even larger for the creation of legal documents than for language translation, since most legal writing is deeply grounded in facts about the law, cases, and aspects of reality. This does not mean that LLMs will be useless in document creation, far from it. It does mean that the choice of where and how to apply them must be made carefully.

Using LLMs to make document drafting more efficient will also require careful examination of, and data gathering on, existing legal document creation workflows. Understanding where the actual costs are and where technology can plausibly reduce them will be critical. Economically valuable applications of LLMs will likely look less like the conversations with a clever assistant implied by enthusiastic ChatGPT blog posts and more like the use of traditional software with structured user interfaces and rigorous workflows. (Sorry.) For instance,
