Digital White Papers

LPS23

A publication of the International Legal Technology Association

Issue link: https://epubs.iltanet.org/i/1514711


ILTA's 2023 Litigation & Practice Support Survey Results

Large Language Models: Risks, Limitations, & Considerations

by Ann Halkett

Large Language Models (LLMs) hold a lot of promise in terms of what they can and will be able to do in the eDiscovery space. Software developers are rushing to get the latest LLM functions and features to market. As with any new tool, consideration should be given to the best uses for the tool's functions and features, its limitations and possible issues, and, most importantly, its risks. So, what do you need to consider? As an eDiscovery professional, you should be able to explain how the tool works to your lawyers, clients, and possibly even the court. The following lists may provide a starting point.

Risks

Several risks are apparent and include:

• Hallucinations – LLMs guess when they do not have enough data on which to base decisions, so how has the software developer accounted for this? What features have they built into the tool to account for hallucinations, and what testing have they done to reduce or eliminate them? Will they share the testing results with you?

• Confidentiality and Privacy – What privacy and other safeguards are built into the tool, and what does it do with the data? Does the data leave your instance and go elsewhere? If so, where does it go and what happens to it?

• Ethics – When using the LLM, how does it access the data set? Does it point to just one workspace, all workspaces, or more? What happens if you act for more than one party in a matter and have separate databases for each? Will there be data leakage?

• Bias – While the LLM may be pointed at your dataset, bias may be inherent in how the model works. Bias could also be present in a pre-trained prompt.
• Subject Matter Expert – The person using the tool needs to be an expert with knowledge of what the tool is generating. Is the information the LLM tool generates complete? What is missing and/or incorrect? Ignorance is not bliss – ignorance is negligence.

• Reputational Damage – We have already seen press reports of individuals who did not use LLMs correctly and received negative attention as a result.
