Peer to Peer: ILTA's Quarterly Magazine
Issue link: https://epubs.iltanet.org/i/1521210
Traditional AI has been limited to tasks like summarizing text, answering questions, or translating, all done through pre-programmed algorithms. However, the emergence of Gen AI, powered by neural network algorithms trained on vast data sets, has revolutionized the field. This new breed of AI can learn patterns, adapt, and even generate new content, making it a game-changer for e-discovery workflows.

While scale, adaptiveness, and generation represent three core differentiators of Gen AI, the most critical difference lies in its skin: its groundbreaking UI. That UI, made famous by OpenAI's widely used ChatGPT, features a simple-to-use conversational facade, a familiar surface through which users can experiment with the underlying Large Language Model (LLM). For years, we've been primed by AI-based chatbots and assistants like Siri or Alexa; OpenAI transferred that knowledge to a newer, more complex AI product to shorten the learning curve. LLM systems like ChatGPT or Google's Gemini don't require this kind of conversational interface; it was primarily a marketing choice in a (successful) attempt to onboard a broader range of users and grow interest in the tool.

Gen AI, with its user-friendly interface and time-saving capabilities, has found its way into various industries, including the legal sector. Legal teams are leveraging LLMs for tasks like document summarization and draft generation. However, the legal profession, known for its cautious approach, has been slow to fully embrace this transformative technology.

Interpretability Will Drive Adoption in Legal

The hesitation in adopting Gen AI in the legal field likely stems from concerns about ethics and defensibility. Gen AI is built on extensive data scraping, which raises privacy, security, and bias risks. Moreover, implementing a new tool like Gen AI is not without its challenges. It can be time-consuming and expensive, often requiring significant workflow overhauls to address these risks.

This resistance is not entirely surprising. Similar concerns arose when TAR entered the scene in the early 2000s to help expedite document review. Historically, review required humans to go through each piece of electronically stored information (ESI) one by one, using agreed-upon search terms to sort documents for relevance and privilege, a costly and time-consuming process. With TAR, lawyers input a sample of documents and classifications to train the algorithm that sorts, categorizes, and ranks ESI relevant to the case. Research shows that TAR is dramatically faster than human-only review and is usually more thorough and accurate.

Still, until 2012, when U.S. courts officially sanctioned the use of TAR in e-discovery (and even after), lawyers were slow to embrace it. As with Gen AI, the elevated risk and weak defensibility stalled adoption. Introducing TAR into e-discovery also demanded heightened attention and significant resource allocation towards validation techniques. Proposed validity frameworks for TAR algorithms challenged many legal teams. Lawyers are not mathematicians or engineers; adding statistical analysis to their already complicated e-discovery workflows likely felt difficult when most lawyers were comfortable with search.

TAR is now widely considered "black letter" law and is commonplace in legal work. Gen AI is positioned to get there, too, and may get there faster despite the current skepticism. TAR was built on traditional AI, a black box with limited interpretability.
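To make that contrast concrete, the sketch below shows a deliberately simplified TAR-style workflow: a classifier is trained on a small attorney-coded sample and then ranks the remaining documents by predicted relevance. The documents, labels, and library choices (scikit-learn's TfidfVectorizer and LogisticRegression) are illustrative assumptions, not a description of any particular e-discovery product; production TAR tools add iterative training rounds and statistical validation on top of this core idea.

```python
# Minimal TAR-style sketch: train a classifier on a small reviewed sample,
# then rank the unreviewed documents by predicted relevance.
# All documents and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Reviewed seed set: attorney-coded examples (1 = responsive, 0 = not responsive)
seed_docs = [
    "merger agreement draft between the parties",
    "pricing terms for the proposed acquisition",
    "office holiday party planning notes",
    "cafeteria menu for the week",
]
seed_labels = [1, 1, 0, 0]

# Unreviewed collection to be prioritized for human review
unreviewed_docs = [
    "due diligence checklist for the acquisition target",
    "parking garage access instructions",
]

# Turn text into features and fit a simple relevance model
vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression().fit(X_seed, seed_labels)

# Score and rank the unreviewed documents; higher score = review first
scores = model.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
for doc, score in sorted(zip(unreviewed_docs, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```

Note what the output is: a bare relevance score for each document, with no human-readable account of why one ranked above another. That is the black-box quality of traditional, TAR-era AI.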
Gen AI, on the other hand, offers more transparent and understandable decision-making processes. For example, during document review, lawyers can ask the system why it flagged a document as responsive. The best interpretable neural networks will