Peer to Peer: ILTA's Quarterly Magazine
Issue link: https://epubs.iltanet.org/i/1502513
Another critical component of defining your "how" is figuring out how you will regulate and control GAI use to protect your business. Once your use cases and workflows are in place, you can put up clear guardrails that define the limits of use and the methods for accountability.

Until AI systems can offer 100% accuracy (and they never will), every organization's "how" should include a method for validating outputs. Current GAI models have no built-in verification mechanisms. To validate results and reduce risk, you'll need a knowledgeable person who understands both the larger legal context and the client's goals.

Similarly, until accuracy and data privacy improve in AI, you'll need to put a process in place to audit your automation. Auditing ensures that the tool is working the way you want it to and can help you detect privacy threats. It can also help you keep your policies up to date as AI inevitably evolves. You should put clear compliance and control policies around the tool and create accountability procedures that prevent employees from misusing it outside the defined workflows and use cases. Consider offering learning and development around GAI to ensure employees use it correctly and understand the dangers of misuse.

The last step in defining your "how" is to set up a pilot process with a controlled group of users. Trial periods provide a space to test and refine workflows before opening them up to the larger company, to assess monitoring and auditing procedures, and to identify risks that have not yet been considered. A successful rollout and adoption plan with an intentional pilot period will ensure the company gets the most out of GAI, and out of the overall investment, over the long haul.

The Future of GAI in the Legal Industry

Until recently, most LLMs pulled from static databases with outdated information.
Recently, however, ChatGPT integrated with the "entire Internet," accessing third-party sources and databases on the web. This move will likely solve the static-data issue but worsen concerns about bias, misinformation, and privacy. It also illustrates why GAI, while still in its research phase, is so difficult to regulate, and why industry standards have yet to materialize. It's a fluid situation, and until GAI moves out of the research phase and into monetization, it will be up to companies to adjust and protect themselves against these experimental changes by setting up custom guardrails through the process outlined above.

Interestingly, some legal tech companies seeking to capitalize on interest in AI are already working to address the regulatory and standards gaps most likely to affect the legal industry. For example, some companies are amending current AI models to address concerns around accuracy, bias, and nuance.