Peer to Peer: ILTA's Quarterly Magazine
Issue link: https://epubs.iltanet.org/i/1472128
Peer to Peer: ILTA's Quarterly Magazine | Summer 2022

But getting to beautiful, interactive data visualizations isn't where the work ends. It is where the human's work begins. Empowered by these novel AI-enhanced deliverables, our discovery professionals can become more capable than ever before. But for the AI work to provide value, you need experts who can follow the path of inquiry that led to your results, interpret those results, and determine whether they meet your requirements. A human is essential to deciding what needs to happen next, so that the insights delivered by any AI solution are incorporated into the bigger picture of a matter. This brings us to our next, and possibly most important, reality of AI.

3. Humans need to stay in the picture.

Perhaps the biggest challenge with modern conceptions of AI is that they can downplay, if not ignore altogether, the fact that human judgment is still critical to training and tuning the models themselves. Leveraging the power of AI requires more than simply training people to use the software. Many considerations go into using AI in eDiscovery: you need to plan how you are going to train it, how you are going to measure your results, and how you are going to know what you are ultimately delivering with your AI product. Humans are required for all of that.

Humans in the loop are not only needed to train AI, but to control for bias in the results. It is time to start discussing whether AI in legal discovery is aimed at replicating human judgments, which are imperfect, or at replacing humans and achieving perfection. Many people think of AI in terms of the latter, which is simply impossible. We need to be thoughtful about resetting our expectations, and we must communicate clearly about what reasonably updated standards might look like. Technology will never be perfect.
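What "measuring your results" looks like in practice can be made concrete with a simple sampling check. The sketch below is illustrative Python only; the `elusion_estimate` function and the toy data are our own assumptions, not the interface of any particular eDiscovery product. It estimates how often an AI classifier missed a relevant document by asking a human reviewer to label a random sample drawn from the pile the AI set aside:

```python
import math
import random

def elusion_estimate(discard_pile, label_fn, sample_size=400, z=1.96, seed=7):
    """Estimate the rate of relevant documents hiding in the discard pile.

    discard_pile: documents the AI classified as non-relevant.
    label_fn: a human reviewer's judgment (returns True if relevant).
    Returns the point estimate and a normal-approximation 95% interval.
    """
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    sample = rng.sample(discard_pile, min(sample_size, len(discard_pile)))
    hits = sum(1 for doc in sample if label_fn(doc))
    p = hits / len(sample)
    margin = z * math.sqrt(p * (1 - p) / len(sample))
    return p, (max(0.0, p - margin), min(1.0, p + margin))

# Toy collection: about 2% of the discarded documents are actually relevant.
docs = [{"id": i, "relevant": i % 50 == 0} for i in range(10_000)]
rate, (lo, hi) = elusion_estimate(docs, lambda d: d["relevant"])
print(f"estimated miss rate: {rate:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

If the estimated miss rate (and the upper end of its interval) is low enough for the matter, the human reviewer has documented, defensible evidence that the AI's decisions were checked in a statistically sound way rather than simply trusted.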
Even if it replicates human judgment, we need to remember and recognize that human judgment can be incorrect. Therefore, we need a way to check it, and humans need to be involved in that process. Practicing law has always involved a certain degree of judgment calls, particularly in areas like responsiveness and privilege, and lawyers need to be able to defend those calls. Simply saying you relied on AI might not be enough. You must be able to show that you double-checked your AI's decisions in a statistically sound, good faith way – this is as close to perfection as we are ever going to get, and closer to perfect than humans can get without technology.

In the case of technology-assisted review, tools can assess millions of documents and narrow a population down to the thousands that are most likely to be relevant, but you will still need actual human judgment to understand how they are relevant to your case and strategy. This requires time and money and is a crucial factor in both the practice of law – ensuring favorable outcomes – as

From the Tech Solutions CCT