Peer to Peer: ILTA's Quarterly Magazine
Issue link: https://epubs.iltanet.org/i/1227987
Spring 2020

• Asked the 1st year associates to compute the time savings associated with using the AI Software for review.

Cost/Benefit Analysis of Training AI Software:

At the end of the eight-week review project, I gathered data from the 1st year associates who participated. There is a learning curve on a project like this, and each associate said that the first couple of Guaranties took more time to process because these were new concepts and classifications they had not worked with before. It took each associate approximately one hour to review each Guaranty without the AI Software. Once the 1st year associates started using the AI Software, each reported a 50% time savings relative to the Guaranties reviewed without it. They also said the AI Software served as a "gut check" to confirm the presence or absence of certain Bad Boy Acts.

It took me 50 hours to train the AI Software on the various Bad Boy Acts. Going forward, this project will require our 1st year associates to review 60-100 Guaranties per year. The AI Software has cut their review time from 60-100 hours to 30-50 hours. Consequently, in the first year of deploying the AI Software on this project, the time saved by our timekeepers will likely equal the time I spent training it. I consider this AI Software training project a major success story.

Challenges:

There were some additional challenges in training the AI Software beyond those previously discussed:

• The precision and recall numbers I settled on for this project are much lower than those an AI Software vendor would approve before deploying a particular clause it had trained. Consequently, the AI Software missed certain Bad Boy Acts or misclassified others beyond what you would see with a vendor-trained provision.
This is not surprising given the limited nature of the data set used to train the AI Software, which contained even fewer examples of the more obscure Bad Boy Acts;

• For each Bad Boy Act, the market study looked at (i) whether the Guaranty contained the Bad Boy Act and (ii) if so, whether the Bad Boy Act triggered (x) full recourse liability for the Guarantor or (y) a claim for lender damages. It was very easy to train the AI Software to recognize each Bad Boy Act; however, I never found a way to efficiently get the AI Software to classify the language into (ii)(x) or (ii)(y).

Conclusion:

When I started this project, I truly did not know how much time savings our 1st year associates would report from using the AI Software to facilitate the market study. I was genuinely surprised at the 50% time savings reported by the 1st year associates who worked on this project. Given the time it took me to train the AI Software relative to the time savings our 1st year associates reported, I found this project to be a great use case for training AI Software.

Hunter Jackson is in charge of Transactional Knowledge Management at Sidley Austin LLP and is a national leader in legal knowledge management and technology, recognized for creating innovative solutions that enable lawyers to better capture, organize, and share their collective knowledge, identify expertise, and price and manage matters.