Peer to Peer: ILTA's Quarterly Magazine
Issue link: https://epubs.iltanet.org/i/1521210
AI budget. However, these larger firms are not deploying Gen AI tools without due diligence. They first rigorously review the tools, explore appropriate use cases, and focus on small, quick experiments to determine how the technology can be applied to their workflows and practice areas. Many law firms are taking advantage of pilot programs offered by solution providers to test the tools, collect feedback, and collaborate on use cases. Repeated testing, failing, testing again, and checking and re-checking the results is the only sure way for attorneys to get comfortable with the technology, build proficiency, and ultimately build trust in the tools.

Tod Cohen, a partner at Steptoe, said it best when he likened learning about Gen AI to working with interns: "I look at it mostly through the lens of working with a smart intern, and the way you build trust with an intern is over time—there's simply no other way to do it," he says. "You can rely on all the assurances and assumptions you want, but just as when you are working with an intern, until you trust them you are always going to need to check and re-check their work, knowing that they are limited."

Adopting Gen AI tools also requires learning new skill sets and changing behaviors. Interacting with Gen AI is a constant process: the more skilled and comfortable you become at developing effective prompts, the more accurately Gen AI will deliver the information or answers you need. To help with this, more than a third of Am Law 200 firms (38%) plan to hire technologists and AI specialists this year; these roles might perform tasks such as data labeling. Other firms partner with third parties to provide the necessary training or expertise.

Building Trust Through Accuracy, Transparency, Privacy and Ethical Policies

While education, testing, and practical experience are critical steps to building trust, trust is kept by delivering highly accurate Gen AI outputs, being clear and transparent about source origination, and ensuring data security and privacy. As we've seen, there is a genuine risk in using Gen AI tools that are not "professional grade" – i.e., not tailored for legal market use. To bolster confidence in the accuracy of AI-generated legal content, it is critically important to start with a large corpus of legal case law and restrict the model to using only trusted legal authorities to generate responses. This minimizes the chance of AI hallucinations and quickly builds trust among attorneys, although the onus is still on them to check the integrity of the AI responses. To support that check, the Gen AI output needs to provide links to the source material, with appropriate source citations and references visible (a minimal sketch of this approach appears at the end of this section).

One of the biggest concerns with Gen AI among attorneys is that their queries, prompts, uploaded documents, and responses might be used to train core models.
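To make the grounding-and-citation approach described above concrete, here is a minimal Python sketch. Everything in it is assumed for illustration: the Authority record, the toy TRUSTED_CORPUS, the keyword-overlap retrieve heuristic, and the generate callable standing in for whichever approved model a firm uses. It is not any vendor's API; a production system would index vetted case law and use proper retrieval rather than keyword matching.

```python
"""Sketch of citation-grounded drafting over a trusted legal corpus.

All names here (Authority, TRUSTED_CORPUS, retrieve, generate) are
illustrative assumptions, not a real product's interface.
"""

from dataclasses import dataclass


@dataclass
class Authority:
    citation: str   # a citation the attorney can verify
    url: str        # link back to the source document
    text: str       # excerpt from the trusted legal source


# Stand-in corpus; a real deployment would index vetted case law.
TRUSTED_CORPUS = [
    Authority(
        "Example v. Sample, 123 F.3d 456 (9th Cir. 1997)",
        "https://example.com/example-v-sample",
        "Discusses the relevant standard ... (excerpt)",
    ),
]


def retrieve(question: str, corpus: list[Authority], k: int = 3) -> list[Authority]:
    """Rank authorities by naive keyword overlap; real systems use better search."""
    terms = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda a: len(terms & set(a.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(question: str, sources: list[Authority]) -> str:
    """Restrict the model to the retrieved excerpts and require citations."""
    excerpts = "\n\n".join(
        f"[{i + 1}] {a.citation}\n{a.text}" for i, a in enumerate(sources)
    )
    return (
        "Answer using ONLY the excerpts below. Cite each point as [n]. "
        "If the excerpts do not answer the question, say so.\n\n"
        f"Excerpts:\n{excerpts}\n\nQuestion: {question}"
    )


def answer_with_citations(question: str, generate) -> tuple[str, list[Authority]]:
    """Return the model's draft plus the authorities it was restricted to."""
    sources = retrieve(question, TRUSTED_CORPUS)
    draft = generate(build_prompt(question, sources))  # any approved LLM call
    return draft, sources
```

Because answer_with_citations returns the authorities alongside the draft, the interface can surface each citation as a link, letting the attorney check every proposition against the underlying source rather than taking the output on faith.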