Peer to Peer: ILTA's Quarterly Magazine
Issue link: https://epubs.iltanet.org/i/1544492
EARLY BUILDING BLOCKS FOR AN AI JUDGMENT FRAMEWORK

While there is variation in how organizations are supporting the development and maturation of judgment skills, some common threads have been emerging.

1. Teaching AI Literacy and Limitations through Scenarios. Beyond helping legal professionals understand the basics of AI and the pitfalls to watch for, such as hallucinations, bias, and outdated training data, programming presents users with realistic scenarios where AI outputs are plausible but may be wrong. This requires learners to critically assess and verify outputs, rather than passively accept them. Harvard Law School even offers an "AI and the Law" executive education program where scenario-based roleplays are used to navigate AI-fueled dilemmas.

2. Ethical Reasoning Baked In. As previously mentioned, attorneys have ethical obligations tied to their use of AI, particularly in relation to competence, confidentiality, communication, and supervision. Programs have seen success embedding these topics into scenario-based training materials. The key is not to treat ethics as a separate compliance module, but instead as an organizing framework for the judgment calls attorneys must make in their AI use.

3. Risk-Tiered AI Use Cases. Not all uses of AI carry the same risk. Differentiating use cases can heighten risk awareness while avoiding arbitrarily dampening the prospective value of AI for lower-risk tasks. Borrowing from ediscovery and technology-assisted review defensibility frameworks, organizations can generally categorize AI use cases and scale human oversight according to risk. For example:

Low: Ideation
Medium: Drafting
High: Filings or Regulatory Submissions

4. AI Reliance Checklists/Practice Resources. Workflows that incorporate the use of, or reliance on, AI should also integrate protocols to facilitate baseline human-in-the-loop verification of proper usage.
This step is analogous to citation checking, due diligence checklists, or document review quality control measures. AI output review is a skill to be developed, not a formality. Modeling from guidance emerging from judicial orders and bar opinions, these checklists might include queries such as:

• Have all citations been independently verified?
• Are assumptions factually supported?
• Does this output affect client rights?
• Would disclosure be required or prudent?

BRENDAN MILLER
Brendan W. Miller, J.D., is a curious attorney, technologist, strategist, innovator, and change agent with nearly two decades in the legal industry. To Brendan, legal innovation is about continually being relevant for clients by making the business and practice of law easier, better, and more valuable.

