P2P

PeerToPeer_Spring_2026

Peer to Peer: ILTA's Quarterly Magazine

Issue link: https://epubs.iltanet.org/i/1544492

Based on the Thomson Reuters 2025 Generative AI in Professional Services Report and ILTA's 2025 Technology Survey, the legal industry is here for it; legal continues to embrace and expand its use of AI. However, there is a catch: comfort with and adoption of AI are rising faster than judgment maturity. I am a big advocate and user of AI, but I also recognize that time-tested pearls of wisdom apply here: "Just because you can do something doesn't mean you should do that thing." Or perhaps you prefer "with great power comes great responsibility."

THE STAKES: WHAT IF THE MACHINE IS WRONG?

The legal profession and industry are built on core principles of logic and rules. The intentional use of language and reasoning is central to the practice of law, and practitioners carry an important responsibility in serving clients. All attorneys know these tenets well, as they are woven into the fundamentals of legal training. IRAC, which stands for Issue, Rule, Application, Conclusion, is the foundational legal analysis and writing outline taught in law schools, and it demands attention to detail and clear congruence between facts and applicable rules. As service professionals, legal practitioners are also duty-bound to follow the rules of professional conduct and shape their practice in accordance with ethical standards.

Adherence to these duties and principles is imperative for legal. They reflect the exercise of professional responsibility and the very essence of client service. They demand the autonomous exercise of professional judgment. They cannot and should not be delegated to AI.

There is ample evidence that the legal profession and industry are aware of the dangers of not exercising proper professional judgment over the use of AI:

• The oft-cited Mata v. Avianca, Inc., in which attorneys were sanctioned for submitting hallucinated cases, emphasized that the duty of verification cannot be delegated. The core issue was not that AI was used, but that professional judgment was not exercised and AI was not properly supervised.

• Professional liability insurance is evolving, and related guidance cautions against lawyers' reliance on unverified AI outputs, as outlined in "From innovation to exposure: artificial intelligence risks for legal professionals." Failure to supervise AI could expose firms to malpractice claims.

• Beyond Mata, courts are increasingly scrutinizing AI-assisted filings and requiring varying certifications, as highlighted in "Which Federal Courts Have AI Judicial Standing Orders?" The message is not anti-technology; it is pro-judgment and pro-verification.

• The American Bar Association's (ABA) Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 in July 2024, emphasizing, among other things, that lawyers are required to develop a reasonable understanding of the capabilities and limitations of AI use as part
