Peer to Peer: ILTA's Quarterly Magazine

Issue link: https://epubs.iltanet.org/i/1544492


"Prompt fear" refers to anxiety about GenAI rooted in the belief that users must phrase inputs perfectly to get reliable results. For an example steeped in stereotype, consider a law firm partner blaming an associate for misinterpreting instructions or drafting a brief incorrectly. That deflection of responsibility disappears when end users are the ones prompting the tools.

The ethical duty of competence can also act as a double-edged sword. American Bar Association Model Rule 1.1, Comment 8, adopted in some form by at least 40 states, recognizes the effect technology has on the practice of law and the importance of understanding its benefits and risks. Some attorneys may interpret that to mean that if they do not thoroughly understand a new technology, they should avoid it altogether. But that interpretation is a long way from best practice.

The more productive approach is education. Organizations should train users to understand potential AI pitfalls -- accuracy, data privacy, bias, intellectual property rights, and the like -- and provide acceptable use policies that keep the AI train on track. Unfortunately, the Clio 2025 Legal Trends Report indicates that 53% of legal professionals say their firm either has no AI policy or is unaware of one.

A basic grasp of how AI tools operate allows organizations to establish the guardrails required to unlock immense capability without running afoul of competence and confidentiality concerns, even if they do not fully understand the "magic" inside the AI black box. For example, do not upload sensitive client data into free, publicly available GenAI tools that permit data sharing. Instead, use enterprise-grade tools with contractually based confidentiality guarantees. And regardless of the tool, always check AI output.

WHERE IS THE DATA?

The effectiveness of AI depends on the quality, structure, accessibility, and governance of underlying data.
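The acceptable-use guardrails discussed above can be made concrete in tooling. The sketch below screens a prompt for sensitive client data before it ever reaches a public GenAI tool; the patterns (Social Security numbers, an assumed in-house matter-number format, email addresses) are hypothetical placeholders, and a real policy engine would cover far more categories.

```python
import re

# Hypothetical patterns for sensitive client data. A real acceptable-use
# policy would cover many more categories (client names, PHI, privileged
# material, etc.) -- this is only an illustrative sketch.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "matter_number": re.compile(r"\bMAT-\d{6}\b"),  # assumed in-house format
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return labels of sensitive patterns found in the prompt.

    An empty list means the prompt passed this (very basic) screen and
    may proceed; any hits should block submission to a public tool.
    """
    return [label for label, pat in SENSITIVE_PATTERNS.items()
            if pat.search(prompt)]

flags = screen_prompt(
    "Summarize the deposition for MAT-123456, client SSN 123-45-6789."
)
```

A screen like this belongs in front of the tool, not behind it: blocking a prompt before submission is the only point at which confidentiality can still be preserved.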
Successful AI adoption requires successful data management. Not surprisingly, "good data in, good data out" is a familiar concept in legal technology. One challenge is that the value of proper data management often becomes apparent only after the work is completed. As a result, many organizations are now exploring Legal Data Intelligence ("LDI") as a framework for addressing complex data challenges across use cases while reducing data silos.

Several LDI principles are critical to AI adoption. Data organization can make or break a company in terms of efficiency, security, and cost. If you do not know what data you have, you cannot use it to your benefit, much less keep it secure. Overarching data management principles include:

1) Connecting data from identified sources
2) Standardizing data to enable analysis
3) Keeping the end goal of understanding data in mind

Given the immense diversity of data in the digital world, data management is inherently complex. Data mapping and taxonomy must be approached systematically. That includes data stewards developing a hierarchical classification system that structures information to facilitate efficient searching and retrieval of knowledge.

There are a few practical lessons to keep in mind. When mapping data, start with high-level categories such as email, document repositories, and AI content. Not every file can be cataloged, because data sources and requirements change constantly. Also consider explicit versus implicit knowledge: AI can only analyze written records, not institutional knowledge stored in someone's head, so organizations should plan how to capture and centralize that knowledge. Finally, consider what types of information require heightened protection to maintain data privacy and regulatory compliance.
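The hierarchical classification system described above can be sketched as a simple category tree. The structure below is a minimal illustration, assuming the high-level categories named in the text (email, document repositories, AI content) and a per-category flag for data needing heightened protection; category names and the `sensitive` flag are assumptions for illustration, not a standard LDI schema.

```python
from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    """One category in a hierarchical data-classification taxonomy."""
    name: str
    sensitive: bool = False  # whether items here need heightened protection
    children: dict = field(default_factory=dict)

    def add(self, path: list, sensitive: bool = False) -> None:
        """Create nested categories along `path`, e.g. ['email', 'client']."""
        node = self
        for part in path:
            node = node.children.setdefault(part, TaxonomyNode(part))
        node.sensitive = sensitive

    def classify(self, path: list) -> "TaxonomyNode":
        """Walk as far down `path` as the taxonomy allows; return that node.

        Unknown trailing segments fall back to the deepest known category,
        so new data sources still land in a sensible high-level bucket.
        """
        node = self
        for part in path:
            if part not in node.children:
                break
            node = node.children[part]
        return node

# High-level categories from the text, with one illustrative subcategory each.
root = TaxonomyNode("root")
root.add(["email", "client-correspondence"], sensitive=True)
root.add(["document-repositories", "briefs"])
root.add(["ai-content", "drafts"])

# A file path maps to the deepest matching category.
hit = root.classify(["email", "client-correspondence", "2026-01"])
```

Starting with broad buckets and letting unknown items fall back to a parent category reflects the practical lesson above: not every file can be cataloged, so the taxonomy must degrade gracefully as sources change.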
