Peer to Peer: ILTA's Quarterly Magazine | Spring 2024

Issue link: https://epubs.iltanet.org/i/1521210


models or otherwise be visible to others. To alleviate confidentiality and legal privilege concerns, ask solution providers about the security of their hosting environments and how they protect data privacy. The top providers will ensure that their hosted Gen AI solutions exist within a "walled garden" from which data cannot escape, and that sensitive data is never stored beyond the current user's session or used to train core models.

Another step would be to remove data-sharing clauses from vendor contracts, a provision some companies include in their terms and conditions on the grounds that shared data improves their service. The problem is that Gen AI tools are data intensive and constantly require new data to improve accuracy and relevance. During our recent discussion, Cohen quipped that LLMs are "like ravenous plants from The Little Shop of Horrors – they always need to be fed. For a law firm, that's a question of what are we feeding the tools with and how do we ensure that the tool isn't being fed with confidential and proprietary data and then reused by other clients and potentially by other downstream users inside and outside of the firm? That's the most difficult part."

Building trust also means developing and using Gen AI tools ethically and responsibly, with clear expectations for firm employees backed by responsible AI principles, guidelines, and policies that govern how the tech can be used. Internal policies may be influenced by clients who have AI guidelines about what is permissible and what they expect firms to disclose regarding AI use. Many in-house legal teams request disclosure of a law firm's Gen AI use and policies when issuing RFPs. Some clients expect law firms to use Gen AI to be more productive and lower the cost of legal services.
Still, they also want assurance that their information is protected and that law firms disclose whenever they use Gen AI to produce legal work products. Creating an AI playbook that incorporates client requirements and internal processes can guide how Gen AI should be tested and experimented with before it is applied to legal workflows and activities. Policies should include sections on maintaining client confidentiality, ethics, communication (internal and external), errors (check everything in-firm and verify against independent legal sources), and terms of use. This last one is crucial: if you're using a Gen AI tool, review its terms for red flags, such as an obligation to indemnify the provider or the platform, or unclear ownership of the content (OpenAI has already been sued for copyright and trademark infringement).

It has been cautioned that being overly restrictive will cause lawyers to skirt the rules and use off-the-shelf (i.e., not professional-grade) Gen AI tools. The answer to this could be as simple as applying the ethics rules you already have in place for any technology in the workplace.
