Peer to Peer: ILTA's Quarterly Magazine, Spring 2024

Issue link: https://epubs.iltanet.org/i/1521210

Navigating "Hallucinations" and Data Security

As noted above, AI is only as good as its training. AI and machine learning depend on the quality, objectivity, and size of the training data used to teach them. The quality of the output is influenced by the quality of the input, or, in simpler terms, think of the adage "garbage in, garbage out": incorrect or incomplete data fed into the system can result in inaccurate information being returned.

Though AI models are constantly improving, they can still make mistakes and produce incorrect answers via "hallucinations." A hallucination occurs when an AI model generates incorrect information but presents it as fact. Hallucinations can arise for various reasons, including training the AI on outdated, inaccurate, or limited datasets, or poorly worded user prompts. It is therefore essential to confirm with your AI provider that they have plans to mitigate hallucinations, including training on high-quality data, simplified prompting techniques, and the ability to provide feedback to the AI-powered tool.

AI can learn, adapt, and make informed decisions by collecting, processing, and analyzing vast amounts of "good" data. From a legal perspective, training the AI on legal data and industry-specific research and materials, and using legal-specific AI tools (those developed and trained specifically for the legal sector), can help ensure that the data and machine-learning algorithms used are tailored to the industry's needs.

Security must always be a chief concern because the evidentiary materials being reviewed are sensitive. AI models are trained on a wide range of data, and it's important that your provider maintains a robust security framework and data governance protocol to ensure that confidential or sensitive information is not exposed.

A Competitive Edge

With its ability to analyze vast amounts of data quickly and accurately, AI can help save time and money while ensuring the accurate analysis, interpretation, and presentation of digital evidence. Though legitimate concerns exist and must be addressed, legal professionals and firms that fail to educate themselves and responsibly implement technologies like AI/Gen AI do so at their own peril. By investing in secure technologies and working with trusted partners, attorneys and legal professionals can leverage AI's growing role in digital evidence collection and analysis, unlocking new possibilities, generating unique insights, and gaining a competitive edge over those who don't.

Kaci Hardin is the Legal Quality and Delivery Manager at Verbit, a global leader in AI- and human-generated transcription solutions. She is a certified court reporter with a rich background in management across various transcription fields, with expertise in legal technology and operations. At Verbit, she designs QA processes and ensures the delivery of accurate transcriptions of all legal matters, from depositions to digital evidence. She is also a member of the Advocacy Committee of the American Association of Electronic Reporters and Transcribers (AAERT).