Peer to Peer: ILTA's Quarterly Magazine, Winter 2023

Issue link: https://epubs.iltanet.org/i/1515316

The past year has seen an explosion of interest in generative artificial intelligence (Gen AI) technologies in the law. At Redgrave Data, we are fielding questions weekly from law firms, corporate counsel, and others about Gen AI. These questions often include:

• What is Gen AI anyway, and is it as big a deal as the hype claims?
• What legal tasks can it be applied to?
• What risks and challenges are associated with its use?
• How can we get started with using Gen AI?

In this piece we provide an introduction to this exciting technology, along with some guidance for navigating the hype around it.

What is Generative AI?

Artificial intelligence (AI) is an umbrella term referring to computer technologies for accomplishing tasks that are usually viewed as requiring human intelligence. As AI has progressed, the tasks that can be accomplished have become increasingly complex. However, the methods that AI systems use to accomplish these tasks are very different from the way the human brain works, and there is still a very large gap between AI capabilities and human intelligence. So don't worry about Skynet, or about being completely replaced by an AI system, just yet.

While a wide range of AI technologies has been explored since the 1950s, the most successful until recently have been discriminative AI technologies, in which AI is trained to predict values associated with data, e.g., what category a document belongs to. Discriminative AI has been deployed in a wide range of applications, from the mundane (suggesting songs based on your listening habits) to the controversial (face recognition, resume screening, and other applications where AI predictions about people lead to valid concerns about bias). A classic example of discriminative AI is classification: which bucket should each document (or image, or customer record) be put into? One way to think about discriminative AI is that it takes a complex input (e.g., the natural language in a document and that document's metadata) and provides a simple output, such as a thumbs-up or thumbs-down on a measure like "responsiveness," or tags that indicate what sort of objects are present in an image.

By contrast, generative AI produces complex rather than simple outputs (hence the name). AI has long attempted tasks with a generative aspect, such as producing summaries, answering questions, or translating among languages. But it is only in the past few years that generative AI systems have, for the first time, been able to synthesize high-quality documents, images, and other complex outputs reliably in response to user needs. These new capabilities have resulted from a combination of advances in deep neural network algorithms, the deployment of highly specialized computer hardware, and training on data sets of unprecedented size (e.g., billions of documents or images).

Gen AI systems typically take as input some form of prompt, such as a natural language description or examples of the desired output. Prompting Gen AI can be as simple as a user providing a written description of the output they want (e.g., a paragraph on a particular topic or an image portraying particular objects). To produce useful output, however, users will often find that they need to engage in some "prompt engineering": refining their request by specifying details, adding examples, or providing additional context ("context stuffing") to guide the model in the direction the user wants.
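To make the idea of prompt engineering concrete, the short sketch below contrasts a bare prompt with an engineered one that adds a role, task details, an example of the desired format, and the source text itself as extra context. It is a minimal illustration only: it assumes the OpenAI Python SDK and a placeholder model name, and the clause text and formatting instructions are hypothetical, not drawn from the article.

```python
# A minimal sketch of prompt engineering against a Gen AI API.
# The OpenAI Python SDK is used purely for illustration; the same approach
# applies to any Gen AI service. Model name and clause text are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# A bare-bones prompt: often too vague to produce useful output.
basic_prompt = "Summarize this contract clause."

# An engineered prompt: role, task details, an example of the desired format,
# and the clause supplied as additional context ("context stuffing").
clause_text = "..."  # the actual clause would be pasted or loaded here
engineered_prompt = (
    "You are assisting a litigation team.\n"
    "Summarize the contract clause below in two sentences of plain English, "
    "then list any obligations it imposes on the vendor as bullet points.\n\n"
    "Example format:\n"
    "Summary: <two sentences>\n"
    "Vendor obligations:\n- <obligation 1>\n- <obligation 2>\n\n"
    f"Clause:\n{clause_text}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name; substitute whatever model you use
    messages=[{"role": "user", "content": engineered_prompt}],
)
print(response.choices[0].message.content)
```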
