Peer to Peer: ILTA's Quarterly Magazine | Spring 2024
Issue link: https://epubs.iltanet.org/i/1521210
know whether the prompt criteria and the data you are sending are used to train the model, and whether any of that information or the output is "retained" by the model or by the outside party that controls it.

Some companies, like Microsoft, enable a setting known as "abuse monitoring," which means a human can access and review the data and prompt criteria sent to the model. Determine whether the company you are evaluating is exempt from this setting; otherwise, your client's data is not private.

Data, prompts, outputs, and other information sent to aiR for Review remain within the Microsoft Azure boundary, and none of the data is stored or retained. The model is not trained on the inputs, data, or outputs; it remains static. Relativity is also exempt from abuse monitoring, so no unauthorized persons have the right to review sensitive client data.

In contrast, when sensitive client data is sent through the publicly available large language model ChatGPT, there is a risk that it could be left behind in the model and therefore become accessible to OpenAI and third parties. Publicly available models may also be targeted by hackers or subject to other cybersecurity vulnerabilities.

To mitigate data privacy and security risks, ask the right questions before sending sensitive information through a generative AI model. Companies should make it easy to understand where your data is going and how it is treated, readily supplying information about their practices and answering security and privacy questions in appropriate detail.

What principles were used to guide the product's development?

Many companies have released Gen AI principles intended to guide the development of Gen AI-powered solutions.
While not a requirement, it is helpful to understand whether the companies you are dealing with have taken a responsible approach to Gen AI development. Of course, any entity can issue a set of principles – the real question is whether, and how, they adhere to them. Relativity's current set of AI Principles functions internally as practical guidelines that govern our development of Gen AI-powered tools. Externally, the principles offer an example of what we envision as a responsible approach to developing Gen AI technology. You can read Relativity's most recent set of AI Principles here.

Ask company representatives what their principles mean in plain language; they should be able to explain them and show how they are applied in practice. If they have materials or content showing the principles in action, that can also help you evaluate whether the principles are genuinely incorporated into product development or are just marketing fluff. It's also important to know your legal entity's values regarding AI development and whether they align with the technology provider's principles.