Digital White Papers

Professional Services: Building Relationships

A publication of the International Legal Technology Association

Issue link: https://epubs.iltanet.org/i/535467


WHEN MACHINE INTELLIGENCE JOINS YOUR PROFESSIONAL SERVICES TEAM

ILTA WHITE PAPER: JUNE 2015 | WWW.ILTANET.ORG

BEST PRACTICES FOR WORKING WITH TAR

While TAR 2.0 is exceptionally good at some tasks, it is not great at others. For legal professionals, part of the challenge is knowing which tasks to transfer to this technological team member while still maintaining control over the overall process. Pre-TAR workflows assume human decision-making at every stage, so these workflows require thoughtful adjustments to make the best use of newer technologies.

As a starting point, it is helpful to separate tasks into those that humans do best (such as reading for comprehension or making relevance judgments) and those that machines do best (such as recognizing patterns across large volumes of data). TAR can assist legal professionals in decision-making about data, but legal professionals must still make the final decisions.

Implementing best practices can help you achieve optimal output from TAR and make optimal use of human judgments. Some examples of best practices include:

• Intelligent reuse of expensive human judgments
• Storing only one measure per data field (e.g., whether a document is responsive, as opposed to whether it is subject to production); otherwise the machine cannot tell whether you intended "responsive" to mean "responsive" or "subject to production"
• Coding documents on a document level, not a family level

TAR USE BY TASKS

Let's look at the kinds of tasks we face in e-discovery.
Broadly speaking, document review tasks fall into three categories:

• Classification: The most common form of document review, in which documents are sorted into buckets such as responsive or non-responsive
• Protection: A higher level of review whose purpose is to protect certain types of information from disclosure (the most common example is privilege review)
• Knowledge-Generation: Learning what stories the documents tell and discovering information useful to our case

Although TAR is helpful for all three, each has different metrics for success, and those metrics have important implications for designing your workflows and integrating TAR.

Recall and precision are two crucial metrics for measuring the effectiveness and defensibility of TAR processes. Recall is a measure of completeness: the percentage of relevant documents that were retrieved. Precision is a measure of purity: the percentage of retrieved documents that are relevant. The higher each percentage, the better you have done. If you achieve 100 percent recall, you have retrieved all the relevant documents; if all the documents you retrieve are relevant, you have achieved 100 percent precision. But recall and precision are in tension: typically, a technique that increases one will decrease the other. The three categories of document review tasks have different recall and precision targets, so choose and tune your workflows for each to maximize effectiveness and minimize cost and risk.

CLASSIFICATION TASKS

When using TAR for document production, classify documents so you can do different things with subpopulations, such as review, discard or produce. The goal of using TAR is to get better results, not perfect results. You want to achieve reasonably high recall and precision, but at levels of cost and effort that are proportionate to the case. A goal of 80 percent recall (a common TAR target) could be reasonable when reviewing for responsive documents.
Precision must also be reasonable, but requesting parties are usually more interested in making sure they get as many responsive documents as possible, so recall usually takes precedence.

PROTECTION TASKS

When your task is to protect certain types of confidential information, you need to achieve 100 percent recall: nothing can fall through the cracks. That is difficult to achieve in practice. To approximate perfection, you need to adjust the workflow to use every tool in your toolkit (not just TAR, but also keyword searching and human review) to identify the documents that must be protected.
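The recall and precision arithmetic described above can be sketched in a few lines of Python. This is an illustrative calculation only, assuming hypothetical document IDs and counts rather than data from any actual matter:

```python
# Illustrative recall/precision arithmetic for a TAR review.
# All document IDs and counts below are hypothetical.

def recall(retrieved: set, relevant: set) -> float:
    """Completeness: fraction of relevant documents that were retrieved."""
    return len(retrieved & relevant) / len(relevant)

def precision(retrieved: set, relevant: set) -> float:
    """Purity: fraction of retrieved documents that are relevant."""
    return len(retrieved & relevant) / len(retrieved)

# Suppose 1,000 documents are truly responsive, and the TAR process
# retrieves 1,100 documents, of which 850 are responsive.
relevant = {f"DOC-{i:04d}" for i in range(1000)}
retrieved = {f"DOC-{i:04d}" for i in range(850)} | {f"X-{i:04d}" for i in range(250)}

r = recall(retrieved, relevant)     # 850 / 1000 = 0.85
p = precision(retrieved, relevant)  # 850 / 1100 ≈ 0.77

print(f"recall={r:.2f}, precision={p:.2f}, meets 80% recall target: {r >= 0.80}")
```

Note how the trade-off shows up in the numbers: casting a wider net (retrieving more documents) raises recall toward the 80 percent target but pulls in more non-responsive documents, lowering precision.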
