P2P

Summer22

Peer to Peer: ILTA's Quarterly Magazine

Issue link: https://epubs.iltanet.org/i/1472128


In the mid- to late 2000s, when predictive coding was at the cutting edge of legal technology, the conversation focused on replicating human judgments. Early promises and expectations involved using predictive coding to replicate – with both speed and accuracy – document-level responsive/not-responsive decisions.

More recently, in the context of legal discovery, predictive coding has been joined by active learning and other machine learning and natural language processing techniques under a broad umbrella of artificial intelligence or "AI solutions." At the same time, the conversation has begun to shift from replicating human judgments to replacing human review. And from there it's a slippery slope to conversations about replacing human reviewers.

The trouble is, once we're at the point of talking about AI replacing human reviewers, expectations tend to skyrocket. But AI is not magic. And because of a lack of transparency about how AI solutions work, users who expect humans to be eliminated entirely from the process are set up to be disappointed – or worse, to think that they've failed – when it turns out that AI solutions do need help from humans to deliver on their promise. Those same users can also begin holding their "technology-assisted" solutions to standards and expectations that we don't typically apply to human reviewers or human review.

In reality, if your AI solution uses people, it means you're doing the job right. People play a crucial role in the effective implementation of AI-based solutions. As our industry evolves to incorporate advances in artificial intelligence, we must be thoughtful about where and how humans must remain involved. It's time to set some realistic expectations about AI in eDiscovery.

1. There isn't an easy button.
Too often, the impression is created that AI is some version of an "easy button" for search and information retrieval needs. The recent push for pretrained models and AI products only adds to the notion that AI is an instant, turnkey solution. Pretrained models can indeed provide a great jump-start for a matter, and they certainly make elements of discovery easier and more efficient. But they aren't magic. Some pretrained models still require upfront work – sometimes called "training" or "tuning" – before they are ready to go. And just as with more traditional TAR solutions, workflows built around pretrained models require a step that involves running them against your data, taking samples, measuring results, and adjusting when needed. The most effective workflows allow time to deploy compensatory strategies in specific areas where pretrained models underperform. Yes, AI involves automation, but automatic is not auto-magic.

2. The AI itself isn't the whole solution; it's a part of the solution.

AI has been a buzzword for years now. Several solutions that make great use of machine learning and natural language processing technology are gaining traction in the discovery market. Early technology-assisted review solutions helped get eyes on important documents quickly. That was a huge win for discovery professionals. More modern AI-powered solutions add insights into important data and communication trends early in the process. AI now offers tremendous support for early case assessment (ECA) in what feels like an instantly accessible, interactive form. This has been a huge game changer, too. Advances in data visualization, paired with the widespread accessibility of custom dashboards, mean that discovery practitioners are presented with a wealth of information earlier in the process than ever before.
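The "take samples, measure results, adjust" step above can be sketched in a few lines. This is a minimal illustration only – the function name and the plain precision/recall estimate are assumptions for the example, not any vendor's actual workflow; in practice the human labels come from reviewers coding the sampled documents:

```python
import random

def estimate_precision_recall(docs, sample_size, seed=0):
    """Estimate a model's precision and recall from a human-coded random sample.

    `docs` is a list of (model_says_responsive, human_says_responsive) pairs,
    where the second value is the reviewer's judgment on that document.
    Returns (precision, recall) computed over the sampled pairs.
    """
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    sample = rng.sample(docs, min(sample_size, len(docs)))
    tp = sum(1 for model, human in sample if model and human)       # agreed responsive
    fp = sum(1 for model, human in sample if model and not human)   # model over-called
    fn = sum(1 for model, human in sample if not model and human)   # model missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```

If the measured recall falls short of the target for a given issue or custodian, that is the signal to adjust – retrain, tune, or deploy a compensatory strategy in the underperforming area – and then sample and measure again.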
