The New Axiom of Computer-assisted Review

Offering legal teams a forward-thinking solution to the problem of big data, computer-assisted review is the conversation du jour in the e-discovery marketplace. There's good reason for that. According to a recent RAND Corporation study, document review is the single most expensive step in the discovery process. Logically, reducing the time wasted on irrelevant documents can make a big difference in any case, particularly on cost. When it comes to document review, then, legal teams see huge value in the ability to quickly separate the wheat from the chaff with battle-tested computer-assisted review software.

When the computer-assisted review conversation first started, however, there was skepticism, and many in the industry questioned the efficacy of the approach. But times are already changing: four major court cases have involved decisions on the use of computer-assisted review. In Da Silva Moore, Judge Peck ordered the use of computer-assisted review despite the plaintiffs' objection. More recently, a Delaware Chancery Court judge ordered its use even though neither party had proposed the workflow. Granted, there's still some skepticism to be overcome, and some education on the topic to be shared, but, as Judge Peck put it, we should no longer "fear the black box."

Those examples tell us that the conversation has shifted, and we've seen additional support in real-world stories from the review teams we work with. These teams have proven that they understand that computer-assisted review is more than technology: it's driven by a talented group of people working hand in hand with an effective methodology. As a result, they have been able to leverage the technology to great effect. Additionally, the number of active cases running analytics in our software has increased by more than 150% in the last year, further demonstrating the approach's acceptance.

As adoption grows, the industry is learning how a combination of factors makes a computer-assisted review successful. Experts, engine, and validation: that is the axiom that defines computer-assisted review. These three items are the core components of an effective computer-assisted review process, and they come in no particular order; all three affect the end results, and each is critically important.

The Experts

When using computer-assisted review, what teams are finding, and what we consistently encourage our users to consider early in a case, is that the reviewers involved are the ones actually producing the results the computer suggests. As much as it may seem like a black box, the computer's results do not magically appear.

The reason for that is simple: someone needs to train the computer to do its job, and there's no one better for that task than the reviewers with expertise and insight into a case. As with human reviewers, an out-of-the-box algorithm won't automatically know what to look for when it's faced with thousands or millions of documents. It, too, needs to learn from an expert teacher how to do its job well. The benefit of the computer, of course, is that it can apply this learning much faster while still giving reviewers the opportunity to validate the results with statistics, which we'll discuss in more detail later.
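
To make that training step concrete, here is a minimal sketch of the general technique: a supervised text classifier fit on a small, expert-coded seed set. The documents, labels, and scikit-learn components are illustrative assumptions of our own, not the engine of any particular product:

```python
# A minimal sketch of the general technique: a text classifier is fit on
# a small, expert-coded seed set, then scores unreviewed documents.
# Documents, labels, and model choice here are all illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical seed set coded by a subject matter expert.
seed_texts = [
    "Q3 merger terms attached for counsel review",
    "Team lunch moved to Friday at noon",
    "Draft purchase agreement, privileged and confidential",
    "Reminder: submit your timesheets by 5 p.m.",
]
seed_labels = [1, 0, 1, 0]  # 1 = responsive, 0 = not responsive

# Learn the expert's coding decisions from the seed set.
vectorizer = TfidfVectorizer(stop_words="english")
model = LogisticRegression().fit(vectorizer.fit_transform(seed_texts), seed_labels)

# The trained model can now score the rest of the collection in bulk.
new_doc = ["Revised merger valuation model for the board"]
score = model.predict_proba(vectorizer.transform(new_doc))[0, 1]
print(f"Estimated probability of responsiveness: {score:.2f}")
```

In practice the seed set is far larger and the scores feed iterative training rounds, but the division of labor is the same: the expert supplies the judgment, and the model merely extends it.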

The trick behind that quick learning is the computer's absolute acceptance of what it's told. During a training round, an incorrect, or even borderline, decision submitted to the computer may train the algorithm inaccurately. Likewise, a document that is relevant to the case but does not contain enough text to train a text-based system (a calendar invitation, for example) could confuse the computer. It is therefore extremely important that reviewers not only understand the issues, but also understand how to properly train the system. Solid results from a smaller manual effort can then be replicated across a document universe, slashing review time and cost while maintaining validated, defensible, and effective results.
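
As a hedged illustration of that safeguard, a workflow might screen coded documents for sufficient extracted text before they enter a training round. The cutoff below is a made-up placeholder, not a product setting:

```python
# An illustrative safeguard: keep a coded document out of the training
# rounds when it lacks enough extracted text for a text-based engine to
# learn from. The 200-character cutoff is a hypothetical placeholder.
MIN_TRAINING_CHARS = 200

def eligible_for_training(extracted_text: str) -> bool:
    """Return True if the document has enough text to be a useful example."""
    return len(extracted_text.strip()) >= MIN_TRAINING_CHARS

coded_docs = [
    ("Calendar invite: deal sync, Tuesday 10 a.m.", "responsive"),
    ("Attached is the due-diligence memorandum covering... " * 10, "responsive"),
]
training_set = [d for d in coded_docs if eligible_for_training(d[0])]
print(f"{len(training_set)} of {len(coded_docs)} coded documents kept for training")
```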

In one recent case, Am Law 100 firm McDermott Will & Emery used computer-assisted review for a second request. Arising from a large merger, the document universe included more than 1.6 million records after de-duplication and initial date filtering. Based on McDermott's calculations, a linear review of those documents would have cost $2.4 million, an unpalatable price tag. The team had deep expertise in the case, however, so they were able to use computer-assisted review to train the system with good examples and ultimately amplify their efforts. The team removed more than 1.1 million of the original documents from traditional review and, in the end, saved their client more than $2 million.

In another example, Am Law 200 firm Dickstein Shapiro received a production set of 1,000,000 pages of documents from opposing counsel. Rather than the projected staffing of 10 contract attorneys and 2,800 hours of review time, the team used computer-assisted review and reviewed the data set with three of their own attorneys in 250 hours. Based on those projections, the firm anticipated savings of more than $120,000 on review, while at the same time relying on their own expert team rather than contract attorneys. Training the system with true subject matter experts made the process quick and easy and the results strong, without sacrificing the cost savings.

As these cases clearly indicate, human expertise is the foundational element of a successful computer-assisted review. A human team establishes the rules and issues of any review; in an assisted process, the computer simply propagates that judgment across a document universe in a faster, more cost-effective, and more consistent way.

The Engine

The engine under the hood of a strong computer-assisted review workflow relies on categorization technology. In short, this engine is programmed for two tasks: first, to understand the logic it's given by its operators; and second, to propagate that logic against a larger population.

In the context of review, that means the engine is taught to recognize the original coding decisions of an expert, and then amplify that expert's efforts across the document universe. The lessons it takes from that expert's instruction are the basis of its work behind the scenes, because the algorithm relies on that logic to make its decisions on all other documents.
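
To illustrate those two tasks concretely, here is a rough, assumption-laden sketch using a similarity-based categorizer. The example documents and the cosine-similarity approach are our own stand-ins; real engines differ, and many use richer semantic indexing:

```python
# A rough sketch of both tasks using a similarity-based categorizer:
# the engine (1) indexes the expert's coded examples, then (2) assigns
# each unreviewed document the decision of its most similar example.
# Real engines differ; this is an invented, simplified stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

examples = ["Escrow terms for the asset purchase", "Cafeteria menu for next week"]
example_codes = ["responsive", "not responsive"]  # the expert's decisions

vectorizer = TfidfVectorizer()
example_vectors = vectorizer.fit_transform(examples)  # task 1: learn the logic

def categorize(document: str):
    """Task 2: propagate the expert's logic to one unreviewed document."""
    sims = cosine_similarity(vectorizer.transform([document]), example_vectors)[0]
    best = sims.argmax()
    return example_codes[best], float(sims[best])

print(categorize("Updated escrow schedule attached"))  # inherits "responsive"
```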

While users are now gaining a better understanding of the human element in computer-assisted review, the machine-learning algorithms themselves are backed by years of research and use. Similar technology is applied daily in other industries, from product suggestions in online marketplaces to law enforcement officials homing in on criminal records during a search.
