I cannot tell you how many times I see the word “complaint” in their keyword list. The guessing involved reminds me of the child’s game of Go Fish.
The goal of the pro-machine approach of Professors Cormack and Grossman, and others, is to minimize human judgments, no matter how skilled, and thereby reduce as much as possible the influence of human error and outright fraud.
“After 2011, vendors started slapping TAR labels on everything,” Grossman recalls. “Some of it resembled what we tested and some of it didn’t. Either way, our article was often invoked.”
Many of the software companies that made the multimillion-dollar investments needed to take the next step and build document-review platforms with active machine-learning algorithms have since been bought out by big tech and repurposed out of the e-discovery market.
Overwhelmingly, the primary measurement of the efficacy of the winnowing process in eDiscovery is Recall.
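To make the metric concrete, recall is simply the fraction of all truly relevant documents that the winnowing process actually retrieved. Here is a minimal sketch in Python; the function name and the example counts are illustrative, not drawn from any particular review platform:

```python
def recall(relevant_retrieved: int, relevant_total: int) -> float:
    """Fraction of all relevant documents that the review found.

    relevant_retrieved: relevant documents the process surfaced
    relevant_total: all relevant documents in the collection
    """
    if relevant_total == 0:
        raise ValueError("relevant_total must be positive")
    return relevant_retrieved / relevant_total

# Hypothetical review: 10,000 relevant documents exist in the
# collection, and the process surfaced 8,000 of them.
print(recall(8_000, 10_000))  # 0.8, i.e. 80% recall
```

A review achieving 80% recall has, by this measure, found four out of every five responsive documents, which is why parties negotiating TAR protocols so often argue over the target recall level.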
Contrary to what some vendors will tell you (typically the ones without bona fide predictive coding features), predictive coding 3.0, and now 4.0, methods are not rocket science.
Effective predictive coding requires good technology, good methods for applying that technology, and good judgment to guide the technology.
Logikcull raises $10M to let lawyers analyze documents at the speed of a thousand interns.
The following letter is from Bill Speros, Attorney Consulting in Evidence Management with Speros & Associates. He is responding to commentary from Information Retrieval expert Gordon Cormack, published following the ACEDS’ webinar How Automation is Revolutionizing E-Discovery.
The following letter is from Dr. Bill Dimm, the founder and CEO of Hot Neuron LLC. He developed the algorithms for conceptual clustering, near-duplicate detection, and predictive coding used in his company's Clustify software.