TAR vs. Keyword Search Challenge, Round 6 (Instant Feedback)

Even with a large pool of participants, ample time, and the ability to hone search queries based on instant feedback, nobody was able to generate a better production than Technology-Assisted Review (TAR) when the same amount of review effort was expended. It seems fair to say that keyword search often requires twice as much document review to achieve a production that is as good as what you would get with TAR.


An extract from an article by eDiscovery expert Bill Dimm

This was by far the most significant iteration of the ongoing exercise in which I challenge an audience to produce a keyword search that works better than technology-assisted review (also known as predictive coding or supervised machine learning). There were far more participants than in previous rounds, and a structural change in the challenge allowed participants to get immediate feedback on the performance of their queries so they could iteratively improve them. A total of 1,924 queries were submitted by 42 participants (an average of 45.8 queries per person), and higher recall levels were achieved than in any prior version of the challenge, but the audience still couldn't beat TAR.

In previous versions of the experiment, the audience submitted search queries on paper or through a web form using their phones, and I evaluated a few of them live on stage to see whether the audience was able to achieve higher recall than TAR. Because the number of live evaluations was so small, the audience had very little opportunity to use the results to improve their queries. In the latest iteration, participants each had their own computer in the lab at the 2019 Ipro Tech Show, and the web form evaluated each query and immediately gave the user feedback on the recall achieved. Furthermore, it displayed the relevance and important keywords for each of the top 100 documents matching the query, so participants could quickly discover useful new search terms and tweak their queries. This gave participants a significant advantage over a normal e-discovery scenario, since they could try an unlimited number of queries without incurring any cost to make relevance determinations on the retrieved documents in order to decide which keywords would improve the queries. The number of participants was significantly larger than in any of the previous iterations, and they had a full 20 minutes to try as many queries as they wanted. It was the best chance an audience has ever had of beating TAR. They failed.
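To make the feedback loop concrete: recall is the fraction of all relevant documents that a query retrieves, and it is what the web form reported back after each submission. The sketch below shows how such a score could be computed for a simple OR-of-keywords query; the documents, relevance labels, and matching logic are hypothetical illustrations, not the actual evaluation code used in the challenge.

```python
# Hypothetical documents with known relevance labels (text, is_relevant).
docs = {
    1: ("contract breach payment dispute", True),
    2: ("quarterly earnings report", False),
    3: ("payment schedule amendment", True),
    4: ("team lunch reminder", False),
    5: ("breach notification letter", True),
}

def evaluate_query(keywords, docs):
    """Return (recall, matched doc ids) for an OR-of-keywords query."""
    matched = {doc_id for doc_id, (text, _) in docs.items()
               if any(kw in text.split() for kw in keywords)}
    relevant = {doc_id for doc_id, (_, rel) in docs.items() if rel}
    # Recall = relevant documents retrieved / all relevant documents.
    recall = len(matched & relevant) / len(relevant)
    return recall, matched

# First attempt misses doc 5; adding "breach" raises recall to 100%.
recall, matched = evaluate_query(["payment"], docs)
print(f"Recall: {recall:.0%}, matched: {sorted(matched)}")
```

Iterating in this fashion, a participant could watch recall climb with each refined query, which is exactly the advantage instant feedback provided over a one-shot submission.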

Read the complete article at TAR vs. Keyword Search Challenge, Round 6 (Instant Feedback)


Source: ComplexDiscovery