Predictive Coding Technologies and Protocols Survey – Spring 2021 Results
Editor’s Note: These are the results of the sixth semi-annual Predictive Coding Technologies and Protocols Survey conducted by ComplexDiscovery. As of today, the six surveys have provided detailed feedback from 384 legal, business, and technology professionals on the use of specific machine learning technologies in predictive coding. The surveys have also provided insight into the use of those machine learning technologies as part of example technology-assisted review protocols.
This iteration of the survey had 65 responders and continued to focus on predictive coding technologies, protocols, workflows, and uses across the eDiscovery ecosystem.
The Predictive Coding Technologies and Protocols Spring 2021 Survey
The Predictive Coding Technologies and Protocols Survey is a non-scientific survey designed to help provide a general understanding of the use of predictive coding technologies, protocols, and workflows by data discovery and legal discovery professionals within the eDiscovery ecosystem. The spring 2021 survey was open from February 7, 2021, through February 18, 2021, with individuals invited to participate directly by ComplexDiscovery.
Designed to provide a general understanding of predictive coding technologies and protocols, the survey had two primary educational objectives:
- To provide a consolidated listing of potential predictive coding technology, protocol, and workflow definitions. While not all-inclusive or comprehensive, the listing was vetted with selected industry predictive coding experts for completeness and accuracy, and thus appears suitable for use in educational efforts.
- To ask eDiscovery ecosystem professionals about their preferences and patterns of use regarding predictive coding platforms, technologies, protocols, workflows, and areas of usage.
The survey offered responders an opportunity to provide predictive coding background information, including their primary predictive coding platform, and posed five specific questions to responders. Those questions were:
- How often do you use predictive coding as part of your eDiscovery workflow? (Prevalence)
- Which predictive coding technologies are utilized by your eDiscovery platform? (Technologies)
- Which technology-assisted review protocols are utilized in your delivery of predictive coding? (Protocols)
- What is the primary technology-assisted review workflow utilized in your delivery of predictive coding? (Workflow)
- What are the areas where you use technology-assisted review technologies, protocols, and workflows? (Areas of Usage)
Closed on February 18, 2021, the spring survey had 65 responders.
Key Results and Observations
Predictive Coding Technology and Protocol Survey Responder Overview (Chart 1)
- 50.77% of responders were from software or services provider organizations.
- 27.69% of responders were from law firms.
- The remaining 21.54% of responders were either part of a consultancy (12.31%), the government (4.62%), a corporation (1.54%), or another type of entity (3.08%).
Primary Predictive Coding Platform (Chart 2)
- There were 24 different platforms reported as a primary predictive coding platform by responders.
- Relativity was reported as a primary predictive coding platform by 36.92% of survey responders.
- The top two platforms were reported as a primary predictive coding platform by 53.85% of survey responders.
- 1.54% of responders reported they had no primary platform for predictive coding.
Prevalence of Predictive Coding Usage in eDiscovery (Chart 3)
- More than one-third of survey responders (38.46%) reported using predictive coding in their eDiscovery workflow more than 50% of the time.
- 80% of responders reported using predictive coding in their eDiscovery workflow at least 5% of the time.
- Only 20% of responders reported using predictive coding in their eDiscovery workflow less than 5% of the time.
Predictive Coding Technology Employment (Chart 4)
- Active Learning was reported as the most used predictive coding technology, with 92.31% of responders using it in their predictive coding efforts.
- 44.62% of responders reported using only one predictive coding technology in their predictive coding efforts.
- 53.85% of responders reported using more than one predictive coding technology in their predictive coding efforts.
- 1.54% of responders did not report using any specific predictive coding technology.
Technology-Assisted Review Protocol Employment (Chart 5)
- All listed technology-assisted review protocols for predictive coding were reported as being used by at least one survey responder.
- Continuous Active Learning® (CAL®) was reported as the most used predictive coding protocol with 84.62% of responders using it in their predictive coding efforts.
- 55.38% of responders reported using only one predictive coding protocol in their predictive coding efforts.
- 43.08% of responders reported using more than one predictive coding protocol in their predictive coding efforts.
- 1.54% of responders reported not using any predictive coding protocol.
Technology-Assisted Review Workflow Employment (Chart 6)
- 72.31% of responders reported using Technology-Assisted Review (TAR) 2.0 as a primary workflow in the delivery of predictive coding.
- 6.15% of responders reported using TAR 1.0 and 13.85% of responders reported using TAR 3.0 as a primary workflow in the delivery of predictive coding.
- 7.69% of responders did not report using TAR 1.0, TAR 2.0, or TAR 3.0 as a primary workflow in the delivery of predictive coding.
Technology-Assisted Review Uses (Chart 7)
- 87.69% of responders reported using technology-assisted review in more than one area of data and legal discovery.
- 92.31% of responders reported using technology-assisted review for the identification of relevant documents.
- 13.85% of responders reported using technology-assisted review for information governance and data disposition.
Survey Charts
Chart 1: Survey Responder Overview (Background)
Chart 2: Name of Primary Predictive Coding Platform (Background)
Chart 3: How often do you use predictive coding as part of your eDiscovery workflow? (Question #1)
Chart 4: Which predictive coding technologies are utilized by your eDiscovery platform? (Question #2)
Chart 5: Which technology-assisted review protocols are utilized in your delivery of predictive coding? (Question #3)
Chart 6: What is the primary technology-assisted review workflow utilized in your delivery of predictive coding? (Question #4)
Chart 7: What are the areas where you use technology-assisted review technologies, protocols, and workflows? (Question #5)
Predictive Coding Technologies and Protocols (Survey Backgrounder)
As defined in The Grossman-Cormack Glossary of Technology-Assisted Review (1), Predictive Coding is an industry-specific term generally used to describe a technology-assisted review process involving the use of a machine learning algorithm to distinguish relevant from non-relevant documents, based on a subject matter expert’s coding of a training set of documents. This definition of predictive coding provides a baseline description that identifies one particular function that a general set of commonly accepted machine learning algorithms may use in a technology-assisted review (TAR).
With the growing awareness and use of predictive coding in the legal arena today, it is increasingly important for electronic discovery professionals to have a general understanding of the technologies that may be implemented in electronic discovery platforms to facilitate predictive coding of electronically stored information. This general understanding is essential because each potential algorithmic approach has advantages and disadvantages that may impact the efficiency and efficacy of predictive coding.
To help in developing this general understanding of predictive coding technologies and to provide an opportunity for electronic discovery providers to share the technologies and protocols they use in and with their platforms to accomplish predictive coding, the following working lists of predictive coding technologies and TAR protocols are provided for your use. Working lists on predictive coding workflows and uses are also included for your consideration as they help define how the predictive coding technologies and TAR protocols are implemented and used.
A Working List of Predictive Coding Technologies (1,2,3,4)
Aggregated from electronic discovery experts based on professional publications and personal conversations, provided below is a working list (not all-inclusive) of identified machine learning technologies that have been applied, or have the potential to be applied, to the discipline of eDiscovery to facilitate predictive coding. This working list is designed to provide a reference point for identified predictive coding technologies and may, over time, include additions, adjustments, and amendments based on feedback from experts and organizations applying and implementing these mainstream technologies in their specific eDiscovery platforms.
Listed in Alphabetical Order
- Active Learning: A process, typically iterative, whereby an algorithm is used to select documents that should be reviewed for training based on a strategy to help the classification algorithm learn efficiently.
- Decision Tree: A step-by-step method of distinguishing between relevant and non-relevant documents, depending on what combination of words (or other features) they contain. A Decision Tree to identify documents pertaining to financial derivatives might first determine whether or not a document contained the word “swap.” If it did, the Decision Tree might then determine whether or not the document contained “credit,” and so on. A Decision Tree may be created either through knowledge engineering or machine learning.
- k-Nearest Neighbor Classifier (k-NN): A classification algorithm that analyzes the k example documents that are most similar (nearest) to the document being classified in order to determine the best classification for the document. If k is too small (e.g., k=1), it may be extremely difficult to achieve high recall.
- Latent Semantic Analysis (LSA): A mathematical representation of documents that treats highly correlated words (i.e., words that tend to occur in the same documents) as being, in a sense, equivalent or interchangeable. This equivalency or interchangeability can allow algorithms to identify documents as being conceptually similar even when they aren’t using the same words (e.g., because synonyms may be highly correlated), though it also discards some potentially useful information and can lead to undesirable results caused by spurious correlations.
- Logistic Regression: A state-of-the-art supervised learning algorithm for machine learning that estimates the probability that a document is relevant based on the features that it contains. In contrast to the Naïve Bayes algorithm, Logistic Regression identifies features that discriminate between relevant and non-relevant documents. (A minimal classifier of this kind is sketched after this list.)
- Naïve Bayesian Classifier: A system that examines the probability that each word in a new document came from the word distribution derived from trained responsive documents or trained non-responsive documents. The system is naïve in the sense that it assumes that all words are independent of one another.
- Neural Network: An Artificial Neural Network (ANN) is a computational model based on the structure and functions of biological neural networks, loosely analogous to the way the human brain processes information. It includes a large number of connected processing units that work together to process information.
- Probabilistic Latent Semantic Analysis (PLSA): Similar in spirit to LSA, but uses a probabilistic model that is expected to achieve better results.
- Random Forests: An ensemble learning method for classification, regression, and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees. Random decision forests correct for decision trees’ habit of overfitting to their training set.
- Relevance Feedback: An active learning process in which the documents with the highest likelihood of relevance are coded by a human, and added to the training set.
- Support Vector Machine: A mathematical approach that seeks to find a line that separates responsive from non-responsive documents so that, ideally, all of the responsive documents are on one side of the line and all of the non-responsive ones are on the other side.
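To make the classifier definitions above concrete, here is a minimal sketch, in Python with scikit-learn, of a Logistic Regression relevance model of the kind described in the list. The sample documents, labels, and parameter choices are illustrative assumptions, not any platform’s actual implementation.

```python
# A minimal, hypothetical sketch of a supervised relevance classifier
# (Logistic Regression over TF-IDF text features). The documents and
# labels below are illustrative placeholders, not survey data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# A tiny training set coded by a subject matter expert (1 = relevant).
train_docs = [
    "credit default swap exposure report",
    "quarterly derivatives trading summary",
    "office holiday party planning notes",
    "cafeteria menu for next week",
]
train_labels = [1, 1, 0, 0]

# Unreviewed documents to be scored for likely relevance.
unreviewed_docs = [
    "swap counterparty risk analysis",
    "parking garage maintenance schedule",
]

# Convert text to TF-IDF features and fit the classifier.
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_docs)
model = LogisticRegression()
model.fit(X_train, train_labels)

# Estimate the probability that each unreviewed document is relevant.
X_unreviewed = vectorizer.transform(unreviewed_docs)
for doc, p in zip(unreviewed_docs, model.predict_proba(X_unreviewed)[:, 1]):
    print(f"{p:.2f}  {doc}")
```

In practice, eDiscovery platforms operate on far larger collections and richer feature sets, but the basic shape of the computation (fit on expert-coded examples, then score the unreviewed population) is the same.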
General TAR Protocols (5,6,7,8,9,10)
Additionally, these technologies are generally employed as part of a TAR protocol, which determines how the technologies are used. Examples of TAR protocols include:
Listed in Alphabetical Order
- Continuous Active Learning® (CAL®): In CAL®, the TAR method developed, used, and advocated by Maura R. Grossman and Gordon V. Cormack, after the initial training set the learner repeatedly selects the next-most-likely-to-be-relevant documents (that have not yet been considered) for review, coding, and training, and continues to do so until it can no longer find any more relevant documents. There is generally no second review because, by the time the learner stops learning, all documents deemed relevant by the learner have already been identified and manually reviewed. (A simplified version of this loop is sketched after this list.)
- Hybrid Multimodal Method: An approach developed by the e-Discovery Team (Ralph Losey) that includes all types of search methods, with primary reliance placed on predictive coding and the use of high-ranked documents for continuous active training.
- Scalable Continuous Active Learning (S-CAL): The essential difference between S-CAL and CAL® is that for S-CAL, only a finite sample of documents from each successive batch is selected for labeling, and the process continues until the collection, or a large random sample of the collection, is exhausted. Together, the finite samples form a stratified sample of the document population, from which a statistical estimate of ρ (the prevalence of relevant documents) may be derived.
- Simple Active Learning (SAL): In SAL methods, after the initial training set, the learner selects the documents to be reviewed and coded by the teacher, and used as training examples, and continues to select examples until it is sufficiently trained. Typically, the documents the learner chooses are those about which the learner is least certain, and therefore from which it will learn the most. Once sufficiently trained, the learner is then used to label every document in the collection. As with SPL, the documents labeled as relevant are generally re-reviewed manually.
- Simple Passive Learning (SPL): In simple passive learning (“SPL”) methods, the teacher (i.e., human operator) selects the documents to be used as training examples; the learner is trained using these examples, and once sufficiently trained, is used to label every document in the collection as relevant or non-relevant. Generally, the documents labeled as relevant by the learner are re-reviewed manually. This manual review represents a small fraction of the collection, and hence a small fraction of the time and cost of an exhaustive manual review.
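As a rough illustration of how such a protocol differs from one-shot training, the sketch below implements a highly simplified CAL®-style loop: train, route the top-ranked unreviewed documents to a reviewer, retrain on the new coding, and stop when a batch yields no relevant documents. The function names, the batch size, and the stopping rule are assumptions for illustration only; published CAL® implementations use more principled stopping criteria.

```python
# A highly simplified, hypothetical sketch of a CAL®-style review loop.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def cal_review(all_docs, seed_idx, seed_labels, human_code, batch_size=10):
    """all_docs: list of document texts; seed_idx/seed_labels: the initial
    training set (must contain both relevant and non-relevant examples);
    human_code(i) -> 0/1 simulates reviewer coding of document i."""
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(all_docs)
    labels = dict(zip(seed_idx, seed_labels))
    while True:
        # Retrain on everything coded so far (review and training coincide).
        model = LogisticRegression().fit(
            X[list(labels)], [labels[i] for i in labels])
        unreviewed = [i for i in range(len(all_docs)) if i not in labels]
        if not unreviewed:
            break
        # Select the next-most-likely-to-be-relevant documents for review.
        scores = model.predict_proba(X[unreviewed])[:, 1]
        batch = [unreviewed[j] for j in np.argsort(-scores)[:batch_size]]
        new_relevant = 0
        for i in batch:
            labels[i] = human_code(i)  # reviewer codes the document
            new_relevant += labels[i]
        if new_relevant == 0:  # illustrative stopping rule: batch found nothing
            break
    return [i for i, y in labels.items() if y == 1]
```

The essential protocol property is visible in the loop: review and training are the same activity, so the model’s ranking improves continuously as documents are coded.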
TAR Workflows (11)
TAR workflows represent the practical application of predictive coding technologies and protocols to define approaches to completing predictive coding tasks. Three examples of TAR workflows include:
- TAR 1.0 involves a training phase followed by a review phase, with a control set used to determine the optimal point at which to switch from training to review. The system no longer learns once the training phase is completed. The control set is a random set of documents that have been reviewed and marked as relevant or non-relevant. The control set documents are not used to train the system; they are used to assess the system’s predictions so that training can be terminated when the benefits of additional training no longer outweigh its cost. Training can be with randomly selected documents, known as Simple Passive Learning (SPL), or it can involve documents chosen by the system to optimize learning efficiency, known as Simple Active Learning (SAL). (A control-set stopping sketch follows this list.)
- TAR 2.0 uses an approach called Continuous Active Learning® (CAL®), meaning that there is no separation between training and review; the system continues to learn throughout. While many approaches may be used to select documents for review, a significant component of CAL® is many iterations of predicting which documents are most likely to be relevant, reviewing them, and updating the predictions. Unlike TAR 1.0, TAR 2.0 tends to be very efficient even when prevalence is low. Since there is no separation between training and review, TAR 2.0 does not require a control set. Generating a control set can involve reviewing a large number of non-relevant documents (especially when prevalence is low), so avoiding control sets is desirable.
- TAR 3.0 requires a high-quality conceptual clustering algorithm that forms narrowly focused clusters of fixed size in concept space. It applies the TAR 2.0 methodology to just the cluster centers, which ensures that a diverse set of potentially relevant documents is reviewed. Once no more relevant cluster centers can be found, the reviewed cluster centers are used as training documents to make predictions for the full document population. There is no need for a control set; the system is well-trained when no additional relevant cluster centers can be found. Analysis of the reviewed cluster centers provides an estimate of the prevalence and of the number of non-relevant documents that would be produced if documents were produced based purely on the predictions without human review. The user can decide to produce documents (not identified as potentially privileged) without review, similar to SAL from TAR 1.0 (but without a control set), or can decide to review documents that carry too much risk of being non-relevant (a review that can also serve as additional training for the system, i.e., CAL®). The key point is that the user has the information needed to decide how to proceed after completing review of the likely-relevant cluster centers, and nothing done before that point is invalidated by the decision (compare this to starting with TAR 1.0, reviewing a control set, finding that the predictions aren’t good enough to produce documents without review, and then switching to TAR 2.0, which renders the control set virtually useless).
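For contrast with the CAL® sketch above, this hypothetical fragment shows the TAR 1.0 control-set idea: score a held-out, never-trained-on control set after each training round and stop when effectiveness stops improving. The recall metric and the minimum-gain threshold are illustrative assumptions, not a prescribed stopping criterion.

```python
# A hypothetical sketch of TAR 1.0 control-set stopping: after each training
# round, evaluate on a reviewed-but-never-trained-on control set, and stop
# when the benefit of more training no longer outweighs its cost.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

def train_with_control_set(train_rounds, control_docs, control_labels,
                           vectorizer, min_gain=0.01):
    """train_rounds: iterable of cumulative (docs, labels) training sets;
    vectorizer: an already-fitted text vectorizer. Returns the model from
    the last round where control-set recall still improved meaningfully."""
    X_control = vectorizer.transform(control_docs)
    best_model, best_recall = None, -1.0
    for docs, labels in train_rounds:
        model = LogisticRegression().fit(vectorizer.transform(docs), labels)
        recall = recall_score(control_labels, model.predict(X_control))
        if recall - best_recall < min_gain:  # benefit no longer outweighs cost
            break
        best_model, best_recall = model, recall
    return best_model
```

Compared with the CAL® loop, the expense here is the control set itself, which must be reviewed up front even though it is never used for training, a point the TAR 2.0 description above makes about low-prevalence collections.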
TAR Uses (12)
TAR technologies, protocols, and workflows can be used effectively to help eDiscovery professionals accomplish many data discovery and legal discovery tasks. Nine commonly considered examples of TAR use include:
- Identification of Relevant Documents
- Early Case Assessment/Investigation
- Prioritization for Review
- Categorization (by issues, for confidentiality or privacy)
- Privilege Review
- Quality Control and Quality Assurance
- Review of Incoming Productions
- Disposition/Trial Preparation
- Information Governance and Data Disposition
Survey Information (13,14,15,16,17,18,19,20)
References
(1) Grossman, M. and Cormack, G. (2013). The Grossman-Cormack Glossary of Technology-Assisted Review. [ebook] Federal Courts Law Review. Available at: http://www.fclr.org/fclr/articles/html/2010/grossman.pdf [Accessed 31 Aug. 2018].
(2) Dimm, B. (2018). Expertise on Predictive Coding. [email].
(3) Roitblat, H. (2013). Introduction to Predictive Coding. [ebook] OrcaTec. Available at: https://theolp.wildapricot.org/Resources/Documents/Introduction%20to%20Predictive%20Coding%20-%20Herb%20Roitblat.pdf [Accessed 31 Aug. 2018].
(4) Tredennick, J. and Pickens, J. (2017). Deep Learning in E-Discovery: Moving Past the Hype. [online] Catalystsecure.com. Available at: https://catalystsecure.com/blog/2017/07/deep-learning-in-e-discovery-moving-past-the-hype/ [Accessed 31 Aug. 2018].
(5) Grossman, M. and Cormack, G. (2017). Technology-Assisted Review in Electronic Discovery. [ebook] Available at: https://judicialstudies.duke.edu/wp-content/uploads/2017/07/Panel-1_TECHNOLOGY-ASSISTED-REVIEW-IN-ELECTRONIC-DISCOVERY.pdf [Accessed 31 Aug. 2018].
(6) Grossman, M. and Cormack, G. (2016). Continuous Active Learning for TAR. [ebook] Practical Law. Available at: https://pdfs.semanticscholar.org/ed81/f3e1d35d459c95c7ef60b1ba0b3a202e4400.pdf [Accessed 31 Aug. 2018].
(7) Grossman, M. and Cormack, G. (2016). Scalability of Continuous Active Learning for Reliable High-Recall Text Classification. [ebook] Available at: https://plg.uwaterloo.ca/~gvcormac/scal/cormackgrossman16a.pdf [Accessed 3 Sep. 2018].
(8) Losey, R., Sullivan, J. and Reichenberger, T. (2015). e-Discovery Team at TREC 2015 Total Recall Track. [ebook] Available at: https://trec.nist.gov/pubs/trec24/papers/eDiscoveryTeam-TR.pdf [Accessed 1 Sep. 2018].
(9) Justia Trademarks (2020). CONTINUOUS ACTIVE LEARNING Trademark of Maura Grossman and Gordon V. Cormack – Registration Number 5876987 – Serial Number 86634255. [online] Trademarks.Justia.com. Available at: https://trademarks.justia.com/866/34/continuous-active-86634255.html [Accessed 12 Feb. 2020].
(10) Justia Trademarks (2020). CAL Trademark of Maura Grossman and Gordon V. Cormack – Registration Number 5876988 – Serial Number 86634265. [online] Trademarks.Justia.com. Available at: https://trademarks.justia.com/866/34/cal-86634265.html [Accessed 12 Feb. 2020].
(11) Dimm, B. (2016). TAR 3.0 Performance. [online] Clustify Blog – eDiscovery, Document Clustering, Predictive Coding, Information Retrieval, and Software Development. Available at: https://blog.cluster-text.com/2016/01/28/tar-3-0-performance/ [Accessed 18 Feb. 2019].
(12) Electronic Discovery Reference Model (EDRM) (2019). Technology Assisted Review (TAR) Guidelines. [online] Available at: https://www.edrm.net/wp-content/uploads/2019/02/TAR-Guidelines-Final.pdf [Accessed 18 Feb. 2019].
(13) Dimm, B. (2018). TAR, Proportionality, and Bad Algorithms (1-NN). [online] Clustify Blog – eDiscovery, Document Clustering, Predictive Coding, Information Retrieval, and Software Development. Available at: https://blog.cluster-text.com/2018/08/13/tar-proportionality-and-bad-algorithms-1-nn/ [Accessed 31 Aug. 2018].
(14) Robinson, R. (2013). Running Results: Predictive Coding One-Question Provider Implementation Survey. [online] ComplexDiscovery: eDiscovery Information. Available at: https://complexdiscovery.com/2013/03/05/running-results-predictive-coding-one-question-provider-implementation-survey/ [Accessed 31 Aug. 2018].
(15) Robinson, R. (2018). A Running List: Top 100+ eDiscovery Providers. [online] ComplexDiscovery: eDiscovery Information. Available at: https://complexdiscovery.com/2017/01/19/28252/ [Accessed 31 Aug. 2018].
(16) Robinson, R. (2018) Relatively Speaking: Predictive Coding Technologies and Protocols Survey Results [online] ComplexDiscovery: eDiscovery Information. Available at: https://complexdiscovery.com/relatively-speaking-predictive-coding-technologies-and-protocols-survey-results/ [Accessed 18 Feb. 2019].
(17) Robinson, R. (2019). Actively Learning? Predictive Coding Technologies and Protocols Survey Results. [online] ComplexDiscovery: eDiscovery Information. Available at: https://complexdiscovery.com/actively-learning-predictive-coding-technologies-and-protocols-survey-spring-2019-results/ [Accessed 22 Aug. 2019].
(18) Robinson, R. (2019) From Platforms to Workflows: Predictive Coding Technologies and Protocols Survey – Fall 2019 Results [online] ComplexDiscovery: eDiscovery Information. Available at: https://complexdiscovery.com/from-platforms-to-workflows-predictive-coding-technologies-and-protocols-survey-fall-2019-results/ [Accessed 12 Feb. 2020].
(19) Robinson, R. (2020). Is It All Relative? Predictive Coding Technologies and Protocols Survey – Spring Results. [online] ComplexDiscovery: eDiscovery Information. Available at: https://complexdiscovery.com/is-it-all-relative-predictive-coding-technologies-and-protocols-survey-spring-2020-results/ [Accessed 7 Aug. 2020].
(20) Robinson, R. (2020). Casting a Wider Net? Predictive Coding Technologies and Protocols Survey – Fall 2020. [online] ComplexDiscovery: eDiscovery Information. Available at: https://complexdiscovery.com/casting-a-wider-net-predictive-coding-technologies-and-protocols-survey-fall-2020-results/ [Accessed 5 Feb. 2021].
Snapshot: Predictive Coding Technologies and Protocols Survey Responders – Six Surveys
Source: ComplexDiscovery