
    CARRM Model

    NOTE: For Advanced Readers only
    This section is only for those individuals who are contemplating the use of Computer Assisted Review or CAR. First-time readers can probably skip this bit and come back to it when they need to.

    In December 2012, the EDRM team published a draft model and definitions for the area of Predictive Coding, otherwise known as Technology Assisted Review (TAR) or, as they (and a lot of other people) prefer to call it, Computer Assisted Review (CAR).

    The model shown below is followed by the text from the EDRM site.


    CARRM Update

    EDRM Technology Assisted Review Reference Model #

    Technology Assisted Review (TAR) is a process of having computer software electronically classify documents based on input from expert reviewers, in an effort to expedite the organization and prioritization of the document collection. The computer classification may include broad topics pertaining to discovery responsiveness, privilege, and other designated issues. TAR (also sometimes called Computer Assisted Review, or CAR) may dramatically reduce the time and cost of reviewing ESI, by reducing the amount of human review needed on documents classified as potentially non-material.

    The framework below was developed in 2012 by an EDRM team to document the steps of the TAR process. Like the EDRM framework, the TAR framework should be a useful reference for e-discovery practitioners at corporations, law firms and elsewhere; e-discovery services and software providers; and organizations evaluating e-discovery tools. In 2017, a new EDRM team undertook a project to develop TAR standards, using this framework as the launching point.


    CARRM Process Major Steps #

    The Major Steps in the CARRM Process are described below.

    Set Goals #

    The process of deciding the desired outcomes of the Computer Assisted Review process for a specific case. Possible outcomes include:

    • Reduction and culling of not-relevant documents;
    • Prioritization of the most substantive documents; and
    • Quality control of the human reviewers.

    Set Protocol #

    The process of building the human coding rules that take into account the use of CAR technology. The CAR system must be trained on the document collection by having the human reviewers submit documents to be used as examples of a particular category, e.g. Relevant documents. A coding protocol that properly incorporates the fact pattern of the case and the training requirements of the CAR system is created at this stage. An example of a protocol determination is deciding how to treat the coding of family documents during the CAR training process.

    Educate Reviewer #

    The process of transferring the review protocol information to the human reviewers prior to the start of the CAR Review.

    Code Documents #

    The process of human reviewers applying subjective coding decisions to documents in an effort to adequately train the CAR system to “understand” the boundaries of a category, e.g. Relevancy.

    Predict Results #

    The process of the CAR system applying the information “learned” from the human reviewers and classifying a selected document corpus with pre-determined labels.
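    As a purely illustrative sketch (not part of the EDRM text), the train-then-classify loop can be shown with a toy Naive Bayes classifier. The documents, category labels and training examples below are all invented for the example; a real CAR system uses far more sophisticated models and far larger training sets.

    ```python
    from collections import Counter
    from math import log

    def train(examples):
        """Learn per-category word counts from reviewer-coded example documents."""
        counts = {}          # category -> Counter of word frequencies
        priors = Counter()   # category -> number of training documents
        for text, label in examples:
            priors[label] += 1
            counts.setdefault(label, Counter()).update(text.lower().split())
        return counts, priors

    def classify(text, counts, priors):
        """Return the category with the highest Naive Bayes log-score."""
        words = text.lower().split()
        vocab = {w for c in counts.values() for w in c}
        total = sum(priors.values())
        best_label, best_score = None, float("-inf")
        for label, c in counts.items():
            n = sum(c.values())
            score = log(priors[label] / total)
            for w in words:
                # Laplace smoothing so unseen words do not zero out the score
                score += log((c[w] + 1) / (n + len(vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

    # Hypothetical reviewer coding decisions used as training examples
    training = [
        ("merger agreement draft terms", "Relevant"),
        ("quarterly earnings forecast deal", "Relevant"),
        ("office party lunch menu", "Not Relevant"),
        ("holiday schedule lunch reminder", "Not Relevant"),
    ]
    counts, priors = train(training)
    print(classify("draft merger terms attached", counts, priors))  # prints "Relevant"
    ```

    The key point the sketch illustrates is that the system never "reads" the documents in a human sense: it generalises statistically from the examples the reviewers have coded, which is why the quality and consistency of that coding matters so much.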

    Test Results #

    The process of human reviewers using a validation process, typically statistical sampling, to create a meaningful metric of CAR performance. The metrics can take many forms; they may include estimates of defect counts in the classified population, or information retrieval metrics such as Precision, Recall and F1.
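    For example (a minimal sketch, with made-up numbers), if a random sample of classified documents is re-reviewed by humans, Precision, Recall and F1 can be computed from the agreement between the CAR labels and the human labels:

    ```python
    def precision_recall_f1(machine_labels, human_labels, positive="Relevant"):
        """Compare CAR predictions against human review of the same sample."""
        pairs = list(zip(machine_labels, human_labels))
        tp = sum(1 for m, h in pairs if m == positive and h == positive)
        fp = sum(1 for m, h in pairs if m == positive and h != positive)
        fn = sum(1 for m, h in pairs if m != positive and h == positive)
        precision = tp / (tp + fp) if tp + fp else 0.0   # of documents the CAR called Relevant, how many were?
        recall = tp / (tp + fn) if tp + fn else 0.0      # of truly Relevant documents, how many did the CAR find?
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)            # harmonic mean of the two
        return precision, recall, f1

    # Hypothetical validation sample of ten documents
    machine = ["Relevant"] * 5 + ["Not Relevant"] * 5
    human = (["Relevant"] * 4 + ["Not Relevant"] * 4 + ["Relevant"] * 2)

    p, r, f = precision_recall_f1(machine, human)
    print(f"Precision {p:.2f}, Recall {r:.2f}, F1 {f:.2f}")
    # prints "Precision 0.80, Recall 0.67, F1 0.73"
    ```

    In this invented sample the CAR found most, but not all, of the documents the humans judged Relevant; whether a Recall of 0.67 is acceptable is exactly the kind of question the Evaluate Results step has to answer.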

    Evaluate Results #

    The process of the review team deciding if the CAR system has achieved the goals anticipated by the review team.

    Achieve Goals #

    The process of ending the CAR workflow and moving to the next phase in the review lifecycle, e.g. Privilege Review.

    WARNING: Driving a CAR needs skill.

    CAR technology is very powerful, but it needs to be understood. A degree of mathematical skill is required both to use the tool fully and to explain the methodology to the other side. If you don’t have those skills, consider adding them to your legal team, possibly from the supplier of the product you are using.

    TAR / CAR is mandated in the eDisclosure pilot for any matter where the review scope is over 50,000 documents. You need to bear this in mind when selecting a supplier.

    CAR is Evolving #

    Some suppliers are now talking about TAR/CAR 2.0, implying that there has been an evolution in the approach to this process. More detail on this in the Market Survey section on Computer/Technology Assisted Review.

    Cooperation in England and Wales #

    Though not part of the EDRM model, this part of the Guide would not be complete without emphasising the focus on cooperation for the eDisclosure process within England and Wales. Practice Direction 31B requires that “the parties and their legal representatives must, before the first case management conference, discuss the use of technology in the management of Electronic Documents and the conduct of proceedings”. PD 51U takes the emphasis on cooperation a lot further in the approach embodied in the joint drafting of the Disclosure Review Document (DRD).

    Consider a meeting with the other side where both parties bring their legal representation, the client(s), the client’s IT representative(s) and the litigation support providers. Use this to agree the various processes you will undertake and how the information will be shared. There is still plenty of room for argument on all the other issues of the case, but in this area you are expected to present a united front to the Judge.

    If you can’t agree, you can apply for directions from the court, but this is a risky business: no one may like the outcome. It is far better to discuss and agree up front, and the earlier in the process the better. In some cases, engaging an experienced neutral mediator to facilitate the parties in reaching a consensus may be a more satisfactory way of resolving disputes which arise.

    BEST PRACTICE: Cooperation is not collaboration.

    Working with the other side to smooth the path of eDisclosure is essential. You can still put your arguments and fight your case, just don’t waste time and effort being obstructive. It will almost certainly add to the costs.

    Overall Summary #

    By this stage a reader should be comfortable with the definition of eDisclosure and the various stages it encompasses. They should also be familiar (at a high level) with what involvement they as a lawyer might have with each part of the process, and what tools and service providers are available to help them.

    To summarise the current position, most of the significant “players” in the litigation software world have similar capabilities, albeit they might be grouped as ECA on one hand, or litigation support on the other. The main products are truly Unicode compliant, have near-duplicate facilities and “cluster” data into concepts without intervention from users, as well as delivering a rich search environment and the ability to easily manipulate the results of enquiries.

    The differences are evident in which area of the EDRM the product addresses. The ECA tools are far more focused on processing large volumes of emails and their attachments, with emphasis on various techniques to try and identify the potentially relevant data. Litigation support software has more focus on the review of documents for relevance and privilege, and the preparation of a case around identified themes, leading to a disclosure exchange and downstream courtroom production. Confusion arises because the various products are continuing to mature by absorbing functionality from competitors. Thus ECA tools drift into the right of the EDRM and litigation support products to the left.

    Now we add to this complex mix the whole concept of Computer Assisted Review (CAR), which can be presented as a “black box technology that supplants lawyers, so be afraid, be very afraid”, when nothing could be further from the truth.

    Where this leaves readers trying to assess which product to choose is that they have to evaluate which piece of software works best for them and their circumstances. Unlike scanning, coding and (to some extent) forensic support services, it is not possible to select a supplier on price and functionality alone. Firms need to evaluate the software by means of demonstrations (preferably with their own data) and then (optionally) trialling rival products against each other to gain an understanding of what suits their unique requirements and work mix.

    The remainder of this Guide aims to provide information to enable readers to achieve those aims.

    NOTE: What’s Next?
    The rest of the Guide takes you through all the things you need to know in order to procure Litigation Support services and software. If you are not at that stage yet, you can stop now, though there is some good detail on pitfalls and technical issues in Chapter 5 that you might want to skim through.
