Fri. Dec 2nd, 2022

    Content Assessment: Platforms Dream of Electronic Shepherds? Combatting Online Harms Through Innovation (FTC)

    Information - 95%
    Insight - 96%
    Relevance - 91%
    Objectivity - 92%
    Authority - 92%



    A short percentage-based assessment of the qualitative benefit of the recent FTC report to Congress on the topic of combatting online harms through innovation.

    Editor’s Note: From time to time, ComplexDiscovery highlights publicly available or privately purchasable announcements, content updates, and research from cyber, data, and legal discovery providers, research organizations, and ComplexDiscovery community members. While ComplexDiscovery regularly highlights this information, it does not assume any responsibility for content assertions.

    To submit recommendations for consideration and inclusion in ComplexDiscovery’s cyber, data, and legal discovery-centric service, product, or research announcements, contact us today.

    Background Note: Recently, the Federal Trade Commission issued a report to Congress warning about the use of artificial intelligence (AI) to combat online problems and urging policymakers to exercise “great caution” about relying on it as a policy solution. The use of AI, particularly by big tech platforms and other companies, comes with limitations and problems of its own. The report outlines significant concerns that AI tools can be inaccurate, biased, and discriminatory by design, and can incentivize reliance on increasingly invasive forms of commercial surveillance. The report may be beneficial for cybersecurity, information governance, and legal discovery professionals seeking to better understand the concerns and challenges of combatting online harms as outlined and accentuated by the Federal Trade Commission.

    Federal Trade Commission Report*

    Combatting Online Harms Through Innovation – FTC Report to Congress

    Executive Summary Extract

    The deployment of AI tools intended to detect or otherwise address harmful online content is accelerating. Largely within the confines of, or via funding from, the few big technology companies that have the necessary resources and infrastructure, AI tools are being conceived, developed, and used for purposes including combatting many of the harms listed by Congress. Given the amount of online content at issue, this result appears to be inevitable, as a strictly human alternative is impossible or extremely costly at scale.

    Nonetheless, it is crucial to understand that these tools remain largely rudimentary, have substantial limitations, and may never be appropriate in some cases as an alternative to human judgment. Their use — both now and in the future — raises a host of persistent legal and policy concerns. The key conclusion of this report is thus that governments, platforms, and others must exercise great caution in either mandating the use of, or over-relying on, these tools even for the important purpose of reducing harms. Although outside of our scope, this conclusion implies that, if AI is not the answer and if the scale makes meaningful human oversight infeasible, we must look at other ways, regulatory or otherwise, to address the spread of these harms.

    A central failing of these tools is that the datasets supporting them are often not robust or accurate enough to avoid false positives or false negatives. Part of the problem is that automated systems are trained on previously identified data and then have problems identifying new phenomena (e.g., misinformation about COVID-19). Mistaken outcomes may also result from problems with how a given algorithm is designed. Another issue is that the tools use proxies that stand in for some actual type of content, even though that content is often too complex, dynamic, and subjective to capture, no matter what amount and quality of data one has collected. In fact, the way that researchers classify content in the training data generally includes removing complexity and context — the very things that in some cases the tools need to distinguish between content that is or is not harmful. These challenges mean that developers and operators of these tools are necessarily reactive and that the tools — assuming they work — need constant adjustment even when they are built to make their own adjustments.
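The failure mode described above, a classifier trained only on previously identified content that both misses new phenomena and misfires on benign posts, can be made concrete with a hypothetical toy sketch. Nothing here is drawn from the report itself; the posts, keywords, and matching logic are invented purely for illustration.

```python
# Hypothetical illustration (not from the FTC report): a model trained only on
# previously identified harmful content fails on a new phenomenon.

# "Training data": harmful posts labeled before a new topic emerged.
harmful_training_posts = [
    "miracle cure guaranteed, wire money now",
    "click here to claim your prize, send payment",
]

# Naive keyword model: flag a post if it shares any word with known-harmful posts.
known_harmful_words = {w for post in harmful_training_posts for w in post.split()}

def flag(post: str) -> bool:
    return any(w in known_harmful_words for w in post.lower().split())

# New posts, labeled True if actually harmful.
new_posts = {
    "drinking bleach prevents covid": True,     # harmful, but unseen vocabulary
    "wire money now for a miracle cure": True,  # harmful, resembles training data
    "claim your free library card here": False, # benign, but shares words
}

false_negatives = [p for p, harmful in new_posts.items() if harmful and not flag(p)]
false_positives = [p for p, harmful in new_posts.items() if not harmful and flag(p)]
```

The toy model catches only what resembles its training data: the COVID-19 post slips through as a false negative, while the benign library post is flagged as a false positive, mirroring the report's point that such systems are necessarily reactive.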

    The limitations of these tools go well beyond merely inaccurate results. In some instances, increased accuracy could itself lead to other harms, such as enabling increasingly invasive forms of surveillance. Even with good intentions, their use can also lead to exacerbating harms via bias, discrimination, and censorship. Again, these results may reflect problems with the training data (possibly chosen or classified based on flawed judgments or mislabeled by insufficiently trained workers), the algorithmic design, or preconceptions that data scientists introduce inadvertently. They can also result from the fact that some content is subject to different and shifting meanings, especially across different cultures and languages. These bad outcomes may also depend on who is using the tools and their incentives for doing so, and on whether the tool is being used for a purpose other than the specific one for which it was built.

    Further, as these AI tools are developed and deployed, those with harmful agendas — whether adversarial nations, violent extremists, criminals, or other bad actors — seek actively to evade and manipulate them, often using their own sophisticated tools. This state of affairs, often referred to as an arms race or cat-and-mouse game, is a common aspect of many kinds of new technology, such as in the area of cybersecurity. This unfortunate feature will not be going away, and the main struggle here is to ensure that adversaries are not in the lead. This task includes considering possible evasions and manipulations at the tool development stage and being vigilant about them after deployment. However, this brittleness in the tools — the fact that they can fail with even small modifications to inputs — may be an inherent flaw.

    While AI continues to advance in this area, including with existing government support, all of these significant concerns suggest that Congress, regulators, platforms, scientists, and others should exercise great care and focus attention on several related considerations.

    Read the announcement of the report.

    Read the Complete Report: Combatting Online Harms Through Innovation (PDF)


    Read the original report.

    *Shared with permission.


    Source: ComplexDiscovery



    ComplexDiscovery is an online publication that highlights cyber, data, and legal discovery insight and intelligence ranging from original research to aggregated news for use by cybersecurity, information governance, and eDiscovery professionals. The highly targeted publication seeks to increase the collective understanding of readers regarding cyber, data, and legal discovery information and issues and to provide an objective resource for considering trends, technologies, and services related to electronically stored information.

    ComplexDiscovery OÜ is a technology marketing firm providing strategic planning and tactical execution expertise in support of cyber, data, and legal discovery organizations. Focused primarily on supporting the ComplexDiscovery publication, the company is registered as a private limited company in the European Union country of Estonia, one of the most digitally advanced countries in the world. The company operates virtually worldwide to deliver marketing consulting and services.
