Sun. Sep 25th, 2022

    Content Assessment: Platforms Dream of Electronic Shepherds? Combatting Online Harms Through Innovation (FTC)

    Information - 95%
    Insight - 96%
    Relevance - 91%
    Objectivity - 92%
    Authority - 92%

    Overall: 93% (Excellent)

    A short percentage-based assessment of the qualitative benefit of the recent FTC report to Congress on the topic of combatting online harms through innovation.

    Editor’s Note: From time to time, ComplexDiscovery highlights publicly available or privately purchasable announcements, content updates, and research from cyber, data, and legal discovery providers, research organizations, and ComplexDiscovery community members. While ComplexDiscovery regularly highlights this information, it does not assume any responsibility for content assertions.

    To submit recommendations for consideration and inclusion in ComplexDiscovery’s cyber, data, and legal discovery-centric service, product, or research announcements, contact us today.


    Background Note: The Federal Trade Commission recently issued a report to Congress warning about the use of artificial intelligence (AI) to combat online harms and urging policymakers to exercise “great caution” about relying on it as a policy solution. The use of AI, particularly by big technology platforms and other companies, comes with limitations and problems of its own. The report outlines significant concerns that AI tools can be inaccurate, biased, and discriminatory by design, and that they can incentivize reliance on increasingly invasive forms of commercial surveillance. The report may be beneficial for cybersecurity, information governance, and legal discovery professionals seeking to better understand the concerns and challenges of combatting online harms as outlined and accentuated by the Federal Trade Commission.

    Federal Trade Commission Report*

    Combatting Online Harms Through Innovation – FTC Report to Congress

    Executive Summary Extract

    The deployment of AI tools intended to detect or otherwise address harmful online content is accelerating. Largely within the confines of, or via funding from, the few big technology companies that have the necessary resources and infrastructure, AI tools are being conceived, developed, and used for purposes that include combatting many of the harms listed by Congress. Given the amount of online content at issue, this result appears to be inevitable, as a strictly human alternative is impossible or extremely costly at scale.

    Nonetheless, it is crucial to understand that these tools remain largely rudimentary, have substantial limitations, and may never be appropriate in some cases as an alternative to human judgment. Their use — both now and in the future — raises a host of persistent legal and policy concerns. The key conclusion of this report is thus that governments, platforms, and others must exercise great caution in either mandating the use of, or over-relying on, these tools even for the important purpose of reducing harms. Although outside of our scope, this conclusion implies that, if AI is not the answer and if the scale makes meaningful human oversight infeasible, we must look at other ways, regulatory or otherwise, to address the spread of these harms.

    A central failing of these tools is that the datasets supporting them are often not robust or accurate enough to avoid false positives or false negatives. Part of the problem is that automated systems are trained on previously identified data and then have problems identifying new phenomena (e.g., misinformation about COVID-19). Mistaken outcomes may also result from problems with how a given algorithm is designed. Another issue is that the tools use proxies that stand in for some actual type of content, even though that content is often too complex, dynamic, and subjective to capture, no matter what amount and quality of data one has collected. In fact, the way that researchers classify content in the training data generally includes removing complexity and context — the very things that in some cases the tools need to distinguish between content that is or is not harmful. These challenges mean that developers and operators of these tools are necessarily reactive and that the tools — assuming they work — need constant adjustment even when they are built to make their own adjustments.
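To make the reactivity problem concrete, here is a minimal editorial sketch (our illustration, not drawn from the FTC report; all phrases and names are hypothetical). A classifier built only from previously identified phrases necessarily misses a newly emerging claim it was never trained on:

```python
# Editorial illustration only; the "training" phrases are hypothetical.
# A toy classifier built from previously identified harmful phrases.

def build_classifier(labeled_phrases):
    """Return a predicate that flags text containing any known phrase."""
    known = [p.lower() for p in labeled_phrases]

    def is_flagged(text):
        lowered = text.lower()
        return any(phrase in lowered for phrase in known)

    return is_flagged

# "Training data" collected before a new phenomenon emerged.
classifier = build_classifier(["miracle cure", "vaccine microchip"])

print(classifier("Try this miracle cure today"))   # True: matches known phrasing
print(classifier("5G towers spread the virus"))    # False: novel claim goes undetected
```

Real moderation systems are far more sophisticated than this substring matcher, but the structural limitation is the same: any system learned from previously labeled data can only look backward, which is why the report describes developers and operators as necessarily reactive.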

    The limitations of these tools go well beyond merely inaccurate results. In some instances, increased accuracy could itself lead to other harms, such as enabling increasingly invasive forms of surveillance. Even with good intentions, their use can also lead to exacerbating harms via bias, discrimination, and censorship. Again, these results may reflect problems with the training data (possibly chosen or classified based on flawed judgments or mislabeled by insufficiently trained workers), the algorithmic design, or preconceptions that data scientists introduce inadvertently. They can also result from the fact that some content is subject to different and shifting meanings, especially across different cultures and languages. These bad outcomes may also depend on who is using the tools and their incentives for doing so, and on whether the tool is being used for a purpose other than the specific one for which it was built.

    Further, as these AI tools are developed and deployed, those with harmful agendas — whether adversarial nations, violent extremists, criminals, or other bad actors — seek actively to evade and manipulate them, often using their own sophisticated tools. This state of affairs, often referred to as an arms race or cat-and-mouse game, is a common aspect of many kinds of new technology, such as in the area of cybersecurity. This unfortunate feature will not be going away, and the main struggle here is to ensure that adversaries are not in the lead. This task includes considering possible evasions and manipulations at the tool development stage and being vigilant about them after deployment. However, this brittleness in the tools — the fact that they can fail with even small modifications to inputs — may be an inherent flaw.
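The brittleness the report describes, failure under even small modifications to inputs, can be seen in miniature with a trivial filter (again an editorial sketch, not taken from the report). A one-character substitution preserves the meaning for a human reader but defeats the match:

```python
# Editorial illustration only: a naive substring filter defeated by a
# one-character "leetspeak" substitution that a human reads identically.

def naive_filter(text, banned=("scam",)):
    """Flag text containing any banned substring (case-insensitive)."""
    lowered = text.lower()
    return any(term in lowered for term in banned)

print(naive_filter("this offer is a scam"))   # True: blocked
print(naive_filter("this offer is a sc4m"))   # False: evades the filter
```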

    While AI continues to advance in this area, including with existing government support, all of these significant concerns suggest that Congress, regulators, platforms, scientists, and others should exercise great care and focus attention on several related considerations.

    Read the announcement of the report.


    Read the Complete Report: Combatting Online Harms Through Innovation (PDF)


    Read the original report.


    *Shared with permission.

    Additional Reading

    Source: ComplexDiscovery


    Have a Request?

    If you have a question or request about our information or offerings, please let us know, and we will make responding to you a priority.

    ComplexDiscovery is an online publication that highlights cyber, data, and legal discovery insight and intelligence ranging from original research to aggregated news for use by cybersecurity, information governance, and eDiscovery professionals. The highly targeted publication seeks to increase the collective understanding of readers regarding cyber, data, and legal discovery information and issues and to provide an objective resource for considering trends, technologies, and services related to electronically stored information.

    ComplexDiscovery OÜ is a technology marketing firm providing strategic planning and tactical execution expertise in support of cyber, data, and legal discovery organizations. Focused primarily on supporting the ComplexDiscovery publication, the company is registered as a private limited company in the European Union country of Estonia, one of the most digitally advanced countries in the world. The company operates virtually worldwide to deliver marketing consulting and services.
