
    Content Assessment: NIST Considerations for Identifying and Managing Bias in Artificial Intelligence

    Information - 93%
    Insight - 91%
    Relevance - 90%
    Objectivity - 94%
    Authority - 95%

    Overall Score: 93% (Excellent)

    A short, percentage-based assessment of the qualitative benefit of this post, which highlights the recent NIST special publication on identifying and managing bias in artificial intelligence.

    Background Note: The National Institute of Standards and Technology (NIST) promotes U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life. Among its broad range of activities, NIST contributes to the research, standards, evaluations, and data required to advance the development, use, and assurance of trustworthy artificial intelligence (AI).

    The Information Technology Laboratory (ITL) at NIST develops tests, test methods, reference data, proof of concept implementations, and technical analyses to advance the development and productive use of information technology. ITL’s responsibilities include the development of management, administrative, technical, and physical standards and guidelines. This special publication focuses on addressing and managing risks associated with bias in the design, development, and use of AI.


    NIST Special Publication*

    Towards a Standard for Identifying and Managing Bias in Artificial Intelligence

    Reva Schwartz, Apostol Vassilev, Kristen Greene, Lori Perine, Andrew Burt, and Patrick Hall

    Executive Summary

    As individuals and communities interact in and with an environment that is increasingly virtual, they are often vulnerable to the commodification of their digital footprint. Concepts and behavior that are ambiguous in nature are captured in this environment, quantified, and used to categorize, sort, recommend, or make decisions about people’s lives. While many organizations seek to utilize this information in a responsible manner, biases remain endemic across technology processes and can lead to harmful impacts regardless of intent. These harmful outcomes, even if inadvertent, create significant challenges for cultivating public trust in artificial intelligence (AI).

    While there are many approaches to ensuring the technology we use every day is safe and secure, there are factors specific to AI that require new perspectives. AI systems are often placed in contexts where they can have the most impact. Whether that impact is helpful or harmful is a fundamental question in the area of Trustworthy and Responsible AI. Harmful impacts stemming from AI are felt not just at the individual or enterprise level but can ripple into the broader society. The scale of damage, and the speed at which it can be perpetrated by AI applications or through the extension of large machine learning models across domains and industries, requires concerted effort.

    Current attempts to address the harmful effects of AI bias remain focused on computational factors such as representativeness of datasets and fairness of machine learning algorithms. These remedies are vital for mitigating bias, and more work remains. Yet, as illustrated in Fig. 1 [See Complete Paper], human and systemic institutional and societal factors are significant sources of AI bias as well and are currently overlooked. Successfully meeting this challenge will require taking all forms of bias into account. This means expanding our perspective beyond the machine learning pipeline to recognize and investigate how this technology is both created within and impacts our society.
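
    [Editor's note: The sketch below is not part of the NIST publication. It illustrates, on made-up data, two of the computational factors named above: representativeness of a dataset against an assumed reference population, and a simple fairness check (demographic parity difference) on model outcomes. All group labels, reference shares, and records are hypothetical.]

        # Illustrative sketch only -- not from NIST SP 1270.
        # Two common computational bias checks on hypothetical data:
        #  (1) representativeness: group shares in a dataset vs. a reference population
        #  (2) demographic parity difference: gap in positive-outcome rates across groups
        from collections import Counter

        def representation_gap(records, group_key, reference_shares):
            # Difference between each group's share of the dataset and its reference share.
            counts = Counter(r[group_key] for r in records)
            total = sum(counts.values())
            return {g: counts.get(g, 0) / total - share
                    for g, share in reference_shares.items()}

        def demographic_parity_difference(records, group_key, outcome_key):
            # Largest gap in positive-outcome rate across groups (0.0 means parity).
            rates = {}
            for g in {r[group_key] for r in records}:
                members = [r for r in records if r[group_key] == g]
                rates[g] = sum(r[outcome_key] for r in members) / len(members)
            return max(rates.values()) - min(rates.values()), rates

        # Hypothetical records: a demographic group label and a binary model decision.
        data = [
            {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
            {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
            {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
        ]

        print(representation_gap(data, "group", {"A": 0.5, "B": 0.5}))
        print(demographic_parity_difference(data, "group", "approved"))

    Checks like these capture only the statistical category of bias; as the summary emphasizes, systemic and human sources of bias do not show up in such metrics.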

    Trustworthy and Responsible AI is not just about whether a given AI system is biased, fair, or ethical, but whether it does what is claimed. Many practices exist for responsibly producing AI. The importance of transparency, datasets, and test, evaluation, validation, and verification (TEVV) cannot be overstated. Human factors such as participatory design techniques, multi-stakeholder approaches, and human-in-the-loop processes are also important for mitigating risks related to AI bias. However, none of these practices, individually or in concert, is a panacea against bias, and each brings its own set of pitfalls. What is missing from current remedies is guidance from a broader socio-technical perspective that connects these practices to societal values. Experts in the area of Trustworthy and Responsible AI counsel that to successfully manage the risks of AI bias we must operationalize these values and create new norms around how AI is built and deployed. This document, and work by the National Institute of Standards and Technology (NIST) in the area of AI bias, is based on a socio-technical perspective.
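
    [Editor's note: Also an editorial illustration rather than NIST guidance, the sketch below shows one common way a human-in-the-loop practice is operationalized: a confidence-threshold gate that defers uncertain model outputs to a human reviewer instead of acting on them automatically. The threshold value and queue structure are assumptions.]

        # Illustrative sketch only -- a minimal human-in-the-loop gate.
        REVIEW_THRESHOLD = 0.85  # assumed cutoff; choosing it is itself a policy decision

        def route_decision(case_id, label, confidence, review_queue):
            # Auto-apply confident predictions; defer uncertain ones to a human reviewer.
            if confidence >= REVIEW_THRESHOLD:
                return ("auto", case_id, label)
            review_queue.append((case_id, label, confidence))  # a person decides later
            return ("deferred", case_id, label)

        queue = []
        for case in [("c1", "approve", 0.97), ("c2", "deny", 0.62), ("c3", "approve", 0.88)]:
            print(route_decision(*case, queue))
        print("awaiting human review:", queue)

    As the summary cautions, a gate like this mitigates some risks but is no panacea; the deferred queue itself can embed human bias.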

    The intent of this document is to surface the salient issues in the challenging area of AI bias and to provide a first step on the roadmap for developing detailed socio-technical guidance for identifying and managing AI bias. Specifically, this special publication:

    • describes the stakes and challenge of bias in artificial intelligence and provides examples of how and why it can chip away at public trust;
    • identifies three categories of bias in AI — systemic, statistical, and human — and describes how and where they contribute to harms;
    • describes three broad challenges for mitigating bias — datasets, testing and evaluation, and human factors — and introduces preliminary guidance for addressing them.

    Bias is neither new nor unique to AI, and it is not possible to achieve zero risk of bias in an AI system. NIST intends to develop methods for increasing assurance, governance, and practice improvements for identifying, understanding, measuring, managing, and reducing bias. To reach this goal, techniques are needed that are flexible, can be applied across contexts regardless of industry, and are easily communicated to different stakeholder groups. To contribute to the growth of this burgeoning topic area, NIST will continue its work in measuring and evaluating computational biases and seeks to create a hub for evaluating socio-technical factors. This will include development of formal guidance and standards, supporting standards development activities such as workshops and public comment periods for draft documents, and ongoing discussion of these topics with the stakeholder community.

    Read the original announcement.


    Complete Publication: Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (PDF)

    NIST.SP.1270

    Read the original publication.

    *Shared with permission.

    Additional Reading

    Source: ComplexDiscovery

     

    Have a Request?

    If you have a request for information or offerings that you would like to discuss with us, please let us know, and we will make responding to you a priority.

    ComplexDiscovery is an online publication that highlights cyber, data, and legal discovery insight and intelligence ranging from original research to aggregated news for use by cybersecurity, information governance, and eDiscovery professionals. The highly targeted publication seeks to increase the collective understanding of readers regarding cyber, data, and legal discovery information and issues and to provide an objective resource for considering trends, technologies, and services related to electronically stored information.

    ComplexDiscovery OÜ is a technology marketing firm providing strategic planning and tactical execution expertise in support of cyber, data, and legal discovery organizations. Focused primarily on supporting the ComplexDiscovery publication, the company is registered as a private limited company in the European Union country of Estonia, one of the most digitally advanced countries in the world. The company operates virtually worldwide to deliver marketing consulting and services.
