
Content Assessment: NIST Considerations for Identifying and Managing Bias in Artificial Intelligence

Information - 93%
Insight - 91%
Relevance - 90%
Objectivity - 94%
Authority - 95%

Overall - 93% (Excellent)

A short, percentage-based assessment of the qualitative benefit of this post, which highlights the recent NIST special publication on identifying and managing bias in artificial intelligence.

Background Note: The National Institute of Standards and Technology (NIST) promotes U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life. Among its broad range of activities, NIST contributes to the research, standards, evaluations, and data required to advance the development, use, and assurance of trustworthy artificial intelligence (AI).

The Information Technology Laboratory (ITL) at NIST develops tests, test methods, reference data, proof of concept implementations, and technical analyses to advance the development and productive use of information technology. ITL’s responsibilities include the development of management, administrative, technical, and physical standards and guidelines. This special publication focuses on addressing and managing risks associated with bias in the design, development, and use of AI.


NIST Special Publication*

Towards a Standard for Identifying and Managing Bias in Artificial Intelligence

Reva Schwartz, Apostol Vassilev, Kristen Greene, Lori Perine, Andrew Burt, and Patrick Hall

Executive Summary

As individuals and communities interact in and with an environment that is increasingly virtual, they are often vulnerable to the commodification of their digital footprint. Concepts and behavior that are ambiguous in nature are captured in this environment, quantified, and used to categorize, sort, recommend, or make decisions about people’s lives. While many organizations seek to utilize this information in a responsible manner, biases remain endemic across technology processes and can lead to harmful impacts regardless of intent. These harmful outcomes, even if inadvertent, create significant challenges for cultivating public trust in artificial intelligence (AI).

While there are many approaches for ensuring the technology we use every day is safe and secure, there are factors specific to AI that require new perspectives. AI systems are often placed in contexts where they can have the most impact. Whether that impact is helpful or harmful is a fundamental question in the area of Trustworthy and Responsible AI. Harmful impacts stemming from AI are felt not just at the individual or enterprise level but can ripple into the broader society. The scale of the damage, and the speed at which it can be perpetrated by AI applications or through the extension of large machine learning models across domains and industries, require concerted effort.

Current attempts for addressing the harmful effects of AI bias remain focused on computational factors such as representativeness of datasets and fairness of machine learning algorithms. These remedies are vital for mitigating bias, and more work remains. Yet, as illustrated in Fig. 1, [See Complete Paper] human and systemic institutional and societal factors are significant sources of AI bias as well and are currently overlooked. Successfully meeting this challenge will require taking all forms of bias into account. This means expanding our perspective beyond the machine learning pipeline to recognize and investigate how this technology is both created within and impacts our society.
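
For readers who want a concrete sense of the computational fairness measures referenced above, the minimal Python sketch below (an illustrative example by this publication, not taken from the NIST paper) computes per-group selection rates for hypothetical model decisions, along with two widely used group-fairness statistics: the demographic parity difference and the disparate impact ratio.

```python
from collections import defaultdict

def group_selection_rates(groups, predictions):
    """Fraction of positive (1) decisions per demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for g, y in zip(groups, predictions):
        counts[g][0] += y
        counts[g][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_difference(rates):
    """Largest gap in selection rate between any two groups (0 is parity)."""
    return max(rates.values()) - min(rates.values())

def disparate_impact_ratio(rates):
    """Ratio of lowest to highest selection rate (1.0 is parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs: 1 = favorable decision, 0 = unfavorable.
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]
predictions = [ 1,   1,   1,   0,   1,   0,   0,   0 ]

rates = group_selection_rates(groups, predictions)
print(rates)                                 # {'A': 0.75, 'B': 0.25}
print(demographic_parity_difference(rates))  # 0.5
print(disparate_impact_ratio(rates))         # 0.333...
```

A common heuristic drawn from U.S. employment-selection guidance, the "four-fifths rule," treats a disparate impact ratio below 0.8 as a signal worth investigating. As the summary stresses, however, such statistical checks address only one of several sources of AI bias.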

Trustworthy and Responsible AI is not just about whether a given AI system is biased, fair, or ethical, but whether it does what is claimed. Many practices exist for responsibly producing AI. The importance of transparency, datasets, and test, evaluation, validation, and verification (TEVV) cannot be overstated. Human factors such as participatory design techniques and multi-stakeholder approaches, and a human-in-the-loop are also important for mitigating risks related to AI bias. However, none of these practices individually or in concert are a panacea against bias and each brings its own set of pitfalls. What is missing from current remedies is guidance from a broader socio-technical perspective that connects these practices to societal values. Experts in the area of Trustworthy and Responsible AI counsel that to successfully manage the risks of AI bias we must operationalize these values and create new norms around how AI is built and deployed. This document, and work by the National Institute of Standards and Technology (NIST) in the area of AI bias, is based on a socio-technical perspective.

The intent of this document is to surface the salient issues in the challenging area of AI bias and to provide a first step on the roadmap for developing detailed socio-technical guidance for identifying and managing AI bias. Specifically, this special publication:

  • describes the stakes and challenge of bias in artificial intelligence and provides examples of how and why it can chip away at public trust;
  • identifies three categories of bias in AI — systemic, statistical, and human — and describes how and where they contribute to harms;
  • describes three broad challenges for mitigating bias — datasets, testing and evaluation, and human factors — and introduces preliminary guidance for addressing them.

Bias is neither new nor unique to AI, and it is not possible to achieve zero risk of bias in an AI system. NIST intends to develop methods for increasing assurance, governance, and practice improvements for identifying, understanding, measuring, managing, and reducing bias. To reach this goal, techniques are needed that are flexible, can be applied across contexts regardless of industry, and are easily communicated to different stakeholder groups. To contribute to the growth of this burgeoning topic area, NIST will continue its work in measuring and evaluating computational biases and seeks to create a hub for evaluating socio-technical factors. This will include development of formal guidance and standards, supporting standards development activities such as workshops and public comment periods for draft documents, and ongoing discussion of these topics with the stakeholder community.
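
As a further hypothetical illustration of the kind of computational bias measurement the summary describes (a sketch under assumed data, not NIST's own method), the following Python snippet checks dataset representativeness by comparing each demographic group's share of a training set against a reference population share:

```python
from collections import Counter

def representation_gaps(sample_groups, reference_shares):
    """Compare each group's share of a dataset to a reference share.

    Returns {group: (dataset_share, reference_share, gap)} where a
    positive gap means the group is over-represented in the dataset.
    """
    counts = Counter(sample_groups)
    total = sum(counts.values())
    gaps = {}
    for group, ref in reference_shares.items():
        share = counts.get(group, 0) / total
        gaps[group] = (share, ref, share - ref)
    return gaps

# Hypothetical training-set group labels and reference population shares.
sample = ["A"] * 700 + ["B"] * 200 + ["C"] * 100
reference = {"A": 0.60, "B": 0.25, "C": 0.15}

for group, (share, ref, gap) in representation_gaps(sample, reference).items():
    print(f"{group}: dataset {share:.0%} vs. reference {ref:.0%} (gap {gap:+.0%})")
```

Notably, a dataset that passes such a check can still produce biased outcomes, which is precisely the publication's point about the need for systemic and human-factor perspectives beyond the machine learning pipeline.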

Read the original announcement.


Complete Publication: Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (PDF)

NIST.SP.1270

Read the original publication.

*Shared with permission.


Source: ComplexDiscovery


Generative Artificial Intelligence and Large Language Model Use

ComplexDiscovery OÜ recognizes the value of GAI and LLM tools in streamlining content creation processes and enhancing the overall quality of its research, writing, and editing efforts. To this end, ComplexDiscovery OÜ regularly employs GAI tools, including ChatGPT, Claude, Midjourney, and DALL-E, to assist, augment, and accelerate the development and publication of new and revised content in posts and pages, a practice initiated in late 2022.

ComplexDiscovery also provides a ChatGPT-powered AI article assistant for its users. This feature leverages LLM capabilities to generate relevant and valuable insights related to specific page and post content published on ComplexDiscovery.com. By offering this AI-driven service, ComplexDiscovery OÜ aims to create a more interactive and engaging experience for its users, while highlighting the importance of responsible and ethical use of GAI and LLM technologies.


Have a Request?

If you have a request for information or offerings that you would like to ask us about, please let us know, and we will make responding to you a priority.

ComplexDiscovery OÜ is a highly recognized digital publication focused on providing detailed insights into the fields of cybersecurity, information governance, and eDiscovery. Based in Estonia, a hub for digital innovation, ComplexDiscovery OÜ upholds rigorous standards in journalistic integrity, delivering nuanced analyses of global trends, technology advancements, and the eDiscovery sector. The publication expertly connects intricate legal technology issues with the broader narrative of international business and current events, offering its readership invaluable insights for informed decision-making.

For the latest in law, technology, and business, visit ComplexDiscovery.com.