A Satisfactory Explanation? NIST Proposes Four Principles for Explainable AI Systems

With recent advances in artificial intelligence (AI), AI systems have become components of high-stakes decision processes whose outcomes ultimately depend on user trust. This draft publication and solicitation for comment from NIST highlights the importance of user trust in AI decisions and presents four principles for explainable AI, designed to capture a broad set of motivations, reasons, and perspectives regarding outputs from AI systems.


Content Assessment: NIST Proposes Four Principles for Explainable AI Systems

  • Information: 95%
  • Insight: 95%
  • Relevance: 90%
  • Objectivity: 100%
  • Authority: 100%

Overall: 96% (Excellent)

A short percentage-based assessment of the qualitative benefit of the recent post sharing NIST's proposed principles for explainable artificial intelligence.

Editor’s Note: As shared in an August 18, 2020, news release from NIST, NIST electronic engineer Jonathon Phillips notes that, “AI is becoming involved in high-stakes decisions, and no one wants machines to make them without an understanding of why. But an explanation that would satisfy an engineer might not work for someone with a different background.” It is this desire for satisfactory explanations that has resulted in the draft publication, Four Principles of Explainable Artificial Intelligence (Draft NISTIR 8312). An extract from the draft publication, which is currently open for public comment, and a copy of the publication are provided below for your consideration.

Four Principles of Explainable Artificial Intelligence*

Authored by P. Jonathon Phillips, Carina A. Hahn, Peter C. Fontana, David Broniatowski, and Mark A. Przybocki

Introduction

With recent advances in artificial intelligence (AI), AI systems have become components of high-stakes decision processes. The nature of these decisions has spurred a drive to create algorithms, methods, and techniques to accompany outputs from AI systems with explanations. This drive is motivated in part by laws and regulations, such as the Fair Credit Reporting Act (FCRA) and the European Union (E.U.) General Data Protection Regulation (GDPR) Article 13, which require that decisions, including those from automated systems, be accompanied by information about the logic behind those decisions, and in part by the desire to create trustworthy AI.

Based on these calls for explainable systems, it can be assumed that the failure to articulate the rationale for an answer can affect the level of trust users will grant that system. Suspicions that the system is biased or unfair can raise concerns about harm to oneself and to society. This may slow societal acceptance and adoption of the technology, as members of the general public oftentimes place the burden of meeting societal goals on manufacturers and programmers themselves. Therefore, in terms of societal acceptance and trust, developers of AI systems may need to consider that multiple attributes of an AI system can influence public perception of the system.

Explainable AI is one of several properties that characterize trust in AI systems. Other properties include resiliency, reliability, bias, and accountability. Usually, these terms are not defined in isolation, but as part of a set of principles or pillars. The definitions vary by author, and they focus on the norms that society expects AI systems to follow. For this paper, we state four principles encompassing the core concepts of explainable AI. These are informed by research from the fields of computer science, engineering, and psychology. In considering aspects across these fields, this report provides a set of contributions. First, we articulate the four principles of explainable AI. From a computer science perspective, we place existing explainable AI algorithms and systems into the context of these four principles. From a psychological perspective, we investigate how well people’s explanations follow our four principles. This provides a baseline comparison for progress in explainable AI.

Although these principles may affect the ways in which algorithms operate to meet explainable AI goals, the focus of the concepts is not algorithmic methods or computations themselves. Rather, we outline a set of principles that organize and review existing work in explainable AI and guide future research directions for the field. These principles support the foundation of policy considerations, safety, acceptance by society, and other aspects of AI technology.

Four Principles of Explainable AI

We present four fundamental principles for explainable AI systems. These principles are heavily influenced by considering the AI system’s interaction with the human recipient of the information. The requirements of the given situation, the task at hand, and the consumer will all influence the type of explanation deemed appropriate for the situation. These situations can include, but are not limited to, regulatory and legal requirements, quality control of an AI system, and customer relations. Our four principles are intended to capture a broad set of motivations, reasons, and perspectives.

Before proceeding with the principles, we need to define a key term, the output of an AI system. The output is the result of a query to an AI system. The output of a system varies by task. A loan application is an example where the output is a decision: approved or denied. For a recommendation system, the output could be a list of recommended movies. For a grammar checking system, the output is grammatical errors and recommended corrections.
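
To make this distinction concrete, the following minimal Python sketch (our illustration, not part of the NIST draft) models the three example outputs above as distinct types; all class and field names are hypothetical.

    # Illustrative only: the output of an AI system varies by task.
    from dataclasses import dataclass

    @dataclass
    class LoanDecision:
        approved: bool              # the output is a decision: approved or denied

    @dataclass
    class MovieRecommendations:
        titles: list[str]           # the output is a list of recommended movies

    @dataclass
    class GrammarCheck:
        errors: list[str]           # the output is grammatical errors...
        corrections: list[str]      # ...and recommended corrections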

Briefly, our four principles of explainable AI are:

  • Explanation: Systems deliver accompanying evidence or reason(s) for all outputs.
  • Meaningful: Systems provide explanations that are understandable to individual users.
  • Explanation Accuracy: The explanation correctly reflects the system’s process for generating the output.
  • Knowledge Limits: The system only operates under conditions for which it was designed or when the system reaches sufficient confidence in its output.
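
To illustrate how these four principles might fit together in practice, here is a minimal, hypothetical Python sketch of a toy loan-approval system. The scoring rule, income range, weights, and 0.50 cutoff are illustrative assumptions of ours, not from the NIST draft.

    # Hypothetical sketch: the four principles applied to a toy loan decision.
    from dataclasses import dataclass

    @dataclass
    class ExplainedOutput:
        decision: str        # the system's output
        explanation: str     # Explanation: evidence or reasons accompany the output
        confidence: float    # used to enforce the Knowledge Limits principle

    INCOME_RANGE = (20_000, 200_000)  # conditions the toy model was designed for

    def decide_loan(income: float, debt_ratio: float,
                    audience: str = "applicant") -> ExplainedOutput:
        # Knowledge Limits: decline to operate outside the design envelope.
        if not (INCOME_RANGE[0] <= income <= INCOME_RANGE[1]):
            return ExplainedOutput(
                "no decision",
                "The input falls outside the conditions this model was designed for.",
                0.0,
            )

        # The actual decision process: a simple weighted score.
        score = 0.7 * (income / INCOME_RANGE[1]) + 0.3 * (1.0 - debt_ratio)
        decision = "approved" if score >= 0.5 else "denied"

        # Meaningful: tailor the explanation to its recipient.
        # Explanation Accuracy: both wordings describe the real scoring rule above.
        if audience == "applicant":
            explanation = (f"Income (70% weight) and debt ratio (30% weight) "
                           f"produced a score of {score:.2f} against a 0.50 cutoff.")
        else:  # e.g., an engineer or auditor
            explanation = (f"score = 0.7*(income/{INCOME_RANGE[1]}) "
                           f"+ 0.3*(1 - debt_ratio) = {score:.2f}")

        # Confidence grows with distance from the decision boundary.
        return ExplainedOutput(decision, explanation, min(1.0, abs(score - 0.5) * 2))

    print(decide_loan(120_000, 0.35))
    print(decide_loan(350_000, 0.10))  # outside knowledge limits: no decision

Note that the applicant-facing and engineer-facing texts are two renderings of the same computation; under the Explanation Accuracy principle, what varies by audience is the presentation, not the underlying reason.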

Read the Complete Draft Publication on Explainable Artificial Intelligence (PDF)

NIST Explainable AI Draft – August 2020

Read more on Explainable Artificial Intelligence


Source: ComplexDiscovery

* Published with permission.


