Content Assessment: NIST Proposes Four Principles for Explainable AI Systems

  • Information: 95%
  • Insight: 95%
  • Relevance: 90%
  • Objectivity: 100%
  • Authority: 100%

Overall Rating: 96% (Excellent)

A short percentage-based assessment of the qualitative benefit of the recent post sharing NIST's proposed principles for explainable artificial intelligence.

Editor’s Note: As shared in an August 18, 2020, news release from NIST, electronic engineer Jonathon Phillips notes that, “AI is becoming involved in high-stakes decisions, and no one wants machines to make them without an understanding of why. But an explanation that would satisfy an engineer might not work for someone with a different background.” It is this desire for satisfactory explanations that has resulted in the draft publication, Four Principles of Explainable Artificial Intelligence (Draft NISTIR 8312). An extract from this draft publication, which is currently open for public comment, and a copy of the publication are provided for your consideration.

Four Principles of Explainable Artificial Intelligence*

Authored by P. Jonathon Phillips, Carina A. Hahn, Peter C. Fontana, David Broniatowski, and Mark A. Przybocki

Introduction

With recent advances in artificial intelligence (AI), AI systems have become components of high-stakes decision processes. The nature of these decisions has spurred a drive to create algorithms, methods, and techniques to accompany outputs from AI systems with explanations. This drive is motivated in part by laws and regulations which require that decisions, including those from automated systems, be accompanied by information about the logic behind those decisions, and by the desire to create trustworthy AI.

Based on these calls for explainable systems, it can be assumed that the failure to articulate the rationale for an answer can affect the level of trust users will grant that system. Suspicions that the system is biased or unfair can raise concerns about harm to oneself and to society. This may slow societal acceptance and adoption of the technology, as members of the general public oftentimes place the burden of meeting societal goals on manufacturers and programmers themselves. Therefore, in terms of societal acceptance and trust, developers of AI systems may need to consider that multiple attributes of an AI system can influence public perception of the system.

Explainable AI is one of several properties that characterize trust in AI systems. Other properties include resiliency, reliability, bias, and accountability. Usually, these terms are not defined in isolation, but as part of a set of principles or pillars. The definitions vary by author, and they focus on the norms that society expects AI systems to follow. For this paper, we state four principles encompassing the core concepts of explainable AI. These are informed by research from the fields of computer science, engineering, and psychology. In considering aspects across these fields, this report provides a set of contributions. First, we articulate the four principles of explainable AI. From a computer science perspective, we place existing explainable AI algorithms and systems into the context of these four principles. From a psychological perspective, we investigate how well people’s explanations follow our four principles. This provides a baseline comparison for progress in explainable AI.

Although these principles may affect the methods by which algorithms operate to meet explainable AI goals, the focus of the concepts is not algorithmic methods or computations themselves. Rather, we outline a set of principles that organize and review existing work in explainable AI and guide future research directions for the field. These principles support the foundation of policy considerations, safety, acceptance by society, and other aspects of AI technology.

Four Principles of Explainable AI

We present four fundamental principles for explainable AI systems. These principles are heavily influenced by considering the AI system’s interaction with the human recipient of the information. The requirements of the given situation, the task at hand, and the consumer will all influence the type of explanation deemed appropriate for the situation. These situations can include, but are not limited to, regulatory and legal requirements (for example, the Fair Credit Reporting Act (FCRA) and the European Union (E.U.) General Data Protection Regulation (GDPR) Article 13), quality control of an AI system, and customer relations. Our four principles are intended to capture a broad set of motivations, reasons, and perspectives.

Before proceeding with the principles, we need to define a key term, the output of an AI system. The output is the result of a query to an AI system. The output of a system varies by task. A loan application is an example where the output is a decision: approved or denied. For a recommendation system, the output could be a list of recommended movies. For a grammar checking system, the output is grammatical errors and recommended corrections.
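
As a brief illustration of how the output varies by task, the hypothetical Python types below mirror the three examples above; the class names are assumptions for illustration only, not anything defined in the draft.

```python
from dataclasses import dataclass
from typing import List, Union

# Hypothetical output types mirroring the three task examples above (illustrative only).

@dataclass
class LoanDecision:
    approved: bool            # loan application: approved or denied

@dataclass
class MovieRecommendations:
    titles: List[str]         # recommendation system: a list of recommended movies

@dataclass
class GrammarIssue:
    error: str                # grammar checker: a detected grammatical error ...
    suggestion: str           # ... and its recommended correction

# "The output of an AI system" is whichever of these a given task produces.
Output = Union[LoanDecision, MovieRecommendations, List[GrammarIssue]]
```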

Briefly, our four principles of explainable AI are:

  • Explanation:  Systems deliver accompanying evidence or reason(s) for all outputs.
  • Meaningful:  Systems provide explanations that are understandable to individual users.
  • Explanation Accuracy: The explanation correctly reflects the system’s process for generating the output.
  • Knowledge Limits: The system only operates under conditions for which it was designed or when the system reaches a sufficient confidence in its output.
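
As an illustrative sketch rather than anything prescribed by the draft, the minimal Python example below shows one way a system's interface might surface all four principles at once; the names ExplainedOutput, CONFIDENCE_FLOOR, and answer_query are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExplainedOutput:
    result: Optional[str]   # the system's output, or None when it declines to answer
    explanation: str        # Explanation: evidence or reasons accompanying the output
    audience: str           # Meaningful: who the explanation is tailored for
    faithful: bool          # Explanation Accuracy: whether it reflects the actual process

CONFIDENCE_FLOOR = 0.80     # hypothetical threshold illustrating Knowledge Limits

def answer_query(query: str, in_design_scope: bool, confidence: float) -> ExplainedOutput:
    """Return an output with an accompanying explanation, declining to answer when the
    query is outside the system's designed conditions or confidence is insufficient."""
    if not in_design_scope or confidence < CONFIDENCE_FLOOR:
        # Knowledge Limits: only operate under designed conditions with sufficient confidence.
        return ExplainedOutput(
            result=None,
            explanation="The query falls outside this system's designed operating conditions.",
            audience="end user",
            faithful=True,
        )
    return ExplainedOutput(
        result=f"decision for {query!r}",
        explanation="Placeholder for the evidence behind this decision.",
        audience="end user",
        faithful=True,  # in practice, checked against the system's actual decision process
    )
```

The design point is simply that an explanation travels with every output, and that declining to answer is itself a valid, explained output.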

Read the Complete Draft Publication on Explainable Artificial Intelligence (PDF)

NIST Explainable AI Draft – August 2020

Read more on Explainable Artificial Intelligence


Source: ComplexDiscovery

* Published with permission.

 


ComplexDiscovery OÜ is a highly recognized digital publication focused on providing detailed insights into the fields of cybersecurity, information governance, and eDiscovery. Based in Estonia, a hub for digital innovation, ComplexDiscovery OÜ upholds rigorous standards in journalistic integrity, delivering nuanced analyses of global trends, technology advancements, and the eDiscovery sector. The publication expertly connects intricate legal technology issues with the broader narrative of international business and current events, offering its readership invaluable insights for informed decision-making.

For the latest in law, technology, and business, visit ComplexDiscovery.com.

 

Generative Artificial Intelligence and Large Language Model Use

ComplexDiscovery OÜ recognizes the value of GAI and LLM tools in streamlining content creation processes and enhancing the overall quality of its research, writing, and editing efforts. To this end, ComplexDiscovery OÜ regularly employs GAI tools, including ChatGPT, Claude, DALL-E 2, Grammarly, Midjourney, and Perplexity, to assist, augment, and accelerate the development and publication of both new and revised content in posts and pages, an effort initiated in late 2022.

ComplexDiscovery also provides a ChatGPT-powered AI article assistant for its users. This feature leverages LLM capabilities to generate relevant and valuable insights related to specific page and post content published on ComplexDiscovery.com. By offering this AI-driven service, ComplexDiscovery OÜ aims to create a more interactive and engaging experience for its users, while highlighting the importance of responsible and ethical use of GAI and LLM technologies.