Content Assessment: NIST Proposes Four Principles for Explainable AI Systems
- Information: 95%
- Insight: 95%
- Relevance: 90%
- Objectivity: 100%
- Authority: 100%

Overall Rating: 96% (Excellent)
A short percentage-based assessment of the qualitative benefit of the recent post sharing NIST's proposed principles for explainable artificial intelligence.
Editor’s Note: As shared in an August 18, 2020, news release from NIST, NIST electronic engineer Jonathon Phillips notes that, “AI is becoming involved in high-stakes decisions, and no one wants machines to make them without an understanding of why. But an explanation that would satisfy an engineer might not work for someone with a different background.” It is this desire for satisfactory explanations that has resulted in the draft publication, Four Principles of Explainable Artificial Intelligence (Draft NISTIR 8312). An extract from this draft publication, which is currently open for public comment, and a copy of the publication are provided for your consideration.
Four Principles of Explainable Artificial Intelligence*
Authored by P. Jonathon Phillips, Carina A. Hahn, Peter C. Fontana, David Broniatowski, and Mark A. Przybocki
Introduction
With recent advances in artificial intelligence (AI), AI systems have become components of high-stakes decision processes. The nature of these decisions has spurred a drive to create algorithms, methods, and techniques to accompany outputs from AI systems with explanations. This drive is motivated in part by laws and regulations which state that decisions, including those from automated systems, must be accompanied by information about the logic behind those decisions, and by the desire to create trustworthy AI.
Based on these calls for explainable systems, it can be assumed that a failure to articulate the rationale for an answer can affect the level of trust users will grant that system. Suspicions that the system is biased or unfair can raise concerns about harm to oneself and to society. This may slow societal acceptance and adoption of the technology, as members of the general public often place the burden of meeting societal goals on manufacturers and programmers themselves. Therefore, in terms of societal acceptance and trust, developers of AI systems may need to consider that multiple attributes of an AI system can influence public perception of the system.
Explainable AI is one of several properties that characterize trust in AI systems. Other properties include resiliency, reliability, bias, and accountability. Usually, these terms are not defined in isolation, but as part of a set of principles or pillars. The definitions vary by author, and they focus on the norms that society expects AI systems to follow. For this paper, we state four principles encompassing the core concepts of explainable AI. These are informed by research from the fields of computer science, engineering, and psychology. In considering aspects across these fields, this report provides a set of contributions. First, we articulate the four principles of explainable AI. From a computer science perspective, we place existing explainable AI algorithms and systems into the context of these four principles. From a psychological perspective, we investigate how well people’s explanations follow our four principles. This provides a baseline comparison for progress in explainable AI.
Although these principles may affect the methods by which algorithms operate to meet explainable AI goals, the focus of the concepts is not the algorithmic methods or computations themselves. Rather, we outline a set of principles that organize and review existing work in explainable AI and guide future research directions for the field. These principles support the foundation of policy considerations, safety, acceptance by society, and other aspects of AI technology.
Four Principles of Explainable AI
We present four fundamental principles for explainable AI systems. These principles are heavily influenced by considering the AI system’s interaction with the human recipient of the information. The requirements of the given situation, the task at hand, and the consumer will all influence the type of explanation deemed appropriate for the situation. These situations can include, but are not limited to, regulatory and legal requirements (such as the Fair Credit Reporting Act (FCRA) and the European Union (E.U.) General Data Protection Regulation (GDPR) Article 13), quality control of an AI system, and customer relations. Our four principles are intended to capture a broad set of motivations, reasons, and perspectives.
Before proceeding with the principles, we need to define a key term: the output of an AI system. The output is the result of a query to an AI system, and it varies by task. A loan application is an example where the output is a decision: approved or denied. For a recommendation system, the output could be a list of recommended movies. For a grammar-checking system, the output is a list of grammatical errors and recommended corrections.
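To make this concrete, the sketch below (an illustration only, not part of the NIST draft) models the three example outputs as simple data types; all class and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import List

# Illustration only: the shape of an AI system's output depends on the task.
# All type and field names below are hypothetical.

@dataclass
class LoanDecision:
    approved: bool          # decision task: approved or denied

@dataclass
class MovieRecommendations:
    titles: List[str]       # recommendation task: a list of recommended movies

@dataclass
class GrammarCheckResult:
    errors: List[str]       # grammar-checking task: detected errors...
    corrections: List[str]  # ...and the recommended corrections

# Example outputs for the three tasks described above.
print(LoanDecision(approved=False))
print(MovieRecommendations(titles=["Metropolis", "Blade Runner"]))
print(GrammarCheckResult(errors=["their is"], corrections=["there is"]))
```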
Briefly, our four principles of explainable AI are:
- Explanation: Systems deliver accompanying evidence or reason(s) for all outputs.
- Meaningful: Systems provide explanations that are understandable to individual users.
- Explanation Accuracy: The explanation correctly reflects the system’s process for generating the output.
- Knowledge Limits: The system only operates under conditions for which it was designed or when the system reaches a sufficient confidence in its output.
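As a minimal, hypothetical sketch (not part of the NIST draft and not NIST’s method), the code below shows how the Explanation and Knowledge Limits principles might surface in a system’s interface: every output carries a reason, and the system declines to answer when its confidence falls below a designed threshold. All names and the threshold value are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExplainedOutput:
    output: Optional[str]  # the system's answer, or None when it declines to answer
    explanation: str       # accompanying evidence or reason(s) (Explanation principle)

def answer_query(query: str, confidence: float, threshold: float = 0.8) -> ExplainedOutput:
    """Attach a reason to every output (Explanation) and decline when the system
    is outside its designed operating conditions (Knowledge Limits)."""
    if confidence < threshold:
        return ExplainedOutput(
            output=None,
            explanation=(f"Declined: confidence {confidence:.2f} is below the "
                         f"designed threshold of {threshold:.2f}."),
        )
    return ExplainedOutput(
        output=f"Result for query: {query}",  # placeholder answer
        explanation=(f"Answered with confidence {confidence:.2f}, within the "
                     f"system's designed operating conditions."),
    )

# A low-confidence query is declined rather than answered without justification.
print(answer_query("approve loan for applicant 42?", confidence=0.55))
```

Whether such an explanation is understandable to a given user (Meaningful) and faithful to the system’s actual process (Explanation Accuracy) are separate questions not captured by this sketch.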
Read the Complete Draft Publication on Explainable Artificial Intelligence (PDF)
NIST Explainable AI Draft – August 2020
Read more on Explainable Artificial Intelligence
Additional Reading
- A New Model for Cybersecurity? NIST Details Framework for Zero Trust Architecture
- Challenged by Privacy? The NIST Privacy Framework
Source: ComplexDiscovery