Abstract
This paper argues that the Explainable AI (XAI) research community needs to think harder about how to compare, measure, and describe the quality of XAI explanations. We conclude that one (or a few) explanations can be reasonably assessed with methods of the “Explanation Satisfaction” type, but that scaling up our ability to evaluate explanations requires more development of “Explanation Goodness” methods.
| Original language | English (US) |
|---|---|
| Journal | CEUR Workshop Proceedings |
| Volume | 2582 |
| State | Published - 2020 |
| Event | 2020 Workshop on Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies, ExSS-ATEC 2020 - Cagliari, Italy |
| Duration | Mar 17 2020 → … |
All Science Journal Classification (ASJC) codes
- General Computer Science
Title: Position: We can measure XAI explanations better with templates