TY - JOUR
T1 - Position
T2 - 2020 Workshop on Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies, ExSS-ATEC 2020
AU - Dodge, Jonathan
AU - Burnett, Margaret
N1 - Funding Information:
This work was supported by DARPA #N66001-17-2-4030. We would like to acknowledge all our co-authors on the work cited here, with specific highlights to Andrew Anderson, Alan Fern, Q. Vera Liao, Yunfeng Zhang, Rachel Bellamy, and Casey Dugan—research work does not happen in a vacuum, nor do ideas.
Publisher Copyright:
Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
PY - 2020
Y1 - 2020
N2 - This paper argues that the Explainable AI (XAI) research community needs to think harder about how to compare, measure, and describe the quality of XAI explanations. We conclude that one (or a few) explanations can be reasonably assessed with methods of the “Explanation Satisfaction” type, but that scaling up our ability to evaluate explanations requires more development of “Explanation Goodness” methods.
UR - http://www.scopus.com/inward/record.url?scp=85082951728&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85082951728&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85082951728
SN - 1613-0073
VL - 2582
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
Y2 - 17 March 2020
ER -