Position: We can measure XAI explanations better with templates

Jonathan Dodge, Margaret Burnett

Research output: Contribution to journal › Conference article › peer-review


This paper argues that the Explainable AI (XAI) research community needs to think harder about how to compare, measure, and describe the quality of XAI explanations. We conclude that one (or a few) explanations can be reasonably assessed with methods of the “Explanation Satisfaction” type, but that scaling up our ability to evaluate explanations requires more development of “Explanation Goodness” methods.

Original language: English (US)
Journal: CEUR Workshop Proceedings
State: Published - 2020
Event: 2020 Workshop on Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies, ExSS-ATEC 2020 - Cagliari, Italy
Duration: Mar 17 2020 → …

All Science Journal Classification (ASJC) codes

  • General Computer Science
