How Useful Are the Machine-Generated Interpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels

Hua Shen, Ting-Hao Huang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

20 Scopus citations

Abstract

Explaining to users why automated systems make certain mistakes is important and challenging. Researchers have proposed ways to automatically produce interpretations for deep neural network models. However, it is unclear how useful these interpretations are in helping users figure out why they are getting an error. If an interpretation effectively explains to users how the underlying deep neural network model works, people who are presented with the interpretation should be better at predicting the model’s outputs than those who are not. This paper investigates whether showing machine-generated visual interpretations helps users understand the incorrectly predicted labels produced by image classifiers. We showed the images and the correct labels to 150 online crowd workers and asked them to select the incorrectly predicted labels, with or without showing them the machine-generated visual interpretations. The results demonstrated that displaying the visual interpretations did not increase, but rather decreased, the average guessing accuracy by roughly 10%.
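The study's headline comparison reduces to contrasting average guessing accuracy between the group shown interpretations and the group that was not. A minimal sketch of that comparison follows; the per-worker records and group sizes here are invented for illustration (the study used 150 crowd workers), and only the direction of the effect mirrors the abstract.

```python
from statistics import mean

# Hypothetical per-worker records: 1 = the worker correctly picked the
# model's wrong label, 0 = they missed it. Real group sizes differ.
with_interp = [1, 0, 1, 1, 0, 0, 1, 0]     # shown visual interpretations
without_interp = [1, 1, 0, 1, 0, 1, 1, 0]  # shown images and labels only

acc_with = mean(with_interp)
acc_without = mean(without_interp)
drop = acc_without - acc_with

print(f"accuracy with interpretations:    {acc_with:.3f}")
print(f"accuracy without interpretations: {acc_without:.3f}")
print(f"drop when interpretations shown:  {drop:.3f}")
```

With these made-up records the accuracy is 0.500 with interpretations versus 0.625 without, i.e. showing the interpretations lowers guessing accuracy, matching the qualitative finding reported in the abstract.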

Original language: English (US)
Title of host publication: HCOMP 2020 - Proceedings of the 8th AAAI Conference on Human Computation and Crowdsourcing
Editors: Lora Aroyo, Elena Simperl
Publisher: Association for the Advancement of Artificial Intelligence
Pages: 168-172
Number of pages: 5
ISBN (Print): 9781577358480
DOIs
State: Published - 2020
Event: 8th AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2020 - Virtual, Online
Duration: Oct 25, 2020 - Oct 29, 2020

Publication series

Name: Proceedings of the AAAI Conference on Human Computation and Crowdsourcing
Volume: 8
ISSN (Print): 2769-1330
ISSN (Electronic): 2769-1349

Conference

Conference: 8th AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2020
City: Virtual, Online
Period: 10/25/20 - 10/29/20

All Science Journal Classification (ASJC) codes

  • Computational Theory and Mathematics
  • Human-Computer Interaction
