TY - JOUR
T1 - Mental Models of Mere Mortals with Explanations of Reinforcement Learning
AU - Anderson, Andrew
AU - Dodge, Jonathan
AU - Sadarangani, Amrita
AU - Juozapaitis, Zoe
AU - Newman, Evan
AU - Irvine, Jed
AU - Chattopadhyay, Souti
AU - Olson, Matthew
AU - Fern, Alan
AU - Burnett, Margaret
N1 - Publisher Copyright:
© 2020 ACM.
PY - 2020/6
Y1 - 2020/6
AB - How should reinforcement learning (RL) agents explain themselves to humans not trained in AI? To gain insights into this question, we conducted a 124-participant, four-treatment experiment to compare participants' mental models of an RL agent in the context of a simple Real-Time Strategy (RTS) game. The four treatments isolated two types of explanations vs. neither vs. both together. The two types of explanations were as follows: (1) saliency maps (an "Input Intelligibility Type" that explains the AI's focus of attention) and (2) reward-decomposition bars (an "Output Intelligibility Type" that explains the AI's predictions of future types of rewards). Our results show that a combined explanation that included saliency and reward bars was needed to achieve a statistically significant difference in participants' mental model scores over the no-explanation treatment. However, this combined explanation was far from a panacea: It exacted disproportionately high cognitive loads from the participants who received the combined explanation. Further, in some situations, participants who saw both explanations predicted the agent's next action worse than all other treatments' participants.
UR - http://www.scopus.com/inward/record.url?scp=85088311098&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85088311098&partnerID=8YFLogxK
U2 - 10.1145/3366485
DO - 10.1145/3366485
M3 - Article
AN - SCOPUS:85088311098
SN - 2160-6455
VL - 10
JO - ACM Transactions on Interactive Intelligent Systems
JF - ACM Transactions on Interactive Intelligent Systems
IS - 2
M1 - 15
ER -