Abstract
Some view transparency as a cure for the challenge of human-robot trust calibration. On this view, a person’s trust in a robot is little more than a reflection of the robot’s performance, so a transparent robot capable of explaining its behavior will lead to correct trust calibration. This chapter argues that this simple calculus ignores critical determinants of trust such as individual differences (human and robot), social and contextual factors, and, most importantly, human psychology itself. We examine how these factors influence the success of an explanation and begin to outline a program of research by which an autonomous robot might tailor its explanations to its audience. Moreover, we consider the impact that human cognitive laziness in real-world environments will have on the tendency to trust a robot, and the ethical ramifications of creating robots that mold their explanations to the person.
| Field | Value |
|---|---|
| Original language | English (US) |
| Title of host publication | Trust in Human-Robot Interaction |
| Publisher | Elsevier |
| Pages | 197-208 |
| Number of pages | 12 |
| ISBN (Electronic) | 9780128194720 |
| DOIs | |
| State | Published - Jan 1 2020 |
All Science Journal Classification (ASJC) codes
- General Psychology