TY - GEN
T1 - The influence of agent reliability on trust in human-agent collaboration
AU - Fan, Xiaocong
AU - Oh, Sooyoung
AU - McNeese, Michael
AU - Yen, John
AU - Cuevas, Haydee
AU - Strater, Laura
AU - Endsley, Mica R.
N1 - Copyright:
Copyright 2010 Elsevier B.V., All rights reserved.
PY - 2008
Y1 - 2008
N2 - Motivation - To investigate ways to support human-automation teams with real-world, imperfect automation, where many system failures are the result of systematic failure. Research approach - An experimental approach was used to investigate how variance in agent reliability may influence humans' trust in, and subsequent reliance on, agents' decision aids. Sixty command and control (C2) teams, each consisting of a human operator and two cognitive agents, were asked to detect and respond to battlefield threats in six ten-minute scenarios. At the end of each scenario, participants completed the SAGAT queries, followed by the NASA-TLX queries. Findings/Design - Results revealed that teams with experienced human operators accepted significantly fewer inappropriate recommendations from agents than teams with inexperienced operators. More importantly, knowledge of agent reliability and the ratio of unreliable tasks had significant effects on human trust, as manifested in both team performance and the operators' rectification of inappropriate agent recommendations. Originality/Value - This study represents an important step toward uncovering the nature of human trust in human-agent collaboration. Take-away message - This research has shown that even a minimal basis for understanding when the operator should and should not trust agent recommendations allows operators to make better AUDs, to maintain better situation awareness of the critical issues associated with automation error, and to establish better trust in intelligent agents.
UR - http://www.scopus.com/inward/record.url?scp=77953729494&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=77953729494&partnerID=8YFLogxK
U2 - 10.1145/1473018.1473028
DO - 10.1145/1473018.1473028
M3 - Conference contribution
AN - SCOPUS:77953729494
SN - 9781605583990
T3 - ACM International Conference Proceeding Series
BT - Proceedings of ECCE 2008
T2 - 15th European Conference on Cognitive Ergonomics: The Ergonomics of Cool Interaction, ECCE 2008
Y2 - 16 September 2008 through 19 September 2008
ER -