TY - GEN
T1 - When to Explain?
T2 - 2nd International Symposium on Trustworthy Autonomous Systems, TAS 2024
AU - Chen, Cheng
AU - Liao, Mengqi
AU - Sundar, S. Shyam
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s).
PY - 2024/9/16
Y1 - 2024/9/16
N2 - Explanations are believed to aid understanding of AI models, but do they affect users' perceptions and trust in AI, especially in the presence of algorithmic bias? If so, when should explanations be provided to optimally balance explainability and usability? To answer these questions, we conducted a user study (N = 303) exploring how explanation timing influences users' perception of trust calibration, understanding of the AI system, and user experience and user interface satisfaction under both biased and unbiased AI performance conditions. We found that pre-explanations seem most valuable when the AI shows bias in its performance, whereas post-explanations appear more favorable when the system is bias-free. Showing both pre- and post-explanations tends to result in higher perceived trust calibration regardless of bias, despite concerns about content redundancy. Implications for designing socially responsible, explainable, and trustworthy AI interfaces are discussed.
AB - Explanations are believed to aid understanding of AI models, but do they affect users' perceptions and trust in AI, especially in the presence of algorithmic bias? If so, when should explanations be provided to optimally balance explainability and usability? To answer these questions, we conducted a user study (N = 303) exploring how explanation timing influences users' perception of trust calibration, understanding of the AI system, and user experience and user interface satisfaction under both biased and unbiased AI performance conditions. We found that pre-explanations seem most valuable when the AI shows bias in its performance, whereas post-explanations appear more favorable when the system is bias-free. Showing both pre- and post-explanations tends to result in higher perceived trust calibration regardless of bias, despite concerns about content redundancy. Implications for designing socially responsible, explainable, and trustworthy AI interfaces are discussed.
UR - http://www.scopus.com/inward/record.url?scp=85205352522&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85205352522&partnerID=8YFLogxK
U2 - 10.1145/3686038.3686066
DO - 10.1145/3686038.3686066
M3 - Conference contribution
AN - SCOPUS:85205352522
T3 - ACM International Conference Proceeding Series
BT - TAS 2024 - Proceedings of the 2nd International Symposium on Trustworthy Autonomous Systems
PB - Association for Computing Machinery
Y2 - 15 September 2024 through 18 September 2024
ER -