TY - GEN
T1 - Is this AI trained on Credible Data? The Effects of Labeling Quality and Performance Bias on User Trust
AU - Chen, Cheng
AU - Sundar, S. Shyam
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/4/19
Y1 - 2023/4/19
N2 - To promote data transparency, frameworks such as CrowdWorkSheets encourage documentation of annotation practices on the interfaces of AI systems, but we do not know how they affect user experience. Will the quality of labeling affect perceived credibility of training data? Does the source of annotation matter? Will a credible dataset persuade users to trust a system even if it shows racial biases in its predictions? To find out, we conducted a user study (N = 430) with a prototype of a classification system, using a 2 (labeling quality: high vs. low) × 4 (source: others-as-source vs. self-as-source cue vs. self-as-source voluntary action vs. self-as-source forced action) × 3 (AI performance: none vs. biased vs. unbiased) experiment. We found that high-quality labeling leads to higher perceived training data credibility, which in turn enhances users' trust in AI, but not when the system shows bias. Practical implications for explainable and ethical AI interfaces are discussed.
AB - To promote data transparency, frameworks such as CrowdWorkSheets encourage documentation of annotation practices on the interfaces of AI systems, but we do not know how they affect user experience. Will the quality of labeling affect perceived credibility of training data? Does the source of annotation matter? Will a credible dataset persuade users to trust a system even if it shows racial biases in its predictions? To find out, we conducted a user study (N = 430) with a prototype of a classification system, using a 2 (labeling quality: high vs. low) × 4 (source: others-as-source vs. self-as-source cue vs. self-as-source voluntary action vs. self-as-source forced action) × 3 (AI performance: none vs. biased vs. unbiased) experiment. We found that high-quality labeling leads to higher perceived training data credibility, which in turn enhances users' trust in AI, but not when the system shows bias. Practical implications for explainable and ethical AI interfaces are discussed.
UR - http://www.scopus.com/inward/record.url?scp=85160010215&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85160010215&partnerID=8YFLogxK
U2 - 10.1145/3544548.3580805
DO - 10.1145/3544548.3580805
M3 - Conference contribution
AN - SCOPUS:85160010215
T3 - Conference on Human Factors in Computing Systems - Proceedings
BT - CHI 2023 - Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
PB - Association for Computing Machinery
T2 - 2023 CHI Conference on Human Factors in Computing Systems, CHI 2023
Y2 - 23 April 2023 through 28 April 2023
ER -