TY - JOUR
T1 - Communicating and combating algorithmic bias
T2 - effects of data diversity, labeler diversity, performance bias, and user feedback on AI trust
AU - Chen, Cheng
AU - Sundar, S. Shyam
N1 - Publisher Copyright:
© 2024 Taylor & Francis Group, LLC.
PY - 2024
Y1 - 2024
N2 - Inspired by the emerging documentation paradigm emphasizing data and model transparency, this study explores whether displaying racial diversity cues in training data and labelers’ backgrounds enhances users’ expectations of algorithmic fairness and trust in AI systems, even to the point of making them overlook racially biased performance. It also explores how their trust is affected when the system invites their feedback. We conducted a factorial experiment (N=597) to test hypotheses derived from a model of Human-AI Interaction based on the Theory of Interactive Media Effects (HAII-TIME). We found that racial diversity cues in either training data or labelers’ backgrounds trigger the representativeness heuristic, which is associated with higher algorithmic fairness expectations and increased trust. Inviting feedback enhances users’ sense of agency and is positively related to behavioral trust, but it reduces usability for Whites when the AI shows unbiased performance. Implications for designing socially responsible AI interfaces are discussed, considering both users’ cognitive limitations and usability.
AB - Inspired by the emerging documentation paradigm emphasizing data and model transparency, this study explores whether displaying racial diversity cues in training data and labelers’ backgrounds enhances users’ expectations of algorithmic fairness and trust in AI systems, even to the point of making them overlook racially biased performance. It also explores how their trust is affected when the system invites their feedback. We conducted a factorial experiment (N=597) to test hypotheses derived from a model of Human-AI Interaction based on the Theory of Interactive Media Effects (HAII-TIME). We found that racial diversity cues in either training data or labelers’ backgrounds trigger the representativeness heuristic, which is associated with higher algorithmic fairness expectations and increased trust. Inviting feedback enhances users’ sense of agency and is positively related to behavioral trust, but it reduces usability for Whites when the AI shows unbiased performance. Implications for designing socially responsible AI interfaces are discussed, considering both users’ cognitive limitations and usability.
UR - http://www.scopus.com/inward/record.url?scp=85205676016&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85205676016&partnerID=8YFLogxK
U2 - 10.1080/07370024.2024.2392494
DO - 10.1080/07370024.2024.2392494
M3 - Article
AN - SCOPUS:85205676016
SN - 0737-0024
JO - Human-Computer Interaction
JF - Human-Computer Interaction
ER -