TY - JOUR
T1 - When AI moderates online content
T2 - Effects of human collaboration and interactive transparency on user trust
AU - Molina, Maria D.
AU - Sundar, S. Shyam
N1 - Publisher Copyright:
© 2022 The Author(s). Published by Oxford University Press on behalf of International Communication Association.
PY - 2022/7/1
Y1 - 2022/7/1
N2 - Given the scale of user-generated content online, the use of artificial intelligence (AI) to flag problematic posts is inevitable, but users do not trust such automated moderation of content. We explore if (a) involving human moderators in the curation process and (b) affording "interactive transparency," wherein users participate in curation, can promote appropriate reliance on AI. We test this through a 3 (Source: AI, Human, Both) × 3 (Transparency: No Transparency, Transparency-Only, Interactive Transparency) × 2 (Classification Decision: Flagged, Not Flagged) between-subjects online experiment (N = 676) involving classification of hate speech and suicidal ideation. We discovered that users trust AI for the moderation of content just as much as humans, but it depends on the heuristic that is triggered when they are told AI is the source of moderation. We also found that allowing users to provide feedback to the algorithm enhances trust by increasing user agency.
AB - Given the scale of user-generated content online, the use of artificial intelligence (AI) to flag problematic posts is inevitable, but users do not trust such automated moderation of content. We explore if (a) involving human moderators in the curation process and (b) affording "interactive transparency," wherein users participate in curation, can promote appropriate reliance on AI. We test this through a 3 (Source: AI, Human, Both) × 3 (Transparency: No Transparency, Transparency-Only, Interactive Transparency) × 2 (Classification Decision: Flagged, Not Flagged) between-subjects online experiment (N = 676) involving classification of hate speech and suicidal ideation. We discovered that users trust AI for the moderation of content just as much as humans, but it depends on the heuristic that is triggered when they are told AI is the source of moderation. We also found that allowing users to provide feedback to the algorithm enhances trust by increasing user agency.
UR - http://www.scopus.com/inward/record.url?scp=85132590823&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85132590823&partnerID=8YFLogxK
U2 - 10.1093/jcmc/zmac010
DO - 10.1093/jcmc/zmac010
M3 - Article
AN - SCOPUS:85132590823
SN - 1083-6101
VL - 27
JO - Journal of Computer-Mediated Communication
JF - Journal of Computer-Mediated Communication
IS - 4
M1 - zmac010
ER -