Does distrust in humans predict greater trust in AI? Role of individual differences in user responses to content moderation

Maria D. Molina, S. Shyam Sundar

Research output: Contribution to journal › Article › peer-review


Abstract

When evaluating automated systems, some users apply the “positive machine heuristic” (i.e., machines are more accurate and precise than humans), whereas others apply the “negative machine heuristic” (i.e., machines lack the ability to make nuanced subjective judgments), but we know little about the characteristics that predict which heuristic a user will apply. We conducted a study in the context of content moderation and found that individual differences relating to trust in humans, fear of artificial intelligence (AI), power usage, and political ideology can predict whether a user will invoke the positive or negative machine heuristic. For example, users who distrust other humans tend to be more positive toward machines. Our findings advance theoretical understanding of user responses to AI systems for content moderation and hold practical implications for designing interfaces that appeal to users who are differentially predisposed toward trusting machines over humans.

Original language: English (US)
Journal: New Media and Society
State: Accepted/In press - 2022

All Science Journal Classification (ASJC) codes

  • Communication
  • Sociology and Political Science

