Mediating Community-AI Interaction through Situated Explanation: The Case of AI-Led Moderation

Research output: Contribution to journal › Article › peer-review



Artificial intelligence (AI) has become prevalent in our everyday technologies and impacts both individuals and communities. Explainable AI (XAI) scholarship has explored the philosophical nature of explanation as well as technical explanations, which are usually developed by experts in lab settings and can be difficult for laypersons to understand. In addition, existing XAI research tends to focus on the individual level; little is known about how people understand and explain AI-led decisions in a community context. Drawing from XAI and activity theory, a foundational HCI theory, we theorize how explanation is situated in a community's shared values, norms, knowledge, and practices, and how situated explanation mediates community-AI interaction. We then present a case study of AI-led moderation, in which community members collectively develop explanations of AI-led decisions, most of which are automated punishments. Lastly, we discuss the implications of this framework at the intersection of CSCW, HCI, and XAI.

Original language: English (US)
Article number: 102
Journal: Proceedings of the ACM on Human-Computer Interaction
Issue number: CSCW2
State: Published - Oct 14 2020

All Science Journal Classification (ASJC) codes

  • Social Sciences (miscellaneous)
  • Human-Computer Interaction
  • Computer Networks and Communications
