When the machine learns from users, is it helping or snooping?

Sangwook Lee, Won Ki Moon, Jae Gil Lee, S. Shyam Sundar

Research output: Contribution to journal › Article › peer-review

10 Scopus citations

Abstract

Media systems that personalize their offerings keep track of users’ tastes by constantly learning from their activities. Some systems use this characteristic of machine learning to encourage users with statements like “the more you use the system, the better it can serve you in the future.” However, it is not clear whether users indeed feel encouraged and consider the system to be helpful and beneficial, or begin to worry about jeopardizing their privacy in the process. We conducted a between-subjects experiment (N = 269) to find out. Guided by the HAII-TIME model (Sundar, 2020), we examined the effects of both explicit and implicit cues on the interface which conveyed that the machine is learning. Data indicate that users consider the system to be a helper and tend to trust it more when the system is transparent about its learning, regardless of the quality of its performance and the degree of explicitness in conveying the fact that it is learning from their activities. The study found no evidence to suggest privacy concerns arising from the machine disclosing that it is learning from its users. We discuss theoretical and practical implications of deploying machine learning cues to enhance user experience of AI-embedded systems.

Original language: English (US)
Article number: 107427
Journal: Computers in Human Behavior
Volume: 138
State: Published - Jan 2023

All Science Journal Classification (ASJC) codes

  • Arts and Humanities (miscellaneous)
  • Human-Computer Interaction
  • General Psychology
