TY - GEN
T1 - User Trust in Recommendation Systems
T2 - 2022 CHI Conference on Human Factors in Computing Systems, CHI 2022
AU - Liao, Mengqi
AU - Sundar, S. Shyam
AU - Walther, Joseph B.
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/4/29
Y1 - 2022/4/29
N2 - Three of the most common approaches used in recommender systems are content-based filtering (matching users' preferences with products' characteristics), collaborative filtering (matching users with similar preferences), and demographic filtering (catering to users based on demographic characteristics). Do users' intuitions lead them to trust one of these approaches over others, independent of the actual operations of these different systems? Does their faith in one type or another depend on the quality of the recommendation, rather than how the recommendation appears to have been derived? We conducted an empirical study with a prototype of a movie recommender system to find out. A 3 (Ostensible Recommender Type: Content vs. Collaborative vs. Demographic Filtering) × 2 (Recommendation Quality: Good vs. Bad) experiment (N=226) investigated how users evaluate systems and attribute responsibility for the recommendations they receive. We found that users trust systems that use collaborative filtering more, regardless of the system's performance. They think that they themselves are responsible for good recommendations but that the system is responsible for bad recommendations (reflecting a self-serving bias). Theoretical insights, design implications, and practical solutions for the cold start problem are discussed.
AB - Three of the most common approaches used in recommender systems are content-based filtering (matching users' preferences with products' characteristics), collaborative filtering (matching users with similar preferences), and demographic filtering (catering to users based on demographic characteristics). Do users' intuitions lead them to trust one of these approaches over others, independent of the actual operations of these different systems? Does their faith in one type or another depend on the quality of the recommendation, rather than how the recommendation appears to have been derived? We conducted an empirical study with a prototype of a movie recommender system to find out. A 3 (Ostensible Recommender Type: Content vs. Collaborative vs. Demographic Filtering) × 2 (Recommendation Quality: Good vs. Bad) experiment (N=226) investigated how users evaluate systems and attribute responsibility for the recommendations they receive. We found that users trust systems that use collaborative filtering more, regardless of the system's performance. They think that they themselves are responsible for good recommendations but that the system is responsible for bad recommendations (reflecting a self-serving bias). Theoretical insights, design implications, and practical solutions for the cold start problem are discussed.
UR - http://www.scopus.com/inward/record.url?scp=85130526493&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85130526493&partnerID=8YFLogxK
U2 - 10.1145/3491102.3501936
DO - 10.1145/3491102.3501936
M3 - Conference contribution
AN - SCOPUS:85130526493
T3 - Conference on Human Factors in Computing Systems - Proceedings
BT - CHI 2022 - Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
PB - Association for Computing Machinery
Y2 - 30 April 2022 through 5 May 2022
ER -