TY - JOUR
T1 - Explanation systems for influence maximization algorithms
AU - Yadav, Amulya
AU - Rahmattalabi, Aida
AU - Kamar, Ece
AU - Vayanos, Phebe
AU - Tambe, Milind
AU - Noronha, Venil Loyd
N1 - Funding Information:
This research was supported by MURI Grant W911NF-11-1-0332.
Publisher Copyright:
Copyright © 2017 for the individual papers by the papers' authors.
PY - 2017
Y1 - 2017
N2 - The field of influence maximization (IM) has made rapid advances, resulting in many sophisticated algorithms for identifying "influential" members in social networks. However, in order to engender trust in IM algorithms, the rationale behind their choice of "influential" nodes needs to be explained to their users. This is a challenging open problem that needs to be solved before these algorithms can be deployed on a large scale. This paper tackles this open problem via four major contributions: (i) we propose a general paradigm for designing explanation systems for IM algorithms by exploiting the tradeoff between explanation accuracy and interpretability; our paradigm treats IM algorithms as black boxes, and is flexible enough to be used with any algorithm; (ii) we utilize this paradigm to build XplainIM, a suite of explanation systems; (iii) we illustrate the usability of XplainIM by explaining solutions of HEALER (a recent IM algorithm) to ∼200 human subjects on Amazon Mechanical Turk (AMT); and (iv) we provide an extensive evaluation of our AMT results, which shows the effectiveness of XplainIM.
AB - The field of influence maximization (IM) has made rapid advances, resulting in many sophisticated algorithms for identifying "influential" members in social networks. However, in order to engender trust in IM algorithms, the rationale behind their choice of "influential" nodes needs to be explained to their users. This is a challenging open problem that needs to be solved before these algorithms can be deployed on a large scale. This paper tackles this open problem via four major contributions: (i) we propose a general paradigm for designing explanation systems for IM algorithms by exploiting the tradeoff between explanation accuracy and interpretability; our paradigm treats IM algorithms as black boxes, and is flexible enough to be used with any algorithm; (ii) we utilize this paradigm to build XplainIM, a suite of explanation systems; (iii) we illustrate the usability of XplainIM by explaining solutions of HEALER (a recent IM algorithm) to ∼200 human subjects on Amazon Mechanical Turk (AMT); and (iv) we provide an extensive evaluation of our AMT results, which shows the effectiveness of XplainIM.
UR - http://www.scopus.com/inward/record.url?scp=85028997041&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85028997041&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85028997041
SN - 1613-0073
VL - 1893
SP - 8
EP - 19
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
T2 - 3rd International Workshop on Social Influence Analysis, SocInf 2017
Y2 - 19 August 2017
ER -