Explanation systems for influence maximization algorithms

Amulya Yadav, Aida Rahmattalabi, Ece Kamar, Phebe Vayanos, Milind Tambe, Venil Loyd Noronha

Research output: Contribution to journal › Conference article › peer-review

Abstract

The field of influence maximization (IM) has made rapid advances, resulting in many sophisticated algorithms for identifying "influential" members in social networks. However, in order to engender trust in IM algorithms, the rationale behind their choice of "influential" nodes needs to be explained to their users. This is a challenging open problem that needs to be solved before these algorithms can be deployed on a large scale. This paper attempts to tackle this open problem via four major contributions: (i) we propose a general paradigm for designing explanation systems for IM algorithms by exploiting the tradeoff between explanation accuracy and interpretability; our paradigm treats IM algorithms as black boxes and is flexible enough to be used with any algorithm; (ii) we utilize this paradigm to build XplainIM, a suite of explanation systems; (iii) we illustrate the usability of XplainIM by explaining solutions of HEALER (a recent IM algorithm) among ∼200 human subjects on Amazon Mechanical Turk (AMT); and (iv) we provide an extensive evaluation of our AMT results, which shows the effectiveness of XplainIM.
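The abstract does not include code, but the black-box paradigm it describes can be illustrated with a minimal sketch: fit a small, interpretable surrogate model to the node choices of an arbitrary IM algorithm, using the surrogate's size to trade accuracy against interpretability. Everything below is an assumption for illustration; the degree-based greedy stand-in, the node features, and the decision-tree surrogate are not the paper's actual XplainIM or HEALER implementations.

```python
# Hypothetical sketch of the black-box surrogate idea from the abstract.
# The "black box" here is a simple degree-based greedy seed selector,
# standing in for any IM algorithm (e.g., HEALER).
import networkx as nx
from sklearn.tree import DecisionTreeClassifier, export_text

def black_box_im(graph, k):
    """Stand-in IM algorithm, treated as a black box: pick the k
    highest-degree nodes as the seed set."""
    return set(sorted(graph.nodes, key=graph.degree, reverse=True)[:k])

def node_features(graph):
    """Interpretable per-node features the explanation is phrased in
    (assumed feature set, not the paper's)."""
    deg = dict(graph.degree())
    btw = nx.betweenness_centrality(graph)
    clu = nx.clustering(graph)
    return [[deg[v], btw[v], clu[v]] for v in graph.nodes]

graph = nx.karate_club_graph()
seeds = black_box_im(graph, k=4)            # black-box seed choices
X = node_features(graph)
y = [int(v in seeds) for v in graph.nodes]  # label: chosen vs. not chosen

# A shallower tree is easier for users to read but mimics the black box
# less faithfully -- the accuracy/interpretability tradeoff in the abstract.
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, y)
print("fidelity to black box:", surrogate.score(X, y))
print(export_text(surrogate,
                  feature_names=["degree", "betweenness", "clustering"]))
```

Because the surrogate only consumes the black box's inputs and outputs, the same sketch applies unchanged to any IM algorithm, which is the flexibility the paradigm claims.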

Original language: English (US)
Pages (from-to): 8-19
Number of pages: 12
Journal: CEUR Workshop Proceedings
Volume: 1893
State: Published - 2017
Event: 3rd International Workshop on Social Influence Analysis, SocInf 2017 - Melbourne, Australia
Duration: Aug 19 2017 → …

All Science Journal Classification (ASJC) codes

  • General Computer Science
