Factorization bandits for interactive recommendation

Huazheng Wang, Qingyun Wu, Hongning Wang

Research output: Contribution to conference › Paper › peer-review

76 Scopus citations


We perform online interactive recommendation via a factorization-based bandit algorithm. Low-rank matrix completion is performed over an incrementally constructed user-item preference matrix, where an upper confidence bound based item selection strategy is developed to balance the exploit/explore trade-off during online learning. Observable contextual features and dependency among users (e.g., social influence) are leveraged to improve the algorithm's convergence rate and help conquer cold-start in recommendation. A high-probability sublinear upper regret bound is proved for the developed algorithm, where considerable regret reduction is achieved on both the user and item sides. Extensive experiments on both simulations and large-scale real-world datasets confirm the advantages of the proposed algorithm over several state-of-the-art factorization-based and bandit-based collaborative filtering methods.
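The abstract describes coupling matrix factorization with an upper confidence bound (UCB) item-selection rule. The following is a minimal, hypothetical sketch of the UCB scoring idea only, in the style of a LinUCB-type ridge-regression update with fixed item factors; it is not the authors' full factorization bandit, which also updates the item factors and exploits user dependencies. All parameter values (latent dimension, exploration weight, noise level) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 5          # latent dimension (assumed for illustration)
n_items = 20   # size of the candidate item pool
alpha = 0.5    # exploration weight (hypothetical value)
lam = 1.0      # ridge regularization

# Hidden ground truth used only to simulate feedback.
true_user = rng.normal(size=d)
item_factors = rng.normal(size=(n_items, d))  # held fixed in this sketch

# Online ridge-regression statistics for the user's latent vector.
A = lam * np.eye(d)   # design-matrix accumulator
b = np.zeros(d)       # reward-weighted feature accumulator

for t in range(200):
    A_inv = np.linalg.inv(A)
    theta = A_inv @ b  # current estimate of the user factor
    # UCB score per item: estimated reward + confidence width.
    widths = np.sqrt(np.einsum('id,dk,ik->i', item_factors, A_inv, item_factors))
    scores = item_factors @ theta + alpha * widths
    a = int(np.argmax(scores))
    # Observe a noisy reward (e.g., rating) for the selected item.
    r = item_factors[a] @ true_user + 0.1 * rng.normal()
    # Rank-one update of the sufficient statistics.
    A += np.outer(item_factors[a], item_factors[a])
    b += r * item_factors[a]

theta = np.linalg.inv(A) @ b  # final user-factor estimate
```

The confidence width shrinks for directions that have been observed often, so the selection rule naturally shifts from exploration to exploitation as the user's preference estimate sharpens.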

Original language: English (US)
Number of pages: 8
State: Published - 2017
Event: 31st AAAI Conference on Artificial Intelligence, AAAI 2017 - San Francisco, United States
Duration: Feb 4, 2017 to Feb 10, 2017


Other: 31st AAAI Conference on Artificial Intelligence, AAAI 2017
Country/Territory: United States
City: San Francisco

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence

