TY - JOUR
T1 - Decentralized dictionary learning over time-varying digraphs
AU - Daneshmand, Amir
AU - Sun, Ying
AU - Scutari, Gesualdo
AU - Facchinei, Francisco
AU - Sadler, Brian M.
N1 - Publisher Copyright:
© 2019 Amir Daneshmand, Ying Sun, Gesualdo Scutari, Francisco Facchinei, Brian M. Sadler.
PY - 2019/9/1
Y1 - 2019/9/1
N2 - This paper studies Dictionary Learning problems wherein the learning task is distributed over a multi-agent network, modeled as a time-varying directed graph. This formulation is relevant, for instance, in Big Data scenarios where massive amounts of data are collected/stored in different locations (e.g., sensors, clouds) and aggregating and/or processing all data in a fusion center might be inefficient or infeasible, due to resource limitations, communication overhead, or privacy issues. We develop a unified decentralized algorithmic framework for this class of nonconvex problems, which is proved to converge to stationary solutions at a sublinear rate. The new method hinges on Successive Convex Approximation techniques, coupled with a decentralized tracking mechanism that locally estimates the gradient of the smooth part of the sum-utility. To the best of our knowledge, this is the first provably convergent decentralized algorithm for Dictionary Learning and, more generally, bi-convex problems over (time-varying) (di)graphs.
AB - This paper studies Dictionary Learning problems wherein the learning task is distributed over a multi-agent network, modeled as a time-varying directed graph. This formulation is relevant, for instance, in Big Data scenarios where massive amounts of data are collected/stored in different locations (e.g., sensors, clouds) and aggregating and/or processing all data in a fusion center might be inefficient or infeasible, due to resource limitations, communication overhead, or privacy issues. We develop a unified decentralized algorithmic framework for this class of nonconvex problems, which is proved to converge to stationary solutions at a sublinear rate. The new method hinges on Successive Convex Approximation techniques, coupled with a decentralized tracking mechanism that locally estimates the gradient of the smooth part of the sum-utility. To the best of our knowledge, this is the first provably convergent decentralized algorithm for Dictionary Learning and, more generally, bi-convex problems over (time-varying) (di)graphs.
UR - http://www.scopus.com/inward/record.url?scp=85077516652&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85077516652&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:85077516652
SN - 1532-4435
VL - 20
JO - Journal of Machine Learning Research
JF - Journal of Machine Learning Research
ER -