TY - GEN
T1 - A Unified Contraction Analysis of a Class of Distributed Algorithms for Composite Optimization
AU - Xu, Jinming
AU - Sun, Ying
AU - Tian, Ye
AU - Scutari, Gesualdo
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/12
Y1 - 2019/12
AB - We study distributed composite optimization over networks: agents minimize the sum of a smooth (strongly) convex function (the agents' sum-utility) plus a nonsmooth (extended-valued) convex one. We propose a general algorithmic framework for this class of problems and provide a unified convergence analysis leveraging the theory of operator splitting. Our results unify several approaches proposed in the literature of distributed optimization for special instances of our formulation. Distinguishing features of our scheme are: (i) when the agents' functions are strongly convex, the algorithm converges at a linear rate whose dependencies on the agents' functions and the network topology are decoupled, matching the typical rates of centralized optimization; (ii) the step-size depends only on the optimization parameters, not on the network ones; and (iii) the algorithm can adjust the ratio between the number of communications and computations to achieve the same rate as the centralized proximal gradient scheme (in terms of computations). This is the first time that a distributed algorithm applicable to composite optimization enjoys such properties.
UR - http://www.scopus.com/inward/record.url?scp=85082382120&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85082382120&partnerID=8YFLogxK
DO - 10.1109/CAMSAP45676.2019.9022451
M3 - Conference contribution
AN - SCOPUS:85082382120
T3 - 2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, CAMSAP 2019 - Proceedings
SP - 485
EP - 489
BT - 2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, CAMSAP 2019 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 8th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, CAMSAP 2019
Y2 - 15 December 2019 through 18 December 2019
ER -