TY - GEN
T1 - A Unified Algorithmic Framework for Distributed Composite Optimization
AU - Xu, Jinming
AU - Tian, Ye
AU - Sun, Ying
AU - Scutari, Gesualdo
N1 - Funding Information:
‡School of Industrial Engineering, Purdue University, West Lafayette, IN, USA. Emails: {tian110, sun578, gscutari}@purdue.edu. This work has been supported in part by the Fundamental Research Funds for the Central Universities (Zhejiang University NGICS Platform and K20200237); in part by the USA NSF Grants CIF 1632599 and CIF 1564044; and in part by the ARO Grant W911NF1810238.
Publisher Copyright:
© 2020 IEEE.
PY - 2020/12/14
Y1 - 2020/12/14
N2 - We study distributed composite optimization over networks: agents minimize a sum of smooth (strongly) convex functions (the agents' sum-utility) plus a nonsmooth (extended-valued) convex one. We propose a general unified algorithmic framework for such a class of problems, whose unified convergence analysis leverages the theory of operator splitting. Distinguishing features of our scheme are: (i) When the agents' functions are strongly convex, the algorithm converges at a linear rate, whose dependence on the agents' functions and network topology is decoupled, matching the typical rates of centralized optimization; the rate expression improves on existing results; (ii) When the objective function is convex (but not strongly convex), a similar separation as in (i) is established for the coefficient of the proved sublinear rate; and (iii) A by-product of our analysis is a tuning recommendation for several existing (non-accelerated) distributed algorithms yielding the fastest provable (worst-case) convergence rate. This is the first time that a general distributed algorithmic framework applicable to composite optimization enjoys all such properties.
AB - We study distributed composite optimization over networks: agents minimize a sum of smooth (strongly) convex functions (the agents' sum-utility) plus a nonsmooth (extended-valued) convex one. We propose a general unified algorithmic framework for such a class of problems, whose unified convergence analysis leverages the theory of operator splitting. Distinguishing features of our scheme are: (i) When the agents' functions are strongly convex, the algorithm converges at a linear rate, whose dependence on the agents' functions and network topology is decoupled, matching the typical rates of centralized optimization; the rate expression improves on existing results; (ii) When the objective function is convex (but not strongly convex), a similar separation as in (i) is established for the coefficient of the proved sublinear rate; and (iii) A by-product of our analysis is a tuning recommendation for several existing (non-accelerated) distributed algorithms yielding the fastest provable (worst-case) convergence rate. This is the first time that a general distributed algorithmic framework applicable to composite optimization enjoys all such properties.
UR - http://www.scopus.com/inward/record.url?scp=85099883457&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85099883457&partnerID=8YFLogxK
U2 - 10.1109/CDC42340.2020.9303791
DO - 10.1109/CDC42340.2020.9303791
M3 - Conference contribution
AN - SCOPUS:85099883457
T3 - Proceedings of the IEEE Conference on Decision and Control
SP - 2309
EP - 2316
BT - 2020 59th IEEE Conference on Decision and Control, CDC 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 59th IEEE Conference on Decision and Control, CDC 2020
Y2 - 14 December 2020 through 18 December 2020
ER -