TY - JOUR
T1 - Accelerated Primal-Dual Algorithms for Distributed Smooth Convex Optimization over Networks
AU - Xu, Jinming
AU - Tian, Ye
AU - Sun, Ying
AU - Scutari, Gesualdo
N1 - Funding Information:
This work has been supported by the following grants: NSF of USA under Grants CIF 1719205 and CMMI 1832688; in part by the Army Research Office under Grant W911NF1810238; and in part by NSF of China under Grants U1909207 and 61922058.
Publisher Copyright:
Copyright © 2020 by the author(s)
PY - 2020
Y1 - 2020
N2 - This paper proposes a novel family of primal-dual-based distributed algorithms for smooth, convex, multi-agent optimization over networks that use only gradient information and gossip communications. The algorithms can also employ acceleration on the computation and communications. We provide a unified analysis of their convergence rate, measured in terms of the Bregman distance associated with the saddle-point reformulation of the distributed optimization problem. When acceleration is employed, the rate is shown to be optimal, in the sense that it matches (under the proposed metric) existing complexity lower bounds for distributed algorithms applicable to such a class of problems and using only gradient information and gossip communications. Preliminary numerical results on distributed least-squares regression problems show that the proposed algorithms compare favorably with existing distributed schemes.
AB - This paper proposes a novel family of primal-dual-based distributed algorithms for smooth, convex, multi-agent optimization over networks that use only gradient information and gossip communications. The algorithms can also employ acceleration on the computation and communications. We provide a unified analysis of their convergence rate, measured in terms of the Bregman distance associated with the saddle-point reformulation of the distributed optimization problem. When acceleration is employed, the rate is shown to be optimal, in the sense that it matches (under the proposed metric) existing complexity lower bounds for distributed algorithms applicable to such a class of problems and using only gradient information and gossip communications. Preliminary numerical results on distributed least-squares regression problems show that the proposed algorithms compare favorably with existing distributed schemes.
UR - http://www.scopus.com/inward/record.url?scp=85161856681&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85161856681&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85161856681
SN - 2640-3498
VL - 108
SP - 2381
EP - 2391
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020
Y2 - 26 August 2020 through 28 August 2020
ER -