TY - GEN
T1 - Distributed big-data optimization via block-iterative convexification and averaging
AU - Notarnicola, Ivano
AU - Sun, Ying
AU - Scutari, Gesualdo
AU - Notarstefano, Giuseppe
N1 - Funding Information:
The work of Notarnicola and Notarstefano has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 638992 - OPT4SMART). The work of Sun and Scutari has been supported by the U.S. National Science Foundation under Grants CIF 1632599 and CIF 1719205; and in part by the Office of Naval Research under Grant N00014-16-1-2244.
Publisher Copyright:
© 2017 IEEE.
PY - 2017/6/28
Y1 - 2017/6/28
N2 - In this paper, we study distributed big-data nonconvex optimization in multi-agent networks. We consider the (constrained) minimization of the sum of a smooth (possibly) nonconvex function, i.e., the agents' sum-utility, plus a convex (possibly) nonsmooth regularizer. Our interest is in big-data problems wherein there is a large number of variables to optimize. If treated by means of standard distributed optimization algorithms, these large-scale problems may be intractable due to the prohibitive local computation and communication burden at each node. We propose a novel distributed solution method whereby, at each iteration, agents optimize and then communicate (in an uncoordinated fashion) only a subset of their decision variables. To deal with the nonconvexity of the cost function, the novel scheme hinges on Successive Convex Approximation (SCA) techniques coupled with i) a tracking mechanism instrumental in locally estimating gradient averages; and ii) a novel block-wise consensus-based protocol to perform local block-averaging operations and gradient tracking. Asymptotic convergence to stationary solutions of the nonconvex problem is established. Finally, numerical results show the effectiveness of the proposed algorithm and highlight how the block dimension impacts the communication overhead and practical convergence speed.
AB - In this paper, we study distributed big-data nonconvex optimization in multi-agent networks. We consider the (constrained) minimization of the sum of a smooth (possibly) nonconvex function, i.e., the agents' sum-utility, plus a convex (possibly) nonsmooth regularizer. Our interest is in big-data problems wherein there is a large number of variables to optimize. If treated by means of standard distributed optimization algorithms, these large-scale problems may be intractable due to the prohibitive local computation and communication burden at each node. We propose a novel distributed solution method whereby, at each iteration, agents optimize and then communicate (in an uncoordinated fashion) only a subset of their decision variables. To deal with the nonconvexity of the cost function, the novel scheme hinges on Successive Convex Approximation (SCA) techniques coupled with i) a tracking mechanism instrumental in locally estimating gradient averages; and ii) a novel block-wise consensus-based protocol to perform local block-averaging operations and gradient tracking. Asymptotic convergence to stationary solutions of the nonconvex problem is established. Finally, numerical results show the effectiveness of the proposed algorithm and highlight how the block dimension impacts the communication overhead and practical convergence speed.
UR - http://www.scopus.com/inward/record.url?scp=85046280050&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85046280050&partnerID=8YFLogxK
U2 - 10.1109/CDC.2017.8263982
DO - 10.1109/CDC.2017.8263982
M3 - Conference contribution
AN - SCOPUS:85046280050
T3 - 2017 IEEE 56th Annual Conference on Decision and Control, CDC 2017
SP - 2281
EP - 2288
BT - 2017 IEEE 56th Annual Conference on Decision and Control, CDC 2017
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 56th IEEE Annual Conference on Decision and Control, CDC 2017
Y2 - 12 December 2017 through 15 December 2017
ER -