TY - CHAP
T1 - Parallel and distributed successive convex approximation methods for big-data optimization
AU - Scutari, Gesualdo
AU - Sun, Ying
N1 - Publisher Copyright:
© Springer Nature Switzerland AG 2018.
PY - 2018
Y1 - 2018
N2 - Recent years have witnessed a surge of interest in parallel and distributed optimization methods for large-scale systems. In particular, nonconvex large-scale optimization problems have found a wide range of applications in several engineering fields. The design and analysis of such complex, large-scale systems pose several challenges and call for the development of new optimization models and algorithms. First, many of the aforementioned applications lead to huge-scale optimization problems, often referred to as big-data problems. This calls for the development of solution methods that operate in parallel, exploiting hierarchical computational architectures. Second, many networked systems are spatially (or virtually) distributed. Due to the size of such networks and, often, to proprietary regulations, these systems do not possess a single central coordinator or access point able to solve the entire optimization problem alone. In this setting, the goal is to develop distributed solution methods that operate seamlessly in-network. Third, many formulations of interest are nonconvex, with nonconvex objective functions and/or constraints. Except for very special cases, computing the globally optimal solution of nonconvex problems can be computationally prohibitive in practical applications. The desideratum is to design (parallel/distributed) solution methods that are easy to implement (in the sense that the computations performed by the workers are inexpensive), with provable convergence to stationary solutions (e.g., local optima) of the nonconvex problem under consideration. In this regard, a powerful and general tool is offered by the so-called Successive Convex Approximation (SCA) techniques: as a proxy for the nonconvex problem, a sequence of "more tractable" (possibly convex) subproblems is solved, wherein the original nonconvex functions are replaced by properly chosen "simpler" surrogates.
In this contribution, we put forth a general, unified algorithmic framework, based on Successive Convex Approximation techniques, for the parallel and distributed solution of a general class of nonconvex constrained (non-separable, networked) problems. The presented framework unifies and generalizes several existing SCA methods, making them appealing for parallel/distributed implementation while offering a flexible choice of function approximants, step-size schedules, and control of the computation/communication trade-off. This contribution is organized according to the lectures that one of the authors delivered at the CIME Summer School on Centralized and Distributed Multi-agent Optimization Models and Algorithms, held in Cetraro, Italy, June 23–27, 2014. These lectures are: I) Successive Convex Approximation Methods: Basics; II) Parallel Successive Convex Approximation Methods; and III) Distributed Successive Convex Approximation Methods.
AB - Recent years have witnessed a surge of interest in parallel and distributed optimization methods for large-scale systems. In particular, nonconvex large-scale optimization problems have found a wide range of applications in several engineering fields. The design and analysis of such complex, large-scale systems pose several challenges and call for the development of new optimization models and algorithms. First, many of the aforementioned applications lead to huge-scale optimization problems, often referred to as big-data problems. This calls for the development of solution methods that operate in parallel, exploiting hierarchical computational architectures. Second, many networked systems are spatially (or virtually) distributed. Due to the size of such networks and, often, to proprietary regulations, these systems do not possess a single central coordinator or access point able to solve the entire optimization problem alone. In this setting, the goal is to develop distributed solution methods that operate seamlessly in-network. Third, many formulations of interest are nonconvex, with nonconvex objective functions and/or constraints. Except for very special cases, computing the globally optimal solution of nonconvex problems can be computationally prohibitive in practical applications. The desideratum is to design (parallel/distributed) solution methods that are easy to implement (in the sense that the computations performed by the workers are inexpensive), with provable convergence to stationary solutions (e.g., local optima) of the nonconvex problem under consideration. In this regard, a powerful and general tool is offered by the so-called Successive Convex Approximation (SCA) techniques: as a proxy for the nonconvex problem, a sequence of "more tractable" (possibly convex) subproblems is solved, wherein the original nonconvex functions are replaced by properly chosen "simpler" surrogates.
In this contribution, we put forth a general, unified algorithmic framework, based on Successive Convex Approximation techniques, for the parallel and distributed solution of a general class of nonconvex constrained (non-separable, networked) problems. The presented framework unifies and generalizes several existing SCA methods, making them appealing for parallel/distributed implementation while offering a flexible choice of function approximants, step-size schedules, and control of the computation/communication trade-off. This contribution is organized according to the lectures that one of the authors delivered at the CIME Summer School on Centralized and Distributed Multi-agent Optimization Models and Algorithms, held in Cetraro, Italy, June 23–27, 2014. These lectures are: I) Successive Convex Approximation Methods: Basics; II) Parallel Successive Convex Approximation Methods; and III) Distributed Successive Convex Approximation Methods.
UR - http://www.scopus.com/inward/record.url?scp=85056582940&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85056582940&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-97142-1_3
DO - 10.1007/978-3-319-97142-1_3
M3 - Chapter
AN - SCOPUS:85056582940
T3 - Lecture Notes in Mathematics
SP - 141
EP - 308
BT - Lecture Notes in Mathematics
PB - Springer Verlag
ER -