Multilevel stochastic gradient methods for nested composition optimization

Shuoguang Yang, Mengdi Wang, Ethan X. Fang

Research output: Contribution to journal › Article › peer-review

33 Scopus citations

Abstract

Stochastic gradient methods are scalable for solving large-scale optimization problems that involve empirical expectations of loss functions. Existing results mainly apply to optimization problems where the objectives are one- or two-level expectations. In this paper, we consider the multilevel composition optimization problem that involves compositions of multilevel component functions and nested expectations over a random path. This finds applications in risk-averse optimization and sequential planning. We propose a class of multilevel stochastic gradient methods that are motivated by the method of multi-timescale stochastic approximation. First, we propose a basic T-level stochastic compositional gradient algorithm. Then we develop accelerated multilevel stochastic gradient methods by using an extrapolation-interpolation scheme to take advantage of the smoothness of individual component functions. When all component functions are smooth, we show that the convergence rate improves to O(n^{-4/(7+T)}) for general objectives and O(n^{-4/(3+T)}) for strongly convex objectives. We also provide almost sure convergence and rate of convergence results for nonconvex problems. The proposed methods and theoretical results are validated using numerical experiments.
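To make the basic T-level idea concrete, below is a minimal Python sketch of a stochastic compositional gradient step in the spirit described by the abstract: running averages track the inner function values on a slower timescale, and a chain-rule gradient estimate is built from sampled Jacobians. The interface (multilevel_scgd, value_oracles, jac_oracles) and the step sizes are hypothetical choices for illustration, not the authors' code or tuning.

```python
import numpy as np

def multilevel_scgd(x0, value_oracles, jac_oracles,
                    alpha=1e-2, beta=0.1, iters=2000):
    """Sketch of a basic T-level stochastic compositional gradient method.

    Objective: F(x) = f_T( f_{T-1}( ... f_1(x) ... ) ), where
    value_oracles[i](y) returns a noisy sample of f_{i+1}(y) and
    jac_oracles[i](y) returns a noisy sample of its Jacobian at y.
    """
    T = len(value_oracles)
    x = np.asarray(x0, dtype=float).copy()

    # Running estimates y[i] of the inner values f_{i+1}(f_i(...f_1(x))),
    # initialized with one sampling sweep through the levels.
    y, inp = [], x
    for i in range(T - 1):
        y.append(value_oracles[i](inp))
        inp = y[-1]

    for _ in range(iters):
        # Slower-timescale tracking update (weight beta) for inner values.
        inp = x
        for i in range(T - 1):
            y[i] = (1.0 - beta) * y[i] + beta * value_oracles[i](inp)
            inp = y[i]

        # Chain-rule gradient estimate evaluated at the tracked values:
        # grad ≈ J_T(y[T-2]) @ ... @ J_2(y[0]) @ J_1(x).
        grad = jac_oracles[0](x)
        inputs = [x] + y
        for i in range(1, T):
            grad = jac_oracles[i](inputs[i]) @ grad

        # Faster-timescale update of the decision variable (step alpha).
        x -= alpha * np.ravel(grad)
    return x
```

As a toy usage example (again purely illustrative), a two-level objective F(x) = 0.5 * ||A x||^2 with additive sampling noise:

```python
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
value_oracles = [lambda x: A @ x + 0.01 * rng.standard_normal(3),
                 lambda y: 0.5 * float(y @ y)]
jac_oracles = [lambda x: A + 0.01 * rng.standard_normal((3, 4)),
               lambda y: (y + 0.01 * rng.standard_normal(3)).reshape(1, -1)]
x_est = multilevel_scgd(np.ones(4), value_oracles, jac_oracles)
```

The two weights alpha and beta play the roles of the fast and slow timescales in multi-timescale stochastic approximation; the accelerated variants in the paper refine this scheme via extrapolation-interpolation, which this sketch does not attempt.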

Original language: English (US)
Pages (from-to): 616-659
Number of pages: 44
Journal: SIAM Journal on Optimization
Volume: 29
Issue number: 1
DOIs
State: Published - 2019

All Science Journal Classification (ASJC) codes

  • Software
  • Theoretical Computer Science
