Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming

Hongcheng Liu, Xue Wang, Tao Yao, Runze Li, Yinyu Ye

Research output: Contribution to journal › Article › peer-review


Abstract

The theory of the traditional sample average approximation (SAA) scheme for stochastic programming (SP) dictates that the number of samples should be polynomial in the number of problem dimensions in order to ensure proper optimization accuracy. In this paper, we study a modification of the SAA in the scenario where the global minimizer is either sparse or can be approximated by a sparse solution. By making use of a regularization penalty referred to as the folded concave penalty (FCP), we show that, if an FCP-regularized SAA formulation is solved locally, then the number of samples required to approximate the global solution of a convex SP can be significantly reduced: the sample size need only be poly-logarithmic in the number of dimensions. The efficacy of the FCP regularizer for nonconvex SPs is also discussed. As an immediate implication of our result, a flexible class of folded-concave-penalized sparse M-estimators in high-dimensional statistical learning may achieve sound performance even when the problem dimension cannot be upper-bounded by any polynomial function of the sample size.
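
To make the construction concrete, the following is a minimal sketch of the kind of formulation the abstract refers to, assuming the standard SAA setup and using the MCP (minimax concave penalty) as a representative folded concave penalty; the paper's exact penalty class, constants, and constraints may differ. For a convex SP $\min_{x \in X \subseteq \mathbb{R}^p} \mathbb{E}[F(x,\xi)]$ with i.i.d. samples $\xi_1,\dots,\xi_n$, the FCP-regularized SAA problem takes the form

$$\min_{x \in X} \; \frac{1}{n}\sum_{i=1}^{n} F(x,\xi_i) + \sum_{j=1}^{p} P_\lambda(|x_j|),$$

where $P_\lambda$ is a folded concave penalty such as the MCP,

$$P_\lambda(t) = \begin{cases} \lambda t - \dfrac{t^2}{2a}, & 0 \le t \le a\lambda,\\[4pt] \dfrac{a\lambda^2}{2}, & t > a\lambda, \end{cases}$$

with tuning parameters $\lambda > 0$ and $a > 1$. The abstract's claim is that a local solution of this regularized problem approximates the global solution of the SP with a sample size $n$ that grows only poly-logarithmically in the dimension $p$, rather than polynomially.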

Original language: English (US)
Pages (from-to): 69-108
Number of pages: 40
Journal: Mathematical Programming
Volume: 178
Issue number: 1-2
DOIs
State: Published - Nov 1 2019

All Science Journal Classification (ASJC) codes

  • Software
  • Mathematics (all)
