TY - JOUR
T1 - Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming
AU - Liu, Hongcheng
AU - Wang, Xue
AU - Yao, Tao
AU - Li, Runze
AU - Ye, Yinyu
N1 - Funding Information:
The authors are indebted to the referees and the associate editor for their valuable comments, which have significantly improved the paper. This research was partially supported by National Science Foundation Grant Nos. CMMI 1300638 and DMS 1512422, National Institute on Drug Abuse (NIDA) Grant Nos. P50 DA039838 and P50 DA036107, National Library of Medicine, T32 LM012415, National Institute of Allergy and Infectious Diseases, U19AI089672, Institute of CyberScience Seed Grant, Grace Woodward Grants for Collaborative Research in Engineering and Medicine, the Pennsylvania State University. This work was also partially supported by National Natural Science Foundation of China Grant Nos. 71528005 and 11690015. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NSF, the NIDA, the NIH, or the NNSFC. Part of the research was done when Liu was working as a postdoctoral research fellow at the Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA.
Publisher Copyright:
© 2018, Springer-Verlag GmbH Germany, part of Springer Nature and Mathematical Optimization Society.
PY - 2019/11/1
Y1 - 2019/11/1
N2 - The theory of the traditional sample average approximation (SAA) scheme for stochastic programming (SP) dictates that the number of samples should be polynomial in the number of problem dimensions to ensure proper optimization accuracy. In this paper, we study a modification to the SAA in the scenario where the global minimizer is either sparse or can be approximated by a sparse solution. By making use of a regularization penalty referred to as the folded concave penalty (FCP), we show that, if an FCP-regularized SAA formulation is solved locally, then the required number of samples can be significantly reduced in approximating the global solution of a convex SP: the sample size is only required to be poly-logarithmic in the number of dimensions. The efficacy of the FCP regularizer for nonconvex SPs is also discussed. As an immediate implication of our result, a flexible class of folded concave penalized sparse M-estimators in high-dimensional statistical learning may yield sound performance even when the problem dimension cannot be upper-bounded by any polynomial function of the sample size.
AB - The theory of the traditional sample average approximation (SAA) scheme for stochastic programming (SP) dictates that the number of samples should be polynomial in the number of problem dimensions to ensure proper optimization accuracy. In this paper, we study a modification to the SAA in the scenario where the global minimizer is either sparse or can be approximated by a sparse solution. By making use of a regularization penalty referred to as the folded concave penalty (FCP), we show that, if an FCP-regularized SAA formulation is solved locally, then the required number of samples can be significantly reduced in approximating the global solution of a convex SP: the sample size is only required to be poly-logarithmic in the number of dimensions. The efficacy of the FCP regularizer for nonconvex SPs is also discussed. As an immediate implication of our result, a flexible class of folded concave penalized sparse M-estimators in high-dimensional statistical learning may yield sound performance even when the problem dimension cannot be upper-bounded by any polynomial function of the sample size.
UR - http://www.scopus.com/inward/record.url?scp=85046443048&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85046443048&partnerID=8YFLogxK
U2 - 10.1007/s10107-018-1278-0
DO - 10.1007/s10107-018-1278-0
M3 - Article
C2 - 31680704
AN - SCOPUS:85046443048
SN - 0025-5610
VL - 178
SP - 69
EP - 108
JO - Mathematical Programming
JF - Mathematical Programming
IS - 1-2
ER -