Abstract
The support vector machine (SVM) is a powerful binary classification tool with high accuracy and great flexibility. It has achieved great success, but its performance can be severely impaired when many redundant covariates are included. Some efforts have been devoted to variable selection for SVMs, but asymptotic properties such as variable selection consistency are largely unknown when the number of predictors diverges to ∞. We establish a unified theory for a general class of non-convex penalized SVMs. We first prove that, in ultrahigh dimensions, the objective function of a non-convex penalized SVM admits a local minimizer with the desired oracle property. We then address the problem of non-unique local minimizers by showing that the local linear approximation algorithm is guaranteed to converge to the oracle estimator, even in the ultrahigh dimensional setting, provided an appropriate initial estimator is available. This condition on the initial estimator is shown to hold automatically when the dimension is moderately high. Numerical examples provide supportive evidence.
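For context, here is a minimal sketch of the kind of objective the abstract describes; the notation and the choice of penalty are assumptions for illustration, not taken from the paper itself. With hinge loss and a non-convex penalty $p_\lambda$ (e.g. SCAD or MCP), a penalized SVM solves

$$\min_{\beta_0,\,\boldsymbol\beta}\ \frac{1}{n}\sum_{i=1}^{n}\bigl[1 - y_i(\beta_0 + \mathbf{x}_i^{\top}\boldsymbol\beta)\bigr]_{+} \;+\; \sum_{j=1}^{p} p_\lambda\bigl(|\beta_j|\bigr).$$

The local linear approximation (LLA) algorithm mentioned in the abstract linearizes the penalty at the current iterate $\boldsymbol\beta^{(t)}$, so that each step reduces to a convex, weighted-$\ell_1$ penalized SVM:

$$(\beta_0^{(t+1)},\,\boldsymbol\beta^{(t+1)}) = \arg\min_{\beta_0,\,\boldsymbol\beta}\ \frac{1}{n}\sum_{i=1}^{n}\bigl[1 - y_i(\beta_0 + \mathbf{x}_i^{\top}\boldsymbol\beta)\bigr]_{+} \;+\; \sum_{j=1}^{p} p'_\lambda\bigl(|\beta_j^{(t)}|\bigr)\,|\beta_j|.$$

Started from a suitable initial estimator, iterating this convex surrogate is the mechanism by which the algorithm can converge to the oracle estimator.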
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 53-76 |
| Number of pages | 24 |
| Journal | Journal of the Royal Statistical Society. Series B: Statistical Methodology |
| Volume | 78 |
| Issue number | 1 |
| DOIs | |
| State | Published - 2016 |
All Science Journal Classification (ASJC) codes
- Statistics and Probability
- Statistics, Probability and Uncertainty