TY - JOUR
T1 - Approximate kernel PCA
T2 - Computational versus statistical trade-off
AU - Sriperumbudur, Bharath K.
AU - Sterge, Nicholas
N1 - Funding Information:
Funding. BKS is supported by National Science Foundation (NSF) award DMS-1713011 and CAREER award DMS-1945396.
Publisher Copyright:
© Institute of Mathematical Statistics, 2022.
PY - 2022/10
Y1 - 2022/10
N2 - Kernel methods are powerful learning methodologies that enable nonlinear data analysis. Despite their popularity, they suffer from poor scalability in big data scenarios. Various approximation methods, including random feature approximation, have been proposed to alleviate the problem. However, the statistical consistency of most of these approximate kernel methods is not well understood, except for kernel ridge regression, for which the random feature approximation has been shown to be not only computationally efficient but also statistically consistent with a minimax optimal rate of convergence. In this paper, we investigate the efficacy of random feature approximation in the context of kernel principal component analysis (KPCA) by studying the trade-off between the computational and statistical behaviors of approximate KPCA. We show that approximate KPCA is both computationally and statistically efficient compared to KPCA in terms of the error associated with reconstructing a kernel function based on its projection onto the corresponding eigenspaces. The analysis hinges on Bernstein-type inequalities for the operator and Hilbert–Schmidt norms of a self-adjoint Hilbert–Schmidt operator-valued U-statistic, which are of independent interest.
AB - Kernel methods are powerful learning methodologies that enable nonlinear data analysis. Despite their popularity, they suffer from poor scalability in big data scenarios. Various approximation methods, including random feature approximation, have been proposed to alleviate the problem. However, the statistical consistency of most of these approximate kernel methods is not well understood, except for kernel ridge regression, for which the random feature approximation has been shown to be not only computationally efficient but also statistically consistent with a minimax optimal rate of convergence. In this paper, we investigate the efficacy of random feature approximation in the context of kernel principal component analysis (KPCA) by studying the trade-off between the computational and statistical behaviors of approximate KPCA. We show that approximate KPCA is both computationally and statistically efficient compared to KPCA in terms of the error associated with reconstructing a kernel function based on its projection onto the corresponding eigenspaces. The analysis hinges on Bernstein-type inequalities for the operator and Hilbert–Schmidt norms of a self-adjoint Hilbert–Schmidt operator-valued U-statistic, which are of independent interest.
UR - http://www.scopus.com/inward/record.url?scp=85144862222&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85144862222&partnerID=8YFLogxK
U2 - 10.1214/22-AOS2204
DO - 10.1214/22-AOS2204
M3 - Article
AN - SCOPUS:85144862222
SN - 0090-5364
VL - 50
SP - 2713
EP - 2736
JO - Annals of Statistics
JF - Annals of Statistics
IS - 5
ER -
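
To make the abstract's notion of "random feature approximation in the context of KPCA" concrete, the following is a minimal NumPy sketch of the general idea: approximate a Gaussian kernel with random Fourier features and perform PCA in the resulting finite-dimensional feature space, avoiding the n x n kernel eigendecomposition. All names and parameters (random_fourier_features, approximate_kpca, num_features, gamma) are illustrative assumptions, not the authors' implementation or notation from the paper.

import numpy as np

def random_fourier_features(X, num_features, gamma, rng):
    # Random Fourier features (Rahimi & Recht style) approximating the
    # Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2).
    d = X.shape[1]
    # Frequencies are drawn from the kernel's spectral density: N(0, 2*gamma*I).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

def approximate_kpca(X, num_features=200, num_components=5, gamma=1.0, seed=0):
    # Approximate KPCA sketch: eigendecompose the m x m feature covariance
    # (m = num_features) instead of the n x n kernel matrix.
    rng = np.random.default_rng(seed)
    Z = random_fourier_features(X, num_features, gamma, rng)
    Zc = Z - Z.mean(axis=0)                      # centre in feature space
    C = Zc.T @ Zc / Zc.shape[0]                  # m x m covariance, m << n
    eigvals, eigvecs = np.linalg.eigh(C)         # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:num_components]
    components = eigvecs[:, order]               # top approximate eigenfunctions
    scores = Zc @ components                     # principal scores of the data
    return scores, components, eigvals[order]

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(1000, 10))
    scores, comps, vals = approximate_kpca(X)
    print(scores.shape, vals)

The computational saving is the point of the trade-off studied in the paper: the cost is driven by the number of random features m rather than the sample size n, and the paper's analysis quantifies how small m can be while retaining statistical accuracy of the reconstruction error.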