TY - JOUR
T1 - Learning theory for distribution regression
AU - Szabó, Zoltán
AU - Sriperumbudur, Bharath K.
AU - Póczos, Barnabás
AU - Gretton, Arthur
N1 - Publisher Copyright:
© 2016 Zoltán Szabó, Bharath K. Sriperumbudur, Barnabás Póczos, and Arthur Gretton.
PY - 2016/9/1
Y1 - 2016/9/1
N2 - We focus on the distribution regression problem: regressing to vector-valued outputs from probability measures. Many important machine learning and statistical tasks fit into this framework, including multi-instance learning and point estimation problems without analytical solution (such as hyperparameter or entropy estimation). Despite the large number of available heuristics in the literature, the inherent two-stage sampled nature of the problem makes the theoretical analysis quite challenging, since in practice only samples from sampled distributions are observable, and the estimates have to rely on similarities computed between sets of points. To the best of our knowledge, the only existing technique with consistency guarantees for distribution regression requires kernel density estimation as an intermediate step (which often performs poorly in practice), and the domain of the distributions to be compact Euclidean. In this paper, we study a simple, analytically computable, ridge regression-based alternative to distribution regression, where we embed the distributions to a reproducing kernel Hilbert space, and learn the regressor from the embeddings to the outputs. Our main contribution is to prove that this scheme is consistent in the two-stage sampled setup under mild conditions (on separable topological domains enriched with kernels): we present an exact computational-statistical efficiency trade-off analysis showing that our estimator is able to match the one-stage sampled minimax optimal rate (Caponnetto and De Vito, 2007; Steinwart et al., 2009). This result answers a 17-year-old open question, establishing the consistency of the classical set kernel (Haussler, 1999; Gärtner et al., 2002) in regression. We also cover consistency for more recent kernels on distributions, including those due to Christmann and Steinwart (2010).
AB - We focus on the distribution regression problem: regressing to vector-valued outputs from probability measures. Many important machine learning and statistical tasks fit into this framework, including multi-instance learning and point estimation problems without analytical solution (such as hyperparameter or entropy estimation). Despite the large number of available heuristics in the literature, the inherent two-stage sampled nature of the problem makes the theoretical analysis quite challenging, since in practice only samples from sampled distributions are observable, and the estimates have to rely on similarities computed between sets of points. To the best of our knowledge, the only existing technique with consistency guarantees for distribution regression requires kernel density estimation as an intermediate step (which often performs poorly in practice), and the domain of the distributions to be compact Euclidean. In this paper, we study a simple, analytically computable, ridge regression-based alternative to distribution regression, where we embed the distributions to a reproducing kernel Hilbert space, and learn the regressor from the embeddings to the outputs. Our main contribution is to prove that this scheme is consistent in the two-stage sampled setup under mild conditions (on separable topological domains enriched with kernels): we present an exact computational-statistical efficiency trade-off analysis showing that our estimator is able to match the one-stage sampled minimax optimal rate (Caponnetto and De Vito, 2007; Steinwart et al., 2009). This result answers a 17-year-old open question, establishing the consistency of the classical set kernel (Haussler, 1999; Gärtner et al., 2002) in regression. We also cover consistency for more recent kernels on distributions, including those due to Christmann and Steinwart (2010).
UR - http://www.scopus.com/inward/record.url?scp=84995467512&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84995467512&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:84995467512
SN - 1532-4435
VL - 17
JO - Journal of Machine Learning Research
JF - Journal of Machine Learning Research
ER -