Lower and upper bounds on the generalization of stochastic exponentially concave optimization

Mehrdad Mahdavi, Lijun Zhang, Rong Jin

Research output: Contribution to journal › Conference article › peer-review


Abstract

In this paper we derive high-probability lower and upper bounds on the excess risk of stochastic optimization of exponentially concave loss functions. Exponentially concave loss functions encompass several fundamental problems in machine learning, such as the squared loss in linear regression, the logistic loss in classification, and the negative logarithm loss in portfolio management. We demonstrate an O(d log T / T) upper bound on the excess risk of the stochastic Online Newton Step algorithm, and an O(d/T) lower bound on the excess risk of any stochastic optimization method for the squared loss, indicating that the obtained upper bound is optimal up to a logarithmic factor. The analysis of the upper bound is based on recent advances in concentration inequalities for bounding self-normalized martingales, which is of interest in its own right, and the lower bound is established via a probabilistic argument that relies on an information-theoretic minimax analysis.
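For context, a loss f is α-exponentially concave when exp(-α f(w)) is concave in the decision variable w; the squared, logistic, and negative-logarithm losses named above satisfy this under standard boundedness assumptions. The Online Newton Step algorithm, whose stochastic variant the paper analyzes, is a second-order online method. The sketch below is a minimal illustration of the generic ONS update for the squared loss, assuming an unconstrained domain and placeholder constants gamma and eps; it is not the paper's exact procedure (the paper's parameter choices and the generalized projection onto a bounded domain are omitted).

```python
import numpy as np

def online_newton_step(X, y, gamma=0.5, eps=1.0):
    """Illustrative Online Newton Step (ONS) for squared loss.

    X: (T, d) array of feature vectors, y: (T,) array of targets.
    gamma and eps are placeholder constants; the theory ties them to the
    exp-concavity parameter and the domain diameter. The generalized
    projection onto a bounded feasible set is skipped for brevity.
    """
    T, d = X.shape
    w = np.zeros(d)
    A = eps * np.eye(d)        # running matrix of outer products of gradients
    w_sum = np.zeros(d)        # accumulates iterates for averaging
    for t in range(T):
        g = 2.0 * (X[t] @ w - y[t]) * X[t]              # gradient of squared loss at w
        A += np.outer(g, g)                              # rank-one update of A
        w = w - (1.0 / gamma) * np.linalg.solve(A, g)    # Newton-like step using A^{-1} g
        w_sum += w
    return w_sum / T                                     # averaged iterate

# Example usage on synthetic linear-regression data (hypothetical setup).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
w_star = rng.normal(size=5)
y = X @ w_star + 0.1 * rng.normal(size=1000)
w_hat = online_newton_step(X, y)
```

Returning the averaged iterate is the standard online-to-batch conversion; the abstract's O(d log T / T) excess-risk bound concerns a high-probability analysis of this kind of stochastic procedure.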

Original language: English (US)
Journal: Journal of Machine Learning Research
Volume: 40
Issue number: 2015
State: Published - 2015
Event: 28th Conference on Learning Theory, COLT 2015 - Paris, France
Duration: Jul 2 2015 - Jul 6 2015

All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence
  • Control and Systems Engineering
  • Statistics and Probability
