Passive learning with target risk

Mehrdad Mahdavi, Rong Jin

Research output: Contribution to journal · Conference article · peer-review


Abstract

In this paper we consider learning in the passive setting, but with a slight modification: we assume that the target expected loss, also referred to as the target risk, is provided to the learner in advance as prior knowledge. Unlike most studies in learning theory, which only incorporate prior knowledge into the generalization bounds, we explicitly exploit the target risk in the learning process. Our analysis reveals a surprising result on the sample complexity of learning: by exploiting the target risk in the learning algorithm, we show that when the loss function is both strongly convex and smooth, the sample complexity reduces to O(log(1/ε)), an exponential improvement over the O(1/ε) sample complexity for learning with strongly convex loss functions. Furthermore, our proof is constructive and is based on a computationally efficient stochastic optimization algorithm for this setting, which demonstrates that the proposed algorithm is practically useful.
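
To make the setting concrete, the sketch below (in Python, with NumPy) shows one simple way a target risk supplied as prior knowledge can enter a stochastic optimization loop for a strongly convex and smooth loss: training stops as soon as a held-out risk estimate reaches the target. The function name, synthetic data, and step-size schedule are illustrative assumptions; this is not the procedure analyzed in the paper, only a minimal sketch of using the target risk as a stopping criterion.

```python
import numpy as np

def sgd_with_target_risk(X, y, target_risk, lam=0.1, max_epochs=200, seed=0):
    """Hypothetical sketch: SGD on an L2-regularized logistic loss (strongly
    convex and smooth), stopping once the held-out risk drops below the
    target risk provided as prior knowledge."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    split = n // 2
    X_tr, y_tr = X[:split], y[:split]
    X_val, y_val = X[split:], y[split:]
    w = np.zeros(d)

    def risk(w, X, y):
        # Regularized logistic loss: log(1 + exp(-y <w, x>)) + (lam/2)||w||^2
        margins = y * (X @ w)
        return np.mean(np.log1p(np.exp(-margins))) + 0.5 * lam * np.dot(w, w)

    t = 0
    for epoch in range(max_epochs):
        for i in rng.permutation(len(y_tr)):
            t += 1
            eta = 1.0 / (lam * t)  # standard step size for a lam-strongly convex objective
            margin = y_tr[i] * (X_tr[i] @ w)
            grad = -y_tr[i] * X_tr[i] / (1.0 + np.exp(margin)) + lam * w
            w -= eta * grad
        # Prior knowledge of the target risk acts as the stopping rule.
        if risk(w, X_val, y_val) <= target_risk:
            return w, epoch + 1
    return w, max_epochs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 5))
    w_true = rng.normal(size=5)
    y = np.sign(X @ w_true + 0.1 * rng.normal(size=2000))
    w_hat, epochs_used = sgd_with_target_risk(X, y, target_risk=0.5)
    print("epochs used:", epochs_used)
```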

Original language: English (US)
Pages (from-to): 252-269
Number of pages: 18
Journal: Journal of Machine Learning Research
Volume: 30
State: Published - 2013
Event: 26th Conference on Learning Theory, COLT 2013 - Princeton, NJ, United States
Duration: Jun 12 2013 - Jun 14 2013

All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence
  • Control and Systems Engineering
  • Statistics and Probability
