Online optimization with gradual variations

Chao-Kai Chiang, Tianbao Yang, Chia-Jung Lee, Mehrdad Mahdavi, Chi-Jen Lu, Rong Jin, Shenghuo Zhu

Research output: Contribution to journal › Conference article › peer-review


Abstract

We study the online convex optimization problem, in which an online algorithm has to make repeated decisions with convex loss functions and hopes to achieve a small regret. We consider a natural restriction of this problem in which the loss functions have a small deviation, measured by the sum of the distances between every two consecutive loss functions, according to some distance metrics. We show that for linear and general smooth convex loss functions, an online algorithm modified from the gradient descent algorithm can achieve a regret which scales only as the square root of the deviation. For the closely related problem of prediction with expert advice, we show that an online algorithm modified from the multiplicative update algorithm can also achieve a similar regret bound for a different measure of deviation. Finally, for loss functions which are strictly convex, we show that an online algorithm modified from the online Newton step algorithm can achieve a regret which is only logarithmic in terms of the deviation, and as an application, we can also obtain such a logarithmic regret for the portfolio management problem.
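To make the flavor of the "modified gradient descent" result concrete, the sketch below shows one common way to exploit gradual variation: use the previous round's gradient as a hint before committing to a decision, then correct after the true gradient is revealed. This is a minimal illustration in that spirit, not the paper's exact algorithm; the function names, the step size, the Euclidean-ball domain, and the drifting quadratic losses in the usage example are assumptions made here for the sketch.

```python
# Minimal sketch (assumed, not the paper's exact method) of an online gradient
# scheme that benefits when consecutive loss functions vary gradually: each
# round it first "predicts" using the previous gradient as a hint, then
# corrects with the observed gradient.
import numpy as np


def project_ball(x, radius=1.0):
    """Project onto the Euclidean ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)


def optimistic_ogd(grad_fns, dim, eta=0.1, radius=1.0):
    """Play one round per entry of grad_fns; grad_fns[t](x) is the gradient of loss t at x."""
    z = np.zeros(dim)        # secondary iterate
    hint = np.zeros(dim)     # previous round's gradient, used as a hint
    plays = []
    for grad in grad_fns:
        x = project_ball(z - eta * hint, radius)   # predict with the hint
        g = grad(x)                                # observe the true gradient
        z = project_ball(z - eta * g, radius)      # correct the secondary iterate
        hint = g
        plays.append(x)
    return plays


if __name__ == "__main__":
    # Illustrative drifting quadratic losses f_t(x) = ||x - c_t||^2,
    # where the centers c_t move slowly (small gradual variation).
    rng = np.random.default_rng(0)
    dim, T = 5, 200
    centers = np.cumsum(0.01 * rng.standard_normal((T, dim)), axis=0)
    grads = [lambda x, c=c: 2.0 * (x - c) for c in centers]
    plays = optimistic_ogd(grads, dim)
    losses = [np.sum((x - c) ** 2) for x, c in zip(plays, centers)]
    print("average loss:", np.mean(losses))
```

When the losses change slowly, the hint is close to the true gradient, so the prediction step loses little; this is the intuition behind regret bounds that scale with the deviation between consecutive losses rather than with the number of rounds.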

Original language: English (US)
Pages (from-to): 6.1-6.20
Journal: Journal of Machine Learning Research
Volume: 23
State: Published - 2012
Event: 25th Annual Conference on Learning Theory, COLT 2012 - Edinburgh, United Kingdom
Duration: Jun 25, 2012 - Jun 27, 2012

All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence
  • Control and Systems Engineering
  • Statistics and Probability

