Abstract
In this article, we consider the misspecified optimization problem of minimizing a convex function $f(x;\theta^*)$ in $x$ over a conic constraint set represented by $h(x;\theta^*) \in \mathcal{K}$, where $\theta^*$ is an unknown (or misspecified) vector of parameters, $\mathcal{K}$ is a closed convex cone, and $h$ is affine in $x$. Suppose that $\theta^*$ is unavailable but may be learnt by a separate process that generates a sequence of estimators $\theta_k$, each of which is an increasingly accurate approximation of $\theta^*$. We develop a first-order inexact augmented Lagrangian (AL) scheme for computing an optimal solution $x^*$ corresponding to $\theta^*$ while simultaneously learning $\theta^*$. In particular, we derive rate statements for such schemes when the penalty parameter sequence is either constant or increasing, and we derive bounds on the overall complexity in terms of proximal gradient steps when AL subproblems are inexactly solved via an accelerated proximal gradient scheme. Numerical results for a portfolio optimization problem with a misspecified covariance matrix suggest that these schemes perform well in practice, while naive sequential schemes may perform poorly in comparison.
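To make the coupled structure concrete, the following is a minimal Python sketch (an illustration under stated assumptions, not the authors' algorithm) of a joint learning-and-optimization loop on a toy minimum-variance portfolio instance: the budget constraint $Ax = b$ plays the role of $h(x;\theta^*) \in \mathcal{K}$ with $\mathcal{K} = \{0\}$, the sample covariance $\Sigma_k$ plays the role of $\theta_k$, and the penalty parameter is held constant. For brevity, the AL subproblems are inexactly solved with plain projected-gradient steps rather than the accelerated proximal gradient scheme analyzed in the paper; all names, batch sizes, and constants below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical problem data: minimum-variance portfolio ------------------
n = 10
L_true = rng.standard_normal((n, n)) / np.sqrt(n)
Sigma_true = L_true @ L_true.T + 0.1 * np.eye(n)   # unknown theta*
A = np.ones((1, n)); b = np.ones(1)                # budget constraint: sum(x) = 1

def sample_cov(k, batch=50):
    """Estimator theta_k: sample covariance from k*batch return observations."""
    X = rng.multivariate_normal(np.zeros(n), Sigma_true, size=k * batch)
    return np.cov(X, rowvar=False)

def prox_nonneg(x):
    """Prox of the indicator of the nonnegative orthant (projection)."""
    return np.maximum(x, 0.0)

def inexact_al(K_outer=30, inner=25, rho=10.0):
    x = np.full(n, 1.0 / n)   # feasible start
    lam = np.zeros(1)         # multiplier for Ax = b
    for k in range(1, K_outer + 1):
        Sigma_k = sample_cov(k)   # learning step: increasingly accurate theta_k
        # Inexactly minimize the AL in x:
        #   x' Sigma_k x + lam'(Ax - b) + (rho/2)||Ax - b||^2  over  x >= 0.
        Lip = 2 * np.linalg.norm(Sigma_k, 2) + rho * np.linalg.norm(A, 2) ** 2
        step = 1.0 / Lip
        for _ in range(inner):
            g = 2 * Sigma_k @ x + A.T @ (lam + rho * (A @ x - b))
            x = prox_nonneg(x - step * g)
        lam = lam + rho * (A @ x - b)   # multiplier update
    return x

x_hat = inexact_al()
print("weights sum:", x_hat.sum(), " min weight:", x_hat.min())
```

In the paper's terminology this corresponds to the constant-penalty variant; the increasing-penalty variant would grow `rho` across outer iterations, trading better feasibility per outer step against more expensive (worse-conditioned) subproblems.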
Original language | English (US) |
---|---|
Pages (from-to) | 3981-3996 |
Number of pages | 16 |
Journal | IEEE Transactions on Automatic Control |
Volume | 67 |
Issue number | 8 |
State | Published - Aug 1 2022 |
All Science Journal Classification (ASJC) codes
- Control and Systems Engineering
- Computer Science Applications
- Electrical and Electronic Engineering