We consider the minimization of a convex expectation-valued objective E[f(x; θ∗; ξ)] over a closed and convex set X in a regime where θ∗ is unavailable and ξ is a suitably defined random variable. Instead, θ∗ may be obtained by solving a learning problem that requires minimizing a metric E[g(θ; η)] in θ over a closed and convex set Θ. To address the absence of efficient convergent schemes, we present a coupled stochastic approximation scheme that simultaneously solves both the computational and the learning problems. The resulting schemes are shown to possess almost-sure convergence properties in regimes where the function f is either strongly convex or merely convex. Importantly, the scheme attains the optimal rate for both strongly convex and merely convex problems, where the rate statement in the latter regime necessitates the use of averaging. In the second part of the paper, we extend these statements to a class of stochastic variational inequality problems, an object that unifies stochastic convex optimization problems and a range of stochastic equilibrium problems. Analogous almost-sure convergence statements are provided in strongly monotone and merely monotone regimes, the latter facilitated by an iterative Tikhonov regularization. Again, the schemes admit the optimal rate in strongly monotone and monotone settings, where the latter result requires the additional assumption of weak sharpness. Preliminary numerics demonstrate the performance of the prescribed schemes.
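The coupled scheme described above can be illustrated with a minimal sketch. This is not the paper's algorithm verbatim but a hedged toy instance under illustrative assumptions: scalar decision and parameter variables, quadratic choices f(x; θ; ξ) = ½(x − θ)² + ξx and g(θ; η) = ½(θ − η)², box sets X = Θ = [−10, 10], and a diminishing step size γ_k = 1/k. At each iteration, one projected stochastic-gradient step is taken on the learning problem (to estimate θ∗) and one on the computational problem (using the current estimate θ_k in place of θ∗).

```python
import numpy as np

rng = np.random.default_rng(0)
theta_star = 2.0  # unknown parameter; the learning problem recovers it

def proj(v, lo=-10.0, hi=10.0):
    """Euclidean projection onto the interval [lo, hi]."""
    return min(max(v, lo), hi)

x, theta = 0.0, 0.0          # initial iterates for both problems
for k in range(1, 20001):
    gamma = 1.0 / k          # diminishing step size

    # Learning step: sample eta with E[eta] = theta_star and take a
    # projected SGD step on g(theta; eta) = 0.5*(theta - eta)^2.
    eta = theta_star + rng.standard_normal()
    theta = proj(theta - gamma * (theta - eta))

    # Computational step: sample xi (zero-mean noise) and take a projected
    # SGD step on f(x; theta_k; xi) = 0.5*(x - theta_k)^2 + xi*x,
    # using the current estimate theta in place of the unknown theta_star.
    xi = rng.standard_normal()
    x = proj(x - gamma * ((x - theta) + xi))

print(theta, x)  # both iterates approach theta_star = 2.0
```

In this toy instance the minimizer of E[f(x; θ∗; ξ)] coincides with θ∗, so both sequences converge to 2.0; in the general setting of the paper, the point is that the x-sequence converges even though it is driven by the simultaneously evolving, not yet converged, estimate θ_k.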