Stochastic Compositional Minimax Optimization with Provable Convergence Guarantees

Yuyang Deng, Fuli Qiao, Mehrdad Mahdavi

Research output: Contribution to journal › Conference article › peer-review

Abstract

Stochastic compositional minimax problems are prevalent in machine learning, yet convergence results for this class of problems remain limited. In this paper, we propose a formal definition of the stochastic compositional minimax problem, which involves optimizing a minimax loss with a compositional structure in the primal variable, the dual variable, or both. We introduce a simple yet effective algorithm, stochastically Corrected stOchastic gradient Descent Ascent (CODA), a primal-dual algorithm with compositional correction steps, and establish its convergence rate in the aforementioned three settings. We also propose a variance-reduced variant, CODA+, which achieves the best-known rate on nonconvex-strongly-concave and nonconvex-concave compositional minimax problems. This work initiates the theoretical study of the stochastic compositional minimax problem in various settings and may inform modern machine learning scenarios such as domain adaptation and robust model-agnostic meta-learning.
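
The abstract describes CODA only at a high level. As a rough illustration of what a primal-dual algorithm with compositional correction steps can look like, the sketch below implements a generic corrected stochastic gradient descent ascent loop for the primal-compositional case, min_x max_y f(g(x), y) with g(x) = E[g_xi(x)] and f(u, y) = E[f_eta(u, y)]. The sampler interfaces, step sizes, and the moving-average correction are assumptions made for illustration; this is not the paper's exact CODA update.

```python
# Minimal sketch of a compositionally corrected SGDA loop (assumed form, not the
# authors' CODA algorithm). Composition is in the primal variable only:
#     min_x max_y f(g(x), y),  g(x) = E[g_xi(x)],  f(u, y) = E[f_eta(u, y)].
# The "correction" keeps a running estimate u of the inner value g(x), so the
# primal gradient is chained through u rather than a single noisy sample of g(x).

import numpy as np

def corrected_sgda(x, y, sample_inner, sample_outer, steps=1000,
                   eta_x=1e-2, eta_y=1e-2, beta=0.9):
    """
    sample_inner(x)    -> (g_val, g_jac): stochastic inner value g_xi(x) and its Jacobian.
    sample_outer(u, y) -> (grad_u, grad_y): stochastic gradients of f_eta at (u, y).
    beta               -> mixing weight of the moving-average correction step.
    All names and signatures here are hypothetical, chosen only for illustration.
    """
    g_val, _ = sample_inner(x)
    u = g_val                                  # running estimate of g(x)
    for _ in range(steps):
        g_val, g_jac = sample_inner(x)
        # Compositional correction: blend the tracked estimate with a fresh sample.
        u = (1.0 - beta) * u + beta * g_val
        grad_u, grad_y = sample_outer(u, y)
        # Primal descent step, chaining the outer gradient through the tracked u.
        x = x - eta_x * (g_jac.T @ grad_u)
        # Dual ascent step.
        y = y + eta_y * grad_y
    return x, y
```

The point of tracking u is that plugging a single stochastic sample of g(x) into the outer gradient gives a biased estimate of the true compositional gradient; averaging the inner value over iterations reduces this bias, which is the role the correction step plays in this sketch.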

Original language: English (US)
Pages (from-to): 3835-3843
Number of pages: 9
Journal: Proceedings of Machine Learning Research
Volume: 258
State: Published - 2025
Event: 28th International Conference on Artificial Intelligence and Statistics, AISTATS 2025 - Mai Khao, Thailand
Duration: May 3 2025 – May 5 2025

All Science Journal Classification (ASJC) codes

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence
