No-Regret Distributed Learning in Subnetwork Zero-Sum Games

Shijie Huang, Jinlong Lei, Yiguang Hong, Uday V. Shanbhag, Jie Chen

Research output: Contribution to journal › Article › peer-review

Abstract

In this article, we consider a distributed learning problem in a subnetwork zero-sum game, where agents compete across different subnetworks. These agents are connected through time-varying graphs in which each agent has its own cost function and can receive information from its neighbors. We propose a distributed mirror descent algorithm for computing a Nash equilibrium and establish a sublinear regret bound on the sequence of iterates when the graphs are uniformly strongly connected and the cost functions are convex-concave. Moreover, we prove its convergence with suitably selected diminishing step sizes for a strictly convex-concave cost function. We also consider a constant step-size variant of the algorithm and establish an asymptotic error bound between the cost function values of the running-average actions and a Nash equilibrium. In addition, we apply the algorithm to compute a mixed-strategy Nash equilibrium in subnetwork zero-sum finite-strategy games, which have merely convex-concave (specifically, multilinear) cost functions, and obtain a final-iteration convergence result and an ergodic convergence result under different sets of assumptions, respectively.
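To make the setting concrete, below is a minimal Python sketch of one round of distributed mirror descent on the probability simplex, in the spirit of the algorithm the abstract describes. It is not the authors' exact scheme: the entropic mirror map (exponentiated gradient), the doubly stochastic mixing matrix W, and the function names entropic_mirror_step and distributed_md_round are illustrative assumptions, and the gradients are random stand-ins rather than true game gradients.

    import numpy as np

    def entropic_mirror_step(x, grad, step):
        # Exponentiated-gradient (entropic mirror descent) update on the simplex.
        y = x * np.exp(-step * grad)
        return y / y.sum()

    def distributed_md_round(strategies, grads, weights, step):
        # One synchronous round: each agent forms a convex combination of its
        # in-neighbors' strategies (consensus over the current graph), then
        # takes a mirror step along its own cost gradient.
        n = len(strategies)
        new = []
        for i in range(n):
            avg = sum(weights[i][j] * strategies[j] for j in range(n))
            new.append(entropic_mirror_step(avg, grads[i], step))
        return new

    # Toy usage: 3 agents on a 4-action simplex, diminishing steps 1/sqrt(k),
    # matching the diminishing step-size regime discussed in the abstract.
    rng = np.random.default_rng(0)
    x = [np.full(4, 0.25) for _ in range(3)]
    W = np.full((3, 3), 1 / 3)  # doubly stochastic mixing matrix (illustrative)
    for k in range(1, 51):
        g = [rng.standard_normal(4) for _ in range(3)]  # stand-in gradients
        x = distributed_md_round(x, g, W, 1 / np.sqrt(k))

Because each mixing row sums to one and the mirror step renormalizes, every iterate stays on the simplex, which is why this mirror map is a natural fit for the mixed-strategy finite-game setting mentioned at the end of the abstract.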

Original language: English (US)
Pages (from-to): 6620-6635
Number of pages: 16
Journal: IEEE Transactions on Automatic Control
Volume: 69
Issue number: 10
DOIs
State: Published - 2024

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering
