Strong convergence to mixed equilibria in fictitious play

Brian Swenson, Soummya Kar, João Xavier

Research output: Contribution to conference › Paper › peer-review


Abstract

Learning processes that converge to mixed-strategy equilibria often exhibit learning only in a weak sense: the time-averaged empirical distribution of players' actions converges to a set of equilibria. A stronger notion of learning mixed equilibria requires that players' period-by-period strategies themselves converge to a set of equilibria. A simple and intuitive method is considered for adapting algorithms that converge in the weaker sense so that they converge in the stronger sense. The adaptation is applied to the well-known fictitious play (FP) algorithm, and the adapted version of FP is shown to converge to the set of Nash equilibria in the stronger sense for games known to have the FP property.
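
For context, the sketch below implements classical two-player fictitious play, not the adapted algorithm from the paper: each player best-responds to the opponent's empirical action frequencies, and only those time-averaged frequencies (not the period-by-period pure actions) converge to equilibrium in games with the FP property. The matching-pennies payoff matrices and parameter choices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fictitious_play(A, B, T=5000, rng=None):
    """Classical two-player fictitious play.

    A, B: payoff matrices (player 1 maximizes A, player 2 maximizes B).
    Returns the empirical action frequencies, which converge to the set of
    equilibria only in the time-averaged (weak) sense.
    """
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    counts1 = np.zeros(m)  # how often player 1 has played each action
    counts2 = np.zeros(n)  # how often player 2 has played each action
    a1, a2 = rng.integers(m), rng.integers(n)  # arbitrary initial actions
    counts1[a1] += 1
    counts2[a2] += 1
    for _ in range(T - 1):
        emp1 = counts1 / counts1.sum()  # player 1's empirical distribution
        emp2 = counts2 / counts2.sum()  # player 2's empirical distribution
        # each player best-responds to the opponent's empirical distribution
        a1 = int(np.argmax(A @ emp2))
        a2 = int(np.argmax(emp1 @ B))
        counts1[a1] += 1
        counts2[a2] += 1
    return counts1 / counts1.sum(), counts2 / counts2.sum()

# Matching pennies: the unique Nash equilibrium is (1/2, 1/2) for both players.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A  # zero-sum
f1, f2 = fictitious_play(A, B)
print(f1, f2)
```

On this example the empirical frequencies approach (0.5, 0.5) even though each period's action is a pure strategy that never settles; obtaining period-by-period convergence to the mixed equilibrium is the stronger notion addressed by the paper's adaptation.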

Original language: English (US)
DOIs
State: Published - 2014
Event: 2014 48th Annual Conference on Information Sciences and Systems, CISS 2014 - Princeton, NJ, United States
Duration: Mar 19 2014 – Mar 21 2014

All Science Journal Classification (ASJC) codes

  • Information Systems
