Abstract
Learning processes that converge to mixed-strategy equilibria often exhibit learning only in the weak sense that the time-averaged empirical distribution of players' actions converges to a set of equilibria. A stronger notion of learning mixed equilibria is to require that players' period-by-period strategies converge to a set of equilibria. A simple and intuitive method is considered for adapting algorithms that converge in the weaker sense so as to obtain convergence in the stronger sense. The adaptation is applied to the well-known fictitious play (FP) algorithm, and the adapted version of FP is shown to converge to the set of Nash equilibria in the stronger sense for games known to have the FP property.
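To make the weak/strong distinction concrete, the sketch below runs classical fictitious play on matching pennies, a two-player zero-sum game known to have the FP property. The game, horizon, initialization, and tie-breaking are illustrative assumptions, and the paper's actual adaptation is not reproduced here; the sketch only shows the weak-sense behavior that the paper strengthens.

```python
import numpy as np

# Matching pennies: the row player wants to match, the column player wants to mismatch.
# The unique Nash equilibrium is fully mixed: both players randomize (0.5, 0.5).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])   # payoff matrix for player 1 (player 2 receives -A)

def best_response(payoff, opp_freq):
    """Pure best response to the opponent's empirical action frequency."""
    return int(np.argmax(payoff @ opp_freq))

T = 10_000
count1 = np.zeros(2)   # how often player 1 has played each action
count2 = np.zeros(2)   # how often player 2 has played each action
a1, a2 = 0, 0          # arbitrary initial actions
count1[a1] += 1
count2[a2] += 1

for t in range(2, T + 1):
    # Classical FP: each player best-responds to the opponent's empirical distribution.
    a1 = best_response(A, count2 / (t - 1))
    a2 = best_response(-A.T, count1 / (t - 1))
    count1[a1] += 1
    count2[a2] += 1

print("empirical frequencies:", count1 / T, count2 / T)  # approach (0.5, 0.5)
print("last-round pure actions:", a1, a2)                # pure actions keep cycling
```

In this run the time-averaged empirical frequencies approach the mixed equilibrium (learning in the weak sense), while the period-by-period play remains a cycling sequence of pure actions; closing that gap is precisely what the adaptation studied in the paper is designed to do.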
Original language | English (US)
---|---
State | Published - 2014
Event | 2014 48th Annual Conference on Information Sciences and Systems, CISS 2014
Country/Territory | United States
City | Princeton, NJ
Period | 3/19/14 → 3/21/14
All Science Journal Classification (ASJC) codes
- Information Systems