TY - GEN
T1 - Learning equilibria in constrained Nash-Cournot games with misspecified demand functions
AU - Jiang, Hao
AU - Shanbhag, Uday V.
AU - Meyn, Sean P.
PY - 2011
Y1 - 2011
AB - We consider a constrained Nash-Cournot oligopoly where the demand function is linear. While cost functions and capacities are public information, firms have only partial information regarding the demand function. Specifically, firms know either the intercept or the slope of the demand function and cannot observe aggregate output. We consider a learning process in which firms update their profit-maximizing quantities and their beliefs regarding the unknown demand function parameters based on disparities between observed and estimated prices. A characterization of the mappings corresponding to the fixed point of the learning process is provided. This result paves the way for developing a Tikhonov regularization scheme that is shown to learn the correct equilibrium, in spite of the multiplicity of equilibria. Despite the absence of monotonicity of the gradient maps, we prove the convergence of constant and diminishing steplength distributed gradient schemes under a suitable caveat on the starting points. Notably, precise rate-of-convergence estimates are provided for the constant steplength schemes.
UR - http://www.scopus.com/inward/record.url?scp=84860678839&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84860678839&partnerID=8YFLogxK
U2 - 10.1109/CDC.2011.6161248
DO - 10.1109/CDC.2011.6161248
M3 - Conference contribution
AN - SCOPUS:84860678839
SN - 9781612848006
T3 - Proceedings of the IEEE Conference on Decision and Control
SP - 1018
EP - 1023
BT - 2011 50th IEEE Conference on Decision and Control and European Control Conference, CDC-ECC 2011
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2011 50th IEEE Conference on Decision and Control and European Control Conference, CDC-ECC 2011
Y2 - 12 December 2011 through 15 December 2011
ER -