TY - GEN
T1 - Norm emergence under constrained interactions in diverse societies
AU - Mukherjee, Partha
AU - Sen, Sandip
AU - Airiau, Stéphane
PY - 2008
Y1 - 2008
N2 - Effective norms, emerging from sustained individual interactions over time, can complement societal rules and significantly enhance the performance of individual agents and agent societies. Researchers have used a model that supports the emergence of social norms via learning from interaction experiences, where each interaction is viewed as a stage game. In this social learning model, which is distinct from an agent learning from repeated interactions against the same player, an agent learns a policy to play the game from repeated interactions with multiple learning agents. The key research question is to characterize when and how the entire population of homogeneous learners converges to a consistent norm when multiple action combinations yield the same optimal payoff. In this paper we study two extensions to the social learning model that significantly enhance its applicability. We first explore the effects of heterogeneous populations, where different agents may be using different learning algorithms. We also investigate norm emergence when agent interactions are physically constrained. We consider agents located on a grid, where an agent is more likely to interact with other agents situated closer to it than with those situated farther away. The key new results include the surprising acceleration in learning with limited interaction ranges. We also study the effects of pure-strategy players, i.e., non-learners, in the environment.
UR - http://www.scopus.com/inward/record.url?scp=84899983206&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84899983206&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84899983206
SN - 9781605604701
T3 - Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
SP - 765
EP - 772
BT - 7th International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS 2008
PB - International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
T2 - 7th International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS 2008
Y2 - 12 May 2008 through 16 May 2008
ER -