
This paper focuses on learning efficient equilibria in repeated games. The author proposes a stochastic learning rule that selects a subgame-perfect equilibrium in which players receive efficient payoffs. The paper details how this rule works: players hold bounded-memory models of their opponents, play approximate best responses, and periodically test these models against observed actions. A key element is that the probability of experimenting with new models is inversely related to recent payoffs. The paper demonstrates that this rule selects efficient equilibria and discusses how different specifications of the experimentation probabilities can lead to the Kalai–Smorodinsky and maxmin bargaining solutions.
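To make the mechanics of such a rule concrete, here is a minimal, illustrative sketch in Python. The class names, parameters (memory length, testing frequency, tolerance) and the specific functional form of the experimentation probability are assumptions for illustration, not the paper's own specification; the sketch only mirrors the described structure of bounded-memory models, approximate best responses, periodic testing, and payoff-dependent experimentation.

```python
import random

class BoundedMemoryLearner:
    """Illustrative sketch of a stochastic learning rule of the kind described:
    a bounded-memory model of the opponent, approximate best responses,
    periodic model testing, and experimentation inversely related to payoffs.
    All parameter names and functional forms here are hypothetical."""

    def __init__(self, actions, memory=5, test_every=10, tolerance=0.2):
        self.actions = actions            # finite action set (assumed)
        self.memory = memory              # bounded memory length (assumed parameter)
        self.test_every = test_every      # how often the model is tested (assumed)
        self.tolerance = tolerance        # slack allowed before the model fails the test
        self.history = []                 # last `memory` observed opponent actions
        self.model = {a: 1 / len(actions) for a in actions}  # current opponent model
        self.recent_payoffs = []

    def choose_action(self, payoff_fn):
        """Approximate best response to the current bounded-memory model."""
        return max(self.actions,
                   key=lambda a: sum(p * payoff_fn(a, b)
                                     for b, p in self.model.items()))

    def observe(self, opponent_action, payoff, period):
        """Record the outcome; periodically test the model and maybe experiment."""
        self.history = (self.history + [opponent_action])[-self.memory:]
        self.recent_payoffs = (self.recent_payoffs + [payoff])[-self.memory:]

        if period % self.test_every == 0 and self.history:
            # Test: compare the model with empirical action frequencies.
            empirical = {a: self.history.count(a) / len(self.history)
                         for a in self.actions}
            fit_error = max(abs(empirical[a] - self.model[a]) for a in self.actions)
            avg_payoff = sum(self.recent_payoffs) / len(self.recent_payoffs)

            # Experimentation probability decreasing in recent payoffs
            # (illustrative functional form, not the paper's).
            experiment_prob = 1.0 / (1.0 + max(avg_payoff, 0.0))

            if fit_error > self.tolerance:
                if random.random() < experiment_prob:
                    # Experiment: adopt a fresh randomly drawn model.
                    weights = [random.random() for _ in self.actions]
                    total = sum(weights)
                    self.model = {a: w / total
                                  for a, w in zip(self.actions, weights)}
                else:
                    # Otherwise re-fit the model to the observed frequencies.
                    self.model = empirical
```

The key design point the sketch tries to convey is the interaction of the last two steps: when recent payoffs are high, experimentation is rare and players stick with models that support the current play, which is what stabilizes efficient outcomes under the rule described in the paper.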