Quantal response equilibrium
Superset of: Nash equilibrium, Logit equilibrium
Discoverer: Richard McKelvey and Thomas Palfrey
Used for: Non-cooperative games
Example: Traveler's dilemma
Quantal response equilibrium (QRE) is a solution concept in game theory. First introduced by Richard McKelvey and Thomas Palfrey,[1][2] it provides an equilibrium notion with bounded rationality. QRE is not an equilibrium refinement, and it can give significantly different results from Nash equilibrium. QRE is only defined for games with discrete strategies, although there are continuous-strategy analogues.
In a quantal response equilibrium, players are assumed to make errors in choosing which pure strategy to play. The probability of any particular strategy being chosen is positively related to the payoff from that strategy. In other words, very costly errors are unlikely.
The equilibrium arises from the consistency of beliefs: a player's expected payoffs are computed from beliefs about the other players' probability distributions over strategies, and in equilibrium those beliefs are correct.
When analyzing data from the play of actual games, particularly from laboratory experiments such as those with the matching pennies game, Nash equilibrium can be unforgiving. Any non-equilibrium move can appear equally "wrong", but realistically should not be grounds for rejecting a theory. QRE allows every strategy to be played with non-zero probability, and so any data is possible (though not necessarily reasonable).
The most common specification for QRE is logit equilibrium (LQRE). In a logit equilibrium, players' strategies are chosen according to the probability distribution:[3]
P_{ij} = \frac{\exp(\lambda\, EU_{ij}(P_{-i}))}{\sum_k \exp(\lambda\, EU_{ik}(P_{-i}))}

where P_{ij} is the probability of player i choosing strategy j, and EU_{ij}(P_{-i}) is the expected utility to player i of choosing strategy j, given that the other players play according to the probability distribution P_{-i}.
Of particular interest in the logit model is the non-negative parameter λ (sometimes written as 1/μ). λ can be thought of as the rationality parameter. As λ→0, players become "completely non-rational", and play each strategy with equal probability. As λ→∞, players become "perfectly rational", and play approaches a Nash equilibrium.[4]
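As a concrete numerical illustration, the logit fixed point above can be approximated by iterating the logit response map. The following sketch (Python with NumPy; the function names, the damping factor, and the asymmetric matching pennies payoffs are illustrative assumptions, not taken from the literature) shows uniform play as λ→0 and near-Nash play for large λ.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax: probabilities proportional to exp(x)."""
    z = np.exp(x - x.max())
    return z / z.sum()

def logit_qre(A, B, lam, iters=10_000, tol=1e-12):
    """Approximate a logit QRE of a bimatrix game by damped fixed-point iteration.

    A[i, j]: payoff to the row player when row i meets column j.
    B[i, j]: payoff to the column player when row i meets column j.
    lam:     the rationality parameter lambda (0 = uniform play, large = near Nash).
    """
    m, n = A.shape
    p = np.full(m, 1.0 / m)  # row player's mixed strategy
    q = np.full(n, 1.0 / n)  # column player's mixed strategy
    for _ in range(iters):
        eu_row = A @ q   # expected utility of each row against the column mix
        eu_col = p @ B   # expected utility of each column against the row mix
        # Logit response: P_ij = exp(lam * EU_ij) / sum_k exp(lam * EU_ik),
        # averaged with the current profile (damping) to aid convergence.
        p_new = 0.5 * p + 0.5 * softmax(lam * eu_row)
        q_new = 0.5 * q + 0.5 * softmax(lam * eu_col)
        if max(np.abs(p_new - p).max(), np.abs(q_new - q).max()) < tol:
            return p_new, q_new
        p, q = p_new, q_new
    return p, q

# Asymmetric matching pennies with an illustrative payoff of 9: the row
# player's Nash mixture stays at 50/50, but the logit QRE moves with lambda.
A = np.array([[9.0, 0.0], [0.0, 1.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
for lam in [0.0, 0.5, 2.0, 10.0]:
    p, q = logit_qre(A, B, lam)
    print(f"lambda = {lam:4.1f}   row: {p.round(3)}   column: {q.round(3)}")
```

In this game the row player's Nash mixture is 50/50 regardless of the payoff of 9, while the logit QRE shifts the row player's play with λ, the kind of own-payoff effect that QRE is often used to capture.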
For dynamic (extensive form) games, McKelvey and Palfrey defined agent quantal response equilibrium (AQRE). AQRE is somewhat analogous to subgame perfection. In an AQRE, each player plays with some error as in QRE. At a given decision node, the player determines the expected payoff of each action by treating their future self as an independent player with a known probability distribution over actions. As in QRE, in an AQRE every strategy is used with nonzero probability.
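To make the node-by-node logit structure concrete, here is a minimal sketch of an AQRE computation for a two-node game of perfect information (Python; the miniature centipede-style payoffs and helper names are hypothetical choices for illustration). Later nodes are solved first, and each decision node logit-responds to the expected payoffs implied by behavior at all subsequent nodes; a player who moved at several nodes would likewise be split into one such "agent" per node.

```python
import numpy as np

def logit_choice(lam, utilities):
    """Logit response over the actions available at a single decision node."""
    u = np.asarray(utilities, dtype=float)
    z = np.exp(lam * (u - u.max()))
    return z / z.sum()

def aqre_mini_centipede(lam):
    """Agent QRE of a two-node perfect-information game (illustrative payoffs).

    Node 1: player 1 chooses Take -> payoffs (2, 0), or Pass -> go to node 2.
    Node 2: player 2 chooses Take -> payoffs (1, 4), or Pass -> payoffs (3, 3).
    """
    # Solve the last node first, as in backward induction: player 2's
    # logit response over her own payoffs from Take (4) and Pass (3).
    node2 = logit_choice(lam, [4.0, 3.0])            # (P[Take], P[Pass])
    # Player 1's expected payoff from reaching node 2, given node-2 behavior.
    continuation = node2[0] * 1.0 + node2[1] * 3.0
    # Node 1: player 1 logit-responds to Take (2) versus the continuation value.
    node1 = logit_choice(lam, [2.0, continuation])   # (P[Take], P[Pass])
    return node1, node2

for lam in [0.0, 1.0, 5.0]:
    n1, n2 = aqre_mini_centipede(lam)
    print(f"lambda = {lam}: node 1 {n1.round(3)}, node 2 {n2.round(3)}")
```

For λ = 0 both agents randomize uniformly; as λ grows, play converges to the subgame-perfect outcome (both take), while every action retains strictly positive probability, as described above.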
The quantal response equilibrium approach has been applied in various settings. For example, Goeree et al. (2002) study overbidding in private-value auctions,[5] Yi (2005) explores behavior in ultimatum games,[6] Hoppe and Schmitz (2013) study the role of social preferences in principal-agent problems,[7] and Kawagoe et al. (2018) investigate step-level public goods games with binary decisions.[8] QRE has also been used to model the interaction between attackers and defenders in cybersecurity problems.[9]
Most tests of quantal response equilibrium are based on laboratory experiments in which participants receive little or no incentive to perform the task well. However, quantal response equilibrium has also been found to explain behavior in high-stakes environments. A large-scale analysis of the American television game show The Price Is Right, for example, shows that contestants' behavior in the so-called Showcase Showdown, a sequential game of perfect information, is well explained by an agent quantal response equilibrium (AQRE) model.[10]
Work by Haile et al. has shown that QRE is not falsifiable in any normal form game, even with significant a priori restrictions on payoff perturbations.[11] The authors argue that the LQRE concept can sometimes restrict the set of possible outcomes from a game, but may be insufficient to provide a powerful test of behavior without a priori restrictions on payoff perturbations.
As with mean-field approaches in statistical mechanics, taking the expectation inside the exponent results in a loss of information.[12] More generally, differences in an agent's payoff with respect to their strategy variable result in a loss of information.