Continuous game

A continuous game is a mathematical concept, used in game theory, that generalizes the idea of an ordinary game like tic-tac-toe (noughts and crosses) or checkers (draughts). In other words, it extends the notion of a discrete game, where the players choose from a finite set of pure strategies. The continuous game concept allows games to include more general sets of pure strategies, which may be uncountably infinite.

In general, a game with uncountably infinite strategy sets will not necessarily have a Nash equilibrium solution. If, however, the strategy sets are required to be compact and the utility functions continuous, then the existence of a Nash equilibrium is guaranteed, by Glicksberg's generalization of the Kakutani fixed point theorem. The class of continuous games is for this reason usually defined and studied as a subset of the larger class of infinite games (i.e. games with infinite strategy sets) in which the strategy sets are compact and the utility functions continuous.

Formal definition

Define the $n$-player continuous game $G = (P, C, U)$ where

- $P = \{1, 2, 3, \ldots, n\}$ is the set of $n$ players,
- $C = (C_1, C_2, \ldots, C_n)$, where each $C_i$ is a compact set, in a metric space, corresponding to the $i$th player's set of pure strategies,
- $U = (u_1, u_2, \ldots, u_n)$, where $u_i : C \to \mathbb{R}$ is the utility function of player $i$.

We define $\Delta_i$ to be the set of Borel probability measures on $C_i$, giving us the mixed strategy space of player $i$. Define the strategy profile $\boldsymbol{\sigma} = (\sigma_1, \sigma_2, \ldots, \sigma_n)$ where $\sigma_i \in \Delta_i$.

Let $\boldsymbol{\sigma}_{-i}$ be a strategy profile of all players except for player $i$. As with discrete games, we can define a best response correspondence $b_i$ for player $i$: $b_i$ is a relation from the set of all probability distributions over opponent player profiles to a set of player $i$'s strategies, such that each element of $b_i(\sigma_{-i})$ is a best response to $\sigma_{-i}$. Define

$$b(\boldsymbol{\sigma}) = b_1(\sigma_{-1}) \times b_2(\sigma_{-2}) \times \cdots \times b_n(\sigma_{-n}).$$

A strategy profile $\boldsymbol{\sigma}^*$ is a Nash equilibrium if and only if $\boldsymbol{\sigma}^* \in b(\boldsymbol{\sigma}^*)$.

The existence of a Nash equilibrium for any continuous game with continuous utility functions can be proven using Irving Glicksberg's generalization of the Kakutani fixed point theorem.[1] In general, there may not be a solution if we allow strategy spaces $C_i$ which are not compact, or if we allow non-continuous utility functions.
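The existence argument is non-constructive, but equilibria of well-behaved continuous games can be approximated in practice. As an illustrative sketch (not drawn from the article's sources; the grid, round count, and tie-breaking are arbitrary choices), one can discretize the strategy sets of a continuous zero-sum game and run fictitious play on the resulting matrix game, whose value bounds bracket the true value:

```python
def fictitious_play(payoff, grid, rounds=20000):
    """X maximizes payoff(x, y), Y minimizes; returns bounds on the game value."""
    n = len(grid)
    A = [[payoff(grid[i], grid[j]) for j in range(n)] for i in range(n)]
    x_counts = [0] * n    # empirical play counts for X
    y_counts = [0] * n    # empirical play counts for Y
    x_counts[0] = y_counts[0] = 1            # arbitrary opening moves
    for _ in range(rounds):
        # Each player best-responds to the opponent's empirical mixture.
        x_br = max(range(n), key=lambda i: sum(A[i][j] * y_counts[j] for j in range(n)))
        y_br = min(range(n), key=lambda j: sum(A[i][j] * x_counts[i] for i in range(n)))
        x_counts[x_br] += 1
        y_counts[y_br] += 1
    tx, ty = sum(x_counts), sum(y_counts)
    # X's empirical mixture guarantees at least `lower`; Y's caps X at `upper`.
    lower = min(sum(A[i][j] * x_counts[i] for i in range(n)) / tx for j in range(n))
    upper = max(sum(A[i][j] * y_counts[j] for j in range(n)) / ty for i in range(n))
    return lower, upper

# H(x, y) = (x - y)**2 on a 5-point grid; the discretized game's value is 1/4.
lo, hi = fictitious_play(lambda x, y: (x - y) ** 2, [k / 4 for k in range(5)])
print(lo, hi)   # the bracket tightens around the value 0.25
```

For zero-sum games, Robinson's theorem guarantees that the two bounds converge to the value of the discretized game as the number of rounds grows.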

Separable games

A separable game is a continuous game where, for any $i$, the utility function $u_i : C \to \mathbb{R}$ can be expressed in the sum-of-products form:

$$u_i(s) = \sum_{k_1=1}^{m_1} \cdots \sum_{k_n=1}^{m_n} a_{i,k_1 \cdots k_n}\, f_1^{k_1}(s_1) \cdots f_n^{k_n}(s_n),$$

where $s \in C$, $s_i \in C_i$, $a_{i,k_1 \cdots k_n} \in \mathbb{R}$, and the functions $f_i^{k_i} : C_i \to \mathbb{R}$ are continuous.

A polynomial game is a separable game where each $C_i$ is a compact interval on $\mathbb{R}$ and each utility function can be written as a multivariate polynomial.

In general, mixed Nash equilibria of separable games are easier to compute than those of non-separable games, as implied by the following theorem:

For any separable game there exists at least one Nash equilibrium where player $i$ mixes at most $m_i + 1$ pure strategies.[2]

Whereas an equilibrium strategy for a non-separable game may require an uncountably infinite support, a separable game is guaranteed to have at least one Nash equilibrium with finitely supported mixed strategies.
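The reason finitely supported equilibria suffice is that, in a separable game, a mixed strategy enters the expected payoff only through finitely many "generalized moments" $\mathbb{E}[f_i^{k_i}(s_i)]$. A small sketch (an assumed example, not from the cited sources) with the polynomial payoff $u_X(x, y) = (x - y)^2 = x^2 - 2xy + y^2$, whose expectation is $\mu_{X2} - 2\mu_{X1}\mu_{Y1} + \mu_{Y2}$:

```python
# Sketch (assumed example): in a polynomial game the expected payoff of mixed
# strategies depends only on a few moments of each player's distribution.
# Here u_X(x, y) = (x - y)**2, so E[u_X] = mu_X2 - 2*mu_X1*mu_Y1 + mu_Y2.

def moment(support, probs, n):
    """n-th moment of a finitely supported distribution."""
    return sum(p * s ** n for s, p in zip(support, probs))

def expected_payoff_direct(xs, px, ys, py):
    """Brute-force expectation of (x - y)**2 over the product distribution."""
    return sum(p * q * (x - y) ** 2 for x, p in zip(xs, px) for y, q in zip(ys, py))

def expected_payoff_moments(xs, px, ys, py):
    """The same expectation computed from first and second moments only."""
    mu_x1, mu_x2 = moment(xs, px, 1), moment(xs, px, 2)
    mu_y1, mu_y2 = moment(ys, py, 1), moment(ys, py, 2)
    return mu_x2 - 2 * mu_x1 * mu_y1 + mu_y2

xs, px = [0.0, 1.0], [0.5, 0.5]      # X mixes 0 and 1 equally
ys, py = [0.5], [1.0]                # Y plays the pure strategy 1/2
direct = expected_payoff_direct(xs, px, ys, py)
via_moments = expected_payoff_moments(xs, px, ys, py)
print(direct, via_moments)  # both equal 0.25
```

Any two mixed strategies with the same first and second moments are therefore interchangeable in this game, which is why a mixture over at most $m_i + 1$ points can replicate any distribution's effect.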

Examples

Separable games

A polynomial game

Consider a zero-sum 2-player game between players X and Y, with $C_X = C_Y = [0, 1]$. Denote elements of $C_X$ and $C_Y$ as $x$ and $y$ respectively. Define the utility functions $H(x, y) = u_x(x, y) = -u_y(x, y)$ where $H(x, y) = (x - y)^2$.

The pure strategy best response relations are:

$$b_X(y) = \begin{cases} 1, & \text{if } y \in [0, 1/2) \\ 0 \text{ or } 1, & \text{if } y = 1/2 \\ 0, & \text{if } y \in (1/2, 1] \end{cases}$$

$$b_Y(x) = x$$

$b_X(y)$ and $b_Y(x)$ do not intersect, so there is no pure strategy Nash equilibrium. However, there should be a mixed strategy equilibrium. To find it, express the expected value $v = \mathbb{E}[H(x, y)]$ as a linear combination of the first and second moments of the probability distributions of X and Y:

$$v = \mu_{X2} - 2\mu_{X1}\mu_{Y1} + \mu_{Y2}$$

(where $\mu_{XN} = \mathbb{E}[x^N]$ and similarly for Y).

The constraints on $\mu_{X1}$ and $\mu_{X2}$ (with similar constraints for Y) are given by Hausdorff as:

$$\mu_{X1} \ge \mu_{X2}, \quad \mu_{X1}^2 \le \mu_{X2}; \qquad \mu_{Y1} \ge \mu_{Y2}, \quad \mu_{Y1}^2 \le \mu_{Y2}$$

Each pair of constraints defines a compact convex subset in the plane. Since $v$ is linear, any extrema with respect to a player's first two moments will lie on the boundary of this subset. Player $i$'s equilibrium strategy will lie on

$$\mu_{i1} = \mu_{i2} \quad \text{or} \quad \mu_{i1}^2 = \mu_{i2}.$$

Note that the first equation only permits mixtures of 0 and 1 whereas the second equation only permits pure strategies. Moreover, if the best response at a certain point to player $i$ lies on $\mu_{i1} = \mu_{i2}$, it will lie on the whole line, so that both 0 and 1 are a best response. $b_Y(\mu_{X1}, \mu_{X2})$ simply gives the pure strategy $y = \mu_{X1}$, so $b_Y$ will never give both 0 and 1. However, $b_X$ gives both 0 and 1 when $y = 1/2$. A Nash equilibrium exists when:

$$(\mu_{X1}^*, \mu_{X2}^*, \mu_{Y1}^*, \mu_{Y2}^*) = (1/2, 1/2, 1/2, 1/4)$$

This determines one unique equilibrium where Player X plays a random mixture of 0 for 1/2 of the time and 1 the other 1/2 of the time. Player Y plays the pure strategy of 1/2. The value of the game is 1/4.
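The equilibrium above can be verified numerically (an illustrative sketch, not from the article's sources): against Y's pure strategy $1/2$, no pure $x$ gives X (the maximizer) more than the mixture's payoff, and against X's mixture, no pure $y$ gives Y (the minimizer) less.

```python
# Sketch: check that X mixing 0 and 1 equally vs. Y playing 1/2 is a Nash
# equilibrium of H(x, y) = (x - y)**2 by scanning pure deviations on a grid.

def H(x, y):
    return (x - y) ** 2

grid = [k / 1000 for k in range(1001)]             # fine grid over [0, 1]
value = 0.5 * H(0.0, 0.5) + 0.5 * H(1.0, 0.5)      # equilibrium payoff = 1/4

# X maximizes H: no pure x beats the mixture against y = 1/2 ...
best_x_deviation = max(H(x, 0.5) for x in grid)
# ... and Y minimizes H: no pure y beats y = 1/2 against X's mixture.
best_y_deviation = min(0.5 * H(0.0, y) + 0.5 * H(1.0, y) for y in grid)

print(value, best_x_deviation, best_y_deviation)   # 0.25 0.25 0.25
```

Both deviation scans return exactly the equilibrium payoff of $1/4$: X's best pure replies are $0$ and $1$ (both already in the support of the mixture) and Y's best pure reply is $1/2$, confirming the fixed-point condition.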

Non-separable games

A rational payoff function

Consider a zero-sum 2-player game between players X and Y, with $C_X = C_Y = [0, 1]$. Denote elements of $C_X$ and $C_Y$ as $x$ and $y$ respectively. Define the utility functions $H(x, y) = u_x(x, y) = -u_y(x, y)$ where

$$H(x, y) = \frac{(1 + x)(1 + y)(1 - xy)}{(1 + xy)^2}.$$

This game has no pure strategy Nash equilibrium. It can be shown[3] that a unique mixed strategy Nash equilibrium exists with the following pair of cumulative distribution functions:

$$F^*(x) = \frac{4}{\pi} \arctan{\sqrt{x}}, \qquad G^*(y) = \frac{4}{\pi} \arctan{\sqrt{y}}.$$

Or, equivalently, the following pair of probability density functions:

$$f^*(x) = \frac{2}{\pi \sqrt{x}\,(1 + x)}, \qquad g^*(y) = \frac{2}{\pi \sqrt{y}\,(1 + y)}.$$

The value of the game is $4/\pi$.
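This value can be sanity-checked numerically (an illustrative sketch, not from the cited sources). Substituting $x = \tan^2(\pi u / 4)$, the inverse of $F^*$, turns the expected payoff under the equilibrium densities into a plain double integral of a smooth function over the unit square:

```python
import math

# Sketch: numerically check the game value 4/pi. The substitution
# x = tan(pi*u/4)**2 (inverse of F*(x) = (4/pi)*arctan(sqrt(x))) absorbs the
# 1/sqrt(x) singularity of the density, leaving a smooth integrand on [0,1]^2
# that we integrate with the midpoint rule.

def H(x, y):
    return (1 + x) * (1 + y) * (1 - x * y) / (1 + x * y) ** 2

def inverse_cdf(u):
    """Inverse of the equilibrium CDF F*(x) = (4/pi) * arctan(sqrt(x))."""
    return math.tan(math.pi * u / 4) ** 2

n = 400                                            # midpoint nodes per axis
nodes = [inverse_cdf((k + 0.5) / n) for k in range(n)]
estimate = sum(H(x, y) for x in nodes for y in nodes) / (n * n)

print(estimate, 4 / math.pi)   # the estimate is close to 4/pi ≈ 1.2732
```

The midpoint rule converges quickly here because the transformed integrand is smooth and bounded on the closed unit square.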

Requiring a Cantor distribution

Consider a zero-sum 2-player game between players X and Y, with $C_X = C_Y = [0, 1]$. Denote elements of $C_X$ and $C_Y$ as $x$ and $y$ respectively. Define the utility functions $H(x, y) = u_x(x, y) = -u_y(x, y)$ where

$$H(x, y) = \sum_{n=0}^{\infty} \frac{1}{2^n} \left( 2x^n - \left( \left( 1 - \frac{x}{3} \right)^n - \left( \frac{x}{3} \right)^n \right) \right) \left( 2y^n - \left( \left( 1 - \frac{y}{3} \right)^n - \left( \frac{y}{3} \right)^n \right) \right).$$

This game has a unique mixed strategy equilibrium where each player plays a mixed strategy with the Cantor singular function as the cumulative distribution function.[4]
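The equilibrium distribution is easy to sample despite having no density (an illustrative sketch, not from the cited sources): a draw from the Cantor distribution chooses each ternary digit to be 0 or 2 with probability $1/2$, so every sample lies in the Cantor set, and the resulting CDF is the Cantor singular function. The distribution's mean is $1/2$, which a sample mean reproduces:

```python
import random

# Sketch: sample the Cantor distribution by picking each ternary digit as
# 0 or 2 with probability 1/2. The CDF of this distribution is the Cantor
# singular function; its mean is 1/2.

def cantor_sample(rng, depth=40):
    """One draw from the Cantor distribution, truncated at `depth` ternary digits."""
    x, scale = 0.0, 1.0
    for _ in range(depth):
        scale /= 3.0
        if rng.random() < 0.5:
            x += 2.0 * scale      # ternary digit 2 (digit 0 contributes nothing)
    return x

rng = random.Random(0)            # fixed seed for reproducibility
samples = [cantor_sample(rng) for _ in range(50000)]
mean = sum(samples) / len(samples)
print(mean)   # close to the Cantor distribution's mean of 1/2
```

Truncating at 40 digits introduces an error below $3^{-40}$ per sample, far smaller than the Monte Carlo noise.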


References

  1. I.L. Glicksberg. A further generalization of the Kakutani fixed point theorem, with application to Nash equilibrium points. Proceedings of the American Mathematical Society, 3(1):170–174, February 1952.
  2. N. Stein, A. Ozdaglar and P.A. Parrilo. "Separable and Low-Rank Continuous Games". International Journal of Game Theory, 37(4):475–504, December 2008. https://arxiv.org/abs/0707.3462
  3. Irving Leonard Glicksberg & Oliver Alfred Gross (1950). "Notes on Games over the Square." Kuhn, H.W. & Tucker, A.W. eds. Contributions to the Theory of Games: Volume II. Annals of Mathematics Studies 28, p.173–183. Princeton University Press.
  4. Gross, O. (1952). "A rational payoff characterization of the Cantor distribution." Technical Report D-1349, The RAND Corporation.