A continuous game is a mathematical concept, used in game theory, that generalizes the idea of an ordinary game like tic-tac-toe (noughts and crosses) or checkers (draughts). In other words, it extends the notion of a discrete game, where the players choose from a finite set of pure strategies. The concept of a continuous game allows games to include more general sets of pure strategies, which may be uncountably infinite.
In general, a game with uncountably infinite strategy sets will not necessarily have a Nash equilibrium solution. If, however, the strategy sets are required to be compact and the utility functions continuous, then a Nash equilibrium is guaranteed; this follows from Glicksberg's generalization of the Kakutani fixed point theorem. The class of continuous games is for this reason usually defined and studied as a subset of the larger class of infinite games (i.e. games with infinite strategy sets) in which the strategy sets are compact and the utility functions continuous.
Define the n-player continuous game

G=(P,C,U)

where

P=\{1,2,3,\ldots,n\} is the set of n players,

C=(C_1,C_2,\ldots,C_n), where each C_i is the set of pure strategies of player i,

U=(u_1,u_2,\ldots,u_n), where u_i:C\to\R is the utility function of player i.

We define \Delta_i to be the set of Borel probability measures on C_i, giving us the mixed strategy space of player i. Define the strategy profile

\boldsymbol{\sigma}=(\sigma_1,\sigma_2,\ldots,\sigma_n)

where \sigma_i\in\Delta_i.

Let \boldsymbol{\sigma}_{-i} be a strategy profile of all players except for player i. As with discrete games, we can define a best response correspondence for player i, denoted b_i. This is a relation from the set of all probability distributions over opponent player profiles to a set of player i's strategies, such that each element of b_i(\sigma_{-i}) is a best response to \sigma_{-i}. Define

b(\boldsymbol{\sigma})=b_1(\sigma_{-1})\times b_2(\sigma_{-2})\times\cdots\times b_n(\sigma_{-n}).

A mixed strategy profile \boldsymbol{\sigma}^* is a Nash equilibrium if and only if

\boldsymbol{\sigma}^*\in b(\boldsymbol{\sigma}^*).

In general, a solution may fail to exist if the strategy spaces C_i are not compact.
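The fixed-point condition above can be made concrete on a discretized game. The sketch below is an illustration, not part of the article: it assumes a simple coordination game with u_1 = u_2 = -(x-y)^2 on a finite grid over [0,1], computes each player's pure best-response set, and checks which pure profiles satisfy the Nash condition of being a best response to each other.

```python
# Illustrative sketch (assumed example): pure best responses and the
# Nash fixed-point condition on a discretized continuous game.
# The coordination payoff u1 = u2 = -(x - y)^2 is chosen for demonstration.

GRID = [i / 10 for i in range(11)]  # discretization of C_i = [0, 1]

def u(x, y):
    return -(x - y) ** 2  # both players share this utility

def best_responses(opponent_choice):
    """Grid points maximizing utility against a fixed opponent choice."""
    best = max(u(x, opponent_choice) for x in GRID)
    return {x for x in GRID if u(x, opponent_choice) == best}

# A pure profile (x, y) is a Nash equilibrium iff x is a best response
# to y and y is a best response to x.
equilibria = [(x, y) for x in GRID for y in GRID
              if x in best_responses(y) and y in best_responses(x)]

print(equilibria)  # exactly the diagonal profiles with x == y
```

Here the best response to any opponent choice is to match it, so the equilibria are precisely the eleven diagonal profiles.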
A separable game is a continuous game where, for any i, the utility function u_i:C\to\R can be expressed in the sum-of-products form:

u_i(s)=\sum_{k_1=1}^{m_1}\ldots\sum_{k_n=1}^{m_n} a_{i,k_1\ldots k_n}\, f_1^{k_1}(s_1)\ldots f_n^{k_n}(s_n),

where s\in C, s_i\in C_i, a_{i,k_1\ldots k_n}\in\R, and the functions f_i^{k_i}:C_i\to\R are continuous.

A polynomial game is a separable game where each C_i is a compact interval on \R and each utility function is a polynomial in the strategies.
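As a concrete check of the sum-of-products form, the sketch below (an illustration; the payoff (x-y)^2 is the one used in the first example later in the article) writes a two-player polynomial payoff with monomial basis functions f^k(s) = s^k and verifies numerically that the separable expansion matches the direct formula.

```python
# Sketch: expressing H(x, y) = (x - y)^2 in the separable
# sum-of-products form  sum_{k1} sum_{k2} a_{k1 k2} x^{k1} y^{k2},
# using monomial basis functions f^k(s) = s^k.

# Coefficient tensor for (x - y)^2 = x^2 - 2xy + y^2
a = {(2, 0): 1.0, (1, 1): -2.0, (0, 2): 1.0}

def H_separable(x, y):
    return sum(coef * x**k1 * y**k2 for (k1, k2), coef in a.items())

def H_direct(x, y):
    return (x - y) ** 2

# The two forms agree on a grid of sample points.
samples = [(i / 7, j / 7) for i in range(8) for j in range(8)]
assert all(abs(H_separable(x, y) - H_direct(x, y)) < 1e-12
           for x, y in samples)
```

Because each pure-strategy set here is a compact interval and the payoff is a polynomial, this is also a polynomial game in the sense just defined.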
In general, mixed Nash equilibria of separable games are easier to compute than those of non-separable games, as implied by the following theorem:

For any separable game there exists at least one Nash equilibrium where player i mixes at most m_i+1 pure strategies.
Consider a zero-sum 2-player game between players X and Y, with C_X=C_Y=\left[0,1\right]. Denote elements of C_X and C_Y as x and y respectively. Define the utility functions H(x,y)=u_x(x,y)=-u_y(x,y) where

H(x,y)=(x-y)^2.
The pure strategy best response relations are:
b_X(y)=\begin{cases} 1, & \text{if } y\in\left[0,1/2\right)\\ 0 \text{ or } 1, & \text{if } y=1/2\\ 0, & \text{if } y\in\left(1/2,1\right] \end{cases}

b_Y(x)=x

b_X(y) and b_Y(x) do not intersect, so there is no pure strategy Nash equilibrium. However, there should be a mixed strategy equilibrium. To find it, express the expected value v=E[H(x,y)] as a linear combination of the first and second moments of the players' probability distributions:

v=\mu_{X2}-2\mu_{X1}\mu_{Y1}+\mu_{Y2}

(where \mu_{XN}=E[x^N], and similarly for player Y).
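The moment identity follows from independence of the players' mixed strategies: E[(x-y)^2] = E[x^2] - 2E[x]E[y] + E[y^2]. A quick numerical sanity check (an illustration; the two sampled distributions are arbitrary assumptions) confirms both the identity and the moment constraints stated next.

```python
import random

# Numerical check: for independent x ~ sigma_X, y ~ sigma_Y on [0, 1],
# E[(x - y)^2] = mu_X2 - 2 mu_X1 mu_Y1 + mu_Y2, and each player's
# moments satisfy mu_1 >= mu_2 and mu_1^2 <= mu_2.

random.seed(0)
N = 200_000
xs = [random.random() ** 2 for _ in range(N)]      # arbitrary dist. on [0,1]
ys = [random.betavariate(2, 5) for _ in range(N)]  # another arbitrary choice

mX1 = sum(xs) / N
mX2 = sum(v * v for v in xs) / N
mY1 = sum(ys) / N
mY2 = sum(v * v for v in ys) / N

v_direct = sum((x - y) ** 2 for x, y in zip(xs, ys)) / N
v_moments = mX2 - 2 * mX1 * mY1 + mY2

assert abs(v_direct - v_moments) < 1e-2   # equal up to sampling noise
assert mX1 >= mX2 and mX1 ** 2 <= mX2     # moment constraints on [0,1]
assert mY1 >= mY2 and mY1 ** 2 <= mY2
```

The first constraint holds because x >= x^2 pointwise on [0,1]; the second is Jensen's inequality.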
The constraints on \mu_{X1} and \mu_{X2} (with analogous constraints for player Y) are:

\mu_{X1}\ge\mu_{X2}, \quad \mu_{X1}^2\le\mu_{X2}

\mu_{Y1}\ge\mu_{Y2}, \quad \mu_{Y1}^2\le\mu_{Y2}
Each pair of constraints defines a compact convex subset in the plane. Since v is linear in each player's moments, any extremum with respect to a player's first two moments will lie on the boundary of this subset; player i's equilibrium strategy will therefore satisfy

\mu_{i1}=\mu_{i2} \quad \text{or} \quad \mu_{i1}^2=\mu_{i2}.
Note that the first equation only permits mixtures of 0 and 1, whereas the second equation only permits pure strategies. Moreover, since v depends on each player's strategy only through its first two moments, any best response whose moments lie on \mu_{i1}=\mu_{i2} can be realized as a mixture of the pure strategies 0 and 1.

b_Y(\mu_{X1},\mu_{X2}) is the pure strategy y=\mu_{X1}, which lies on \mu_{Y1}^2=\mu_{Y2}; against a pure strategy of Y, b_X consists of mixtures of 0 and 1, which lie on \mu_{X1}=\mu_{X2}. Solving b_X and b_Y simultaneously for a fixed point gives

(\mu_{X1}^*,\mu_{X2}^*,\mu_{Y1}^*,\mu_{Y2}^*)=(1/2,1/2,1/2,1/4).

This determines one unique equilibrium, in which Player X plays a random mixture of 0 for 1/2 of the time and 1 the other 1/2 of the time, and Player Y plays the pure strategy 1/2. The value of the game is 1/4.
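This equilibrium can be verified directly: against y = 1/2, every pure strategy of X yields at most (x - 1/2)^2 <= 1/4, with equality at 0 and 1; against X's equal mixture of 0 and 1, Y's expected loss 0.5 y^2 + 0.5 (1-y)^2 is minimized at y = 1/2. A short numerical check (a sketch; the grid resolution is an arbitrary choice):

```python
# Verify the claimed equilibrium of H(x, y) = (x - y)^2 on [0, 1]:
# X mixes 0 and 1 equally, Y plays 1/2, and the value is 1/4.

def H(x, y):
    return (x - y) ** 2

GRID = [i / 1000 for i in range(1001)]  # fine grid over [0, 1]

value = 0.5 * H(0.0, 0.5) + 0.5 * H(1.0, 0.5)
assert abs(value - 0.25) < 1e-12

# X (the maximizer) cannot exceed 1/4 against y = 1/2 ...
assert all(H(x, 0.5) <= value + 1e-12 for x in GRID)
# ... and Y (the minimizer) cannot pay less than 1/4 against X's mixture.
assert all(0.5 * H(0.0, y) + 0.5 * H(1.0, y) >= value - 1e-12 for y in GRID)
```

Note also that X's equilibrium mixes 2 pure strategies and Y mixes 1, consistent with the m_i + 1 support bound for separable games.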
Consider a zero-sum 2-player game between players X and Y, with C_X=C_Y=\left[0,1\right]. Denote elements of C_X and C_Y as x and y respectively. Define the utility functions H(x,y)=u_x(x,y)=-u_y(x,y) where

H(x,y)=\frac{(1+x)(1+y)(1-xy)}{(1+xy)^2}.
This game has no pure strategy Nash equilibrium. It can be shown[3] that a unique mixed strategy Nash equilibrium exists with the following pair of cumulative distribution functions:
F^*(x)=\frac{4}{\pi}\arctan{\sqrt{x}}, \qquad G^*(y)=\frac{4}{\pi}\arctan{\sqrt{y}}.
Or, equivalently, the following pair of probability density functions:
f^*(x)=\frac{2}{\pi\sqrt{x}\,(1+x)}, \qquad g^*(y)=\frac{2}{\pi\sqrt{y}\,(1+y)}.
The value of the game is 4/\pi.
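The stated value can be checked numerically. The sketch below (an illustration; the substitution and grid size are choices of this check, not from the article) uses the inverse of the CDF, x = tan^2(pi u / 4), which maps the uniform variable u on [0,1] to a sample from F* and removes the 1/sqrt(x) singularity of the density at 0, then integrates H over the unit square in the transformed coordinates.

```python
import math

# Numerical check: under the equilibrium pair (F*, G*), the expected
# payoff of H(x, y) = (1+x)(1+y)(1-xy)/(1+xy)^2 equals 4/pi.

def H(x, y):
    return (1 + x) * (1 + y) * (1 - x * y) / (1 + x * y) ** 2

def x_of(u):
    # Inverse of F*(x) = (4/pi) arctan(sqrt(x)); maps [0,1] onto [0,1].
    return math.tan(math.pi * u / 4) ** 2

assert abs(x_of(1.0) - 1.0) < 1e-12  # the CDF reaches 1 at x = 1

# Midpoint rule for E[H] = integral of H(x(u), y(v)) du dv over [0,1]^2.
N = 400
h = 1.0 / N
value = sum(H(x_of((i + 0.5) * h), x_of((j + 0.5) * h))
            for i in range(N) for j in range(N)) * h * h

assert abs(value - 4 / math.pi) < 1e-3
```

In fact F* is an equalizer strategy: the inner integral of H against f* equals 4/pi for every fixed y, which is why the quadrature converges so cleanly.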
Consider a zero-sum 2-player game between players X and Y, with C_X=C_Y=\left[0,1\right]. Denote elements of C_X and C_Y as x and y respectively. Define the utility functions H(x,y)=u_x(x,y)=-u_y(x,y) where
H(x,y)=\sum_{n=0}^{\infty}\frac{1}{2^n}\left(2x^n-\left(\left(1-\frac{x}{3}\right)^n-\left(\frac{x}{3}\right)^n\right)\right)\left(2y^n-\left(\left(1-\frac{y}{3}\right)^n-\left(\frac{y}{3}\right)^n\right)\right)
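The series converges everywhere on the unit square: each bracketed factor is bounded by 3 in absolute value for x, y in [0,1], so the n-th term is at most 9/2^n. A truncated evaluator (a sketch added here for illustration; the truncation depth is an assumption) makes the payoff computable:

```python
# Sketch: evaluate the series payoff by truncation. Each factor is
# bounded by 3 on [0, 1], so the tail after N terms is below 9/2^(N-1);
# 60 terms are far more than enough for double precision.

def H(x, y, terms=60):
    total = 0.0
    for n in range(terms):
        ax = 2 * x**n - ((1 - x / 3) ** n - (x / 3) ** n)
        ay = 2 * y**n - ((1 - y / 3) ** n - (y / 3) ** n)
        total += ax * ay / 2**n
    return total

# The kernel is symmetric by construction ...
assert abs(H(0.3, 0.7) - H(0.7, 0.3)) < 1e-12
# ... and at the origin the series sums to 4 + sum_{n>=1} 2^(-n) = 5.
assert abs(H(0.0, 0.0) - 5.0) < 1e-9
```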