Quasi-stationary distribution explained

In probability theory, a quasi-stationary distribution is a probability distribution associated with a random process that admits one or several absorbing states which are reached almost surely, but from which the process, when started in this distribution, can evolve for a long time without being absorbed. The most common example is the evolution of a population: the only equilibrium is when no one is left, but if we model the number of individuals it is likely to remain stable for a long period of time before it eventually collapses.

Formal definition

We consider a Markov process $(Y_t)_{t \geq 0}$ taking values in a state space $\mathcal{X}$. There is a measurable set $\mathcal{X}^{\mathrm{tr}}$ of absorbing states, and we write $\mathcal{X}^a = \mathcal{X} \setminus \mathcal{X}^{\mathrm{tr}}$ for the set of non-absorbing states. We denote by $T$ the hitting time of $\mathcal{X}^{\mathrm{tr}}$, also called the killing time, and by $\{\mathrm{P}_x \mid x \in \mathcal{X}\}$ the family of distributions under which the process has initial condition $Y_0 = x \in \mathcal{X}$. We assume that $\mathcal{X}^{\mathrm{tr}}$ is almost surely reached, i.e. $\forall x \in \mathcal{X},\ \mathrm{P}_x(T < \infty) = 1$.

The general definition[1] is: a probability measure $\nu$ on $\mathcal{X}^a$ is said to be a quasi-stationary distribution (QSD) if, for every measurable set $B$ contained in $\mathcal{X}^a$,

$$\forall t \geq 0, \quad \mathrm{P}_\nu(Y_t \in B \mid T > t) = \nu(B),$$

where $\mathrm{P}_\nu = \int_{\mathcal{X}^a} \mathrm{P}_x \, \mathrm{d}\nu(x)$.

In particular,

$$\forall B \in \mathcal{B}(\mathcal{X}^a),\ \forall t \geq 0, \quad \mathrm{P}_\nu(Y_t \in B,\ T > t) = \nu(B)\, \mathrm{P}_\nu(T > t).$$
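For a finite-state chain this definition can be made concrete. As a minimal sketch (the chain below is a made-up example, not taken from the source), the QSD of a discrete-time absorbing chain is the normalized left Perron eigenvector of the transition matrix restricted to the non-absorbing states, a characterization going back to Darroch and Seneta:[8]

```python
import numpy as np

# Hypothetical discrete-time birth-death chain on {0, 1, 2, 3},
# where state 0 is absorbing.  The QSD on the transient states
# {1, 2, 3} is the normalized left eigenvector of the sub-stochastic
# matrix Q (the transition matrix restricted to the transient states)
# associated with its largest eigenvalue.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],   # state 0 is absorbing
    [0.3, 0.4, 0.3, 0.0],
    [0.0, 0.3, 0.4, 0.3],
    [0.0, 0.0, 0.3, 0.7],
])
Q = P[1:, 1:]                            # restriction to transient states

eigvals, eigvecs = np.linalg.eig(Q.T)    # left eigenvectors of Q
i = np.argmax(eigvals.real)              # Perron eigenvalue
nu = np.abs(eigvecs[:, i].real)
nu /= nu.sum()                           # quasi-stationary distribution

# Check the defining property: started from nu, the law of the chain
# conditioned on survival (T > 1) is again nu after one step.
survival = nu @ Q @ np.ones(3)           # P_nu(T > 1)
conditional = (nu @ Q) / survival
assert np.allclose(conditional, nu)
```

The one-step check suffices here because $\nu Q = \lambda \nu$ propagates to every $t$ by iteration.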

General results

Killing time

From the assumptions above we know that the killing time is finite with probability 1. A stronger result that we can derive is that the killing time is exponentially distributed:[2] if $\nu$ is a QSD then there exists $\theta(\nu) > 0$ such that

$$\forall t \geq 0, \quad \mathrm{P}_\nu(T > t) = \exp(-\theta(\nu)\, t).$$

Moreover, for any $\vartheta < \theta(\nu)$ we get $\mathrm{E}_\nu\left(e^{\vartheta T}\right) < \infty$.
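This exponential (in discrete time, geometric) decay can be seen in a concrete finite chain. In the hypothetical sub-stochastic block $Q$ below, starting from the QSD $\nu$ the survival probabilities satisfy $\mathrm{P}_\nu(T > t) = \lambda^t$, where $\lambda$ is the Perron eigenvalue of $Q$, so $\theta(\nu) = -\log \lambda$:

```python
import numpy as np

# Hypothetical transient block of an absorbing birth-death chain.
Q = np.array([[0.4, 0.3, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.3, 0.7]])

eigvals, eigvecs = np.linalg.eig(Q.T)    # left eigenvectors
i = np.argmax(eigvals.real)
lam = eigvals.real[i]                    # Perron eigenvalue
nu = np.abs(eigvecs[:, i].real)
nu /= nu.sum()                           # the QSD

theta = -np.log(lam)                     # killing rate theta(nu)
for t in range(1, 6):
    # P_nu(T > t) = nu Q^t 1, which equals exp(-theta * t) exactly,
    # since nu Q^t = lam^t nu.
    survival = nu @ np.linalg.matrix_power(Q, t) @ np.ones(3)
    assert np.isclose(survival, np.exp(-theta * t))
```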

Existence of a quasi-stationary distribution

Most of the time the question asked is whether a QSD exists in a given framework. From the previous results we can derive a necessary condition for existence.

Let $\theta_x^* := \sup\{\theta \mid \mathrm{E}_x(e^{\theta T}) < \infty\}$. A necessary condition for the existence of a QSD is that $\theta_x^* > 0$ for some $x \in \mathcal{X}^a$, and we have the equality

$$\theta_x^* = \liminf_{t \to \infty} -\frac{1}{t} \log\left(\mathrm{P}_x(T > t)\right).$$

Moreover, from the previous paragraph, if $\nu$ is a QSD then $\mathrm{E}_\nu\left(e^{\theta(\nu) T}\right) = \infty$. As a consequence, if $\vartheta > 0$ satisfies $\sup_{x \in \mathcal{X}^a} \mathrm{E}_x\left(e^{\vartheta T}\right) < \infty$, then there can be no QSD $\nu$ such that $\vartheta = \theta(\nu)$, because otherwise this would lead to the contradiction

$$\infty = \mathrm{E}_\nu\left(e^{\theta(\nu) T}\right) \leq \sup_{x \in \mathcal{X}^a} \mathrm{E}_x\left(e^{\theta(\nu) T}\right) < \infty.$$
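The exponential rate $\theta_x^*$ can be estimated numerically. As a sketch on a hypothetical finite chain (for a finite irreducible transient block, the liminf is a genuine limit and equals $-\log \lambda$ for every starting state $x$, with $\lambda$ the Perron eigenvalue of the block $Q$):

```python
import numpy as np

# Hypothetical transient block of an absorbing chain.
Q = np.array([[0.4, 0.3, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.3, 0.7]])
lam = max(np.linalg.eigvals(Q).real)     # Perron eigenvalue

t = 2000
for x in range(3):
    # P_x(T > t) is the x-th entry of Q^t applied to the all-ones vector.
    p_survive = (np.linalg.matrix_power(Q, t) @ np.ones(3))[x]
    # Finite-t estimate of theta*_x = -(1/t) log P_x(T > t).
    rate = -np.log(p_survive) / t
    assert abs(rate - (-np.log(lam))) < 1e-2
```

The sub-exponential prefactor contributes an error of order $1/t$, which is why a large horizon $t$ is used.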

A sufficient condition for existence can be given in terms of the transition semigroup $(P_t,\ t \geq 0)$ of the process before killing. Under the conditions that $\mathcal{X}^a$ is a compact Hausdorff space and that $P_1$ preserves the set of continuous functions, i.e. $P_1(\mathcal{C}(\mathcal{X}^a)) \subseteq \mathcal{C}(\mathcal{X}^a)$, there exists a QSD.

History

The works of Wright on gene frequency in 1931[3] and of Yaglom on branching processes in 1947[4] already included the idea of such distributions. The term quasi-stationarity applied to biological systems was then used by Bartlett in 1957,[5] who later coined "quasi-stationary distribution".[6]

Quasi-stationary distributions were also part of the classification of killed processes given by Vere-Jones in 1962[7] and their definition for finite state Markov chains was done in 1965 by Darroch and Seneta.[8]

Examples

Quasi-stationary distributions can be used to model processes that are certain to be absorbed eventually but persist for a long time beforehand, such as the population dynamics described in the introduction.

Notes and References

  1. Collet, Pierre; Martínez, Servet; San Martín, Jaime. Quasi-Stationary Distributions. Probability and its Applications. Springer, 2013. doi:10.1007/978-3-642-33131-2. ISBN 978-3-642-33130-5.
  2. Ferrari, Pablo A.; Martínez, Servet; Picco, Pierre. "Existence of Non-Trivial Quasi-Stationary Distributions in the Birth-Death Chain". Advances in Applied Probability, 1992, vol. 24, no. 4, pp. 795–813. doi:10.2307/1427713.
  3. Wright, Sewall. "Evolution in Mendelian populations". Genetics, 1931, vol. 16, no. 2, pp. 97–159.
  4. Yaglom, Akiva M. "Certain limit theorems of the theory of branching random processes". Doklady Akad. Nauk SSSR (N.S.), 1947, p. 3.
  5. Bartlett, M. S. "On theoretical models for competitive and predatory biological systems". Biometrika, 1957, vol. 44, no. 1/2, pp. 27–42.
  6. Bartlett, Maurice Stevenson. Stochastic Population Models in Ecology and Epidemiology. 1960.
  7. Vere-Jones, D. "Geometric Ergodicity in Denumerable Markov Chains". The Quarterly Journal of Mathematics, 1962, vol. 13, no. 1, pp. 7–28. doi:10.1093/qmath/13.1.7.
  8. Darroch, J. N.; Seneta, E. "On Quasi-Stationary Distributions in Absorbing Discrete-Time Finite Markov Chains". Journal of Applied Probability, 1965, vol. 2, no. 1, pp. 88–100. doi:10.2307/3211876.