In probability theory, Kolmogorov equations, including Kolmogorov forward equations and Kolmogorov backward equations, characterize continuous-time Markov processes. In particular, they describe how the probability that a continuous-time Markov process is in a certain state changes over time.
Writing in 1931, Andrei Kolmogorov started from the theory of discrete time Markov processes, which are described by the Chapman–Kolmogorov equation, and sought to derive a theory of continuous time Markov processes by extending this equation. He found that there are two kinds of continuous time Markov processes, depending on the assumed behavior over small intervals of time:
If you assume that "in a small time interval there is an overwhelming probability that the state will remain unchanged; however, if it changes, the change may be radical", then you are led to what are called jump processes.
The other case leads to processes such as those "represented by diffusion and by Brownian motion; there it is certain that some change will occur in any time interval, however small; only, here it is certain that the changes during small time intervals will be also small".
For each of these two kinds of processes, Kolmogorov derived a forward and a backward system of equations (four in all).
The equations are named after Andrei Kolmogorov since they were highlighted in his 1931 foundational work.[1]
William Feller, in 1949, used the names "forward equation" and "backward equation" for his more general version of Kolmogorov's pair, in both jump and diffusion processes.[2] Much later, in 1956, he referred to the equations for the jump process as "Kolmogorov forward equations" and "Kolmogorov backward equations".[3]
Other authors, such as Motoo Kimura,[4] referred to the diffusion (Fokker–Planck) equation as the Kolmogorov forward equation, a name that has persisted.
The original derivation of the equations by Kolmogorov starts with the Chapman–Kolmogorov equation (Kolmogorov called it the fundamental equation) for time-continuous and differentiable Markov processes on a finite, discrete state space.[1] In this formulation, it is assumed that the probabilities $P(x,s;y,t)$ of finding the process in state $y$ at time $t$, given that it was in state $x$ at time $s$, are known and differentiable for all $x,y\in\Omega$ and all $t>s$ with $t,s\in\mathbb{R}_{\ge 0}$.
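For reference, in this notation the Chapman–Kolmogorov equation from which both systems of equations are derived can be written as

$$P(x,s;y,t)=\sum_{z\in\Omega}P(x,s;z,u)\,P(z,u;y,t),\qquad s<u<t.$$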
For the case of a countable state space we put $i,j$ in place of $x,y$. The Kolmogorov forward equations then read

$$\frac{\partial P_{ij}}{\partial t}(s;t)=\sum_k P_{ik}(s;t)\,A_{kj}(t),$$

where $A(t)$ is the matrix of transition rates (characterized below), while the Kolmogorov backward equations are

$$\frac{\partial P_{ij}}{\partial s}(s;t)=-\sum_k P_{kj}(s;t)\,A_{ik}(s).$$
The functions $P_{ij}(s;t)$ represent the probability that the system, being in state $i$ at time $s$, is found in state $j$ at the later time $t>s$. The quantities $A_{ij}(t)$ satisfy

$$A_{ij}(t)=\left[\frac{\partial P_{ij}}{\partial u}(t;u)\right]_{u=t},\qquad A_{jk}(t)\ge 0\ \text{for } j\ne k,\qquad \sum_k A_{jk}(t)=0.$$
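As a numerical illustration, not part of the original derivation, the following Python sketch checks both systems for a time-homogeneous chain, where $P(s;t)=e^{(t-s)A}$ for a constant rate matrix $A$; the three-state matrix and the evaluation points below are arbitrary choices.

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary 3-state rate matrix: rows sum to zero, off-diagonal entries nonnegative.
A = np.array([[-2.0,  1.0,  1.0],
              [ 0.5, -1.0,  0.5],
              [ 0.0,  3.0, -3.0]])

P = lambda s, t: expm((t - s) * A)   # transition matrix of the time-homogeneous chain
s, t, h = 0.3, 1.2, 1e-6

dP_dt = (P(s, t + h) - P(s, t - h)) / (2 * h)   # central difference in t
dP_ds = (P(s + h, t) - P(s - h, t)) / (2 * h)   # central difference in s

print(np.max(np.abs(dP_dt - P(s, t) @ A)))      # forward equations:  dP/dt =  P A  (close to 0)
print(np.max(np.abs(dP_ds + A @ P(s, t))))      # backward equations: dP/ds = -A P  (close to 0)
```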
Still in the discrete state case, letting $s=0$ and assuming that the system initially is found in state $i$, the Kolmogorov forward equations describe an initial value problem for the probabilities of the process, given the quantities $A_{jk}(t)$. Putting $p_k(t)=P_{ik}(0;t)$, so that $\sum_k p_k(t)=1$, the system becomes

$$\frac{dp_k}{dt}(t)=\sum_j A_{jk}(t)\,p_j(t);\qquad p_k(0)=\delta_{ik},\quad k=0,1,\dots$$
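Equivalently, although the source does not state it in this form, collecting the probabilities into a row vector $p(t)=(p_0(t),p_1(t),\dots)$ turns the system into $\tfrac{dp}{dt}=p(t)\,A(t)$ with $p(0)=e_i$; for a constant rate matrix $A$ the solution is $p(t)=e_i\,e^{tA}$.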
For the case of a pure death process with constant rates the only nonzero coefficients are $A_{j,j-1}=\mu j$, $j\ge 1$. Letting

$$\Psi(x,t)=\sum_k x^k p_k(t),$$

the system of equations can in this case be recast as a partial differential equation for $\Psi(x,t)$ with initial condition $\Psi(x,0)=x^i$:

$$\frac{\partial\Psi}{\partial t}(x,t)=\mu(1-x)\frac{\partial\Psi}{\partial x}(x,t).$$
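Although the derivation stops here, one can check by direct substitution that this initial value problem is solved by

$$\Psi(x,t)=\bigl(1+(x-1)e^{-\mu t}\bigr)^{i},$$

so that, starting from $i$ individuals, $N(t)$ has a binomial distribution with parameters $i$ and $e^{-\mu t}$.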
One example from biology is given below:[7]
$$p_n'(t)=(n-1)\beta\,p_{n-1}(t)-n\beta\,p_n(t)$$

This equation is used to model population growth with birth. Here $n$ is the population size, $\beta$ is the birth rate, and $p_n(t)=\Pr(N(t)=n)$ is the probability that the population has size $n$ at time $t$.
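In the notation of the preceding discussion (a connection not made explicit in the source), this is the Kolmogorov forward equation of a linear birth process with transition rates $A_{n,n+1}=n\beta$ and $A_{nn}=-n\beta$.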
The analytical solution is:[7]
$$p_n(t)=(n-1)\beta\,e^{-n\beta t}\int_0^t p_{n-1}(s)\,e^{n\beta s}\,ds$$

This is a formula for the probability $p_n(t)$ in terms of the preceding one, $p_{n-1}(t)$.
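As a numerical check, a sketch rather than part of the source, the following Python code evaluates this recursive formula on a time grid, assuming an initial population of one individual (so $p_1(t)=e^{-\beta t}$), and compares the result with a direct integration of the forward equations; the values of beta, N and T are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, N, T = 0.7, 8, 2.0          # arbitrary birth rate, largest tracked size, horizon
t = np.linspace(0.0, T, 2001)

# p[n] holds p_n on the time grid; the assumed initial condition is N(0) = 1,
# so p_1(t) = exp(-beta t) and p_n(0) = 0 for n > 1.
p = np.zeros((N + 1, t.size))
p[1] = np.exp(-beta * t)
for n in range(2, N + 1):
    # p_n(t) = (n-1) beta e^{-n beta t} * integral_0^t p_{n-1}(s) e^{n beta s} ds,
    # with the integral evaluated by the trapezoidal rule.
    integrand = p[n - 1] * np.exp(n * beta * t)
    increments = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)
    integral = np.concatenate(([0.0], np.cumsum(increments)))
    p[n] = (n - 1) * beta * np.exp(-n * beta * t) * integral

# Independent check: integrate p_n' = (n-1) beta p_{n-1} - n beta p_n directly.
def rhs(_, q):
    dq = np.zeros_like(q)
    for n in range(1, N + 1):
        dq[n] = (n - 1) * beta * q[n - 1] - n * beta * q[n]
    return dq

q0 = np.zeros(N + 1)
q0[1] = 1.0
sol = solve_ivp(rhs, (0.0, T), q0, t_eval=[T], rtol=1e-9, atol=1e-12)
print(np.max(np.abs(p[:, -1] - sol.y[:, 0])))   # small (trapezoidal-rule error, ~1e-6)
```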