A continuous-time Markov chain (CTMC) is a continuous-time stochastic process in which, for each state, the process remains in its current state for an exponentially distributed amount of time and then moves to a different state as specified by the probabilities of a stochastic matrix. An equivalent formulation describes the process as changing state according to the least value of a set of exponential random variables, one for each possible state it can move to, with the parameters determined by the current state.
An example of a CTMC with three states $\{0,1,2\}$ is as follows: the process makes a transition after the amount of time specified by the holding time, an exponential random variable $E_i$, where $i$ is its current state. Each random variable is independent, with $E_0 \sim \operatorname{Exp}(6)$, $E_1 \sim \operatorname{Exp}(12)$ and $E_2 \sim \operatorname{Exp}(18)$. When a transition is to be made, the process moves according to the jump chain, a discrete-time Markov chain with stochastic matrix

$$\begin{bmatrix} 0 & \tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{1}{3} & 0 & \tfrac{2}{3} \\ \tfrac{5}{6} & \tfrac{1}{6} & 0 \end{bmatrix}.$$
Equivalently, by the property of competing exponentials, this CTMC changes state from state $i$ according to the minimum of two independent random variables $E_{i,j} \sim \operatorname{Exp}(q_{i,j})$, $i \ne j$, where the parameters are given by the Q-matrix $Q = (q_{i,j})$:

$$\begin{bmatrix} -6 & 3 & 3 \\ 4 & -12 & 8 \\ 15 & 3 & -18 \end{bmatrix}.$$

Each off-diagonal entry $q_{i,j}$ can be computed as the probability that the jump chain moves from state $i$ to state $j$, multiplied by the rate parameter $-q_{i,i}$ of the holding time at state $i$; the diagonal entries are chosen so that each row sums to zero.
A CTMC satisfies the Markov property, that its behavior depends only on its current state and not on its past behavior, due to the memorylessness of the exponential distribution and of discrete-time Markov chains.
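To make this concrete, here is a minimal simulation sketch in Python (assuming NumPy is available; the function name and time horizon are mine). It draws a holding time from the current state's exponential clock and then picks the next state from the jump chain of the example above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rates of the exponential holding times E_0, E_1, E_2 from the example.
rates = np.array([6.0, 12.0, 18.0])

# Jump-chain stochastic matrix from the example.
jump = np.array([
    [0.0, 1 / 2, 1 / 2],
    [1 / 3, 0.0, 2 / 3],
    [5 / 6, 1 / 6, 0.0],
])

def simulate(t_end, state=0):
    """Simulate the CTMC until time t_end; return jump times and states."""
    t, times, states = 0.0, [0.0], [state]
    while True:
        t += rng.exponential(1.0 / rates[state])  # holding time in `state`
        if t >= t_end:
            break
        state = rng.choice(3, p=jump[state])      # next state per jump chain
        times.append(t)
        states.append(state)
    return times, states

times, states = simulate(10.0)
```

By the property of competing exponentials, sampling instead the minimum of independent $\operatorname{Exp}(q_{i,j})$ clocks, one per possible destination, yields the same law for the trajectory.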
Let $(\Omega, \mathcal{A}, \Pr)$ be a probability space, let $S$ be a countable nonempty set, and let $T = \mathbb{R}_{\ge 0}$ ($T$ for "time"). Equip $S$ with the discrete metric, so that we can make sense of right continuity of functions $\mathbb{R}_{\ge 0} \to S$. A continuous-time Markov chain is defined by:

- a probability vector $\lambda$ on $S$ (which below we will interpret as the initial distribution of the Markov chain), and
- a rate matrix $Q$ on $S$, that is, a function $Q : S^2 \to \mathbb{R}$ such that
  1. for all distinct $i, j \in S$, $0 \le q_{i,j}$, and
  2. for all $i \in S$, $\sum_{j \in S \setminus \{i\}} q_{i,j} = -q_{i,i}$.

Even if $S$ is infinite, this sum is a priori well defined (possibly equalling $+\infty$), because each of its terms is nonnegative; a posteriori it is also finite (not equal to $+\infty$), since it equals $-q_{i,i}$ and $Q$ is real-valued. Some authors instead use a definition that is word-for-word the same except for the modified stipulation $Q : S^2 \to \mathbb{R} \cup \{-\infty\}$, and say $Q$ is stable or totally stable to mean $\operatorname{range} Q \subseteq \mathbb{R}$, i.e., no entry equals $-\infty$.

The two conditions can be subsumed into the single requirement

$$\forall i \in S~~\sum_{j \in S} q_{i,j} = 0,$$

or, in matrix notation, $Q \cdot \mathbf{1} = 0$, where $\mathbf{1}$ is the all-ones vector.
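The defining conditions translate directly into a validity check in code; a minimal sketch (the helper name is mine), applied to the example $Q$-matrix from the introduction:

```python
import numpy as np

def is_rate_matrix(Q, tol=1e-12):
    """Check the rate-matrix conditions: nonnegative off-diagonal
    entries and every row summing to zero."""
    Q = np.asarray(Q, dtype=float)
    off_diag = Q - np.diag(np.diag(Q))
    rows_ok = np.all(np.abs(Q.sum(axis=1)) <= tol)   # Q . 1 = 0
    signs_ok = np.all(off_diag >= -tol)              # 0 <= q_ij for i != j
    return bool(rows_ok and signs_ok)

Q = np.array([[-6.0, 3.0, 3.0], [4.0, -12.0, 8.0], [15.0, 3.0, -18.0]])
print(is_rate_matrix(Q))  # True
```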
Now, let $X : T \to S^\Omega$ be such that, for all $t \in T$, $X(t)$ is $(\mathcal{A}, \mathcal{P}(S))$-measurable. There are three equivalent definitions of $X$ being a Markov chain with initial distribution $\lambda$ and rate matrix $Q$: via transition probabilities, via the jump chain and holding times, and via infinitesimal behavior. We take these in turn.
As a prelude to the transition-probability definition, we first motivate the definition of a regular rate matrix. We will use the transition-rate matrix $Q$ to specify the dynamics of the Markov chain by generating a collection of transition matrices $P(t)$ on $S$, one for each $t \in \mathbb{R}_{\ge 0}$, as a solution of the Kolmogorov backward equations determined by $Q$. We say $Q$ is regular to mean that this system of equations has exactly one solution. If $S$ is finite, there is always exactly one solution, namely $P = (e^{tQ})_{t \in \mathbb{R}_{\ge 0}}$, so every rate matrix on a finite $S$ is regular; if $S$ is infinite, there exist irregular transition-rate matrices on $S$. When $Q$ is regular, then for the unique solution $P$ and every $t \in T$, $P(t)$ is a stochastic matrix, and the family $P$ is determined by $Q$.
Let $P$ be the unique solution of the backward equations for $Q$ (so that regularity of $Q$ is in force). We say $X$ is a Markov chain with initial distribution $\lambda$ and rate matrix $Q$ to mean: for all $i \in S$, $\Pr(X_0 = i) = \lambda_i$, and for all $n \ge 0$, all times $t_0, \dots, t_{n+1} \in T$ with $t_0 < \dots < t_{n+1}$, and all states $i_0, \dots, i_{n+1} \in S$ with $0 < \Pr(X_{t_0} = i_0, \dots, X_{t_n} = i_n)$ (and hence $0 < \Pr(X_{t_n} = i_n)$),

$$\Pr(X_{t_{n+1}} = i_{n+1} \mid X_{t_0} = i_0, \dots, X_{t_n} = i_n) = \Pr(X_{t_{n+1}} = i_{n+1} \mid X_{t_n} = i_n) = (P(t_{n+1} - t_n))_{i_n, i_{n+1}}.$$

Here conditional probability is understood in the elementary sense: $\forall A, B \in \mathcal{A}$, $\Pr(B) \ne 0 \implies \Pr(A \cap B) = \Pr(A \mid B)\Pr(B)$.
It follows from the continuity of the functions $(P(t)_{i,j})_{t \in T}$, for $i, j \in S$, that the trajectories $(X_t(\omega))_{t \in T}$ are almost surely right continuous (with respect to the discrete metric on $S$): there is a $\Pr$-null set $N$ such that $\{\omega \in \Omega : (X_t(\omega))_{t \in T} \text{ is not right continuous}\} \subseteq N$.
For the jump-chain/holding-time definition, let $f : T \to S$ be right continuous (with respect to the discrete metric on $S$). Define its holding-time function

$$h = h(f) = \bigl(\inf\{u \in (0, +\infty) : f(t+u) \ne f(t)\}\bigr)_{t \in T},$$

with values in $(0, +\infty]$, and the sequence of holding times $H = H(f) = (H_n)_{n \in \mathbb{Z}_{\ge 0}}$, where $H_0 = h(f)(0)$ and $H_{n+1} = h(f)(H_0 + \dots + H_n)$ whenever $H_0 + \dots + H_n < +\infty$ (and $H_{n+1} = +\infty$ otherwise). Fixing an arbitrary $s \in S$, define the state sequence of $f$ by

$$y = y(f) = \left(\begin{cases} f\bigl(\sum_{k < n} H_k\bigr) & \text{if } \sum_{k < n} H_k < +\infty, \\ s & \text{otherwise} \end{cases}\right)_{n \in \mathbb{Z}_{\ge 0}},$$

so that $y$ records the successive states visited by $f$, with the placeholder $s$ used after an explosion.
The jump matrix $\Pi$, alternatively written $\Pi(Q)$ to emphasize its dependence on $Q$, is defined as follows: let $Z = Z(Q) = \{k \in S : q_{k,k} = 0\}$ be the set of absorbing states, and set

$$\Pi_{i,j} = \begin{cases} [i = j] & \text{if } i \in Z, \\ q_{i,j} / (-q_{i,i}) & \text{if } i \notin Z \text{ and } i \ne j, \\ 0 & \text{if } i \notin Z \text{ and } i = j, \end{cases}$$

so that each non-absorbing row normalizes the departure rates $(q_{i,j})_{j \ne i}$ by $-q_{i,i}$.
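Applied to the example $Q$ from the introduction, this definition recovers the jump-chain matrix given there; a small sketch (the helper name is mine):

```python
import numpy as np

def jump_matrix(Q):
    """Jump matrix Pi(Q): identity rows at absorbing states,
    otherwise departure rates normalized by -q_ii."""
    Q = np.asarray(Q, dtype=float)
    Pi = np.zeros_like(Q)
    for i in range(Q.shape[0]):
        if Q[i, i] == 0.0:            # absorbing state: row of the identity
            Pi[i, i] = 1.0
        else:                         # normalize the off-diagonal rates
            Pi[i] = Q[i] / -Q[i, i]
            Pi[i, i] = 0.0
    return Pi

Q = np.array([[-6.0, 3.0, 3.0], [4.0, -12.0, 8.0], [15.0, 3.0, -18.0]])
print(jump_matrix(Q))  # rows [0, 1/2, 1/2], [1/3, 0, 2/3], [5/6, 1/6, 0]
```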
We say $X$ is a Markov chain with initial distribution $\lambda$ and rate matrix $Q$ to mean that the trajectories of $X$ are almost surely right continuous, so that (after modification on a null set if necessary) each trajectory $f(\omega) = (X_t(\omega))_{t \in T}$ is a right-continuous function $T \to S$, and moreover:

- $\sum_{n \in \mathbb{Z}_{\ge 0}} H(f(\omega))_n = +\infty$ almost surely, i.e., $X$ does not explode in finite time;
- the state sequence $y(f(\omega))$ is a discrete-time Markov chain with initial distribution $\lambda$ and transition matrix $\Pi(Q)$; and
- conditional on the state sequence $Y = (Y_n)_{n \in \mathbb{Z}_{\ge 0}}$, the holding times are independent with $\forall n \in \mathbb{Z}_{\ge 0}~\forall B \in \mathcal{B}(\mathbb{R}_{\ge 0})$: $\Pr(H_n(f) \in B) = \operatorname{Exp}(-q_{Y_n, Y_n})(B)$.
Finally, the infinitesimal definition: we say $X$ is a Markov chain with initial distribution $\lambda$ and rate matrix $Q$ to mean that for all $i \in S$, $\Pr(X(0) = i) = \lambda_i$, and for all $i, j \in S$ and all $t \in T$ with $0 < \Pr(X(t) = i)$, as $h \downarrow 0$,

$$\Pr(X(t+h) = j \mid X(t) = i) = [i = j] + q_{i,j}\,h + o(h),$$

where $[i = j]$ is $1$ if $i = j$ and $0$ otherwise, and where the $o(h)$ term may depend on $i$ and $j$ but becomes negligible relative to $h$ as $h \to 0$.
The above equation shows that, for $i \ne j$, $q_{i,j}$ measures how quickly the process jumps from state $i$ to state $j$, while in the case $i = j$ the diagonal entry $q_{i,i} = -\sum_{j \ne i} q_{i,j}$ measures (with a minus sign) how quickly the process leaves state $i$ altogether.
Communicating classes, transience, recurrence and positive and null recurrence are defined exactly as for discrete-time Markov chains.
Write $P(t)$ for the matrix with entries $p_{ij} = \Pr(X_t = j \mid X_0 = i)$. Then the matrix $P(t)$ satisfies the forward equation, a first-order differential equation

$$P'(t) = P(t)Q,$$

with initial condition $P(0) = I$, the identity matrix. The solution is given by the matrix exponential

$$P(t) = e^{tQ}.$$
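This can be checked numerically; a brief sketch using SciPy's matrix exponential with the three-state example $Q$ from above, comparing a central-difference derivative of $P(t)$ against $P(t)Q$:

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-6.0, 3.0, 3.0], [4.0, -12.0, 8.0], [15.0, 3.0, -18.0]])

t, h = 0.1, 1e-6
P = expm(t * Q)                                         # P(t) = e^{tQ}
dP = (expm((t + h) * Q) - expm((t - h) * Q)) / (2 * h)  # approx P'(t)
print(np.allclose(dP, P @ Q, atol=1e-5))                # forward equation holds
print(P.sum(axis=1))                                    # each row of P(t) sums to 1
```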
In a simple case such as a CTMC on the two-element state space $\{1, 2\}$, the general $Q$ matrix is the following $2 \times 2$ matrix with $\alpha, \beta > 0$:

$$Q = \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta \end{pmatrix}.$$
The above relation for the forward equation can be solved explicitly in this case to give

$$P(t) = \begin{pmatrix} \frac{\beta}{\alpha+\beta} + \frac{\alpha}{\alpha+\beta} e^{-(\alpha+\beta)t} & \frac{\alpha}{\alpha+\beta} - \frac{\alpha}{\alpha+\beta} e^{-(\alpha+\beta)t} \\ \frac{\beta}{\alpha+\beta} - \frac{\beta}{\alpha+\beta} e^{-(\alpha+\beta)t} & \frac{\alpha}{\alpha+\beta} + \frac{\beta}{\alpha+\beta} e^{-(\alpha+\beta)t} \end{pmatrix}.$$
Computing direct solutions is complicated for larger matrices; instead, one uses the fact that $Q$ is the generator of a semigroup of matrices:

$$P(t+s) = e^{(t+s)Q} = e^{tQ} e^{sQ} = P(t)P(s).$$
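Both the closed form and the semigroup identity are easy to confirm numerically; a sketch (the particular values of $\alpha$, $\beta$, $t$ and $s$ are mine):

```python
import numpy as np
from scipy.linalg import expm

alpha, beta = 2.0, 3.0
Q = np.array([[-alpha, alpha], [beta, -beta]])

def P_closed(t):
    """Closed-form P(t) for the two-state chain above."""
    s, e = alpha + beta, np.exp(-(alpha + beta) * t)
    return np.array([[beta + alpha * e, alpha - alpha * e],
                     [beta - beta * e, alpha + beta * e]]) / s

t, s = 0.4, 0.7
print(np.allclose(P_closed(t), expm(t * Q)))                      # matches e^{tQ}
print(np.allclose(expm((t + s) * Q), expm(t * Q) @ expm(s * Q)))  # semigroup
```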
The stationary distribution for an irreducible recurrent CTMC is the probability distribution to which the process converges for large values of $t$. Observe that for the two-state process considered earlier, with $P(t)$ as displayed above, as $t \to \infty$ the distribution tends to

$$P_\pi = \begin{pmatrix} \frac{\beta}{\alpha+\beta} & \frac{\alpha}{\alpha+\beta} \\ \frac{\beta}{\alpha+\beta} & \frac{\alpha}{\alpha+\beta} \end{pmatrix}.$$

Observe that each row converges to the same distribution, since the limiting behavior does not depend on the starting state. The stationary row vector $\pi$ may be found by solving

$$\pi Q = 0$$

subject to the normalization

$$\sum_{i} \pi_i = 1.$$
The image to the right describes a continuous-time Markov chain with state space as shown and transition-rate matrix

$$Q = \begin{pmatrix} -0.025 & 0.02 & 0.005 \\ 0.3 & -0.5 & 0.2 \\ 0.02 & 0.4 & -0.42 \end{pmatrix}.$$

Its stationary distribution, obtained by solving $\pi Q = 0$ subject to the entries summing to $1$, is

$$\pi = \begin{pmatrix} 0.885 & 0.071 & 0.044 \end{pmatrix}.$$
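A common numerical recipe replaces one redundant equation of $\pi Q = 0$ with the normalization constraint; a sketch:

```python
import numpy as np

Q = np.array([[-0.025, 0.02, 0.005],
              [0.3, -0.5, 0.2],
              [0.02, 0.4, -0.42]])

# Solve pi Q = 0 with sum(pi) = 1: transpose the system and swap
# the (redundant) last equation for the normalization constraint.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(3)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi.round(3))  # approximately [0.885, 0.071, 0.044]
```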
The image to the right describes a continuous-time Markov chain modeling Pac-Man with state space $\{1, 2, \dots, 9\}$. The player controls Pac-Man through a maze, eating pac-dots, while being hunted by ghosts. For convenience, the maze is a small $3 \times 3$ grid, and the ghosts move randomly in the horizontal and vertical directions. A secret passageway between states 2 and 8 can be used in both directions. Entries equal to zero are omitted in the following transition-rate matrix:
$$Q = \begin{pmatrix}
-1 & \tfrac{1}{2} & & \tfrac{1}{2} \\
\tfrac{1}{4} & -1 & \tfrac{1}{4} & & \tfrac{1}{4} & & & \tfrac{1}{4} \\
& \tfrac{1}{2} & -1 & & & \tfrac{1}{2} \\
\tfrac{1}{3} & & & -1 & \tfrac{1}{3} & & \tfrac{1}{3} \\
& \tfrac{1}{4} & & \tfrac{1}{4} & -1 & \tfrac{1}{4} & & \tfrac{1}{4} \\
& & \tfrac{1}{3} & & \tfrac{1}{3} & -1 & & & \tfrac{1}{3} \\
& & & \tfrac{1}{2} & & & -1 & \tfrac{1}{2} \\
& \tfrac{1}{4} & & & \tfrac{1}{4} & & \tfrac{1}{4} & -1 & \tfrac{1}{4} \\
& & & & & \tfrac{1}{2} & & \tfrac{1}{2} & -1
\end{pmatrix}$$
This Markov chain is irreducible, because the ghosts can fly from every state to every state in a finite amount of time. Due to the secret passageway, the chain is also aperiodic, because the ghosts can move from any state to any state both in an even and in an odd number of state transitions. Therefore, a unique stationary distribution exists and can be found by solving $\pi Q = 0$ subject to the entries summing to $1$, giving

$$\pi = (7.7,\ 15.4,\ 7.7,\ 11.5,\ 15.4,\ 11.5,\ 7.7,\ 15.4,\ 7.7)\,\%.$$
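A sketch reconstructing this $Q$ from the grid adjacency (the dictionary encoding of the maze, including the 2-8 passage, is mine) and solving the same linear system:

```python
import numpy as np

neighbors = {1: [2, 4], 2: [1, 3, 5, 8], 3: [2, 6],
             4: [1, 5, 7], 5: [2, 4, 6, 8], 6: [3, 5, 9],
             7: [4, 8], 8: [2, 5, 7, 9], 9: [6, 8]}

Q = np.zeros((9, 9))
for i, nbrs in neighbors.items():
    for j in nbrs:
        Q[i - 1, j - 1] = 1.0 / len(nbrs)  # leave each state at total rate 1
    Q[i - 1, i - 1] = -1.0

A = Q.T.copy()
A[-1, :] = 1.0                      # normalization row: sum(pi) = 1
b = np.zeros(9)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print((100 * pi).round(1))  # [7.7 15.4 7.7 11.5 15.4 11.5 7.7 15.4 7.7]
```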
For a CTMC $X_t$, the time-reversed process is defined to be $\hat{X}_t = X_{T-t}$, for a fixed time horizon $T$.
A chain is said to be reversible if the reversed process is the same as the forward process. Kolmogorov's criterion states that the necessary and sufficient condition for a process to be reversible is that the product of transition rates around a closed loop must be the same in both directions.
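Kolmogorov's criterion is easy to check numerically; a sketch for the single loop $0 \to 1 \to 2 \to 0$ of the three-state example from the introduction:

```python
import numpy as np

Q = np.array([[-6.0, 3.0, 3.0], [4.0, -12.0, 8.0], [15.0, 3.0, -18.0]])

forward = Q[0, 1] * Q[1, 2] * Q[2, 0]   # 3 * 8 * 15 = 360
backward = Q[0, 2] * Q[2, 1] * Q[1, 0]  # 3 * 3 * 4  = 36
print(forward == backward)  # False: this chain is not reversible
```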
One method of finding the stationary probability distribution, $\pi$, of an ergodic continuous-time Markov chain, $Q$, is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain. Each element of the one-step transition probability matrix of the EMC, $S$, is denoted by $s_{ij}$, and represents the conditional probability of transitioning from state $i$ into state $j$. These conditional probabilities may be found by

$$s_{ij} = \begin{cases} \dfrac{q_{ij}}{\sum_{k \ne i} q_{ik}} & \text{if } i \ne j, \\ 0 & \text{otherwise.} \end{cases}$$

From this, $S$ may be written as

$$S = I - \left(\operatorname{diag}(Q)\right)^{-1} Q,$$

where $I$ is the identity matrix and $\operatorname{diag}(Q)$ is the diagonal matrix formed from the main diagonal of $Q$.
To find the stationary probability distribution vector, we must next find $\varphi$ such that

$$\varphi S = \varphi,$$

with $\varphi$ a row vector whose elements are all greater than 0 and with $\|\varphi\|_1 = 1$. From this, $\pi$ may be found as

$$\pi = \frac{-\varphi (\operatorname{diag}(Q))^{-1}}{\left\| \varphi (\operatorname{diag}(Q))^{-1} \right\|_1}.$$

($S$ may be periodic, even if $Q$ is not. Once $\pi$ is found, it must be normalized to a unit vector.)
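A sketch of this recipe for the three-state example $Q$, finding $\varphi$ as the left eigenvector of $S$ with eigenvalue $1$ and then renormalizing:

```python
import numpy as np

Q = np.array([[-6.0, 3.0, 3.0], [4.0, -12.0, 8.0], [15.0, 3.0, -18.0]])

D_inv = np.diag(1.0 / np.diag(Q))   # (diag Q)^{-1}
S = np.eye(3) - D_inv @ Q           # embedded Markov chain (EMC)

# Left eigenvector of S for eigenvalue 1, scaled so ||phi||_1 = 1.
w, V = np.linalg.eig(S.T)
phi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
phi = phi / phi.sum()

pi = -phi @ D_inv
pi = pi / np.abs(pi).sum()          # normalize in the 1-norm
print(pi)
```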
Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton: the (discrete-time) Markov chain formed by observing $X(t)$ at intervals of δ units of time. The random variables $X(0), X(\delta), X(2\delta), \dots$ give the sequence of states visited by the δ-skeleton.
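Since the skeleton's one-step transition matrix is just $P(\delta) = e^{\delta Q}$, it is immediate to compute; a sketch with the three-state example $Q$ and an arbitrary $\delta$ of my choosing:

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-6.0, 3.0, 3.0], [4.0, -12.0, 8.0], [15.0, 3.0, -18.0]])

delta = 0.05
P_delta = expm(delta * Q)   # one-step matrix of the delta-skeleton
print(P_delta.sum(axis=1))  # stochastic: each row sums to 1
```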