A phase-type distribution is a probability distribution constructed by a convolution or mixture of exponential distributions.[1] It results from a system of one or more inter-related Poisson processes occurring in sequence, or phases. The sequence in which each of the phases occurs may itself be a stochastic process. The distribution can be represented by a random variable describing the time until absorption of a Markov process with one absorbing state. Each of the states of the Markov process represents one of the phases.
It has a discrete-time equivalent: the discrete phase-type distribution.
The set of phase-type distributions is dense in the set of all positive-valued distributions; that is, it can be used to approximate any positive-valued distribution arbitrarily well.
Consider a continuous-time Markov process with m + 1 states, where m ≥ 1, such that the states 1,...,m are transient states and state 0 is an absorbing state. Further, let the process have an initial probability of starting in any of the m + 1 phases given by the probability vector (α0,α) where α0 is a scalar and α is a 1 × m vector.
The continuous phase-type distribution is the distribution of the time from the start of the above process until absorption in the absorbing state.
This process can be written in the form of a transition rate matrix,
{Q}=\left[\begin{matrix}0&\mathbf{0}\\\mathbf{S}^{0}&{S}\\\end{matrix}\right],
where S is an m × m matrix and S0 = –S1. Here 1 represents an m × 1 column vector with every element being 1.
The distribution of time X until the process reaches the absorbing state is said to be phase-type distributed and is denoted PH(α,S).
The distribution function of X is given by,
F(x)=1-\boldsymbol{\alpha}\exp({S}x)\mathbf{1},
and the density function,
f(x)=\boldsymbol{\alpha}\exp({S}x)\mathbf{S}^{0},
for all x > 0, where exp( · ) is the matrix exponential. It is usually assumed that the probability of the process starting in the absorbing state is zero (i.e. α0 = 0). The moments of the distribution are given by
E[X^{n}]=(-1)^{n}n!\boldsymbol{\alpha}{S}^{-n}\mathbf{1}.
The Laplace transform of the phase-type distribution is given by
M(s)=\alpha_{0}+\boldsymbol{\alpha}(sI-{S})^{-1}\mathbf{S}^{0},
where I is the identity matrix.
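The formulas above translate directly into a few lines of numerical code. The following is a minimal sketch, not taken from the source or from any particular library (the function names ph_cdf, ph_pdf and ph_moment are illustrative), that evaluates the distribution function, density and moments of PH(α, S) with NumPy/SciPy.

import math
import numpy as np
from scipy.linalg import expm

def ph_cdf(x, alpha, S):
    # F(x) = 1 - alpha * exp(S x) * 1
    ones = np.ones(S.shape[0])
    return 1.0 - alpha @ expm(S * x) @ ones

def ph_pdf(x, alpha, S):
    # f(x) = alpha * exp(S x) * S0, with S0 = -S * 1
    S0 = -S @ np.ones(S.shape[0])
    return alpha @ expm(S * x) @ S0

def ph_moment(n, alpha, S):
    # E[X^n] = (-1)^n n! alpha S^{-n} 1
    ones = np.ones(S.shape[0])
    S_inv_n = np.linalg.matrix_power(np.linalg.inv(S), n)
    return (-1) ** n * math.factorial(n) * (alpha @ S_inv_n @ ones)

# Example: the PH representation of Exp(2) is alpha = (1), S = [[-2]].
alpha = np.array([1.0])
S = np.array([[-2.0]])
# ph_cdf(1.0, alpha, S) is approximately 1 - exp(-2); ph_moment(1, alpha, S) = 0.5.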
Several well-known probability distributions arise as special cases of the continuous phase-type distribution, including the exponential, hyperexponential, Erlang, hypoexponential, mixture-of-Erlangs and Coxian distributions discussed below.
As the class of phase-type distributions is dense in the set of all positive-valued distributions, it can be used to represent any positive-valued distribution. However, the phase-type distribution is light-tailed (platykurtic), so its representation of a heavy-tailed (leptokurtic) distribution is only an approximation, although the approximation can be made as accurate as desired.
In all the following examples it is assumed that there is no probability mass at zero, that is α0 = 0.
The simplest non-trivial example of a phase-type distribution is the exponential distribution with parameter λ. The parameters of its phase-type representation are S = -λ and α = 1.
The mixture of exponentials, or hyperexponential distribution, with rates λ1, λ2, ..., λn > 0 can be represented as a phase-type distribution with
\boldsymbol{\alpha}=(\alpha_{1},\alpha_{2},\dots,\alpha_{n}),
where \sum_{i=1}^{n}\alpha_{i}=1, and (for example, with n = 5 phases)
{S}=\left[\begin{matrix}-\lambda_{1}&0&0&0&0\\0&-\lambda_{2}&0&0&0\\0&0&-\lambda_{3}&0&0\\0&0&0&-\lambda_{4}&0\\0&0&0&0&-\lambda_{5}\\\end{matrix}\right].
This mixture of densities of exponentially distributed random variables can be characterised through the density
f(x)=\sum_{i=1}^{n}\alpha_{i}\lambda_{i}e^{-\lambda_{i}x}=\sum_{i=1}^{n}\alpha_{i}f_{X_{i}}(x),
or its cumulative distribution function
F(x)=1-\sum_{i=1}^{n}\alpha_{i}e^{-\lambda_{i}x}=\sum_{i=1}^{n}\alpha_{i}F_{X_{i}}(x),
with X_{i}\sim\operatorname{Exp}(\lambda_{i}).
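As a sketch of this construction (the helper name hyperexponential_ph is illustrative, not from any library), the representation amounts to a diagonal sub-generator with the mixing probabilities as the initial vector.

import numpy as np

def hyperexponential_ph(weights, rates):
    # alpha holds the mixing probabilities; S is diagonal because no
    # transitions occur between the transient phases.
    weights = np.asarray(weights, dtype=float)
    rates = np.asarray(rates, dtype=float)
    assert np.isclose(weights.sum(), 1.0), "mixing probabilities must sum to 1"
    return weights, np.diag(-rates)

# Example: mixture of Exp(1) and Exp(5) with weights 0.3 and 0.7.
# ph_cdf(x, alpha, S) from the sketch above equals 1 - 0.3*exp(-x) - 0.7*exp(-5x).
alpha, S = hyperexponential_ph([0.3, 0.7], [1.0, 5.0])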
The Erlang distribution has two parameters: the shape, an integer k > 0, and the rate λ > 0. It is sometimes denoted E(k,λ). The Erlang distribution can be written in the form of a phase-type distribution by making S a k × k matrix with diagonal elements -λ and super-diagonal elements λ, with the probability of starting in state 1 equal to 1. For example, E(5,λ),
\boldsymbol{\alpha}=(1,0,0,0,0),
{S}=\left[\begin{matrix}-λ&λ&0&0&0\\0&-λ&λ&0&0\\0&0&-λ&λ&0\\0&0&0&-λ&λ\\0&0&0&0&-λ\\\end{matrix}\right].
For a given number of phases, the Erlang distribution is the phase-type distribution with the smallest coefficient of variation.[2]
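A sketch of the E(k,λ) representation follows (the helper name erlang_ph is assumed); its moments confirm the coefficient of variation 1/√k.

import numpy as np

def erlang_ph(k, lam):
    # Start in phase 1; move through phases 1 -> 2 -> ... -> k, each at rate lam.
    alpha = np.zeros(k)
    alpha[0] = 1.0
    S = -lam * np.eye(k) + lam * np.eye(k, k=1)  # -lam on the diagonal, lam on the super-diagonal
    return alpha, S

alpha, S = erlang_ph(5, 3.0)
# With ph_moment from the sketch above: mean 5/3, variance 5/9, coefficient of variation 1/sqrt(5).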
The hypoexponential distribution is a generalisation of the Erlang distribution in which the rate may differ from one transition to the next (the non-homogeneous case).
The mixture of two Erlang distributions, E(3,β1) and E(3,β2), with mixing probabilities (α1,α2) (such that α1 + α2 = 1 and αi ≥ 0 for each i) can be represented as a phase-type distribution with
\boldsymbol{\alpha}=(\alpha_{1},0,0,\alpha_{2},0,0),
and
{S}=\left[\begin{matrix} -\beta_{1}&\beta_{1}&0&0&0&0\\ 0&-\beta_{1}&\beta_{1}&0&0&0\\ 0&0&-\beta_{1}&0&0&0\\ 0&0&0&-\beta_{2}&\beta_{2}&0\\ 0&0&0&0&-\beta_{2}&\beta_{2}\\ 0&0&0&0&0&-\beta_{2}\\ \end{matrix}\right].
The Coxian distribution is a generalisation of the Erlang distribution. Instead of the absorbing state being reachable only from state k, it can be reached from any phase. The phase-type representation is given by
S=\left[\begin{matrix}-\lambda_{1}&p_{1}\lambda_{1}&0&\dots&0&0\\ 0&-\lambda_{2}&p_{2}\lambda_{2}&\ddots&0&0\\ \vdots&\ddots&\ddots&\ddots&\ddots&\vdots\\ 0&0&\ddots&-\lambda_{k-2}&p_{k-2}\lambda_{k-2}&0\\ 0&0&\dots&0&-\lambda_{k-1}&p_{k-1}\lambda_{k-1}\\ 0&0&\dots&0&0&-\lambda_{k}\end{matrix}\right]
and
\boldsymbol{\alpha}=(1,0,...,0),
where 0 < pi ≤ 1 for i = 1, ..., k - 1. In the case where all pi = 1 we have the Erlang distribution. The Coxian distribution is extremely important, as any acyclic phase-type distribution has an equivalent Coxian representation.
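A sketch of building this sub-generator (the helper name coxian_ph is assumed, not a standard API): phase i continues to phase i+1 with rate pi·λi and is absorbed with rate (1 - pi)·λi, while the last phase always absorbs.

import numpy as np

def coxian_ph(rates, ps):
    # rates = (lambda_1, ..., lambda_k); ps = (p_1, ..., p_{k-1}).
    k = len(rates)
    S = np.zeros((k, k))
    for i in range(k):
        S[i, i] = -rates[i]
        if i < k - 1:
            S[i, i + 1] = ps[i] * rates[i]   # continue to the next phase
    alpha = np.zeros(k)
    alpha[0] = 1.0                           # always start in the first phase
    return alpha, S

# With all p_i = 1 and equal rates this reduces to the Erlang representation above.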
The generalised Coxian distribution relaxes the condition that requires starting in the first phase.
Like the exponential distribution, the class of phase-type distributions is closed under minima of independent random variables; a sketch of the construction is given below.
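The following sketch assumes the standard Kronecker-sum construction (the helper name ph_min is illustrative): for independent X ~ PH(α, S) and Y ~ PH(β, T), running both chains in parallel until either one is absorbed gives min(X, Y) ~ PH(α ⊗ β, S ⊗ I + I ⊗ T).

import numpy as np

def ph_min(alpha, S, beta, T):
    # Both chains evolve independently; absorption of either ends the process,
    # so the joint sub-generator is the Kronecker sum of S and T.
    a = np.kron(alpha, beta)
    K = np.kron(S, np.eye(T.shape[0])) + np.kron(np.eye(S.shape[0]), T)
    return a, K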
BuTools includes methods for generating samples from phase-type distributed random variables.[3]
Any distribution can be arbitrarily well approximated by a phase-type distribution.[4][5] In practice, however, approximations can be poor when the size of the approximating process is fixed. Approximating a deterministic distribution of time 1 with 10 phases, each of average length 0.1, yields a variance of 0.1, because the Erlang distribution has the smallest variance among phase-type distributions with that number of phases.
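As a quick check of this figure (a minimal sketch using the Erlang mean and variance formulas, not a full phase-type computation):

k, lam = 10, 10.0          # 10 phases, each exponential with mean 1/lam = 0.1
mean = k / lam             # 1.0
variance = k / lam ** 2    # 0.1 -- no smaller variance is achievable with 10 phases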
Methods to fit a phase type distribution to data can be classified as maximum likelihood methods or moment matching methods.[8] Fitting a phase type distribution to heavy-tailed distributions has been shown to be practical in some situations.[9]