Phase-type distribution

A phase-type distribution is a probability distribution constructed by a convolution or mixture of exponential distributions.[1] It results from a system of one or more inter-related Poisson processes occurring in sequence, or phases. The sequence in which each of the phases occurs may itself be a stochastic process. The distribution can be represented by a random variable describing the time until absorption of a Markov process with one absorbing state. Each of the states of the Markov process represents one of the phases.

It has a discrete-time equivalent: the discrete phase-type distribution.

The set of phase-type distributions is dense in the class of all positive-valued distributions, that is, it can be used to approximate any positive-valued distribution.

Definition

Consider a continuous-time Markov process with m + 1 states, where m ≥ 1, such that the states 1,...,m are transient states and state 0 is an absorbing state. Further, let the process have an initial probability of starting in any of the m + 1 phases given by the probability vector (α0,α) where α0 is a scalar and α is a 1 × m vector.

The continuous phase-type distribution is the distribution of time from the above process's starting until absorption in the absorbing state.

This process can be written in the form of a transition rate matrix,

{Q}=\left[\begin{matrix}0&\mathbf{0}\\{S}^0&{S}\\\end{matrix}\right],

where S is an m × m matrix and S^0 = -S1. Here 1 represents an m × 1 column vector with every element being 1.

Characterization

The distribution of time X until the process reaches the absorbing state is said to be phase-type distributed and is denoted PH(α,S).

The distribution function of X is given by,

F(x)=1-\boldsymbol{\alpha}\exp({S}x)\mathbf{1},

and the density function,

f(x)=\boldsymbol{\alpha}\exp({S}x){S}^0,

for all x > 0, where exp( · ) is the matrix exponential. It is usually assumed that the probability of the process starting in the absorbing state is zero (i.e. α0 = 0). The moments of the distribution are given by

E[X^n]=(-1)^n n!\,\boldsymbol{\alpha}{S}^{-n}\mathbf{1}.

The Laplace transform of the phase type distribution is given by

M(s)=\alpha_0+\boldsymbol{\alpha}(sI-{S})^{-1}{S}^0,

where I is the identity matrix.
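
The characterization above translates directly into a few lines of numerical code. The following is a minimal sketch, assuming an illustrative two-phase representation (alpha, S) and using SciPy's matrix exponential; it is not taken from any particular library.

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

# Illustrative PH(alpha, S) with m = 2 transient phases and alpha0 = 0 (assumed values)
alpha = np.array([0.7, 0.3])          # initial distribution over the transient phases
S = np.array([[-3.0, 1.0],
              [0.0, -2.0]])           # sub-generator matrix
ones = np.ones(2)
S0 = -S @ ones                        # exit-rate vector S^0 = -S 1

def cdf(x):
    """F(x) = 1 - alpha exp(S x) 1"""
    return 1.0 - alpha @ expm(S * x) @ ones

def pdf(x):
    """f(x) = alpha exp(S x) S^0"""
    return alpha @ expm(S * x) @ S0

def moment(n):
    """E[X^n] = (-1)^n n! alpha S^{-n} 1"""
    S_inv_n = np.linalg.matrix_power(np.linalg.inv(S), n)
    return (-1) ** n * factorial(n) * alpha @ S_inv_n @ ones

print(cdf(1.0), pdf(1.0), moment(1))  # moment(1) is the mean time to absorption
```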

Special cases

The following probability distributions are all considered special cases of a continuous phase-type distribution: the degenerate distribution (a point mass at zero, the case α0 = 1 with no transient phases), the exponential distribution (one phase), the Erlang distribution (two or more identical phases in sequence), the hypoexponential distribution (two or more phases in sequence with possibly different rates), the hyperexponential distribution (a mixture of two or more exponential phases), and the Coxian distribution (phases in sequence with a probability of early absorption after each phase).

As the phase-type distribution is dense in the class of all positive-valued distributions, it can be used to approximate any positive-valued distribution. However, the phase-type distribution is light-tailed (platykurtic), so the representation of a heavy-tailed (leptokurtic) distribution by a phase type is only an approximation, even though the precision of the approximation can be made as good as desired.

Examples

In all the following examples it is assumed that there is no probability mass at zero, that is α0 = 0.

Exponential distribution

The simplest non-trivial example of a phase-type distribution is the exponential distribution with parameter λ. The parameters of the phase-type representation are S = (-λ), a 1 × 1 matrix, and α = (1).

Hyperexponential or mixture of exponential distribution

The mixture of exponential or hyperexponential distribution with rates λ12,...,λn > 0 can be represented as a phase type distribution with

\boldsymbol{\alpha}=(\alpha_1,\alpha_2,\dots,\alpha_n)

with

\sum_{i=1}^{n}\alpha_i=1

and (shown here for n = 5)

{S}=\left[\begin{matrix}-\lambda_1&0&0&0&0\\0&-\lambda_2&0&0&0\\0&0&-\lambda_3&0&0\\0&0&0&-\lambda_4&0\\0&0&0&0&-\lambda_5\\\end{matrix}\right].

This mixture of densities of exponentially distributed random variables can be characterized through

f(x)=\sum_{i=1}^{n}\alpha_i\lambda_i e^{-\lambda_i x}=\sum_{i=1}^{n}\alpha_i f_{X_i}(x),

or its cumulative distribution function

F(x)=1-\sum_{i=1}^{n}\alpha_i e^{-\lambda_i x}=\sum_{i=1}^{n}\alpha_i F_{X_i}(x),

with

X_i\sim\operatorname{Exp}(\lambda_i).
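
As a concrete numerical check (a sketch with made-up rates and weights, not from the article), the phase-type density of the diagonal representation above agrees with the explicit mixture density:

```python
import numpy as np
from scipy.linalg import expm

lam = np.array([1.0, 2.0, 5.0])      # illustrative rates lambda_i
alpha = np.array([0.5, 0.3, 0.2])    # mixing weights, summing to 1
S = np.diag(-lam)                    # diagonal sub-generator
S0 = -S @ np.ones(3)                 # exit-rate vector, here simply lam

x = 0.7
ph_pdf = alpha @ expm(S * x) @ S0                    # phase-type density alpha exp(Sx) S^0
mix_pdf = np.sum(alpha * lam * np.exp(-lam * x))     # sum_i alpha_i lambda_i e^{-lambda_i x}
print(np.isclose(ph_pdf, mix_pdf))                   # True: the two expressions agree
```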

Erlang distribution

The Erlang distribution has two parameters, the shape, an integer k > 0, and the rate λ > 0. This is sometimes denoted E(k,λ). The Erlang distribution can be written in the form of a phase-type distribution by making S a k × k matrix with diagonal elements -λ and super-diagonal elements λ, with the probability of starting in state 1 equal to 1. For example, E(5,λ),

\boldsymbol{\alpha}=(1,0,0,0,0),

and

{S}=\left[\begin{matrix}-\lambda&\lambda&0&0&0\\0&-\lambda&\lambda&0&0\\0&0&-\lambda&\lambda&0\\0&0&0&-\lambda&\lambda\\0&0&0&0&-\lambda\\\end{matrix}\right].

For a given number of phases, the Erlang distribution is the phase type distribution with smallest coefficient of variation.[2]

The hypoexponential distribution is a generalisation of the Erlang distribution by having different rates for each transition (the non-homogeneous case).
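
Both cases can be built with a small helper; the function names below (erlang_ph, hypoexponential_ph) are illustrative, not part of any standard package:

```python
import numpy as np

def hypoexponential_ph(rates):
    """PH representation of exponential phases traversed in sequence with the given rates."""
    rates = np.asarray(rates, dtype=float)
    k = len(rates)
    S = np.diag(-rates) + np.diag(rates[:-1], k=1)   # -lambda_i on the diagonal, lambda_i above it
    alpha = np.zeros(k)
    alpha[0] = 1.0                                   # the process always starts in phase 1
    return alpha, S

def erlang_ph(k, lam):
    """PH representation (alpha, S) of an Erlang E(k, lam) distribution."""
    return hypoexponential_ph(np.full(k, lam))

alpha, S = erlang_ph(5, 2.0)   # the E(5, lambda) example above, with lambda = 2
```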

Mixture of Erlang distribution

The mixture of two Erlang distributions with parameters E(3,β1), E(3,β2) and weights (α12) (such that α1 + α2 = 1 and, for each i, αi ≥ 0) can be represented as a phase type distribution with

\boldsymbol{\alpha}=(\alpha_1,0,0,\alpha_2,0,0),

and

{S}=\left[\begin{matrix}-\beta_1&\beta_1&0&0&0&0\\0&-\beta_1&\beta_1&0&0&0\\0&0&-\beta_1&0&0&0\\0&0&0&-\beta_2&\beta_2&0\\0&0&0&0&-\beta_2&\beta_2\\0&0&0&0&0&-\beta_2\\\end{matrix}\right].
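
The block-diagonal structure suggests a simple way to assemble such representations; a sketch with illustrative rates β1 = 2, β2 = 5 and weights (0.4, 0.6):

```python
import numpy as np
from scipy.linalg import block_diag

def erlang_block(k, rate):
    """k x k Erlang sub-generator: -rate on the diagonal, rate on the super-diagonal."""
    return np.diag(np.full(k, -float(rate))) + np.diag(np.full(k - 1, float(rate)), k=1)

beta1, beta2 = 2.0, 5.0                          # illustrative rates
a1, a2 = 0.4, 0.6                                # mixing weights, a1 + a2 = 1

S = block_diag(erlang_block(3, beta1), erlang_block(3, beta2))
alpha = np.array([a1, 0.0, 0.0, a2, 0.0, 0.0])   # start in the first phase of either branch
```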

Coxian distribution

The Coxian distribution is a generalisation of the Erlang distribution. Instead of the absorbing state being reachable only from state k, it can be reached from any phase. The phase-type representation is given by,

S=\left[\begin{matrix}-\lambda_1&p_1\lambda_1&0&\dots&0&0\\0&-\lambda_2&p_2\lambda_2&\ddots&0&0\\\vdots&\ddots&\ddots&\ddots&\ddots&\vdots\\0&0&\ddots&-\lambda_{k-2}&p_{k-2}\lambda_{k-2}&0\\0&0&\dots&0&-\lambda_{k-1}&p_{k-1}\lambda_{k-1}\\0&0&\dots&0&0&-\lambda_k\end{matrix}\right]

and

\boldsymbol{\alpha}=(1,0,...,0),

where 0 < p1,...,pk-1 ≤ 1. In the case where all pi = 1 we have the Erlang distribution. The Coxian distribution is extremely important as any acyclic phase-type distribution has an equivalent Coxian representation.

The generalised Coxian distribution relaxes the condition that requires starting in the first phase.
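
A sketch of building the Coxian sub-generator from rates λ1,...,λk and continuation probabilities p1,...,pk-1 (the helper name coxian_ph and the example values are illustrative); setting every pi = 1 recovers the Erlang matrix above:

```python
import numpy as np

def coxian_ph(rates, probs):
    """PH representation of a Coxian distribution.

    rates: lambda_1, ..., lambda_k; probs: p_1, ..., p_{k-1}, the probabilities of
    continuing to the next phase instead of jumping to the absorbing state.
    """
    rates = np.asarray(rates, dtype=float)
    probs = np.asarray(probs, dtype=float)
    k = len(rates)
    S = np.diag(-rates) + np.diag(probs * rates[:-1], k=1)   # p_i * lambda_i above the diagonal
    alpha = np.zeros(k)
    alpha[0] = 1.0
    return alpha, S

alpha, S = coxian_ph([1.0, 2.0, 3.0], [0.9, 0.5])   # illustrative 3-phase Coxian
```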

Properties

Minima of independent PH random variables

Similarly to the exponential distribution, the class of PH distributions is closed under minima of independent random variables: if X ~ PH(α,S) and Y ~ PH(β,T) are independent, then min(X,Y) is phase-type distributed with representation (α ⊗ β, S ⊕ T), where ⊗ denotes the Kronecker product and S ⊕ T = S ⊗ I + I ⊗ T the Kronecker sum.
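
A minimal sketch of this construction in NumPy (the function name ph_min and the example matrices are illustrative, not from any particular library):

```python
import numpy as np

def ph_min(alpha, S, beta, T):
    """Representation of min(X, Y) for independent X ~ PH(alpha, S), Y ~ PH(beta, T).

    The two chains run in parallel and the minimum is absorbed as soon as either
    chain is, so the sub-generator is the Kronecker sum S (+) T = S (x) I + I (x) T.
    """
    m, n = len(alpha), len(beta)
    gamma = np.kron(alpha, beta)                           # initial vector on the product space
    U = np.kron(S, np.eye(n)) + np.kron(np.eye(m), T)      # Kronecker sum of the sub-generators
    return gamma, U

# Example: the minimum of two exponentials with rates 2 and 3 is exponential with rate 5
gamma, U = ph_min(np.array([1.0]), np.array([[-2.0]]), np.array([1.0]), np.array([[-3.0]]))
print(gamma, U)   # [1.] [[-5.]]
```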

Generating samples from phase-type distributed random variables

BuTools includes methods for generating samples from phase-type distributed random variables.[3]
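
BuTools is one option; the underlying simulation is also simple enough to sketch directly (a generic illustration, not BuTools code): draw the initial phase from α, hold an exponentially distributed time in each visited phase, and stop when the chain jumps to the absorbing state.

```python
import numpy as np

def ph_sample(alpha, S, rng=np.random.default_rng()):
    """Draw one sample from PH(alpha, S) by simulating the absorbing Markov chain."""
    S = np.asarray(S, dtype=float)
    m = len(alpha)
    exit_rates = -S.sum(axis=1)                 # S^0 = -S 1
    state = rng.choice(m, p=alpha)              # initial phase (assumes alpha0 = 0)
    t = 0.0
    while True:
        total_rate = -S[state, state]
        t += rng.exponential(1.0 / total_rate)  # holding time in the current phase
        jump = np.append(S[state].copy(), exit_rates[state])
        jump[state] = 0.0                       # no self-transition
        nxt = rng.choice(m + 1, p=jump / total_rate)
        if nxt == m:                            # jumped to the absorbing state
            return t
        state = nxt

# Illustrative two-phase hypoexponential with rates 3 and 2
samples = [ph_sample(np.array([1.0, 0.0]), [[-3.0, 3.0], [0.0, -2.0]]) for _ in range(5)]
```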

Approximating other distributions

Any distribution can be arbitrarily well approximated by a phase type distribution.[4] [5] In practice, however, approximations can be poor when the size of the approximating process is fixed. Approximating a deterministic distribution of time 1 with 10 phases, each of average length 0.1, yields a variance of 0.1 (because, among phase-type distributions with a given number of phases, the Erlang distribution has the smallest coefficient of variation).
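
This is easy to verify numerically (an illustrative sketch): the Erlang(k, k) distribution has mean 1 but variance 1/k, so the variance shrinks only linearly in the number of phases.

```python
# Erlang(k, k): k exponential phases, each with mean 1/k, so the total mean is 1
for k in (10, 100, 1000):
    mean = k / k              # always 1
    variance = k / k ** 2     # equals 1/k: 0.1, 0.01, 0.001
    print(k, mean, variance)
```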

Fitting a phase type distribution to data

Methods to fit a phase type distribution to data can be classified as maximum likelihood methods or moment matching methods.[8] Fitting a phase type distribution to heavy-tailed distributions has been shown to be practical in some situations.[9]
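
As a flavour of the moment-matching approach (a deliberately simplified sketch, not one of the cited algorithms): when the data's squared coefficient of variation c² is at most 1, one can match the first two moments with an Erlang by taking roughly 1/c² phases and a per-phase rate equal to the number of phases divided by the sample mean.

```python
import numpy as np

def fit_erlang_moments(data):
    """Crude two-moment Erlang fit; only sensible when the squared CV is at most 1."""
    data = np.asarray(data, dtype=float)
    mean = data.mean()
    scv = data.var() / mean ** 2            # squared coefficient of variation
    k = max(1, int(round(1.0 / scv)))       # number of phases
    lam = k / mean                          # rate of each phase
    return k, lam

samples = np.random.default_rng(1).gamma(shape=4.0, scale=0.5, size=10_000)
print(fit_erlang_moments(samples))          # roughly (4, 2.0) for this synthetic data
```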

References

  1. Harchol-Balter, Mor (2012). "Real-World Workloads: High Variability and Heavy Tails". Performance Modeling and Design of Computer Systems. pp. 347–348. doi:10.1017/CBO9781139226424.026. ISBN 9781139226424.
  2. Aldous, David; Shepp, Larry (1987). "The least variable phase type distribution is Erlang". Stochastic Models. 3 (3): 467. doi:10.1080/15326348708807067.
  3. Horváth, G.; Reinecke, P.; Telek, M.; Wolter, K. (2012). "Efficient Generation of PH-Distributed Random Variates". Analytical and Stochastic Modeling Techniques and Applications. Lecture Notes in Computer Science. Vol. 7314. p. 271. doi:10.1007/978-3-642-30782-9_19. ISBN 978-3-642-30781-2. http://real.mtak.hu/26784/1/paper.pdf
  4. Bolch, Gunter; Greiner, Stefan; de Meer, Hermann; Trivedi, Kishor S. (1998). "Steady-State Solutions of Markov Chains". Queueing Networks and Markov Chains. pp. 103–151. doi:10.1002/0471200581.ch3. ISBN 0471193666.
  5. Cox, D. R. (2008). "A use of complex probabilities in the theory of stochastic processes". Mathematical Proceedings of the Cambridge Philosophical Society. 51 (2): 313–319. doi:10.1017/S0305004100030231. S2CID 122768319.
  6. Osogami, T.; Harchol-Balter, M. (2006). "Closed form solutions for mapping general distributions to quasi-minimal PH distributions". Performance Evaluation. 63 (6): 524. doi:10.1016/j.peva.2005.06.002.
  7. Casale, G.; Zhang, E. Z.; Smirni, E. (2008). "KPC-Toolbox: Simple Yet Effective Trace Fitting Using Markovian Arrival Processes". 2008 Fifth International Conference on Quantitative Evaluation of Systems. p. 83. doi:10.1109/QEST.2008.33. ISBN 978-0-7695-3360-5. S2CID 252444.
  8. Lang, Andreas; Arthur, Jeffrey L. (1996). "Parameter approximation for Phase-Type distributions". In Chakravarthy, S.; Alfa, Attahiru S. (eds.). Matrix Analytic Methods in Stochastic Models. CRC Press. ISBN 0824797663.
  9. Ramaswami, V.; Poole, D.; Ahn, S.; Byers, S.; Kaplan, A. (2005). "Ensuring Access to Emergency Services in the Presence of Long Internet Dial-Up Calls". Interfaces. 35 (5): 411. doi:10.1287/inte.1050.0155.
  10. Horváth, András; Telek, Miklós (2002). "PhFit: A General Phase-Type Fitting Tool". Computer Performance Evaluation: Modelling Techniques and Tools. Lecture Notes in Computer Science. Vol. 2324. p. 82. doi:10.1007/3-540-46029-2_5. ISBN 978-3-540-43539-6.
  11. Asmussen, Søren; Nerman, Olle; Olsson, Marita (1996). "Fitting Phase-Type Distributions via the EM Algorithm". Scandinavian Journal of Statistics. 23 (4): 419–441. JSTOR 4616418.
  12. Reinecke, P.; Krauß, T.; Wolter, K. (2012). "Cluster-based fitting of phase-type distributions to empirical data". Computers & Mathematics with Applications. 64 (12): 3840. doi:10.1016/j.camwa.2012.03.016.
  13. Pérez, J. F.; Riaño, G. (2006). "jPhase: an object-oriented tool for modeling phase-type distributions". Proceedings of the 2006 Workshop on Tools for Solving Structured Markov Chains (SMCtools '06). doi:10.1145/1190366.1190370. ISBN 1595935061. S2CID 7863948.