In probability theory the hypoexponential distribution, or generalized Erlang distribution, is a continuous distribution that has found use in the same fields as the Erlang distribution, such as queueing theory, teletraffic engineering and more generally in stochastic processes. It is called the hypoexponential distribution because it has a coefficient of variation less than one, compared to the hyper-exponential distribution, which has a coefficient of variation greater than one, and the exponential distribution, which has a coefficient of variation of exactly one.
The Erlang distribution is a series of $k$ exponential distributions all with rate $\lambda$. The hypoexponential is a series of $k$ exponential distributions, each with its own rate $\lambda_i$, the rate of the $i$th exponential distribution. If we have $k$ independently distributed exponential random variables $\boldsymbol{X}_i$, then the random variable

$$\boldsymbol{X}=\sum_{i=1}^{k}\boldsymbol{X}_i$$

is hypoexponentially distributed. The hypoexponential has a minimum coefficient of variation of $1/\sqrt{k}$, attained when all the rates are equal (the Erlang case).
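The sum-of-stages definition can be sanity-checked by simulation. The sketch below draws one exponential variate per stage and sums them; the rates are arbitrary illustration values, not taken from the text:

```python
import random

def sample_hypoexponential(rates, rng=random):
    # One draw: the sum of independent exponential stages, one per rate.
    return sum(rng.expovariate(lam) for lam in rates)

rates = [1.0, 3.0, 5.0]  # illustrative rates, chosen arbitrarily
random.seed(0)
samples = [sample_hypoexponential(rates) for _ in range(100_000)]

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
cv = var ** 0.5 / mean  # should lie between 1/sqrt(3) (Erlang case) and 1
```

The empirical mean should approach $\sum_i 1/\lambda_i$ and the empirical coefficient of variation should fall strictly between $1/\sqrt{k}$ and $1$.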
As a result of the definition it is easier to consider this distribution as a special case of the phase-type distribution.[1] The phase-type distribution is the time to absorption of a finite-state Markov process. If we have a $(k+1)$-state process, where the first $k$ states are transient and state $k+1$ is absorbing, then the distribution of the time from the start of the process until the absorbing state is reached is phase-type distributed. This becomes the hypoexponential if we start in state 1 and move skip-free from state $i$ to $i+1$ with rate $\lambda_i$, until state $k$ transitions with rate $\lambda_k$ to the absorbing state. This process can be written in the form of a subgenerator matrix,

$$\left[\begin{matrix}-\lambda_1&\lambda_1&0&\dots&0&0\\ 0&-\lambda_2&\lambda_2&\ddots&0&0\\ \vdots&\ddots&\ddots&\ddots&\ddots&\vdots\\ 0&0&\ddots&-\lambda_{k-2}&\lambda_{k-2}&0\\ 0&0&\dots&0&-\lambda_{k-1}&\lambda_{k-1}\\ 0&0&\dots&0&0&-\lambda_k\end{matrix}\right].$$

For simplicity denote the above matrix $\Theta\equiv\Theta(\lambda_1,\dots,\lambda_k)$. If the probability of starting in each of the $k$ states is

$$\boldsymbol{\alpha}=(1,0,\dots,0),$$

then

$$\mathrm{Hypo}(\lambda_1,\dots,\lambda_k)=\mathrm{PH}(\boldsymbol{\alpha},\Theta).$$
When the distribution has two parameters ($\lambda_1\ne\lambda_2$), the explicit forms of the probability functions and the associated statistics are:

CDF: $F(x)=1-\frac{\lambda_2}{\lambda_2-\lambda_1}e^{-\lambda_1 x}+\frac{\lambda_1}{\lambda_2-\lambda_1}e^{-\lambda_2 x}$

PDF: $f(x)=\frac{\lambda_1\lambda_2}{\lambda_1-\lambda_2}\left(e^{-x\lambda_2}-e^{-x\lambda_1}\right)$

Mean: $\frac{1}{\lambda_1}+\frac{1}{\lambda_2}$

Variance: $\frac{1}{\lambda_1^2}+\frac{1}{\lambda_2^2}$

Coefficient of variation: $\frac{\sqrt{\lambda_1^2+\lambda_2^2}}{\lambda_1+\lambda_2}$
The coefficient of variation is always less than 1.
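The two-parameter density and distribution functions can be transcribed directly. A minimal sketch, with rates chosen arbitrarily for the check; the mean is verified by a crude Riemann sum of $x\,f(x)$:

```python
import math

def hypo2_pdf(x, lam1, lam2):
    # Density of Hypo(lam1, lam2); requires lam1 != lam2.
    return lam1 * lam2 / (lam1 - lam2) * (math.exp(-x * lam2) - math.exp(-x * lam1))

def hypo2_cdf(x, lam1, lam2):
    return (1 - lam2 / (lam2 - lam1) * math.exp(-lam1 * x)
              + lam1 / (lam2 - lam1) * math.exp(-lam2 * x))

lam1, lam2 = 1.0, 2.0  # arbitrary example rates
# Riemann-sum check: the integral of x * f(x) should be 1/lam1 + 1/lam2 = 1.5.
dx = 1e-3
mean = sum(i * dx * hypo2_pdf(i * dx, lam1, lam2) * dx for i in range(40_000))
```

At $x=0$ the CDF evaluates to $0$ exactly, since the two exponential terms cancel against the leading $1$.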
Given the sample mean ($\bar{x}$) and sample coefficient of variation ($c$), the parameters $\lambda_1$ and $\lambda_2$ can be estimated as follows:

$$\lambda_1=\frac{2}{\bar{x}}\left[1+\sqrt{1+2(c^2-1)}\right]^{-1}$$

$$\lambda_2=\frac{2}{\bar{x}}\left[1-\sqrt{1+2(c^2-1)}\right]^{-1}$$
These estimators can be derived from the method of moments by setting

$$\frac{1}{\lambda_1}+\frac{1}{\lambda_2}=\bar{x}$$

and

$$\frac{\sqrt{\frac{1}{\lambda_1^2}+\frac{1}{\lambda_2^2}}}{\frac{1}{\lambda_1}+\frac{1}{\lambda_2}}=c.$$
The resulting parameters $\lambda_1$ and $\lambda_2$ are real values if $c^2\in[0.5,1]$.
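A sketch of these moment estimators in code. The function name and the example inputs are illustrative; the check feeds in the exact mean and coefficient of variation of $\mathrm{Hypo}(1,2)$ and should recover the rates:

```python
import math

def estimate_rates(xbar, c):
    # Method-of-moments estimators for the two-phase hypoexponential;
    # the result is real and positive only when 0.5 <= c**2 < 1.
    root = math.sqrt(1 + 2 * (c ** 2 - 1))  # = sqrt(2c^2 - 1)
    lam1 = (2 / xbar) / (1 + root)
    lam2 = (2 / xbar) / (1 - root)
    return lam1, lam2

# Recover the rates of Hypo(1, 2) from its exact moments:
# mean = 1/1 + 1/2 = 1.5, variance = 1 + 1/4 = 1.25.
lam1, lam2 = estimate_rates(1.5, math.sqrt(1.25) / 1.5)
```

Here $c^2 = 1.25/1.5^2 = 5/9$, inside $[0.5,1]$, so both estimates are real and the original rates $1$ and $2$ come back exactly.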
A random variable $\boldsymbol{X}\sim\mathrm{Hypo}(\lambda_1,\dots,\lambda_k)$ has cumulative distribution function given by

$$F(x)=1-\boldsymbol{\alpha}e^{x\Theta}\boldsymbol{1}$$

and density function,

$$f(x)=-\boldsymbol{\alpha}e^{x\Theta}\Theta\boldsymbol{1},$$

where $\boldsymbol{1}$ is a column vector of ones of size $k$, and $e^{A}$ is the matrix exponential of $A$. When $\lambda_i\ne\lambda_j$ for all $i\ne j$, the density function can be written as
$$f(x)=\sum_{i=1}^{k}\lambda_i e^{-x\lambda_i}\left(\prod_{j=1,j\ne i}^{k}\frac{\lambda_j}{\lambda_j-\lambda_i}\right)=\sum_{i=1}^{k}\ell_i(0)\,\lambda_i e^{-x\lambda_i},$$

where $\ell_1(x),\dots,\ell_k(x)$ are the Lagrange basis polynomials associated with the points $\lambda_1,\dots,\lambda_k$.
The distribution has Laplace transform

$$\mathcal{L}\{f(x)\}=-\boldsymbol{\alpha}(sI-\Theta)^{-1}\Theta\boldsymbol{1},$$

which can be used to find moments,

$$E[X^n]=(-1)^n n!\,\boldsymbol{\alpha}\Theta^{-n}\boldsymbol{1}.$$
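The moment formula is easy to evaluate numerically once the subgenerator $\Theta$ is built. A sketch assuming arbitrary example rates; for $\mathrm{Hypo}(1,2,4)$ the first moment should be $\sum_i 1/\lambda_i = 1.75$ and the second moment $\sum_i 1/\lambda_i^2 + (\sum_i 1/\lambda_i)^2 = 4.375$:

```python
import math
import numpy as np

def subgenerator(rates):
    # Theta: -lambda_i on the diagonal, lambda_i on the superdiagonal
    # (the skip-free chain from the phase-type representation).
    theta = np.diag([-lam for lam in rates])
    for i in range(len(rates) - 1):
        theta[i, i + 1] = rates[i]
    return theta

def moment(rates, n):
    # E[X^n] = (-1)^n n! alpha Theta^{-n} 1
    theta = subgenerator(rates)
    alpha = np.zeros(len(rates))
    alpha[0] = 1.0
    ones = np.ones(len(rates))
    theta_inv_n = np.linalg.matrix_power(np.linalg.inv(theta), n)
    return (-1) ** n * math.factorial(n) * (alpha @ theta_inv_n @ ones)

rates = [1.0, 2.0, 4.0]  # arbitrary example rates
m1 = moment(rates, 1)    # should equal 1/1 + 1/2 + 1/4 = 1.75
m2 = moment(rates, 2)    # should equal 1.3125 + 1.75**2 = 4.375
```

Inverting $\Theta$ rather than differentiating the Laplace transform keeps the computation to plain linear algebra.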
In the general case, where there are $a$ distinct rates $\lambda_1,\lambda_2,\dots,\lambda_a$ occurring with multiplicities $r_1,r_2,\dots,r_a$ respectively, the cumulative distribution function for $t\ge 0$ is given by

$$F(t)=1-\left(\prod_{j=1}^{a}\lambda_j^{r_j}\right)\sum_{k=1}^{a}\sum_{l=1}^{r_k}\frac{\Psi_{k,l}(-\lambda_k)\,t^{r_k-l}\,e^{-\lambda_k t}}{(r_k-l)!\,(l-1)!},$$

with

$$\Psi_{k,l}(x)=-\frac{\partial^{l-1}}{\partial x^{l-1}}\left(\prod_{j=0,j\ne k}^{a}\left(\lambda_j+x\right)^{-r_j}\right),$$

with the additional convention $\lambda_0=0$, $r_0=1$.
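When every multiplicity $r_j$ equals $1$ (all rates distinct), the formula simplifies: only the $l=1$ term survives and $\Psi_{k,1}$ needs no derivative. A sketch of that special case, using the $\lambda_0=0$, $r_0=1$ convention; the function name is illustrative:

```python
import math

def hypo_cdf_distinct(t, rates):
    # CDF via the general formula specialized to all multiplicities
    # r_j = 1, with the convention lambda_0 = 0, r_0 = 1.
    lams = [0.0] + list(rates)   # prepend lambda_0 = 0
    a = len(rates)
    prefac = math.prod(rates)    # prod_j lambda_j ** r_j with r_j = 1
    total = 0.0
    for k in range(1, a + 1):
        # Psi_{k,1}(-lambda_k) = -prod_{j != k} (lambda_j - lambda_k)^(-1)
        prod = 1.0
        for j in range(a + 1):
            if j != k:
                prod *= 1.0 / (lams[j] - lams[k])
        total += -prod * math.exp(-lams[k] * t)
    return 1.0 - prefac * total

# Cross-check against the closed-form two-parameter CDF with rates 1 and 2.
f_general = hypo_cdf_distinct(1.0, [1.0, 2.0])
f_closed = 1 - 2 * math.exp(-1.0) + math.exp(-2.0)
```

For two distinct rates this reproduces $F(x)=1-\frac{\lambda_2}{\lambda_2-\lambda_1}e^{-\lambda_1 x}+\frac{\lambda_1}{\lambda_2-\lambda_1}e^{-\lambda_2 x}$, since the $j=0$ factor contributes the $1/(-\lambda_k)$ term that turns the product into the familiar coefficients.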
This distribution has been used in population genetics,[4] cell biology,[5] [6] and queueing theory.[7] [8]