Compound Poisson process explained

A compound Poisson process is a continuous-time stochastic process with jumps. The jumps arrive randomly according to a Poisson process, and the size of the jumps is also random, with a specified probability distribution. To be precise, a compound Poisson process, parameterised by a rate λ > 0 and jump size distribution G, is a process \{Y(t) : t \geq 0\} given by

Y(t) = \sum_{i=1}^{N(t)} D_i

where \{N(t) : t \geq 0\} is the counting variable of a Poisson process with rate λ, and \{D_i : i \geq 1\} are independent and identically distributed random variables, with distribution function G, which are also independent of \{N(t) : t \geq 0\}.

When the D_i are non-negative integer-valued random variables, this compound Poisson process is known as a stuttering Poisson process.
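
As a concrete illustration, the following Python sketch simulates one path of a compound Poisson process by first drawing N(T) and then summing the i.i.d. jumps; the rate, horizon, and exponential jump law are arbitrary example choices, not part of the definition above.

    import numpy as np

    def simulate_compound_poisson(rate, T, jump_sampler, rng=None):
        """Simulate one path of a compound Poisson process on [0, T].

        jump_sampler(n) must return n i.i.d. jump sizes drawn from G.
        Returns the jump times and the value of Y just after each jump.
        """
        rng = np.random.default_rng() if rng is None else rng
        n_jumps = rng.poisson(rate * T)                           # N(T) ~ Poisson(rate * T)
        jump_times = np.sort(rng.uniform(0.0, T, size=n_jumps))   # given N(T), jump times are uniform on [0, T]
        jumps = jump_sampler(n_jumps)                             # D_1, ..., D_{N(T)}, independent of N
        return jump_times, np.cumsum(jumps)

    rng = np.random.default_rng(0)
    times, path = simulate_compound_poisson(
        rate=2.0, T=10.0, jump_sampler=lambda n: rng.exponential(1.5, size=n), rng=rng
    )
    print(len(times), path[-1] if len(path) else 0.0)   # number of jumps and Y(T)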

Properties of the compound Poisson process

The expected value of a compound Poisson process can be calculated using a result known as Wald's equation as:

\operatorname{E}(Y(t)) = \operatorname{E}(D_1 + \cdots + D_{N(t)}) = \operatorname{E}(N(t))\operatorname{E}(D_1) = \operatorname{E}(N(t))\operatorname{E}(D) = \lambda t \operatorname{E}(D).
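
As an illustrative numerical example (the numbers are arbitrary): if jumps arrive at rate λ = 2 per unit time and each jump has mean size E(D) = 3, then E(Y(t)) = λt E(D) = 6t.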

Making similar use of the law of total variance, the variance can be calculated as:

\begin{align}
\operatorname{var}(Y(t)) &= \operatorname{E}(\operatorname{var}(Y(t)\mid N(t))) + \operatorname{var}(\operatorname{E}(Y(t)\mid N(t))) \\[5pt]
&= \operatorname{E}(N(t)\operatorname{var}(D)) + \operatorname{var}(N(t)\operatorname{E}(D)) \\[5pt]
&= \operatorname{var}(D)\operatorname{E}(N(t)) + \operatorname{E}(D)^2 \operatorname{var}(N(t)) \\[5pt]
&= \operatorname{var}(D)\lambda t + \operatorname{E}(D)^2 \lambda t \\[5pt]
&= \lambda t (\operatorname{var}(D) + \operatorname{E}(D)^2) \\[5pt]
&= \lambda t \operatorname{E}(D^2).
\end{align}
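
Both moment formulas are easy to check by simulation. The following sketch (exponential jumps and all parameter values are arbitrary illustrative choices) compares Monte Carlo estimates of the mean and variance of Y(t) against λt E(D) and λt E(D²).

    import numpy as np

    rng = np.random.default_rng(1)
    rate, t, mean_jump = 3.0, 5.0, 2.0          # arbitrary example parameters
    n_paths = 100_000

    # Sample Y(t): draw N(t), then sum N(t) i.i.d. exponential jumps.
    counts = rng.poisson(rate * t, size=n_paths)
    samples = np.array([rng.exponential(mean_jump, size=n).sum() for n in counts])

    # For D exponential with mean m: E(D) = m and E(D^2) = 2 m^2.
    print(samples.mean(), rate * t * mean_jump)             # ≈ λ t E(D)
    print(samples.var(), rate * t * 2 * mean_jump ** 2)     # ≈ λ t E(D^2)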

Lastly, using the law of total probability, the moment generating function can be given as follows:

\Pr(Y(t) = i) = \sum_n \Pr(Y(t) = i \mid N(t) = n)\Pr(N(t) = n)

\begin{align}
\operatorname{E}(e^{sY}) &= \sum_i e^{si} \Pr(Y(t)=i) \\[5pt]
&= \sum_i e^{si} \sum_n \Pr(Y(t)=i\mid N(t)=n)\Pr(N(t)=n) \\[5pt]
&= \sum_n \Pr(N(t)=n) \sum_i e^{si} \Pr(Y(t)=i\mid N(t)=n) \\[5pt]
&= \sum_n \Pr(N(t)=n) \sum_i e^{si} \Pr(D_1 + D_2 + \cdots + D_n = i) \\[5pt]
&= \sum_n \Pr(N(t)=n) M_D(s)^n \\[5pt]
&= \sum_n \Pr(N(t)=n) e^{n\ln(M_D(s))} \\[5pt]
&= M_{N(t)}(\ln(M_D(s))) \\[5pt]
&= e^{\lambda t \left(M_D(s) - 1\right)}.
\end{align}
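
The closed form e^{λt(M_D(s) − 1)} can also be checked numerically. The sketch below does so for exponential jumps, whose moment generating function M_D(s) = 1/(1 − θs), valid for s < 1/θ, is available in closed form; the parameter values are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(2)
    rate, t, theta, s = 2.0, 3.0, 1.0, 0.2    # jumps D ~ Exponential(mean theta); need s < 1/theta
    n_paths = 100_000

    counts = rng.poisson(rate * t, size=n_paths)
    Y = np.array([rng.exponential(theta, size=n).sum() for n in counts])

    mgf_D = 1.0 / (1.0 - theta * s)               # M_D(s) for the exponential jump law
    print(np.exp(s * Y).mean())                   # empirical E(e^{s Y(t)})
    print(np.exp(rate * t * (mgf_D - 1.0)))       # closed form e^{λ t (M_D(s) - 1)}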

Exponentiation of measures

Let N, Y, and D be as above. Let μ be the probability measure according to which D is distributed, i.e.

\mu(A) = \Pr(D \in A).

Let δ_0 be the trivial probability distribution putting all of the mass at zero. Then the probability distribution of Y(t) is the measure

\exp(\lambda t(\mu - \delta_0))

where the exponential exp(ν) of a finite measure ν on Borel subsets of the real line is defined by

\exp(\nu) = \sum_{n=0}^{\infty} \frac{\nu^{*n}}{n!}

and

\nu^{*n} = \underbrace{\nu * \cdots * \nu}_{n \text{ times}}

is the n-fold convolution of the measure ν with itself, and the series converges weakly.
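
For integer-valued jumps (the stuttering case), this measure exponential can be evaluated directly by truncating the series and convolving probability vectors. The sketch below computes exp(λt(μ − δ_0)) = e^{−λt} Σ_n (λt)^n μ^{*n} / n! for an arbitrary example jump law on {1, 2} and compares the resulting probabilities with a Monte Carlo estimate.

    import numpy as np
    from math import exp, factorial

    rate, t = 1.5, 2.0
    mu = np.array([0.0, 0.6, 0.4])                # example jump law: Pr(D=1)=0.6, Pr(D=2)=0.4
    max_terms, max_support = 60, 200

    # exp(λt(μ - δ_0)) = e^{-λt} Σ_n (λt)^n μ^{*n} / n!, with μ^{*0} = δ_0.
    pmf = np.zeros(max_support)
    conv = np.zeros(max_support)
    conv[0] = 1.0                                 # μ^{*0} = δ_0
    for n in range(max_terms):
        pmf += exp(-rate * t) * (rate * t) ** n / factorial(n) * conv
        conv = np.convolve(conv, mu)[:max_support]    # μ^{*(n+1)} = μ^{*n} * μ

    # Monte Carlo check of Pr(Y(t) = k) for small k.
    rng = np.random.default_rng(3)
    counts = rng.poisson(rate * t, size=50_000)
    Y = np.array([rng.choice([1, 2], p=[0.6, 0.4], size=n).sum() for n in counts])
    for k in range(5):
        print(k, pmf[k], np.mean(Y == k))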

See also