Generalized extreme value distribution explained

In probability theory and statistics, the generalized extreme value (GEV) distribution[1] is a family of continuous probability distributions developed within extreme value theory to combine the Gumbel, Fréchet and Weibull families, also known as the type I, II and III extreme value distributions. By the extreme value theorem, the GEV distribution is the only possible limit distribution of properly normalized maxima of a sequence of independent and identically distributed random variables.[2] Note that a limit distribution need not exist: its existence requires regularity conditions on the tail of the distribution. Despite this, the GEV distribution is often used as an approximation to model the maxima of long (finite) sequences of random variables.

In some fields of application the generalized extreme value distribution is known as the Fisher–Tippett distribution, named after Ronald Fisher and L. H. C. Tippett, who recognised the three different forms outlined below. However, usage of this name is sometimes restricted to mean the special case of the Gumbel distribution. The origin of the common functional form for all three distributions dates back to at least Jenkinson, A. F. (1955),[3] though allegedly[4] it could also have been given by von Mises, R. (1936).[5]

Specification

Using the standardized variable

s \equiv \frac{x - \mu}{\sigma},

where \mu, the location parameter, can be any real number, and \sigma > 0 is the scale parameter, the cumulative distribution function of the GEV distribution is then

F(s;\xi) = \begin{cases} \exp\!\bigl(-e^{-s}\bigr) & ~~ \text{for} ~~ \xi = 0, \\ \exp\!\bigl(-\left(1 + \xi s\right)^{-1/\xi}\bigr) & ~~ \text{for} ~~ \xi \neq 0 ~~ \text{and} ~~ \xi s > -1, \\ 0 & ~~ \text{for} ~~ \xi > 0 ~~ \text{and} ~~ s \le -\tfrac{1}{\xi}, \\ 1 & ~~ \text{for} ~~ \xi < 0 ~~ \text{and} ~~ s \ge -\tfrac{1}{\xi}~; \end{cases}

where \xi, the shape parameter, can be any real number. Thus, for \xi > 0 the expression is valid for s > -\tfrac{1}{\xi}, while for \xi < 0 it is valid for s < -\tfrac{1}{\xi}~.

In the first case, -\tfrac{1}{\xi} is the negative, lower end-point, where F is zero; in the second case, -\tfrac{1}{\xi} is the positive, upper end-point, where F is 1. For \xi = 0 the second expression is formally undefined and is replaced with the first expression, which results from taking the limit of the second as \xi \to 0, in which case s can be any real number.

In the special case of x = \mu, we have s = 0 and F(0;\xi) = e^{-1} \approx 0.368 for whatever values \xi and \sigma might have.
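As a concrete check of the piecewise definition above, here is a minimal Python sketch (the helper name `gev_cdf` is ours, not from any library):

```python
import math

def gev_cdf(x, mu=0.0, sigma=1.0, xi=0.0):
    """CDF of the GEV distribution in the (mu, sigma, xi) parameterization."""
    s = (x - mu) / sigma
    if xi == 0.0:
        return math.exp(-math.exp(-s))
    t = 1.0 + xi * s
    if t <= 0.0:
        # below the lower end-point (xi > 0) or above the upper one (xi < 0)
        return 0.0 if xi > 0 else 1.0
    return math.exp(-t ** (-1.0 / xi))

# At x = mu the CDF equals exp(-1) ~ 0.368 regardless of xi:
for xi in (-0.5, 0.0, 0.5):
    assert abs(gev_cdf(0.0, xi=xi) - math.exp(-1)) < 1e-12
```

When comparing with SciPy, note that `scipy.stats.genextreme` implements the same family with the opposite sign convention for the shape parameter (its c equals -\xi).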

The probability density function of the standardized distribution is

f(s;\xi) = \begin{cases} e^{-s} \exp\!\bigl(-e^{-s}\bigr) & ~~ \text{for} ~~ \xi = 0, \\ \left(1 + \xi s\right)^{-(1 + 1/\xi)} \exp\!\bigl(-\left(1 + \xi s\right)^{-1/\xi}\bigr) & ~~ \text{for} ~~ \xi \neq 0 ~~ \text{and} ~~ \xi s > -1, \\ 0 & ~~ \text{otherwise;} \end{cases}

again valid for s > -\tfrac{1}{\xi} in the case \xi > 0, and for s < -\tfrac{1}{\xi} in the case \xi < 0~. The density is zero outside of the relevant range. In the case \xi = 0 the density is positive on the whole real line.

Since the cumulative distribution function is invertible, the quantile function for the GEV distribution has an explicit expression, namely

Q(p;\mu,\sigma,\xi) = \begin{cases} \mu - \sigma \ln\!\bigl(-\ln(p)\bigr) & ~ \text{for} ~ \xi = 0 ~ \text{and} ~ p \in (0,1), \\ \mu + \dfrac{\sigma}{\xi}\left(\bigl(-\ln(p)\bigr)^{-\xi} - 1\right) & ~ \text{for} ~ \xi > 0 ~ \text{and} ~ p \in [0,1), \\ & ~~ \text{or} ~ \xi < 0 ~ \text{and} ~ p \in (0,1]~; \end{cases}

and therefore the quantile density function, q \equiv \frac{\mathrm{d}Q}{\mathrm{d}p}, is

q(p;\sigma,\xi) = \frac{\sigma}{\bigl(-\ln(p)\bigr)^{\xi + 1}\, p} \quad \text{for} ~~ p \in (0,1),

valid for \sigma > 0 and for any real \xi~.[6]
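Because the quantile function is available in closed form, GEV samples can be drawn by inverse-transform sampling. A minimal sketch (the helper name `gev_quantile` is ours):

```python
import math
import random

def gev_quantile(p, mu=0.0, sigma=1.0, xi=0.0):
    """Quantile function Q(p; mu, sigma, xi) of the GEV distribution."""
    if not 0.0 < p < 1.0:
        raise ValueError("p must lie strictly between 0 and 1")
    if xi == 0.0:
        return mu - sigma * math.log(-math.log(p))
    return mu + (sigma / xi) * ((-math.log(p)) ** (-xi) - 1.0)

# Since Q inverts the CDF, feeding it uniform variates draws GEV samples:
random.seed(0)
draws = [gev_quantile(random.random(), mu=2.0, sigma=1.0, xi=0.2)
         for _ in range(5)]
```

Note that Q(e^{-1}) = \mu for any \xi, mirroring F(0;\xi) = e^{-1} in the standardized variable.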

Summary statistics

Some simple statistics of the distribution are:

\operatorname{E}(X) = \mu + \frac{(g_1 - 1)\,\sigma}{\xi} \quad \text{for} ~ \xi \neq 0, ~ \xi < 1,

\operatorname{Var}(X) = \frac{\left(g_2 - g_1^2\right)\sigma^2}{\xi^2},

\operatorname{Mode}(X) = \mu + \frac{\sigma}{\xi}\left[(1 + \xi)^{-\xi} - 1\right].

The skewness is, for \xi > 0,

\operatorname{skewness}(X) = \frac{g_3 - 3 g_2 g_1 + 2 g_1^3}{\left(g_2 - g_1^2\right)^{3/2}}~.

For \xi < 0, the sign of the numerator is reversed.

The excess kurtosis is:

\operatorname{kurtosis~excess}(X) = \frac{g_4 - 4 g_3 g_1 + 6 g_2 g_1^2 - 3 g_1^4}{\left(g_2 - g_1^2\right)^2} - 3~,

where g_k = \Gamma(1 - k\xi) for k = 1, 2, 3, 4, and \Gamma(t) is the gamma function.
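As a quick numerical sketch (the helper name `gev_mean_var` is ours), the gamma-function expressions above are straightforward to evaluate directly:

```python
import math

def gev_mean_var(mu, sigma, xi):
    """Mean and variance of GEV(mu, sigma, xi) via g_k = Gamma(1 - k*xi).

    Assumes xi != 0; the mean requires xi < 1 and the variance xi < 1/2.
    """
    g1 = math.gamma(1.0 - xi)
    g2 = math.gamma(1.0 - 2.0 * xi)
    mean = mu + (g1 - 1.0) * sigma / xi
    var = (g2 - g1 ** 2) * sigma ** 2 / xi ** 2
    return mean, var

# Example: a standardized GEV with a moderately heavy tail (xi = 0.1)
mean, var = gev_mean_var(mu=0.0, sigma=1.0, xi=0.1)
```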

Link to Fréchet, Weibull, and Gumbel families

The shape parameter \xi governs the tail behavior of the distribution. The sub-families are defined by three cases: \xi = 0, \xi > 0, and \xi < 0; these correspond, respectively, to the Gumbel, Fréchet, and Weibull families, whose cumulative distribution functions are displayed below.

\xi = 0, for all x \in \left(-\infty, +\infty\right):

F(x;\mu,\sigma,0) = \exp\left(-\exp\left(-\frac{x - \mu}{\sigma}\right)\right)~.

\xi > 0, for all x \in \left(\mu - \tfrac{\sigma}{\xi}, +\infty\right): Let \alpha \equiv \tfrac{1}{\xi} > 0 and y \equiv 1 + \tfrac{\xi}{\sigma}(x - \mu); then

F(x;\mu,\sigma,\xi) = \begin{cases} 0 & y \le 0, ~ \text{or equivalently} ~ x \le \mu - \tfrac{\sigma}{\xi} \\ \exp\left(-\dfrac{1}{y^{\alpha}}\right) & y > 0, ~ \text{or equivalently} ~ x > \mu - \tfrac{\sigma}{\xi}~. \end{cases}

\xi < 0, for all x \in \left(-\infty, \mu + \tfrac{\sigma}{|\xi|}\right): Let \alpha \equiv -\tfrac{1}{\xi} > 0 and y \equiv 1 - \tfrac{|\xi|}{\sigma}(x - \mu); then

F(x;\mu,\sigma,\xi) = \begin{cases} \exp\left(-y^{\alpha}\right) & y > 0, ~ \text{or equivalently} ~ x < \mu + \tfrac{\sigma}{|\xi|} \\ 1 & y \le 0, ~ \text{or equivalently} ~ x \ge \mu + \tfrac{\sigma}{|\xi|}~. \end{cases}

The subsections below remark on properties of these distributions.
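To see numerically that the \xi > 0 and \xi < 0 cases are the general CDF in disguise, the following sketch (all helper names ours) compares the Fréchet and reversed-Weibull forms against the general formula:

```python
import math

def gev_cdf_general(x, mu, sigma, xi):
    """General GEV CDF, all three cases in one formula."""
    s = (x - mu) / sigma
    if xi == 0.0:
        return math.exp(-math.exp(-s))
    t = 1.0 + xi * s
    if t <= 0.0:
        return 0.0 if xi > 0 else 1.0
    return math.exp(-t ** (-1.0 / xi))

def frechet_case(x, mu, sigma, xi):
    """The xi > 0 branch written in the (alpha, y) notation above."""
    alpha = 1.0 / xi
    y = 1.0 + (xi / sigma) * (x - mu)
    return 0.0 if y <= 0.0 else math.exp(-1.0 / y ** alpha)

def weibull_case(x, mu, sigma, xi):
    """The xi < 0 branch written in the (alpha, y) notation above."""
    alpha = -1.0 / xi
    y = 1.0 - (abs(xi) / sigma) * (x - mu)
    return math.exp(-y ** alpha) if y > 0.0 else 1.0

# The specialized forms agree with the general formula on their whole range:
for x in (-3.0, -1.0, 0.5, 2.0, 3.0):
    assert abs(gev_cdf_general(x, 0.0, 1.0, 0.4)
               - frechet_case(x, 0.0, 1.0, 0.4)) < 1e-12
    assert abs(gev_cdf_general(x, 0.0, 1.0, -0.4)
               - weibull_case(x, 0.0, 1.0, -0.4)) < 1e-12
```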

Modification for minima rather than maxima

The theory here relates to data maxima, and the distribution being discussed is an extreme value distribution for maxima. A generalised extreme value distribution for data minima can be obtained, for example, by substituting -x for x in the distribution function and subtracting the resulting cumulative distribution from one: that is, replace F(x) with 1 - F(-x). Doing so yields yet another family of distributions.
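As a small illustration (function names ours; shown for the Gumbel case \xi = 0), the minima CDF follows mechanically from the maxima CDF:

```python
import math

def gumbel_max_cdf(x, mu=0.0, sigma=1.0):
    """CDF of GEV(mu, sigma, 0) for maxima (Gumbel case)."""
    return math.exp(-math.exp(-(x - mu) / sigma))

def gumbel_min_cdf(x, mu=0.0, sigma=1.0):
    """CDF for minima, obtained as F_min(x) = 1 - F_max(-x)."""
    return 1.0 - gumbel_max_cdf(-x, mu, sigma)

# At x = -mu the minima CDF equals 1 - exp(-1), mirroring F_max(mu) = exp(-1).
```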

Alternative convention for the Weibull distribution

The ordinary Weibull distribution arises in reliability applications and is obtained from the distribution here by using the variable t = \mu - x, which gives strictly positive support, in contrast to the use in the formulation of extreme value theory here. This arises because the ordinary Weibull distribution is used for cases that deal with data minima rather than data maxima. The distribution here has an additional parameter compared to the usual form of the Weibull distribution and, in addition, is reversed so that the distribution has an upper bound rather than a lower bound. Importantly, in applications of the GEV, the upper bound is unknown and so must be estimated, whereas when applying the ordinary Weibull distribution in reliability applications the lower bound is usually known to be zero.

Ranges of the distributions

Note the differences in the ranges of interest for the three extreme value distributions: Gumbel is unlimited, Fréchet has a lower limit, while the reversed Weibull has an upper limit. More precisely, univariate extreme value theory describes which of the three is the limiting law according to the initial law and, in particular, depending on its tail.

Distribution of log variables

One can link the type I to types II and III in the following way: if the cumulative distribution function of some random variable X is of type II, with the positive numbers as support, i.e. F(x;\ 0, \sigma, \alpha), then the cumulative distribution function of \ln X is of type I, namely F(x;\ \ln\sigma,\ \tfrac{1}{\alpha},\ 0)~. Similarly, if the cumulative distribution function of X is of type III, with the negative numbers as support, i.e. F(x;\ 0, \sigma, -\alpha), then the cumulative distribution function of \ln(-X) is of type I, namely F(x;\ -\ln\sigma,\ \tfrac{1}{\alpha},\ 0)~.
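The type II case of this link can be checked numerically. In the sketch below (helper names ours), the Fréchet CDF at x is compared with the Gumbel CDF at \ln x with location \ln\sigma and scale 1/\alpha:

```python
import math

def frechet_cdf(x, sigma, alpha):
    """Type II (Fréchet) CDF with support on the positive reals."""
    return math.exp(-(x / sigma) ** (-alpha)) if x > 0 else 0.0

def gumbel_cdf(x, mu, beta):
    """Type I (Gumbel) CDF."""
    return math.exp(-math.exp(-(x - mu) / beta))

# If X is type II with scale sigma and shape alpha, then ln X is type I
# with location ln(sigma) and scale 1/alpha:
sigma, alpha = 2.0, 3.0
for x in (0.5, 1.0, 2.5, 7.0):
    assert abs(frechet_cdf(x, sigma, alpha)
               - gumbel_cdf(math.log(x), math.log(sigma), 1 / alpha)) < 1e-9
```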

Link to logit models (logistic regression)

Multinomial logit models, and certain other types of logistic regression, can be phrased as latent variable models with error variables distributed as Gumbel distributions (type I generalized extreme value distributions). This phrasing is common in the theory of discrete choice models, which include logit models, probit models, and various extensions of them, and derives from the fact that the difference of two type-I GEV-distributed variables follows a logistic distribution, of which the logit function is the quantile function. The type-I GEV distribution thus plays the same role in these logit models as the normal distribution does in the corresponding probit models.

Properties

The cumulative distribution function of the generalized extreme value distribution solves the stability postulate equation. The generalized extreme value distribution is a special case of a max-stable distribution, and is a transformation of a min-stable distribution.

Applications

Example for Normally distributed variables

Let \left\{X_i \mid 1 \le i \le n\right\} be i.i.d. normally distributed random variables with mean 0 and variance 1. The Fisher–Tippett–Gnedenko theorem[10] tells us that

\max\left\{X_i \mid 1 \le i \le n\right\} \sim \mathrm{GEV}(\mu_n, \sigma_n, 0),

where

\begin{align} \mu_n &= \Phi^{-1}\!\left(1 - \frac{1}{n}\right) \\ \sigma_n &= \Phi^{-1}\!\left(1 - \frac{1}{n\,e}\right) - \Phi^{-1}\!\left(1 - \frac{1}{n}\right)~. \end{align}

This allows us to estimate, for example, the mean of \max\left\{X_i \mid 1 \le i \le n\right\} from the mean of the GEV distribution:

\begin{align} \operatorname{E}\left\{\max\left\{X_i \mid 1 \le i \le n\right\}\right\} &\approx \mu_n + \gamma_{\mathrm{E}}\,\sigma_n \\ &= (1 - \gamma_{\mathrm{E}})\,\Phi^{-1}\!\left(1 - \frac{1}{n}\right) + \gamma_{\mathrm{E}}\,\Phi^{-1}\!\left(1 - \frac{1}{e\,n}\right) \\ &= \sqrt{\log\!\left(\frac{n^2}{2\pi \log\!\left(\frac{n^2}{2\pi}\right)}\right)} \cdot \left(1 + \frac{\gamma}{\log n} + \mathrm{o}\!\left(\frac{1}{\log n}\right)\right), \end{align}

where \gamma_{\mathrm{E}} is the Euler–Mascheroni constant.
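The approximation \mu_n + \gamma_{\mathrm{E}}\,\sigma_n is easy to evaluate and to sanity-check by simulation. A minimal sketch (the helper name `approx_mean_of_max` is ours; `statistics.NormalDist().inv_cdf` serves as \Phi^{-1}):

```python
import math
import random
from statistics import NormalDist

GAMMA_E = 0.5772156649015329  # Euler–Mascheroni constant

def approx_mean_of_max(n):
    """Approximate E[max of n i.i.d. standard normals] as mu_n + gamma_E * sigma_n."""
    phi_inv = NormalDist().inv_cdf
    mu_n = phi_inv(1 - 1 / n)
    sigma_n = phi_inv(1 - 1 / (n * math.e)) - mu_n
    return mu_n + GAMMA_E * sigma_n

# Crude Monte Carlo sanity check for n = 1000: both should be near 3.2
random.seed(1)
n = 1000
mc = sum(max(random.gauss(0, 1) for _ in range(n)) for _ in range(500)) / 500
approx = approx_mean_of_max(n)
```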

Related distributions

  1. If X \sim \mathrm{GEV}(\mu,\sigma,\xi) then m X + b \sim \mathrm{GEV}(m\mu + b,\ m\sigma,\ \xi).
  2. If X \sim \mathrm{Gumbel}(\mu,\sigma) (Gumbel distribution) then X \sim \mathrm{GEV}(\mu,\sigma,0).
  3. If X \sim \mathrm{Weibull}(\sigma,\mu) (Weibull distribution) then \mu\left(1 - \sigma\log\tfrac{X}{\sigma}\right) \sim \mathrm{GEV}(\mu,\sigma,0).
  4. If X \sim \mathrm{GEV}(\mu,\sigma,0) then \sigma\exp\left(-\tfrac{X - \mu}{\mu\,\sigma}\right) \sim \mathrm{Weibull}(\sigma,\mu) (Weibull distribution).
  5. If X \sim \mathrm{Exponential}(1) (exponential distribution) then \mu - \sigma\log X \sim \mathrm{GEV}(\mu,\sigma,0).
  6. If X \sim \mathrm{Gumbel}(\alpha_X,\beta) and Y \sim \mathrm{Gumbel}(\alpha_Y,\beta) then X - Y \sim \mathrm{Logistic}(\alpha_X - \alpha_Y,\beta) (see Logistic distribution).
  7. If X and Y \sim \mathrm{Gumbel}(\alpha,\beta) then X + Y \nsim \mathrm{Logistic}(2\alpha,\beta) (the sum is not a logistic distribution). Note that \operatorname{E}\{X + Y\} = 2\alpha + 2\beta\gamma \neq 2\alpha = \operatorname{E}\left\{\mathrm{Logistic}(2\alpha,\beta)\right\}~.
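Relation 5 can be verified empirically: transforming exponential draws and comparing the empirical CDF with the Gumbel CDF. A minimal sketch (helper names ours):

```python
import bisect
import math
import random

def gumbel_cdf(x, mu, sigma):
    """CDF of GEV(mu, sigma, 0), i.e. the Gumbel distribution."""
    return math.exp(-math.exp(-(x - mu) / sigma))

# Relation 5: if X ~ Exponential(1), then mu - sigma*log(X) ~ GEV(mu, sigma, 0).
random.seed(42)
mu, sigma = 1.0, 2.0
sample = sorted(mu - sigma * math.log(random.expovariate(1.0))
                for _ in range(200_000))

# The empirical CDF of the transformed sample should track the Gumbel CDF:
for x in (-2.0, 0.0, 2.0, 5.0):
    empirical = bisect.bisect_right(sample, x) / len(sample)
    assert abs(empirical - gumbel_cdf(x, mu, sigma)) < 0.01
```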

Proofs

4. Let X \sim \mathrm{Weibull}(\sigma,\mu); then the cumulative distribution of g(X) = \mu\left(1 - \sigma\log\tfrac{X}{\sigma}\right) is:

\begin{align} \operatorname{P}\left\{\mu\left(1 - \sigma\log\tfrac{X}{\sigma}\right) < x\right\} &= \operatorname{P}\left\{\log\tfrac{X}{\sigma} > \tfrac{1 - x/\mu}{\sigma}\right\} && \text{(since the logarithm is increasing)} \\ &= \operatorname{P}\left\{X > \sigma\exp\left[\tfrac{1 - x/\mu}{\sigma}\right]\right\} \\ &= \exp\left(-\left(\exp\left[\tfrac{1 - x/\mu}{\sigma}\right]\right)^{\mu}\right) \\ &= \exp\left(-\exp\left[\tfrac{\mu - x}{\sigma}\right]\right) \\ &= \exp\left(-\exp\left[-s\right]\right), \quad s = \tfrac{x - \mu}{\sigma}~, \end{align}

which is the cdf for \mathrm{GEV}(\mu,\sigma,0)~.

5. Let X \sim \mathrm{Exponential}(1); then the cumulative distribution of g(X) = \mu - \sigma\log X is:

\begin{align} \operatorname{P}\left\{\mu - \sigma\log X < x\right\} &= \operatorname{P}\left\{\log X > \tfrac{\mu - x}{\sigma}\right\} && \text{(since the logarithm is increasing)} \\ &= \operatorname{P}\left\{X > \exp\left(\tfrac{\mu - x}{\sigma}\right)\right\} \\ &= \exp\left[-\exp\left(\tfrac{\mu - x}{\sigma}\right)\right] \\ &= \exp\left[-\exp(-s)\right], \quad \text{where} ~ s \equiv \tfrac{x - \mu}{\sigma}~; \end{align}

which is the cumulative distribution of \operatorname{GEV}(\mu,\sigma,0)~.

See also

Further reading

Notes and References

  1. Weisstein, Eric W. "Extreme Value Distribution." MathWorld. Retrieved 2021-08-06.
  2. de Haan, Laurens; Ferreira, Ana. Extreme Value Theory: An Introduction. Springer, 2007.
  3. Jenkinson, Arthur F. "The frequency distribution of the annual maximum (or minimum) values of meteorological elements." Quarterly Journal of the Royal Meteorological Society 81 (348): 158–171, 1955. doi:10.1002/qj.49708134804. Bibcode:1955QJRMS..81..158J.
  4. de Haan, Laurens; Ferreira, Ana. Extreme Value Theory: An Introduction. Springer, 2007.
  5. von Mises, R. "La distribution de la plus grande de n valeurs." Rev. Math. Union Interbalcanique 1: 141–160, 1936.
  6. Norton, Matthew; Khokhlov, Valentyn; Uryasev, Stan. "Calculating CVaR and bPOE for common probability distributions with application to portfolio optimization and density estimation." Annals of Operations Research 299 (1–2): 1281–1315, 2021. doi:10.1007/s10479-019-03373-1. arXiv:1811.11301.
  7. Moscadelli, Marco. "The modelling of operational risk: experience with the analysis of the data collected by the Basel Committee." Available at SSRN 557214 (2004). http://www.unalmed.edu.co/~ndgirald/Archivos%20Lectura/Archivos%20curso%20Riesgo%20Operativo/moscadelli%202004.pdf
  8. Kjersti Aas, lecture, NTNU, Trondheim, 23 Jan 2008. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.523.6456&rep=rep1&type=pdf
  9. Liu, Xin; Wang, Yu. "Quantifying annual occurrence probability of rainfall-induced landslide at a specific slope." Computers and Geotechnics 149: 104877, 2022. doi:10.1016/j.compgeo.2022.104877. Bibcode:2022CGeot.14904877L.
  10. David, Herbert A.; Nagaraja, Haikady N. Order Statistics. John Wiley & Sons, 2004, p. 299.