Generalized logistic distribution

The term generalized logistic distribution is used as the name for several different families of probability distributions. For example, Johnson et al.[1] describe four forms, which are given below.

Type I has also been called the skew-logistic distribution. Type IV subsumes the other types and is obtained when applying the logit transform to beta random variates. Following the same convention as for the log-normal distribution, type IV may be referred to as the logistic-beta distribution, with reference to the standard logistic function, which is the inverse of the logit transform.

For other families of distributions that have also been called generalized logistic distributions, see the shifted log-logistic distribution, which is a generalization of the log-logistic distribution; and the metalog ("meta-logistic") distribution, which is highly shape-and-bounds flexible and can be fit to data with linear least squares.

Definitions

The following definitions are for standardized versions of the families, which can be expanded to the full form as a location-scale family. Each is defined using either the cumulative distribution function (F) or the probability density function (f), and is defined on (-∞, ∞).

Type I

F(x;\alpha)=\frac{1}{(1+e^{-x})^{\alpha}}\equiv(1+e^{-x})^{-\alpha},\quad\alpha>0.

The corresponding probability density function is:
f(x;\alpha)=\frac{\alpha e^{-x}}{\left(1+e^{-x}\right)^{\alpha+1}},\quad\alpha>0.

This type has also been called the "skew-logistic" distribution.
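As a quick sanity check, the Type I cdf and pdf above can be implemented directly and compared against each other; the pdf should match the numerical derivative of the cdf. This is a minimal sketch in Python with NumPy (the function names `type1_cdf` and `type1_pdf` are just illustrative); SciPy's `scipy.stats.genlogistic` appears to implement this same Type I family under the shape parameter `c`.

```python
import numpy as np

def type1_cdf(x, alpha):
    # F(x; alpha) = (1 + e^{-x})^{-alpha}
    return (1.0 + np.exp(-x)) ** (-alpha)

def type1_pdf(x, alpha):
    # f(x; alpha) = alpha e^{-x} / (1 + e^{-x})^{alpha+1}
    return alpha * np.exp(-x) / (1.0 + np.exp(-x)) ** (alpha + 1)

# The pdf should agree with the central-difference derivative of the cdf.
x, alpha, h = 0.7, 2.5, 1e-6
numeric = (type1_cdf(x + h, alpha) - type1_cdf(x - h, alpha)) / (2 * h)
assert abs(numeric - type1_pdf(x, alpha)) < 1e-6
```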

Type II

F(x;\alpha)=1-\frac{e^{-\alpha x}}{(1+e^{-x})^{\alpha}},\quad\alpha>0.

The corresponding probability density function is:
f(x;\alpha)=\frac{\alpha e^{-\alpha x}}{(1+e^{-x})^{\alpha+1}},\quad\alpha>0.

Type III

f(x;\alpha)=\frac{1}{B(\alpha,\alpha)}\,\frac{e^{-\alpha x}}{(1+e^{-x})^{2\alpha}},\quad\alpha>0.

Here B is the beta function. The moment generating function for this type is
M(t)=\frac{\Gamma(\alpha-t)\,\Gamma(\alpha+t)}{(\Gamma(\alpha))^{2}},\quad-\alpha<t<\alpha.

The corresponding cumulative distribution function is:

F(x;\alpha)=\frac{\left(e^{x}+1\right)\Gamma(\alpha)\,e^{\alpha x}\left(e^{-x}+1\right)^{-2\alpha}\,{}_{2}\tilde{F}_{1}\left(1,1-\alpha;\alpha+1;-e^{x}\right)}{B(\alpha,\alpha)},\quad\alpha>0,

where {}_{2}\tilde{F}_{1} is the regularized Gaussian hypergeometric function.

Type IV

\begin{align}
f(x;\alpha,\beta)&=\frac{1}{B(\alpha,\beta)}\,\frac{e^{-\beta x}}{(1+e^{-x})^{\alpha+\beta}},\quad\alpha,\beta>0\\[4pt]
&=\frac{\sigma(x)^{\alpha}\,\sigma(-x)^{\beta}}{B(\alpha,\beta)}.
\end{align}

where B is the beta function and

\sigma(x)=\frac{1}{1+e^{-x}}

is the standard logistic function. The moment generating function for this type is

M(t)=\frac{\Gamma(\beta-t)\,\Gamma(\alpha+t)}{\Gamma(\alpha)\,\Gamma(\beta)},\quad-\alpha<t<\beta.

This type is also called the "exponential generalized beta of the second type".[1]

The corresponding cumulative distribution function is:

F(x;\alpha,\beta)=\frac{\left(e^{x}+1\right)\Gamma(\alpha)\,e^{\beta x}\left(e^{-x}+1\right)^{-\alpha-\beta}\,{}_{2}\tilde{F}_{1}\left(1,1-\beta;\alpha+1;-e^{x}\right)}{B(\alpha,\beta)},\quad\alpha,\beta>0,

where {}_{2}\tilde{F}_{1} is the regularized Gaussian hypergeometric function.
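The two forms of the Type IV pdf given above can be checked against each other numerically. The following is a minimal sketch in Python with NumPy and SciPy; the helper names `sigma` and `type4_pdf` are illustrative, not part of any standard API.

```python
import numpy as np
from scipy.special import beta as beta_fn

def sigma(x):
    # standard logistic function
    return 1.0 / (1.0 + np.exp(-x))

def type4_pdf(x, a, b):
    # f(x; a, b) = sigma(x)^a sigma(-x)^b / B(a, b)
    return sigma(x) ** a * sigma(-x) ** b / beta_fn(a, b)

# The exponential form e^{-b x} / ((1 + e^{-x})^{a+b} B(a, b)) agrees:
x, a, b = 0.3, 2.0, 5.0
form1 = np.exp(-b * x) / ((1.0 + np.exp(-x)) ** (a + b) * beta_fn(a, b))
assert np.isclose(form1, type4_pdf(x, a, b))
```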

Relationship between types

Type IV is the most general form of the distribution. The Type III distribution can be obtained from Type IV by fixing \beta=\alpha. The Type II distribution can be obtained from Type IV by fixing \alpha=1 (and renaming \beta to \alpha). The Type I distribution can be obtained from Type IV by fixing \beta=1. Fixing \alpha=\beta=1 gives the standard logistic distribution.
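The last reduction can be verified numerically: with \alpha=\beta=1 the Type IV density collapses to the standard logistic density. A minimal sketch in Python (the helper `sigma` is illustrative):

```python
import numpy as np

def sigma(x):
    # standard logistic function
    return 1.0 / (1.0 + np.exp(-x))

# With alpha = beta = 1 (and B(1, 1) = 1), the Type IV pdf
# sigma(x)^alpha sigma(-x)^beta / B(alpha, beta) reduces to
# sigma(x) sigma(-x), the standard logistic density.
x = np.linspace(-5.0, 5.0, 11)
type4_pdf = sigma(x) * sigma(-x)                  # alpha = beta = 1
logistic_pdf = np.exp(-x) / (1.0 + np.exp(-x)) ** 2
assert np.allclose(type4_pdf, logistic_pdf)
```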

Type IV (logistic-beta) properties

The Type IV generalized logistic, or logistic-beta distribution, with support x\in\mathbb{R} and shape parameters \alpha,\beta>0, has (as shown above) the probability density function (pdf):

f(x;\alpha,\beta)=\frac{1}{B(\alpha,\beta)}\,\frac{e^{-\beta x}}{(1+e^{-x})^{\alpha+\beta}}=\frac{\sigma(x)^{\alpha}\,\sigma(-x)^{\beta}}{B(\alpha,\beta)},

where \sigma(x)=1/(1+e^{-x}) is the standard logistic function. The probability density functions for three different sets of shape parameters are shown in the plot, where the distributions have been scaled and shifted to give zero means and unit variances, in order to facilitate comparison of the shapes.

In what follows, the notation B_\sigma(\alpha,\beta) is used to denote the Type IV distribution.

Relationship with the gamma distribution

This distribution can be obtained in terms of the gamma distribution as follows. Let y\sim\mathrm{Gamma}(\alpha,\gamma) and, independently, z\sim\mathrm{Gamma}(\beta,\gamma), and let x=\ln y-\ln z. Then x\sim B_\sigma(\alpha,\beta).[2]
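This construction is easy to exercise by simulation; note that the common scale parameter \gamma cancels in the difference of logs, so unit scale suffices. A minimal sketch in Python with NumPy and SciPy, checking the sample mean against \psi(\alpha)-\psi(\beta):

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)
a, b, n = 3.0, 2.0, 200_000

# x = ln y - ln z, with independent y ~ Gamma(a, 1) and z ~ Gamma(b, 1)
y = rng.gamma(a, size=n)
z = rng.gamma(b, size=n)
x = np.log(y) - np.log(z)

# The sample mean should be close to psi(a) - psi(b)
assert abs(np.mean(x) - (digamma(a) - digamma(b))) < 0.02
```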

Symmetry

If x\sim B_\sigma(\alpha,\beta), then -x\sim B_\sigma(\beta,\alpha).

Mean and variance

By using the logarithmic expectations of the gamma distribution, the mean and variance can be derived as:

\begin{align}
E[x]&=\psi(\alpha)-\psi(\beta)\\
\operatorname{var}[x]&=\psi'(\alpha)+\psi'(\beta)
\end{align}

where \psi is the digamma function, while \psi'=\psi^{(1)} is its first derivative, also known as the trigamma function, or the first polygamma function. Since \psi is strictly increasing, the sign of the mean is the same as the sign of \alpha-\beta. Since \psi' is strictly decreasing, the shape parameters can also be interpreted as concentration parameters. Indeed, as shown below, the left and right tails respectively become thinner as \alpha or \beta is increased. The two terms of the variance represent the contributions to the variance of the left and right parts of the distribution.

Cumulants and skewness

The cumulant generating function is K(t)=\ln M(t), where the moment generating function M(t) is given above. The cumulants, \kappa_n, are the n-th derivatives of K(t), evaluated at t=0:

\kappa_n=K^{(n)}(0)=\psi^{(n-1)}(\alpha)+(-1)^{n}\,\psi^{(n-1)}(\beta)

where \psi^{(0)}=\psi and \psi^{(n-1)} are the digamma and polygamma functions. In agreement with the derivation above, the first cumulant, \kappa_1, is the mean and the second, \kappa_2, is the variance.

The third cumulant, \kappa_3, is the third central moment E[(x-E[x])^{3}], which when scaled by the third power of the standard deviation gives the skewness:

\operatorname{skew}[x]=\frac{\psi^{(2)}(\alpha)-\psi^{(2)}(\beta)}{\operatorname{var}[x]^{3/2}}

The sign (and therefore the handedness) of the skewness is the same as the sign of \alpha-\beta.
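The skewness formula is directly computable with SciPy's polygamma functions. A minimal sketch (the helper name `type4_skew` is illustrative), confirming that the sign of the skewness tracks the sign of \alpha-\beta:

```python
from scipy.special import polygamma

def type4_skew(a, b):
    # skew = (psi''(a) - psi''(b)) / var^{3/2},
    # with var = psi'(a) + psi'(b)
    var = polygamma(1, a) + polygamma(1, b)
    return float((polygamma(2, a) - polygamma(2, b)) / var ** 1.5)

# The sign of the skewness matches the sign of a - b
assert type4_skew(3.0, 1.0) > 0
assert type4_skew(1.0, 3.0) < 0
assert type4_skew(2.0, 2.0) == 0.0
```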

Mode

The mode (pdf maximum) can be derived by finding x where the log pdf derivative is zero:

\frac{d}{dx}\ln f(x;\alpha,\beta)=\alpha\,\sigma(-x)-\beta\,\sigma(x)=0

This simplifies to \alpha/\beta=e^{x}, so that:

\operatorname{mode}[x]=\ln\frac{\alpha}{\beta}
Tail behaviour

In each of the left and right tails, one of the sigmoids in the pdf saturates to one, so that the tail is formed by the other sigmoid. For large negative x, the left tail of the pdf is proportional to \sigma(x)^{\alpha}\approx e^{\alpha x}, while the right tail (large positive x) is proportional to \sigma(-x)^{\beta}\approx e^{-\beta x}. This means the tails are independently controlled by \alpha and \beta. Although type IV tails are heavier than those of the normal distribution (\propto e^{-x^{2}/2v}, for variance v), the type IV means and variances remain finite for all \alpha,\beta>0. This is in contrast with the Cauchy distribution, for which the mean and variance do not exist. In the log pdf plots shown here, the type IV tails are linear, the normal distribution tails are quadratic and the Cauchy tails are logarithmic.

Exponential family properties

B_\sigma(\alpha,\beta) forms an exponential family with natural parameters \alpha and \beta and sufficient statistics \log\sigma(x) and \log\sigma(-x). The expected values of the sufficient statistics can be found by differentiation of the log-normalizer:[3]

\begin{align}
E[\log\sigma(x)]&=\frac{\partial\log B(\alpha,\beta)}{\partial\alpha}=\psi(\alpha)-\psi(\alpha+\beta)\\
E[\log\sigma(-x)]&=\frac{\partial\log B(\alpha,\beta)}{\partial\beta}=\psi(\beta)-\psi(\alpha+\beta)
\end{align}

Given a data set x_1,\ldots,x_n assumed to have been generated IID from B_\sigma(\alpha,\beta), the maximum-likelihood parameter estimate is:

\begin{align}
\hat\alpha,\hat\beta&=\arg\max_{\alpha,\beta}\;\frac{1}{n}\sum_{i=1}^{n}\log f(x_i;\alpha,\beta)\\
&=\arg\max_{\alpha,\beta}\;\alpha\left(\frac{1}{n}\sum_i\log\sigma(x_i)\right)+\beta\left(\frac{1}{n}\sum_i\log\sigma(-x_i)\right)-\log B(\alpha,\beta)\\
&=\arg\max_{\alpha,\beta}\;\alpha\,\overline{\log\sigma(x)}+\beta\,\overline{\log\sigma(-x)}-\log B(\alpha,\beta)
\end{align}

where the overlines denote the averages of the sufficient statistics. The maximum-likelihood estimate depends on the data only via these average statistics. Indeed, at the maximum-likelihood estimate the expected values and averages agree:

\begin{align}
\psi(\hat\alpha)-\psi(\hat\alpha+\hat\beta)&=\overline{\log\sigma(x)}\\
\psi(\hat\beta)-\psi(\hat\alpha+\hat\beta)&=\overline{\log\sigma(-x)}
\end{align}

which is also where the partial derivatives of the above maximand vanish.

Relationships with other distributions

Relationships with other distributions include:

If y\sim\mathrm{BetaPrime}(\alpha,\beta), then x=\ln y has a type IV distribution, with parameters \alpha and \beta. See beta prime distribution.

If z\sim\mathrm{Gamma}(\beta,1) and y\mid z\sim\mathrm{Gamma}(\alpha,z), where z is used as the rate parameter of the second gamma distribution, then y has a compound gamma distribution, which is the same as \mathrm{BetaPrime}(\alpha,\beta), so that x=\ln y has a type IV distribution.

If p\sim\mathrm{Beta}(\alpha,\beta), then x=\operatorname{logit}(p) has a type IV distribution, with parameters \alpha and \beta. See beta distribution. The logit function, \operatorname{logit}(p)=\ln\frac{p}{1-p}, is the inverse of the logistic function. This relationship explains the name logistic-beta for this distribution: if the logistic function is applied to logistic-beta variates, the transformed distribution is beta.
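The beta relationship can be exercised by simulation: the logit of beta variates should have the Type IV mean \psi(\alpha)-\psi(\beta). A minimal sketch in Python with NumPy and SciPy:

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(1)
a, b, n = 2.0, 3.0, 200_000

p = rng.beta(a, b, size=n)
x = np.log(p / (1.0 - p))      # logit of beta variates

# The sample mean should be close to psi(a) - psi(b),
# the mean of the Type IV distribution
assert abs(np.mean(x) - (digamma(a) - digamma(b))) < 0.02
```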

Large shape parameters

For large values of the shape parameters, \alpha,\beta\gg1, the distribution becomes more Gaussian, with:

\begin{align}
E[x]&\approx\ln\frac{\alpha}{\beta}\\
\operatorname{var}[x]&\approx\frac{\alpha+\beta}{\alpha\beta}
\end{align}

This is demonstrated in the pdf and log pdf plots here.
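The quality of these approximations is easy to check against the exact digamma/trigamma formulas. A minimal sketch in Python with SciPy:

```python
import numpy as np
from scipy.special import digamma, polygamma

a, b = 100.0, 150.0
exact_mean = digamma(a) - digamma(b)
exact_var = polygamma(1, a) + polygamma(1, b)
approx_mean = np.log(a / b)
approx_var = (a + b) / (a * b)

# For large shape parameters the approximations are tight
assert abs(exact_mean - approx_mean) < 0.01
assert abs(exact_var - approx_var) < 1e-3
```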

Random variate generation

Since random sampling from the gamma and beta distributions is readily available on many software platforms, the above relationships with those distributions can be used to generate variates from the type IV distribution.

Generalization with location and scale parameters

A flexible, four-parameter family can be obtained by adding location and scale parameters. One way to do this is: if x\sim B_\sigma(\alpha,\beta), then let y=kx+\delta, where k>0 is the scale parameter and \delta\in\mathbb{R} is the location parameter. The four-parameter family obtained thus has the desired additional flexibility, but the new parameters may be hard to interpret because \delta\ne E[y] and k^{2}\ne\operatorname{var}[y]. Moreover, maximum-likelihood estimation with this parametrization is hard. These problems can be addressed as follows.

Recall that the mean and variance of x are:

\begin{align}
\tilde\mu&=\psi(\alpha)-\psi(\beta), & \tilde s^{2}&=\psi'(\alpha)+\psi'(\beta)
\end{align}

Now expand the family with location parameter \mu\in\mathbb{R} and scale parameter s>0, via the transformation:

y=\mu+\frac{s}{\tilde s}(x-\tilde\mu)\iff x=\tilde\mu+\frac{\tilde s}{s}(y-\mu)

so that \mu=E[y] and s^{2}=\operatorname{var}[y] are now interpretable. It may be noted that allowing s to be either positive or negative does not generalize this family, because of the above-noted symmetry property. We adopt the notation y\sim\bar B_\sigma(\alpha,\beta,\mu,s^{2}) for this family.

If the pdf for x\sim B_\sigma(\alpha,\beta) is f(x;\alpha,\beta), then the pdf for y\sim\bar B_\sigma(\alpha,\beta,\mu,s^{2}) is:

\bar f(y;\alpha,\beta,\mu,s^{2})=\frac{\tilde s}{s}\,f(x;\alpha,\beta)

where it is understood that x is computed as detailed above, as a function of y,\alpha,\beta,\mu,s. The pdf and log-pdf plots above, where the captions contain (means=0, variances=1), are for \bar B_\sigma(\alpha,\beta,0,1).
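The change of variables above can be sketched in code, with a crude numerical check that the transformed density still integrates to one. This is a minimal Python sketch with NumPy/SciPy; the helper names `type4_logpdf` and `type4ls_logpdf` are illustrative, not a standard API.

```python
import numpy as np
from scipy.special import betaln, digamma, polygamma

def type4_logpdf(x, a, b):
    # standardized Type IV: a log sigma(x) + b log sigma(-x) - log B(a, b)
    return -a * np.logaddexp(0.0, -x) - b * np.logaddexp(0.0, x) - betaln(a, b)

def type4ls_logpdf(y, a, b, mu, s):
    # four-parameter family with mean mu and variance s^2
    mu0 = digamma(a) - digamma(b)                     # mean of standardized x
    s0 = np.sqrt(polygamma(1, a) + polygamma(1, b))   # std of standardized x
    x = mu0 + (s0 / s) * (y - mu)                     # inverse transformation
    return np.log(s0 / s) + type4_logpdf(x, a, b)     # Jacobian factor s0/s

# The transformed density still integrates to 1 (crude Riemann sum)
y = np.linspace(-20.0, 20.0, 200_001)
total = np.sum(np.exp(type4ls_logpdf(y, 2.0, 3.0, 1.0, 1.5))) * (y[1] - y[0])
assert abs(total - 1.0) < 1e-3
```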

Maximum likelihood parameter estimation

In this section, maximum-likelihood estimation of the distribution parameters, given a dataset x_1,\ldots,x_n, is discussed in turn for the families B_\sigma(\alpha,\beta) and \bar B_\sigma(\alpha,\beta,\mu,s^{2}).

Maximum likelihood for standard Type IV

As noted above, B_\sigma(\alpha,\beta) is an exponential family with natural parameters \alpha,\beta, the maximum-likelihood estimates of which depend only on averaged sufficient statistics:

\begin{align}
\overline{\log\sigma(x)}&=\frac{1}{n}\sum_i\log\sigma(x_i) &&\text{and}& \overline{\log\sigma(-x)}&=\frac{1}{n}\sum_i\log\sigma(-x_i)
\end{align}

Once these statistics have been accumulated, the maximum-likelihood estimate is given by:

\hat\alpha,\hat\beta=\arg\max_{\alpha,\beta>0}\;\alpha\,\overline{\log\sigma(x)}+\beta\,\overline{\log\sigma(-x)}-\log B(\alpha,\beta)

By using the parametrization \theta_1=\log\alpha and \theta_2=\log\beta, an unconstrained numerical optimization algorithm like BFGS can be used. Optimization iterations are fast, because they are independent of the size of the data-set.
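This procedure can be sketched end to end: accumulate the two averaged sufficient statistics once, then run BFGS over the log-parametrization. A minimal Python sketch with NumPy/SciPy (the variable names and the gamma-based sampler are illustrative assumptions, not part of any standard API):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln

rng = np.random.default_rng(2)
a_true, b_true, n = 3.0, 1.5, 50_000

# Type IV variates via the log-gamma-difference construction
x = np.log(rng.gamma(a_true, size=n)) - np.log(rng.gamma(b_true, size=n))

# Averaged sufficient statistics (stable log-sigmoid via logaddexp)
t1 = np.mean(-np.logaddexp(0.0, -x))   # average of log sigma(x)
t2 = np.mean(-np.logaddexp(0.0, x))    # average of log sigma(-x)

def neg_loglik(theta):
    # theta = (log alpha, log beta): unconstrained parametrization
    a, b = np.exp(theta)
    return -(a * t1 + b * t2 - betaln(a, b))

res = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
a_hat, b_hat = np.exp(res.x)
assert abs(a_hat - a_true) < 0.2 and abs(b_hat - b_true) < 0.1
```

Note that `neg_loglik` depends on the data only through `t1` and `t2`, so each BFGS iteration costs O(1) in the data-set size, as stated above.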

An alternative is to use an EM-algorithm based on the composition: x-\log(\gamma\delta)\sim B_\sigma(\alpha,\beta) if z\sim\mathrm{Gamma}(\beta,\gamma) and e^{x}\mid z\sim\mathrm{Gamma}(\alpha,z/\delta). Because of the self-conjugacy of the gamma distribution, the posterior expectations, \langle z\rangle_{P(z\mid x)} and \langle\log z\rangle_{P(z\mid x)}, that are required for the E-step can be computed in closed form. The M-step parameter update can be solved analogously to maximum-likelihood for the gamma distribution.

Maximum likelihood for the four-parameter family

The maximum-likelihood problem for \bar B_\sigma(\alpha,\beta,\mu,s^{2}), having pdf \bar f, is:

\hat\alpha,\hat\beta,\hat\mu,\hat s=\arg\max_{\alpha,\beta,\mu,s}\;\frac{1}{n}\sum_i\log\bar f(x_i;\alpha,\beta,\mu,s^{2})

This is no longer an exponential family, so that each optimization iteration has to traverse the whole data-set. Moreover, the computation of the partial derivatives (as required for example by BFGS) is considerably more complex than for the above two-parameter case. However, all the component functions are readily available in software packages with automatic differentiation. Again, the positive parameters can be parametrized in terms of their logarithms to obtain an unconstrained numerical optimization problem.

For this problem, numerical optimization may fail unless the initial location and scale parameters are chosen appropriately. However, the above-mentioned interpretability of these parameters in the parametrization of \bar B_\sigma can be used to do this. Specifically, the initial values for \mu and s^{2} can be set to the empirical mean and variance of the data.

References

  1. Johnson, N.L., Kotz, S., Balakrishnan, N. (1995) Continuous Univariate Distributions, Volume 2, Wiley. (pages 140–142)
  2. Halliwell, Leigh J. (2018) The Log-Gamma Distribution and Non-Normal Error. S2CID 173176687.
  3. Bishop, C.M. (2006) Pattern Recognition and Machine Learning, Springer.