The term generalized logistic distribution is used as the name for several different families of probability distributions. For example, Johnson et al.[1] list the four forms given below.
Type I has also been called the skew-logistic distribution. Type IV subsumes the other types and is obtained when applying the logit transform to beta random variates. Following the same convention as for the log-normal distribution, type IV may be referred to as the logistic-beta distribution, with reference to the standard logistic function, which is the inverse of the logit transform.
For other families of distributions that have also been called generalized logistic distributions, see the shifted log-logistic distribution, which is a generalization of the log-logistic distribution; and the metalog ("meta-logistic") distribution, which is highly shape-and-bounds flexible and can be fit to data with linear least squares.
The following definitions are for standardized versions of the families, which can be expanded to the full form as a location-scale family. Each is defined using either the cumulative distribution function (F) or the probability density function (f), and is defined on (-∞,∞).
Type I:

F(x;\alpha) = \frac{1}{(1+e^{-x})^{\alpha}} \equiv (1+e^{-x})^{-\alpha}, \qquad \alpha > 0.

f(x;\alpha) = \frac{\alpha e^{-x}}{\left(1+e^{-x}\right)^{\alpha+1}}, \qquad \alpha > 0.
Type II:

F(x;\alpha) = 1 - \frac{e^{-\alpha x}}{(1+e^{-x})^{\alpha}}, \qquad \alpha > 0.

f(x;\alpha) = \frac{\alpha e^{-\alpha x}}{(1+e^{-x})^{\alpha+1}}, \qquad \alpha > 0.
Type III:

f(x;\alpha) = \frac{1}{B(\alpha,\alpha)}\,\frac{e^{-\alpha x}}{(1+e^{-x})^{2\alpha}}, \qquad \alpha > 0.

The moment generating function is

M(t) = \frac{\Gamma(\alpha-t)\,\Gamma(\alpha+t)}{(\Gamma(\alpha))^{2}}, \qquad -\alpha < t < \alpha.

The cumulative distribution function is

F(x;\alpha) = \frac{\left(e^{x}+1\right)\Gamma(\alpha)\,e^{\alpha x}\left(e^{-x}+1\right)^{-2\alpha}\;{}_{2}\tilde{F}_{1}\!\left(1,\,1-\alpha;\;\alpha+1;\;-e^{x}\right)}{B(\alpha,\alpha)}, \qquad \alpha > 0,

where {}_{2}\tilde{F}_{1} denotes the regularized Gaussian hypergeometric function.
Type IV:

\begin{align}
f(x;\alpha,\beta) &= \frac{1}{B(\alpha,\beta)}\,\frac{e^{-\beta x}}{(1+e^{-x})^{\alpha+\beta}}, \qquad \alpha,\beta > 0 \\[4pt]
&= \frac{\sigma(x)^{\alpha}\,\sigma(-x)^{\beta}}{B(\alpha,\beta)},
\end{align}

where \sigma(x) = 1/(1+e^{-x}) is the standard logistic sigmoid function. The moment generating function is

M(t) = \frac{\Gamma(\beta-t)\,\Gamma(\alpha+t)}{\Gamma(\alpha)\,\Gamma(\beta)}, \qquad -\alpha < t < \beta.

The corresponding cumulative distribution function is

F(x;\alpha,\beta) = \frac{\left(e^{x}+1\right)\Gamma(\alpha)\,e^{\beta x}\left(e^{-x}+1\right)^{-\alpha-\beta}\;{}_{2}\tilde{F}_{1}\!\left(1,\,1-\beta;\;\alpha+1;\;-e^{x}\right)}{B(\alpha,\beta)}, \qquad \alpha,\beta > 0,

where {}_{2}\tilde{F}_{1} denotes the regularized Gaussian hypergeometric function.
Type IV is the most general form of the distribution. The Type III distribution can be obtained from Type IV by fixing \beta=\alpha. The Type II distribution can be obtained by fixing \alpha=1 and renaming \beta to \alpha. The Type I distribution can be obtained by fixing \beta=1. Fixing \alpha=\beta=1 yields the standard logistic distribution.
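As a quick numerical sketch of the last reduction (not part of the original text; the helper name type4_pdf is ours), the Type IV density at \alpha=\beta=1 can be compared with the standard logistic density e^{-x}/(1+e^{-x})^2:

```python
import numpy as np
from scipy.special import beta as beta_fn

def type4_pdf(x, a, b):
    """Type IV generalized logistic pdf: sigma(x)^a * sigma(-x)^b / B(a, b)."""
    sig = 1.0 / (1.0 + np.exp(-x))          # logistic sigmoid
    return sig**a * (1.0 - sig)**b / beta_fn(a, b)   # sigma(-x) = 1 - sigma(x)

x = np.linspace(-5.0, 5.0, 101)
logistic_pdf = np.exp(-x) / (1.0 + np.exp(-x))**2    # standard logistic pdf
assert np.allclose(type4_pdf(x, 1.0, 1.0), logistic_pdf)
```

Since B(1,1)=1, the Type IV pdf at \alpha=\beta=1 is exactly \sigma(x)\sigma(-x), which is the standard logistic density.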
The Type IV generalized logistic, or logistic-beta distribution, with support x \in \mathbb{R} and shape parameters \alpha,\beta > 0, has the probability density function

f(x;\alpha,\beta) = \frac{1}{B(\alpha,\beta)}\,\frac{e^{-\beta x}}{(1+e^{-x})^{\alpha+\beta}} = \frac{\sigma(x)^{\alpha}\,\sigma(-x)^{\beta}}{B(\alpha,\beta)},

where \sigma(x) = 1/(1+e^{-x}) is the standard logistic sigmoid function.
In what follows, the notation B_{\sigma}(\alpha,\beta) is used to denote this distribution.
This distribution can be obtained in terms of the gamma distribution as follows. Let y \sim \mathrm{Gamma}(\alpha,\gamma) and, independently, z \sim \mathrm{Gamma}(\beta,\gamma), where \gamma > 0 is an arbitrary common rate parameter (it cancels in the difference of logarithms). Then

x = \ln y - \ln z \sim B_{\sigma}(\alpha,\beta).

If x \sim B_{\sigma}(\alpha,\beta), then -x \sim B_{\sigma}(\beta,\alpha).
By using the logarithmic expectations of the gamma distribution, the mean and variance can be derived as:

\begin{align}
E[x] &= \psi(\alpha)-\psi(\beta) \\
\mathrm{var}[x] &= \psi'(\alpha)+\psi'(\beta)
\end{align}

where \psi is the digamma function and \psi' = \psi^{(1)} is its first derivative, the trigamma function. Since \psi is monotonically increasing, the sign of the mean is the same as the sign of \alpha-\beta. Since \psi' is positive and strictly decreasing, the variance decreases as \alpha and \beta increase.
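The gamma construction above gives a direct way to check these moment formulas by simulation. The following sketch (our own, assuming SciPy's digamma and polygamma) draws x = \ln y - \ln z and compares the empirical mean and variance with \psi(\alpha)-\psi(\beta) and \psi'(\alpha)+\psi'(\beta):

```python
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(0)
a, b, rate = 3.0, 2.0, 1.7     # rate is arbitrary; it cancels in the log-ratio

# x = ln y - ln z with y ~ Gamma(a, rate), z ~ Gamma(b, rate)
y = rng.gamma(a, 1.0 / rate, size=200_000)
z = rng.gamma(b, 1.0 / rate, size=200_000)
x = np.log(y) - np.log(z)

mean_exact = digamma(a) - digamma(b)             # psi(alpha) - psi(beta)
var_exact = polygamma(1, a) + polygamma(1, b)    # psi'(alpha) + psi'(beta)

assert abs(x.mean() - mean_exact) < 0.02
assert abs(x.var() - var_exact) < 0.05
```

For \alpha=3, \beta=2 the exact mean is \psi(3)-\psi(2)=1/2, which the sample mean reproduces to Monte-Carlo accuracy.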
The cumulant generating function is K(t)=\ln M(t), where the moment generating function M(t) is given above. The cumulants, \kappa_n, are the n-th derivatives of K(t), evaluated at t=0:

\kappa_n = K^{(n)}(0) = \psi^{(n-1)}(\alpha) + (-1)^{n}\,\psi^{(n-1)}(\beta)

where \psi^{(0)}=\psi and the \psi^{(n-1)} are the polygamma functions. The first and second cumulants, \kappa_1 and \kappa_2, are the mean and variance given above. The third cumulant, \kappa_3, is the third central moment E[(x-E[x])^3], which when scaled gives the skewness:

\mathrm{skew}[x] = \frac{\psi^{(2)}(\alpha)-\psi^{(2)}(\beta)}{\sqrt{\mathrm{var}[x]}^{\,3}}

The sign of the skewness is the same as the sign of \alpha-\beta.
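The sign claim can be sketched numerically (our own check, using SciPy's polygamma, where polygamma(2, ·) is \psi^{(2)}):

```python
import numpy as np
from scipy.special import polygamma

def skew_exact(a, b):
    """skew = (psi''(a) - psi''(b)) / var^{3/2}, with var = psi'(a) + psi'(b)."""
    var = polygamma(1, a) + polygamma(1, b)
    return (polygamma(2, a) - polygamma(2, b)) / var**1.5

# the sign of the skewness follows the sign of alpha - beta
assert skew_exact(4.0, 2.0) > 0
assert skew_exact(2.0, 4.0) < 0
assert abs(skew_exact(3.0, 3.0)) < 1e-12   # symmetric case: zero skew
```

Because \psi^{(2)} is negative and increasing toward zero, \psi^{(2)}(\alpha)-\psi^{(2)}(\beta) has the same sign as \alpha-\beta, which the check confirms.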
The mode (pdf maximum) can be derived by finding the x where

\frac{d}{dx}\ln f(x;\alpha,\beta) = \alpha\,\sigma(-x)-\beta\,\sigma(x) = 0.

This simplifies to \alpha/\beta = e^{x}, so that

\mathrm{mode}[x] = \ln\frac{\alpha}{\beta}.
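As a sketch (our own, not from the original text), the closed-form mode can be compared against a brute-force maximization of the log-density on a fine grid:

```python
import numpy as np
from scipy.special import beta as beta_fn

def type4_logpdf(x, a, b):
    # log f = a*log(sigma(x)) + b*log(sigma(-x)) - log B(a, b)
    return -a * np.log1p(np.exp(-x)) - b * np.log1p(np.exp(x)) - np.log(beta_fn(a, b))

a, b = 5.0, 2.0
grid = np.linspace(-4.0, 4.0, 80001)
numeric_mode = grid[np.argmax(type4_logpdf(grid, a, b))]
assert abs(numeric_mode - np.log(a / b)) < 1e-3   # mode = ln(alpha/beta)
```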
In each of the left and right tails, one of the sigmoids in the pdf saturates to one, so that the tail is formed by the other sigmoid. For large negative x, the left tail of the pdf is proportional to \sigma(x)^{\alpha} \approx e^{\alpha x}, while for large positive x the right tail is proportional to \sigma(-x)^{\beta} \approx e^{-\beta x}. The tails are therefore independently controlled by \alpha and \beta. Although such exponential tails are heavier than those of the normal distribution (e^{-\lambda v^2}, for \lambda>0), the type IV mean and variance remain finite for all \alpha,\beta>0.
For \alpha,\beta>0, B_{\sigma}(\alpha,\beta) is an exponential family. The natural parameters are \alpha and \beta, and the sufficient statistics are \log\sigma(x) and \log\sigma(-x). The expected values of the sufficient statistics can be found by differentiation of the log-normalizer, \log B(\alpha,\beta):
\begin{align}
E[\log\sigma(x)] &= \frac{\partial \log B(\alpha,\beta)}{\partial\alpha} = \psi(\alpha)-\psi(\alpha+\beta) \\
E[\log\sigma(-x)] &= \frac{\partial \log B(\alpha,\beta)}{\partial\beta} = \psi(\beta)-\psi(\alpha+\beta)
\end{align}
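The first of these identities can be sketched with numerical quadrature (our own check; integrating \log\sigma(x) against the Type IV density and comparing with \psi(\alpha)-\psi(\alpha+\beta)):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as beta_fn, digamma

a, b = 2.5, 1.5

def pdf(x):
    """Type IV pdf: e^{-b x} / (1+e^{-x})^{a+b} / B(a, b)."""
    return np.exp(-b * x) / (1.0 + np.exp(-x))**(a + b) / beta_fn(a, b)

def log_sigma(x):
    return -np.log1p(np.exp(-x))   # log of the logistic sigmoid

# E[log sigma(x)] should equal psi(alpha) - psi(alpha + beta)
num, _ = quad(lambda x: log_sigma(x) * pdf(x), -40, 40)
assert abs(num - (digamma(a) - digamma(a + b))) < 1e-6
```

The companion identity for E[\log\sigma(-x)] follows by the symmetry x \to -x, \alpha \leftrightarrow \beta.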
Given a dataset x_1,\ldots,x_n, assumed to have been generated independently from B_{\sigma}(\alpha,\beta), the maximum-likelihood parameter estimate is:

\begin{align}
\hat\alpha,\hat\beta = \operatorname*{argmax}_{\alpha,\beta}\; & \frac{1}{n}\sum_{i=1}^{n} \log f(x_i;\alpha,\beta) \\
= \operatorname*{argmax}_{\alpha,\beta}\; & \alpha\left(\frac{1}{n}\sum_i \log\sigma(x_i)\right) + \beta\left(\frac{1}{n}\sum_i \log\sigma(-x_i)\right) - \log B(\alpha,\beta) \\
= \operatorname*{argmax}_{\alpha,\beta}\; & \alpha\,\overline{\log\sigma(x)} + \beta\,\overline{\log\sigma(-x)} - \log B(\alpha,\beta)
\end{align}

At the maximum, the partial derivatives with respect to \alpha and \beta vanish, which gives the pair of equations:

\begin{align}
\psi(\hat\alpha)-\psi(\hat\alpha+\hat\beta) &= \overline{\log\sigma(x)} \\
\psi(\hat\beta)-\psi(\hat\alpha+\hat\beta) &= \overline{\log\sigma(-x)}
\end{align}
Relationships with other distributions include:

- If y \sim \mathrm{BetaPrime}(\alpha,\beta), then x = \ln y \sim B_{\sigma}(\alpha,\beta), with the same \alpha and \beta.
- If z \sim \mathrm{Gamma}(\beta,1) and y \mid z \sim \mathrm{Gamma}(\alpha,z), then, after z is marginalized out, y \sim \mathrm{BetaPrime}(\alpha,\beta), so that again x = \ln y \sim B_{\sigma}(\alpha,\beta).
- If p \sim \mathrm{Beta}(\alpha,\beta) and x = \operatorname{logit}(p), then x \sim B_{\sigma}(\alpha,\beta), with the same \alpha and \beta, where \operatorname{logit}(p) = \log\frac{p}{1-p}.
For large values of the shape parameters, \alpha,\beta \gg 1, using the asymptotics \psi(\alpha)\approx\ln\alpha and \psi'(\alpha)\approx 1/\alpha, the mean and variance simplify to:

\begin{align}
E[x] &\approx \ln\frac{\alpha}{\beta} \\
\mathrm{var}[x] &\approx \frac{\alpha+\beta}{\alpha\beta}
\end{align}
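A short sketch (ours) comparing these approximations with the exact digamma/trigamma expressions for one choice of large shape parameters:

```python
import numpy as np
from scipy.special import digamma, polygamma

a, b = 300.0, 200.0    # both shape parameters large
mean_exact = digamma(a) - digamma(b)
var_exact = polygamma(1, a) + polygamma(1, b)

# approximations: E[x] ~ ln(a/b), var[x] ~ (a+b)/(a*b)
assert abs(mean_exact - np.log(a / b)) < 1e-3
assert abs(var_exact - (a + b) / (a * b)) < 1e-4
```

The leading correction terms are of order 1/(2\alpha) and 1/(2\alpha^2), so the agreement tightens as the shape parameters grow.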
Since random sampling from the gamma and beta distributions is readily available on many software platforms, the above relationships with those distributions can be used to generate variates from the type IV distribution.
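Both sampling routes can be sketched side by side (our own code, using NumPy's generators and a two-sample Kolmogorov-Smirnov comparison to confirm that they produce the same distribution):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
a, b, n = 2.0, 3.0, 100_000

# Route 1: log-ratio of independent gamma variates
x_gamma = np.log(rng.gamma(a, size=n)) - np.log(rng.gamma(b, size=n))

# Route 2: logit of a beta variate
p = rng.beta(a, b, size=n)
x_beta = np.log(p) - np.log1p(-p)

# The two samples should be indistinguishable in distribution
stat, pvalue = ks_2samp(x_gamma, x_beta)
assert pvalue > 1e-4
```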
A flexible, four-parameter family can be obtained by adding location and scale parameters. One way to do this is: if x \sim B_{\sigma}(\alpha,\beta), then let y = kx+\delta, where k>0 and \delta\in\mathbb{R}. This has the disadvantage that, in general, \delta \ne E[y] and k^2 \ne \mathrm{var}[y]. Instead, recall that the mean and variance of x are:

\begin{align}
\tilde\mu &= \psi(\alpha)-\psi(\beta), & \tilde s^2 &= \psi'(\alpha)+\psi'(\beta)
\end{align}

Now introduce the location parameter \mu\in\mathbb{R} and scale parameter s>0 via the transformation:

\begin{align}
y = \mu + \frac{s}{\tilde s}(x-\tilde\mu) \iff x = \tilde\mu + \frac{\tilde s}{s}(y-\mu)
\end{align}

so that \mu = E[y] and s^2 = \mathrm{var}[y]. The notation y \sim \bar B_{\sigma}(\alpha,\beta,\mu,s^2) is used to refer to this four-parameter family.
If the pdf for x \sim B_{\sigma}(\alpha,\beta) is f(x;\alpha,\beta), then the pdf for y \sim \bar B_{\sigma}(\alpha,\beta,\mu,s^2) is:

\bar f(y;\alpha,\beta,\mu,s^2) = \frac{\tilde s}{s}\,f(x;\alpha,\beta),

where it is understood that x is computed as a function of y,\alpha,\beta,\mu,s, as detailed above. Note that \bar B_{\sigma}(\alpha,\beta,0,1) is a standardized distribution, with zero mean and unit variance.
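The location-scale pdf can be sketched as follows (our own helper names; numerical quadrature confirms that \bar f integrates to one and that \mu is indeed the mean of y):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import betaln, digamma, polygamma

def type4_logpdf(x, a, b):
    # log f = -a*log(1+e^{-x}) - b*log(1+e^{x}) - log B(a, b), overflow-safe
    return -a * np.logaddexp(0, -x) - b * np.logaddexp(0, x) - betaln(a, b)

def type4_bar_pdf(y, a, b, mu, s):
    """Location-scale pdf: (s_tilde/s) * f(x), with x = mu_tilde + (s_tilde/s)(y - mu)."""
    mu_t = digamma(a) - digamma(b)                    # mean of the base variate
    s_t = np.sqrt(polygamma(1, a) + polygamma(1, b))  # std of the base variate
    x = mu_t + (s_t / s) * (y - mu)
    return (s_t / s) * np.exp(type4_logpdf(x, a, b))

a, b, mu, s = 2.0, 4.0, 1.0, 0.5
total, _ = quad(lambda y: type4_bar_pdf(y, a, b, mu, s), -20, 20)
mean, _ = quad(lambda y: y * type4_bar_pdf(y, a, b, mu, s), -20, 20)
assert abs(total - 1.0) < 1e-6   # normalization
assert abs(mean - mu) < 1e-6     # mu is the mean of y
```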
In this section, maximum-likelihood estimation of the distribution parameters, given a dataset x_1,\ldots,x_n, is discussed in turn for the families B_{\sigma}(\alpha,\beta) and \bar B_{\sigma}(\alpha,\beta,\mu,s^2).
As noted above, B_{\sigma}(\alpha,\beta) is an exponential family, so that the maximum-likelihood estimate depends only on the averaged sufficient statistics:

\begin{align}
\overline{\log\sigma(x)} = \frac{1}{n}\sum_i \log\sigma(x_i) \qquad\text{and}\qquad \overline{\log\sigma(-x)} = \frac{1}{n}\sum_i \log\sigma(-x_i)
\end{align}

after which the estimate is:

\begin{align}
\hat\alpha,\hat\beta = \operatorname*{argmax}_{\alpha,\beta>0}\; \alpha\,\overline{\log\sigma(x)} + \beta\,\overline{\log\sigma(-x)} - \log B(\alpha,\beta)
\end{align}

This two-dimensional optimization can be solved numerically; the positivity constraints can be removed by the reparametrization \theta_1=\log\alpha and \theta_2=\log\beta, after which unconstrained methods apply.
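A minimal sketch of this two-dimensional MLE (our own code, with scipy.optimize.minimize for the unconstrained optimization in \theta=(\log\alpha,\log\beta), and the stationarity conditions used as a convergence check):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln, digamma, expit

rng = np.random.default_rng(2)
a_true, b_true, n = 2.0, 3.0, 50_000
# synthetic data via the gamma log-ratio construction
x = np.log(rng.gamma(a_true, size=n)) - np.log(rng.gamma(b_true, size=n))

# averaged sufficient statistics
t1 = np.mean(np.log(expit(x)))    # mean of log(sigma(x))
t2 = np.mean(np.log(expit(-x)))   # mean of log(sigma(-x))

def neg_loglik(theta):
    a, b = np.exp(theta)           # theta = (log alpha, log beta) removes constraints
    return -(a * t1 + b * t2 - betaln(a, b))

res = minimize(neg_loglik, x0=[0.0, 0.0])
a_hat, b_hat = np.exp(res.x)

# stationarity: psi(a) - psi(a+b) = t1 at the maximum
assert abs(digamma(a_hat) - digamma(a_hat + b_hat) - t1) < 1e-4
assert abs(a_hat - a_true) < 0.1 and abs(b_hat - b_true) < 0.15
```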
An alternative is to use an EM-algorithm based on the composition: x - \log(\gamma\delta) \sim B_{\sigma}(\alpha,\beta) if z \sim \mathrm{Gamma}(\beta,\gamma) and e^{x}\mid z \sim \mathrm{Gamma}(\alpha, z/\delta). The E-step requires the expectations \left\langle z\right\rangle_{P(z\mid\cdots)} and \left\langle\log z\right\rangle_{P(z\mid\cdots)}, both of which are available in closed form, since the posterior for z is again a gamma distribution.
The maximum-likelihood problem for \bar B_{\sigma}(\alpha,\beta,\mu,s^2), with pdf \bar f, is:

\hat\alpha,\hat\beta,\hat\mu,\hat s = \operatorname*{argmax}_{\alpha,\beta,\mu,s}\; \frac{1}{n}\sum_i \log\bar f(x_i;\alpha,\beta,\mu,s^2)

which can be solved by four-dimensional numerical optimization.
For this problem, numerical optimization may fail unless the initial location and scale parameters are chosen appropriately. However, the above-mentioned interpretability of these parameters in the parametrization of \bar B_{\sigma} can be used to do this: the mean and variance of the data can be used as initial values for \mu and s^2, respectively.
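This initialization strategy can be sketched as follows (our own code, fitting the four-parameter family to synthetic logistic data; the data mean and standard deviation seed \mu and s, and \alpha=\beta=1 seeds the shape parameters):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln, digamma, polygamma

rng = np.random.default_rng(3)
# test data from a shifted standard logistic (alpha = beta = 1, mean 2)
y = rng.logistic(loc=2.0, scale=1.0, size=20_000)

def neg_loglik(params):
    th1, th2, mu, log_s = params
    a, b, s = np.exp(th1), np.exp(th2), np.exp(log_s)
    mu_t = digamma(a) - digamma(b)
    s_t = np.sqrt(polygamma(1, a) + polygamma(1, b))
    x = mu_t + (s_t / s) * (y - mu)                  # standardize back to the base family
    logf = -a * np.logaddexp(0, -x) - b * np.logaddexp(0, x) - betaln(a, b)
    return -np.mean(logf + np.log(s_t / s))          # include the Jacobian term

# initialize mu and s at the data mean and standard deviation
x0 = [0.0, 0.0, y.mean(), np.log(y.std())]
res = minimize(neg_loglik, x0)
mu_hat = res.x[2]
assert abs(mu_hat - 2.0) < 0.1    # fitted location matches the data mean
```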