Folded normal distribution explained

The folded normal distribution is a probability distribution related to the normal distribution. Given a normally distributed random variable X with mean μ and variance σ², the random variable Y = |X| has a folded normal distribution. Such a case may be encountered if only the magnitude of some variable is recorded, but not its sign. The distribution is called "folded" because probability mass to the left of x = 0 is folded over by taking the absolute value. In the physics of heat conduction, the folded normal distribution is a fundamental solution of the heat equation on the half space; it corresponds to having a perfect insulator on a hyperplane through the origin.
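As a quick numerical illustration of the folding operation, the empirical mean of |X| for a normal sample matches the closed-form folded-normal mean given in the Definitions section. This is a minimal sketch assuming NumPy; the parameters μ = σ = 1 and the sample size are arbitrary choices.

```python
import math

import numpy as np

# Fold a normal sample: Y = |X| with X ~ N(mu, sigma^2).
rng = np.random.default_rng(0)
mu, sigma = 1.0, 1.0
y = np.abs(rng.normal(mu, sigma, size=200_000))

# Closed-form folded-normal mean:
# mu_Y = sigma*sqrt(2/pi)*exp(-mu^2/(2*sigma^2)) + mu*erf(mu/sqrt(2*sigma^2))
mu_y = (sigma * math.sqrt(2.0 / math.pi) * math.exp(-mu**2 / (2.0 * sigma**2))
        + mu * math.erf(mu / math.sqrt(2.0 * sigma**2)))

print(float(y.mean()), mu_y)
```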

Definitions

Density

The probability density function (PDF) is given by

f_Y(x;\mu,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \left[ e^{-\frac{(x-\mu)^2}{2\sigma^2}} + e^{-\frac{(x+\mu)^2}{2\sigma^2}} \right]

for x ≥ 0, and 0 everywhere else. An alternative formulation is given by

f(x) = \sqrt{\frac{2}{\pi\sigma^2}} \, e^{-\frac{x^2+\mu^2}{2\sigma^2}} \cosh{\left(\frac{\mu x}{\sigma^2}\right)},

where cosh is the hyperbolic cosine function. It follows that the cumulative distribution function (CDF) is given by:

F_Y(x;\mu,\sigma^2) = \frac{1}{2}\left[ \operatorname{erf}\left(\frac{x+\mu}{\sqrt{2\sigma^2}}\right) + \operatorname{erf}\left(\frac{x-\mu}{\sqrt{2\sigma^2}}\right) \right]

for x ≥ 0, where erf is the error function. This expression reduces to the CDF of the half-normal distribution when μ = 0.
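The density and CDF above can be cross-checked against SciPy's foldnorm distribution, which parametrises the folded normal with shape c = μ/σ and scale σ. A minimal sketch; the evaluation point and parameters are arbitrary.

```python
import math

from scipy.stats import foldnorm

def folded_pdf(x, mu, sigma):
    """Density of Y = |X|, X ~ N(mu, sigma^2), for x >= 0."""
    c = 1.0 / math.sqrt(2.0 * math.pi * sigma**2)
    return c * (math.exp(-(x - mu) ** 2 / (2.0 * sigma**2))
                + math.exp(-(x + mu) ** 2 / (2.0 * sigma**2)))

def folded_cdf(x, mu, sigma):
    """CDF: (erf((x+mu)/sqrt(2 sigma^2)) + erf((x-mu)/sqrt(2 sigma^2))) / 2."""
    s = math.sqrt(2.0 * sigma**2)
    return 0.5 * (math.erf((x + mu) / s) + math.erf((x - mu) / s))

# SciPy's foldnorm uses shape c = mu/sigma and scale = sigma.
mu, sigma, x = 1.3, 0.7, 0.9
print(folded_pdf(x, mu, sigma), foldnorm.pdf(x, mu / sigma, scale=sigma))
print(folded_cdf(x, mu, sigma), foldnorm.cdf(x, mu / sigma, scale=sigma))
```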

The mean of the folded distribution is then

\mu_Y = \sigma\sqrt{\frac{2}{\pi}} \, \exp\left(-\frac{\mu^2}{2\sigma^2}\right) + \mu \, \operatorname{erf}\left(\frac{\mu}{\sqrt{2\sigma^2}}\right)

or

\mu_Y = \sqrt{\frac{2}{\pi}} \, \sigma \, e^{-\frac{\mu^2}{2\sigma^2}} + \mu\left[1 - 2\Phi\left(-\frac{\mu}{\sigma}\right)\right]

where \Phi is the normal cumulative distribution function:

\Phi(x) = \frac{1}{2}\left[1 + \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right].

The variance is then easily expressed in terms of the mean:

\sigma_Y^2 = \mu^2 + \sigma^2 - \mu_Y^2 \, .

Both the mean (μ) and variance (σ²) of X in the original normal distribution can be interpreted as the location and scale parameters of Y in the folded distribution.
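The mean and variance formulas can be verified against SciPy's foldnorm.stats, using the same c = μ/σ, scale = σ parametrisation; the parameter values below are arbitrary.

```python
import math

from scipy.stats import foldnorm

def folded_mean_var(mu, sigma):
    """Closed-form mean and variance of Y = |X|, X ~ N(mu, sigma^2)."""
    mu_y = (sigma * math.sqrt(2.0 / math.pi) * math.exp(-mu**2 / (2.0 * sigma**2))
            + mu * math.erf(mu / math.sqrt(2.0 * sigma**2)))
    var_y = mu**2 + sigma**2 - mu_y**2   # sigma_Y^2 = mu^2 + sigma^2 - mu_Y^2
    return mu_y, var_y

mu, sigma = 0.8, 1.5
m, v = folded_mean_var(mu, sigma)
sm, sv = foldnorm.stats(mu / sigma, scale=sigma, moments='mv')
print(m, v, float(sm), float(sv))
```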

Properties

Mode

The mode of the distribution is the value of x for which the density is maximised. To find it, we take the first derivative of the density with respect to x and set it equal to zero. Unfortunately, there is no closed-form solution. We can, however, write the derivative in a better way and end up with a non-linear equation.
\frac{df(x)}{dx} = 0 \quad \Rightarrow \quad -\frac{\left(x-\mu\right)}{\sigma^2} e^{-\frac{\left(x-\mu\right)^2}{2\sigma^2}} - \frac{\left(x+\mu\right)}{\sigma^2} e^{-\frac{\left(x+\mu\right)^2}{2\sigma^2}} = 0

x\left[ e^{-\frac{\left(x-\mu\right)^2}{2\sigma^2}} + e^{-\frac{\left(x+\mu\right)^2}{2\sigma^2}} \right] - \mu\left[ e^{-\frac{\left(x-\mu\right)^2}{2\sigma^2}} - e^{-\frac{\left(x+\mu\right)^2}{2\sigma^2}} \right] = 0

x\left(1 + e^{-\frac{2\mu x}{\sigma^2}}\right) - \mu\left(1 - e^{-\frac{2\mu x}{\sigma^2}}\right) = 0

\left(\mu + x\right) e^{-\frac{2\mu x}{\sigma^2}} = \mu - x

x = -\frac{\sigma^2}{2\mu} \log{\frac{\mu-x}{\mu+x}} \, .

Tsagris et al. (2014) saw from numerical investigation that when \mu < \sigma, the maximum is met at x = 0, and when \mu becomes greater than 3\sigma, the maximum approaches \mu. This is to be expected, since in this case the folded normal converges to the normal distribution. In order to avoid any trouble with negative variances during estimation, the exponentiation of the variance parameter is suggested. Alternatively, a constraint can be added, such that if the optimiser proposes a negative variance, the value of the log-likelihood is set to NA or to a very small value.
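Since the mode has no closed form, it can be located numerically, which also illustrates the behaviour noted by Tsagris et al. (2014). This sketch assumes SciPy; the search interval [0, μ + 5σ] is an arbitrary choice.

```python
from scipy.optimize import minimize_scalar
from scipy.stats import foldnorm

def folded_mode(mu, sigma):
    """Numerically locate the mode of the folded normal (no closed form exists)."""
    res = minimize_scalar(lambda x: -foldnorm.pdf(x, mu / sigma, scale=sigma),
                          bounds=(0.0, mu + 5.0 * sigma), method='bounded')
    return res.x

print(folded_mode(0.5, 1.0))  # mu < sigma: the density peaks at (or extremely near) 0
print(folded_mode(4.0, 1.0))  # mu > 3*sigma: the mode is close to mu
```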

Characteristic function and other related functions

The characteristic function is given by

\varphi_X(t) = e^{-\frac{\sigma^2t^2}{2} + i\mu t} \, \Phi\left(\frac{\mu}{\sigma} + i\sigma t\right) + e^{-\frac{\sigma^2t^2}{2} - i\mu t} \, \Phi\left(-\frac{\mu}{\sigma} + i\sigma t\right) \, .

The moment generating function is given by

M_X(t) = \varphi_X(-it) = e^{\frac{\sigma^2t^2}{2} + \mu t} \, \Phi\left(\frac{\mu}{\sigma} + \sigma t\right) + e^{\frac{\sigma^2t^2}{2} - \mu t} \, \Phi\left(-\frac{\mu}{\sigma} + \sigma t\right) \, .

The cumulant generating function is given by

K_X(t) = \log{M_X(t)} = \left(\frac{\sigma^2t^2}{2} + \mu t\right) + \log{\left\lbrace 1 - \Phi\left(-\frac{\mu}{\sigma} - \sigma t\right) + e^{-2\mu t}\left[1 - \Phi\left(\frac{\mu}{\sigma} - \sigma t\right)\right] \right\rbrace} \, .

The Laplace transform is given by

E\left(e^{-tX}\right) = e^{\frac{\sigma^2t^2}{2} - \mu t}\left[1 - \Phi\left(-\frac{\mu}{\sigma} + \sigma t\right)\right] + e^{\frac{\sigma^2t^2}{2} + \mu t}\left[1 - \Phi\left(\frac{\mu}{\sigma} + \sigma t\right)\right] \, .

The Fourier transform is given by

\hat{f}(t) = \varphi_X(-2\pi t) = e^{-2\pi^2\sigma^2t^2 - i2\pi\mu t}\left[1 - \Phi\left(-\frac{\mu}{\sigma} + i2\pi\sigma t\right)\right] + e^{-2\pi^2\sigma^2t^2 + i2\pi\mu t}\left[1 - \Phi\left(\frac{\mu}{\sigma} + i2\pi\sigma t\right)\right] \, .
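The moment generating function can be sanity-checked by numerically integrating e^{tx} against the density. A sketch assuming SciPy; the parameter values are arbitrary.

```python
import math

from scipy.integrate import quad
from scipy.stats import norm

def folded_mgf(t, mu, sigma):
    """M(t) = e^{s^2 t^2/2 + mu t} Phi(mu/s + s t) + e^{s^2 t^2/2 - mu t} Phi(-mu/s + s t)."""
    return (math.exp(sigma**2 * t**2 / 2.0 + mu * t) * norm.cdf(mu / sigma + sigma * t)
            + math.exp(sigma**2 * t**2 / 2.0 - mu * t) * norm.cdf(-mu / sigma + sigma * t))

def folded_pdf(x, mu, sigma):
    c = 1.0 / math.sqrt(2.0 * math.pi * sigma**2)
    return c * (math.exp(-(x - mu) ** 2 / (2.0 * sigma**2))
                + math.exp(-(x + mu) ** 2 / (2.0 * sigma**2)))

mu, sigma, t = 1.0, 0.8, 0.3
num, _ = quad(lambda x: math.exp(t * x) * folded_pdf(x, mu, sigma), 0.0, math.inf)
print(folded_mgf(t, mu, sigma), num)
```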

Related distributions

The folded normal distribution is related to the modified half-normal distribution, whose density on (0,\infty) is given as

f(x) = \frac{2\beta^{\frac{\alpha}{2}} x^{\alpha-1} \exp(-\beta x^2 + \gamma x)}{\Psi{\left(\frac{\alpha}{2}, \frac{\gamma}{\sqrt{\beta}}\right)}} \, ,

where \Psi(\alpha,z) = {}_1\Psi_1\left(\begin{matrix}\left(\alpha,\frac{1}{2}\right)\\(1,0)\end{matrix};z\right) denotes the Fox–Wright Psi function.

Statistical inference

Estimation of parameters

There are a few ways of estimating the parameters of the folded normal. All of them are essentially the maximum likelihood estimation procedure, but in some cases a numerical maximisation is performed, whereas in other cases the root of an equation is searched for. The log-likelihood of the folded normal, when a sample x_i of size n is available, can be written in the following way

l = -\frac{n}{2}\log{\left(2\pi\sigma^2\right)} + \sum_{i=1}^n \log{\left[ e^{-\frac{\left(x_i-\mu\right)^2}{2\sigma^2}} + e^{-\frac{\left(x_i+\mu\right)^2}{2\sigma^2}} \right]}

l = -\frac{n}{2}\log{\left(2\pi\sigma^2\right)} + \sum_{i=1}^n \log{\left[ e^{-\frac{\left(x_i-\mu\right)^2}{2\sigma^2}} \left(1 + e^{-\frac{2\mu x_i}{\sigma^2}}\right) \right]}

l = -\frac{n}{2}\log{\left(2\pi\sigma^2\right)} - \sum_{i=1}^n \frac{\left(x_i-\mu\right)^2}{2\sigma^2} + \sum_{i=1}^n \log{\left(1 + e^{-\frac{2\mu x_i}{\sigma^2}}\right)}
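This log-likelihood can be maximised numerically. The following Python sketch (an illustrative stand-in for an R workflow, not a port of any package) minimises the negative log-likelihood with scipy.optimize.minimize, exponentiating the variance parameter to keep it positive; the simulated data and starting values are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

def folded_negloglik(theta, y):
    """Negative log-likelihood; theta = (mu, log(sigma^2)) keeps the variance positive."""
    mu, s2 = theta[0], np.exp(theta[1])
    ll = (-0.5 * np.log(2.0 * np.pi * s2)
          - (y - mu) ** 2 / (2.0 * s2)
          + np.logaddexp(0.0, -2.0 * mu * y / s2))  # log(1 + e^{-2 mu y / s2}), overflow-safe
    return -np.sum(ll)

rng = np.random.default_rng(1)
y = np.abs(rng.normal(1.0, 1.0, size=10_000))
fit = minimize(folded_negloglik, x0=[np.mean(y), np.log(np.var(y))],
               args=(y,), method='Nelder-Mead')
mu_hat, s2_hat = abs(fit.x[0]), np.exp(fit.x[1])  # the sign of mu is not identifiable
print(mu_hat, s2_hat)
```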

In R, the package Rfast offers the command foldnorm.mle, which obtains the MLE very quickly; alternatively, the commands optim or nlm will fit this distribution. The maximisation is easy, since only two parameters (\mu and \sigma^2) are involved. Note that both positive and negative values for \mu are acceptable: \mu belongs to the real line, and the sign is not important because the distribution is symmetric with respect to it. In R, a fitting function of the form folded <- function(y) can simply evaluate the log-likelihood above and pass it to an optimiser such as optim. The partial derivatives of the log-likelihood are written as
\frac{\partial l}{\partial \mu} = \frac{\sum_{i=1}^n \left(x_i-\mu\right)}{\sigma^2} - \frac{2}{\sigma^2} \sum_{i=1}^n \frac{x_i e^{-\frac{2\mu x_i}{\sigma^2}}}{1 + e^{-\frac{2\mu x_i}{\sigma^2}}}

\frac{\partial l}{\partial \mu} = \frac{\sum_{i=1}^n \left(x_i-\mu\right)}{\sigma^2} - \frac{2}{\sigma^2} \sum_{i=1}^n \frac{x_i}{1 + e^{\frac{2\mu x_i}{\sigma^2}}}

  and

\frac{\partial l}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{\sum_{i=1}^n \left(x_i-\mu\right)^2}{2\sigma^4} + \frac{2\mu}{\sigma^4} \sum_{i=1}^n \frac{x_i e^{-\frac{2\mu x_i}{\sigma^2}}}{1 + e^{-\frac{2\mu x_i}{\sigma^2}}}

\frac{\partial l}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{\sum_{i=1}^n \left(x_i-\mu\right)^2}{2\sigma^4} + \frac{2\mu}{\sigma^4} \sum_{i=1}^n \frac{x_i}{1 + e^{\frac{2\mu x_i}{\sigma^2}}} \, .

By equating the first partial derivative of the log-likelihood to zero, we obtain a nice relationship

\sum_{i=1}^n \frac{x_i}{1 + e^{\frac{2\mu x_i}{\sigma^2}}} = \frac{\sum_{i=1}^n \left(x_i-\mu\right)}{2} \, .

Note that the above equation has three solutions: one at zero and two more with opposite signs. By substituting the above equation into the partial derivative of the log-likelihood w.r.t. \sigma^2 and equating it to zero, we get the following expression for the variance
\sigma^2 = \frac{\sum_{i=1}^n \left(x_i-\mu\right)^2}{n} + \frac{2\mu \sum_{i=1}^n \left(x_i-\mu\right)}{n} = \frac{\sum_{i=1}^n \left(x_i^2-\mu^2\right)}{n} = \frac{\sum_{i=1}^n x_i^2}{n} - \mu^2 \, ,

which is the same formula as in the normal distribution. A main difference here is that \mu and \sigma^2 are not statistically independent. The above relationships can be used to obtain maximum likelihood estimates in an efficient recursive way: we start with an initial value for \sigma^2 and find the positive root (\mu) of the last equation; then we get an updated value of \sigma^2. The procedure is repeated until the change in the log-likelihood value is negligible. Another, easier and more efficient, way is to perform a search algorithm. Let us write the last equation in a more elegant way:
2\sum_{i=1}^n \frac{x_i}{1 + e^{\frac{2\mu x_i}{\sigma^2}}} - \sum_{i=1}^n \frac{x_i\left(1 + e^{\frac{2\mu x_i}{\sigma^2}}\right)}{1 + e^{\frac{2\mu x_i}{\sigma^2}}} + n\mu = 0

\sum_{i=1}^n \frac{x_i\left(1 - e^{\frac{2\mu x_i}{\sigma^2}}\right)}{1 + e^{\frac{2\mu x_i}{\sigma^2}}} + n\mu = 0 \, .

It becomes clear that the optimisation of the log-likelihood with respect to the two parameters has turned into a root search of a function. This is of course identical to the previous root search. Tsagris et al. (2014) spotted that there are three roots to this equation for \mu, i.e. three possible values of \mu that satisfy it: -\mu and +\mu, which are the maximum likelihood estimates, and 0, which corresponds to the minimum log-likelihood.
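The recursive scheme above can be sketched in Python. Noting that (1 - e^z)/(1 + e^z) = -tanh(z/2), the root equation becomes \mu = \frac{1}{n}\sum_{i=1}^n x_i \tanh\left(\frac{\mu x_i}{\sigma^2}\right), which suggests the following fixed-point iteration; the starting values, tolerance, and simulated data are arbitrary choices for illustration.

```python
import numpy as np

def folded_mle_fixedpoint(y, tol=1e-9, max_iter=1000):
    """Alternate mu = mean(y * tanh(mu*y/s2)) with the variance update s2 = mean(y^2) - mu^2."""
    mu, s2 = np.mean(y), np.var(y)  # positive start, so we head for the positive root
    for _ in range(max_iter):
        mu_new = np.mean(y * np.tanh(mu * y / s2))
        s2_new = np.mean(y ** 2) - mu_new ** 2
        if abs(mu_new - mu) < tol and abs(s2_new - s2) < tol:
            return mu_new, s2_new
        mu, s2 = mu_new, s2_new
    return mu, s2

rng = np.random.default_rng(2)
y = np.abs(rng.normal(1.0, 1.0, size=20_000))
mu_fp, s2_fp = folded_mle_fixedpoint(y)
print(mu_fp, s2_fp)
```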



Notes and References

  1. Sun, Jingchao; Kong, Maiying; Pal, Subhadip (2021). "The Modified-Half-Normal distribution: Properties and an efficient sampling scheme". Communications in Statistics – Theory and Methods. 52 (5): 1591–1613. doi:10.1080/03610926.2021.1934700.
  2. Tsagris, Michail; Beneki, Christina; Hassani, Hossein (2014). "On the folded normal distribution". Mathematics. 2 (1): 12–28.