In probability theory and directional statistics, the von Mises distribution (also known as the circular normal distribution or the Tikhonov distribution) is a continuous probability distribution on the circle. It is a close approximation to the wrapped normal distribution, which is the circular analogue of the normal distribution. A freely diffusing angle \theta on a circle is a wrapped normally distributed random variable with an unwrapped variance that grows linearly in time; the von Mises distribution, by contrast, is the stationary distribution of a drift-and-diffusion process on the circle in a harmonic potential, i.e. with a preferred orientation.
The von Mises probability density function for the angle x is given by:[2]

f(x\mid\mu,\kappa)=\frac{\exp(\kappa\cos(x-\mu))}{2\pi I_0(\kappa)}
where I_0(\kappa) is the modified Bessel function of the first kind of order 0, with this scaling constant chosen so that the distribution integrates to unity over any interval of length 2\pi. The parameters \mu and 1/\kappa are analogous to \mu and \sigma^2 (the mean and variance) in the normal distribution: \mu is a measure of location (the distribution is clustered around \mu), and \kappa is a measure of concentration (a reciprocal measure of dispersion, so 1/\kappa is analogous to \sigma^2). If \kappa is zero, the distribution is uniform, and for small \kappa, it is close to uniform. If \kappa is large, the distribution becomes very concentrated about the angle \mu, with \kappa being a measure of the concentration. In fact, as \kappa increases, the distribution approaches a normal distribution in x with mean \mu and variance 1/\kappa.
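As a quick numerical illustration (a sketch, not part of the article; the SciPy names are the library's, not the source's), the density above can be written directly from the formula and checked against `scipy.stats.vonmises`:

```python
import numpy as np
from scipy.special import i0          # modified Bessel function I_0
from scipy.stats import vonmises

def vonmises_pdf(x, mu, kappa):
    """f(x | mu, kappa) = exp(kappa*cos(x - mu)) / (2*pi*I_0(kappa))."""
    return np.exp(kappa * np.cos(x - mu)) / (2 * np.pi * i0(kappa))

mu, kappa = 0.5, 2.0
x = np.linspace(-np.pi, np.pi, 9)

# SciPy parametrizes the distribution with kappa as the shape and mu as loc.
assert np.allclose(vonmises_pdf(x, mu, kappa), vonmises.pdf(x, kappa, loc=mu))
```

Larger \kappa concentrates the density around \mu; the mode is always at x = \mu.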
The probability density can be expressed as a series of Bessel functions[3]

f(x\mid\mu,\kappa)=\frac{1}{2\pi}\left(1+\frac{2}{I_0(\kappa)}\sum_{j=1}^{\infty}I_j(\kappa)\cos[j(x-\mu)]\right)

where I_j(x) is the modified Bessel function of order j.
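A short sketch (an assumed illustration, not from the article) confirming that a truncation of this Bessel series reproduces the closed-form density:

```python
import numpy as np
from scipy.special import i0, iv      # I_0 and I_j (general integer order)

def pdf_closed(x, mu, kappa):
    # Closed form: exp(kappa*cos(x - mu)) / (2*pi*I_0(kappa))
    return np.exp(kappa * np.cos(x - mu)) / (2 * np.pi * i0(kappa))

def pdf_series(x, mu, kappa, terms=30):
    # Series: (1/2pi) * (1 + (2/I_0(kappa)) * sum_j I_j(kappa)*cos(j*(x-mu)))
    s = sum(iv(j, kappa) * np.cos(j * (x - mu)) for j in range(1, terms + 1))
    return (1 + 2 * s / i0(kappa)) / (2 * np.pi)

x = np.linspace(-np.pi, np.pi, 7)
assert np.allclose(pdf_series(x, 0.3, 1.5), pdf_closed(x, 0.3, 1.5))
```

Because I_j(\kappa) decays rapidly in j for fixed \kappa, a few dozen terms already match the closed form to within floating-point tolerance.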
The cumulative distribution function is not analytic and is best found by integrating the above series. The indefinite integral of the probability density is:
\Phi(x\mid\mu,\kappa)=\int f(t\mid\mu,\kappa)\,dt=\frac{1}{2\pi}\left(x+\frac{2}{I_0(\kappa)}\sum_{j=1}^{\infty}\frac{I_j(\kappa)\sin[j(x-\mu)]}{j}\right).
The cumulative distribution function will be a function of the lower limit of integration x_0:

F(x\mid\mu,\kappa)=\Phi(x\mid\mu,\kappa)-\Phi(x_0\mid\mu,\kappa).
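A sketch (illustration only; the helper names are mine) evaluating F via the series for \Phi and checking it against direct quadrature of the density:

```python
import numpy as np
from scipy.special import i0, iv
from scipy.integrate import quad

def phi(x, mu, kappa, terms=50):
    # Indefinite integral Phi of the density, via the Bessel series.
    s = sum(iv(j, kappa) * np.sin(j * (x - mu)) / j for j in range(1, terms + 1))
    return (x + 2 * s / i0(kappa)) / (2 * np.pi)

def cdf(x, mu, kappa, x0=-np.pi):
    # F(x) = Phi(x) - Phi(x0), with x0 the lower limit of integration.
    return phi(x, mu, kappa) - phi(x0, mu, kappa)

mu, kappa = 0.0, 2.0
pdf = lambda t: np.exp(kappa * np.cos(t - mu)) / (2 * np.pi * i0(kappa))
direct, _ = quad(pdf, -np.pi, 1.0)
assert abs(cdf(1.0, mu, kappa) - direct) < 1e-7
```

With x_0 = -\pi and \mu = 0, F(\pi) = 1, as the sine terms vanish at the endpoints.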
The moments of the von Mises distribution are usually calculated as the moments of the complex exponential z = e^{ix} rather than the angle x itself. These moments are referred to as circular moments. The variance calculated from these moments is referred to as the circular variance. The one exception to this is that the "mean" usually refers to the argument of the complex mean.
The nth raw moment of z is:
m_n=\langle z^n\rangle=\int_\Gamma z^n f(x\mid\mu,\kappa)\,dx=\frac{I_{|n|}(\kappa)}{I_0(\kappa)}\,e^{in\mu}

where the integral is over any interval \Gamma of length 2\pi, and the Bessel function satisfies

I_{|n|}(\kappa)=\frac{1}{\pi}\int_0^\pi e^{\kappa\cos(x)}\cos(nx)\,dx.
The mean of the complex exponential z is then just
m_1=\frac{I_1(\kappa)}{I_0(\kappa)}\,e^{i\mu}
and the circular mean value of the angle x is then taken to be the argument \mu. This is the expected or preferred direction of the angular random variable. The variance of z, or the circular variance of x, is:

\textrm{var}(x)=1-E[\cos(x-\mu)]=1-\frac{I_1(\kappa)}{I_0(\kappa)}.
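A sketch (not from the article) verifying the circular-moment formula and the circular variance by direct integration against the density:

```python
import numpy as np
from scipy.special import i0, iv
from scipy.integrate import quad

mu, kappa = 0.7, 1.8
pdf = lambda x: np.exp(kappa * np.cos(x - mu)) / (2 * np.pi * i0(kappa))

def moment(n):
    # m_n = <z^n>: integrate cos(nx) and sin(nx) against the density.
    re, _ = quad(lambda x: np.cos(n * x) * pdf(x), -np.pi, np.pi)
    im, _ = quad(lambda x: np.sin(n * x) * pdf(x), -np.pi, np.pi)
    return re + 1j * im

for n in (1, 2, 3):
    expected = iv(abs(n), kappa) / i0(kappa) * np.exp(1j * n * mu)
    assert abs(moment(n) - expected) < 1e-7

# Circular variance: var(x) = 1 - E[cos(x - mu)] = 1 - I1(kappa)/I0(kappa)
circ_var = 1 - iv(1, kappa) / i0(kappa)
assert abs(circ_var - (1 - abs(moment(1)))) < 1e-7
```

Note that |m_1| = I_1(\kappa)/I_0(\kappa) depends only on \kappa, while \mathrm{Arg}(m_1) = \mu.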
When \kappa is large, the distribution becomes very concentrated about \mu and resembles a normal distribution:

f(x\mid\mu,\kappa)\approx\frac{1}{\sigma\sqrt{2\pi}}\exp\left[\frac{-(x-\mu)^2}{2\sigma^2}\right]

where \sigma^2 = 1/\kappa. In the limit \kappa\to 0 the distribution becomes a circular uniform distribution:

\lim_{\kappa\to 0}f(x\mid\mu,\kappa)=U(x)

where the interval for the uniform distribution U(x) is the chosen interval of length 2\pi; that is, U(x)=1/(2\pi) when x is in the interval and U(x)=0 when x is not.
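Both limits can be sketched numerically (an assumed illustration, not part of the article):

```python
import numpy as np
from scipy.special import i0

def pdf(x, mu, kappa):
    return np.exp(kappa * np.cos(x - mu)) / (2 * np.pi * i0(kappa))

mu = 0.0

# Large kappa: compare with the normal density N(mu, 1/kappa) near the mode.
kappa = 100.0
sigma = 1 / np.sqrt(kappa)
x = 0.1
normal = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
assert abs(pdf(x, mu, kappa) - normal) / normal < 0.01

# Small kappa: density approaches the circular uniform value 1/(2*pi).
assert abs(pdf(1.0, mu, 1e-8) - 1 / (2 * np.pi)) < 1e-6
```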
A series of N measurements z_n=e^{i\theta_n} drawn from a von Mises distribution may be used to estimate certain parameters of the distribution. The average of the series \overline{z} is defined as

\overline{z}=\frac{1}{N}\sum_{n=1}^{N}z_n

and its expectation value will be just the first moment:

\langle\overline{z}\rangle=\frac{I_1(\kappa)}{I_0(\kappa)}\,e^{i\mu}.

In other words, \overline{z} is an unbiased estimator of the first moment. If we assume that the mean \mu lies in the interval [-\pi,\pi], then \mathrm{Arg}(\overline{z}) will be a (biased) estimator of the mean \mu.
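A sketch (not from the article; sample size and seed are my choices) of estimating the mean direction from samples via the argument of the sample mean of z_n:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, kappa = 0.8, 3.0
theta = rng.vonmises(mu, kappa, size=200_000)  # draw theta_n from VM(mu, kappa)

z_bar = np.mean(np.exp(1j * theta))  # sample mean of z_n = exp(i*theta_n)
mu_hat = np.angle(z_bar)             # Arg(z_bar) estimates mu
assert abs(mu_hat - mu) < 0.02
```

The modulus |z̄| is strictly less than 1 and shrinks toward 0 as the samples disperse, which is what the R̄ statistic below quantifies.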
Viewing the z_n as a set of vectors in the complex plane, the \bar{R}^2 statistic is the square of the length of the averaged vector:

\bar{R}^2=\overline{z}\,\overline{z}^*=\left(\frac{1}{N}\sum_{n=1}^{N}\cos\theta_n\right)^2+\left(\frac{1}{N}\sum_{n=1}^{N}\sin\theta_n\right)^2

and its expectation value is [6]

\langle\bar{R}^2\rangle=\frac{1}{N}+\frac{N-1}{N}\,\frac{I_1(\kappa)^2}{I_0(\kappa)^2}.
In other words, the statistic

R_e^2=\frac{N}{N-1}\left(\bar{R}^2-\frac{1}{N}\right)

will be an unbiased estimator of I_1(\kappa)^2/I_0(\kappa)^2, and solving the equation

R_e=\frac{I_1(\kappa)}{I_0(\kappa)}

for \kappa will yield a (biased) estimator of \kappa. In analogy to the linear case, the solution to the equation

\bar{R}=\frac{I_1(\kappa)}{I_0(\kappa)}

will yield the maximum likelihood estimate of \kappa, and the two estimators will be equal in the limit of large N.
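A sketch (not from the article; the root-finding approach and seed are my choices) of the moment-based estimate of \kappa, solving \bar{R} = I_1(\kappa)/I_0(\kappa) numerically:

```python
import numpy as np
from scipy.special import i0, i1
from scipy.optimize import brentq

rng = np.random.default_rng(1)
mu, kappa = 0.0, 2.0
theta = rng.vonmises(mu, kappa, size=100_000)

# Mean resultant length R_bar = |z_bar|.
r_bar = abs(np.mean(np.exp(1j * theta)))

# Solve R_bar = I1(kappa)/I0(kappa) for kappa; the ratio is monotone in kappa.
kappa_hat = brentq(lambda k: i1(k) / i0(k) - r_bar, 1e-6, 500.0)
assert abs(kappa_hat - kappa) < 0.1
```

Since I_1(\kappa)/I_0(\kappa) increases monotonically from 0 toward 1, a bracketing root-finder is sufficient here.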
The distribution of the sample mean \overline{z}=\bar{R}e^{i\overline{\theta}} for the von Mises distribution is given by:

P(\bar{R},\bar{\theta})\,d\bar{R}\,d\bar{\theta}=\frac{1}{(2\pi I_0(\kappa))^N}\int_\Gamma\prod_{n=1}^{N}\left(e^{\kappa\cos(\theta_n-\mu)}\,d\theta_n\right)=\frac{e^{\kappa N\bar{R}\cos(\bar{\theta}-\mu)}}{I_0(\kappa)^N}\left(\frac{1}{(2\pi)^N}\int_\Gamma\prod_{n=1}^{N}d\theta_n\right)
where N is the number of measurements and \Gamma consists of intervals of 2\pi in the variables, subject to the constraint that \bar{R} and \bar{\theta} are constant, where \bar{R} is the mean resultant:

\bar{R}^2=|\overline{z}|^2=\left(\frac{1}{N}\sum_{n=1}^{N}\cos(\theta_n)\right)^2+\left(\frac{1}{N}\sum_{n=1}^{N}\sin(\theta_n)\right)^2

and \overline{\theta} is the mean angle:

\overline{\theta}=\mathrm{Arg}(\overline{z}).
Note that the product term in parentheses is just the distribution of the mean for a circular uniform distribution.[7]
This means that the distribution of the mean direction \mu of a von Mises distribution VM(\mu,\kappa) is a von Mises distribution VM(\mu,\bar{R}N\kappa), or, equivalently, VM(\mu,R\kappa), where R=N\bar{R}.
By definition, the information entropy of the von Mises distribution is[2]
H=-\int_\Gamma f(\theta;\mu,\kappa)\,\ln(f(\theta;\mu,\kappa))\,d\theta

where \Gamma is any interval of length 2\pi. Taking \mu=0 without loss of generality (the entropy does not depend on \mu), the logarithm of the density of the von Mises distribution is straightforward:

\ln(f(\theta;\mu,\kappa))=-\ln(2\pi I_0(\kappa))+\kappa\cos(\theta)
The characteristic function representation for the von Mises distribution is:

f(\theta;\mu,\kappa)=\frac{1}{2\pi}\left(1+2\sum_{n=1}^{\infty}\phi_n\cos(n\theta)\right)

where \phi_n=I_{|n|}(\kappa)/I_0(\kappa).
Substituting these expressions into the entropy integral, exchanging the order of integration and summation, and using the orthogonality of the cosines, the entropy may be written:

H=\ln(2\pi I_0(\kappa))-\kappa\phi_1=\ln(2\pi I_0(\kappa))-\kappa\,\frac{I_1(\kappa)}{I_0(\kappa)}

For \kappa=0 the von Mises distribution becomes the circular uniform distribution and the entropy attains its maximum value \ln(2\pi).
Notice that the von Mises distribution maximizes the entropy when the real and imaginary parts of the first circular moment are specified[8] or, equivalently, when the circular mean and circular variance are specified.
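The closed-form entropy can be sketched numerically (an assumed illustration, not from the article) and checked against SciPy's generic entropy computation:

```python
import numpy as np
from scipy.special import i0, i1
from scipy.stats import vonmises

def vm_entropy(kappa):
    # H = ln(2*pi*I0(kappa)) - kappa * I1(kappa)/I0(kappa)
    return np.log(2 * np.pi * i0(kappa)) - kappa * i1(kappa) / i0(kappa)

for kappa in (0.5, 1.0, 4.0):
    assert abs(vm_entropy(kappa) - vonmises.entropy(kappa)) < 1e-6

# kappa -> 0 recovers the circular uniform maximum entropy ln(2*pi).
assert abs(vm_entropy(1e-12) - np.log(2 * np.pi)) < 1e-9
```

As expected, H decreases monotonically in \kappa: concentrating the distribution reduces its entropy.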