Darmois–Skitovich theorem explained

In mathematical statistics, the Darmois–Skitovich theorem characterizes the normal (Gaussian) distribution by the independence of two linear forms in independent random variables. The theorem was proved independently by G. Darmois and V. P. Skitovich in 1953.[1] [2]

Formulation

Let

\xi_j, \quad j = 1, 2, \ldots, n, \quad n \ge 2,

  be independent random variables. Let

\alpha_j, \beta_j

  be nonzero constants. If the linear forms

L_1 = \alpha_1 \xi_1 + \cdots + \alpha_n \xi_n

and

L_2 = \beta_1 \xi_1 + \cdots + \beta_n \xi_n

are independent, then all the random variables

\xi_j

have normal distributions (Gaussian distributions).
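The theorem can be illustrated numerically in the simplest case n = 2 with coefficients (1, 1) and (1, −1). The sketch below (not part of the original article; it assumes NumPy) probes dependence between the two linear forms via the covariance of their squares, which vanishes when the forms are independent. For Gaussian inputs the probe is near zero; for uniform inputs the forms are uncorrelated yet dependent, and the probe is clearly nonzero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def square_cov(x, y):
    """Sample covariance between x**2 and y**2 (a simple dependence probe)."""
    return np.cov(x**2, y**2)[0, 1]

# Gaussian inputs: L1 = xi1 + xi2 and L2 = xi1 - xi2 are independent,
# so the probe is approximately 0.
g1 = rng.standard_normal(n)
g2 = rng.standard_normal(n)
gauss_probe = square_cov(g1 + g2, g1 - g2)

# Uniform inputs on [-1/2, 1/2]: L1 and L2 are uncorrelated but NOT
# independent; the exact value of Cov(L1^2, L2^2) is -1/60 ~ -0.0167.
u1 = rng.uniform(-0.5, 0.5, n)
u2 = rng.uniform(-0.5, 0.5, n)
unif_probe = square_cov(u1 + u2, u1 - u2)

print(f"Gaussian probe: {gauss_probe:+.4f}")  # close to 0
print(f"Uniform probe:  {unif_probe:+.4f}")   # clearly nonzero
```

A nonzero probe certifies dependence, consistent with the theorem: only Gaussian inputs can make both linear forms independent.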

History

The Darmois–Skitovich theorem generalizes the Kac–Bernstein theorem, in which the normal (Gaussian) distribution is characterized by the independence of the sum and the difference of two independent random variables. For the history of V. P. Skitovich's proof of the theorem, see the article.[3]
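In the notation of the formulation above, the Kac–Bernstein theorem is the special case n = 2 with coefficients \alpha_1 = \alpha_2 = \beta_1 = 1 and \beta_2 = -1 (a restatement, not part of the original article):

```latex
% Kac-Bernstein as a special case of Darmois-Skitovich:
% take n = 2, \alpha_1 = \alpha_2 = 1, \beta_1 = 1, \beta_2 = -1.
L_1 = \xi_1 + \xi_2, \qquad L_2 = \xi_1 - \xi_2.
% If \xi_1, \xi_2 are independent and the forms L_1, L_2 are
% independent, then \xi_1 and \xi_2 are normally distributed.
```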

References

Notes and References

  1. Darmois, G. (1953). "Analyse générale des liaisons stochastiques: étude particulière de l'analyse factorielle linéaire". Review of the International Statistical Institute. 21 (1/2): 2–8. doi:10.2307/1401511. JSTOR 1401511.
  2. Skitovich, V. P. (1953). "On a property of the normal distribution". Doklady Akademii Nauk SSSR. 89: 217–219. (In Russian.)
  3. Web site: "О теореме Дармуа-Скитовича" [On the Darmois–Skitovich theorem] (in Russian). www.apmath.spbu.ru.