Method of moments (probability theory)

In probability theory, the method of moments is a way of proving convergence in distribution by proving convergence of the corresponding sequences of moments.[1] Suppose X is a random variable, that X_1, X_2, X_3, ... is a sequence of random variables, and that all of the moments

\operatorname{E}(X^k) and \operatorname{E}(X_n^k)

exist. Further suppose the probability distribution of X is completely determined by its moments, i.e., there is no other probability distribution with the same sequence of moments (cf. the problem of moments). If

\lim_{n\to\infty} \operatorname{E}(X_n^k) = \operatorname{E}(X^k)

for all values of k, then the sequence (X_n) converges to X in distribution.
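
The hypothesis that X is determined by its moments is essential, since not every distribution has this property (the lognormal distribution is a classical example that does not). A standard sufficient condition, not part of the article above but worth recording here, is Carleman's condition: the distribution of X is determined by its moments whenever

\sum_{k=1}^{\infty} \operatorname{E}(X^{2k})^{-1/(2k)} = \infty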

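As a concrete numerical sketch of the method (an illustration added for this write-up; the Python script below and its function names are not taken from the cited sources), one can compare the exact moments of the standardized binomial sum X_n = (S_n - n/2)/sqrt(n/4), where S_n is binomial(n, 1/2), with the moments of the standard normal distribution, which are 0 for odd k and (k-1)!! for even k. Convergence of these moments for every k is exactly what the method of moments converts into convergence in distribution, here recovering the central limit theorem for fair coin flips.

from math import comb, sqrt

def standardized_binomial_moment(n, k, p=0.5):
    # Exact k-th moment of (S_n - n*p)/sqrt(n*p*(1-p)) for S_n ~ Binomial(n, p),
    # obtained by summing over the binomial probability mass function.
    mean = n * p
    sd = sqrt(n * p * (1 - p))
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) * ((j - mean) / sd)**k
               for j in range(n + 1))

def standard_normal_moment(k):
    # k-th moment of the standard normal: 0 for odd k, (k-1)!! for even k.
    if k % 2 == 1:
        return 0
    result = 1
    for m in range(k - 1, 0, -2):
        result *= m
    return result

for k in range(1, 7):
    approx = ", ".join(f"n={n}: {standardized_binomial_moment(n, k):.4f}"
                       for n in (10, 100, 1000))
    print(f"k={k}  E(X_n^k): {approx}  E(X^k) = {standard_normal_moment(k)}")

For even k the printed values approach (k-1)!!, while for odd k they are exactly 0 by symmetry, illustrating the moment convergence required by the theorem.
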
The method of moments was introduced by Pafnuty Chebyshev for proving the central limit theorem; Chebyshev cited earlier contributions by Irénée-Jules Bienaymé.[2] More recently, it has been applied by Eugene Wigner to prove Wigner's semicircle law, and has since found numerous applications in the theory of random matrices.[3]

Notes and References

  1. Prokhorov, A.V. "Moments, method of (in probability theory)". In M. Hazewinkel (ed.), Encyclopaedia of Mathematics (online). ISBN 1-4020-0609-8. MR 1375697.
  2. Fischer, H. (2011). A History of the Central Limit Theorem: From Classical to Modern Probability Theory. Sources and Studies in the History of Mathematics and Physical Sciences. New York: Springer. Chapter 4: "Chebyshev's and Markov's Contributions". ISBN 978-0-387-87856-0. MR 2743162.
  3. Anderson, G.W.; Guionnet, A.; Zeitouni, O. (2010). An Introduction to Random Matrices. Cambridge: Cambridge University Press. Section 2.1. ISBN 978-0-521-19452-5.