Random sequence explained

The concept of a random sequence is essential in probability theory and statistics. It generally relies on the notion of a sequence of random variables, and many statistical discussions begin with the words "let X1,...,Xn be independent random variables...". Yet as D. H. Lehmer stated in 1951: "A random sequence is a vague notion... in which each term is unpredictable to the uninitiated and whose digits pass a certain number of tests traditional with statisticians".[1]

Axiomatic probability theory deliberately avoids a definition of a random sequence.[2] Traditional probability theory does not state if a specific sequence is random, but generally proceeds to discuss the properties of random variables and stochastic sequences assuming some definition of randomness. The Bourbaki school considered the statement "let us consider a random sequence" an abuse of language.[3]

Early history

Émile Borel was one of the first mathematicians to formally address randomness, in 1909.[4] In 1919 Richard von Mises gave the first definition of algorithmic randomness, inspired by the law of large numbers, although he used the term collective rather than random sequence. Appealing to the impossibility of a gambling system, von Mises defined an infinite sequence of zeros and ones as random if it is unbiased in the sense of having the frequency stability property (the frequency of zeros goes to 1/2), and every sub-sequence we can select from it by a "proper" method of selection is also unbiased.[5]
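As a concrete illustration (a sketch of the idea, not von Mises' formalism), the bare frequency-stability property is easy to check on finite prefixes of a binary sequence:

```python
# Minimal sketch: checking frequency stability on finite prefixes.

def zero_frequency(bits):
    """Fraction of zeros in a finite prefix of a binary sequence."""
    return bits.count(0) / len(bits)

# Frequency of zeros along growing prefixes of the alternating
# sequence 0,1,0,1,...: it is exactly 1/2 on every even-length prefix,
# so the sequence passes the bare frequency test even though it is
# completely predictable.
alternating = [i % 2 for i in range(100_000)]
for n in (10, 1_000, 100_000):
    print(n, zero_frequency(alternating[:n]))  # -> 0.5 each time
```

This is precisely why von Mises also demands that selected sub-sequences be unbiased: frequency stability alone is far too weak.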

The sub-sequence selection criterion imposed by von Mises is important: although 0101010101... is not biased, selecting the odd positions gives 000000..., which is clearly not random. Von Mises never fully formalized his definition of a proper selection rule for sub-sequences, but in 1940 Alonzo Church defined it as any recursive function which, having read the first N elements of the sequence, decides whether to select element number N + 1. Church was a pioneer in the field of computable functions, and his definition relied on the Church–Turing thesis for computability.[6] This definition is often called Mises–Church randomness.
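A Church-style selection rule can be sketched as follows (the rule and function names here are illustrative, not from the source): the rule sees only the prefix read so far and decides whether to take the next element. Applied to 0101..., the "odd positions" rule from the example above extracts the all-zero sub-sequence:

```python
# Sketch of a Church-style (monotone) selection rule.

def select(sequence, rule):
    """Apply a selection rule to a finite sequence.

    `rule(prefix)` decides, from the prefix read so far only,
    whether to select the next element.
    """
    chosen = []
    for n, bit in enumerate(sequence):
        if rule(sequence[:n]):  # decision uses only the first n elements
            chosen.append(bit)
    return chosen

# Rule: select the elements at odd positions (1st, 3rd, 5th, ...),
# i.e. select whenever the prefix read so far has even length.
odd_positions = lambda prefix: len(prefix) % 2 == 0

alternating = [i % 2 for i in range(10)]   # 0,1,0,1,...
print(select(alternating, odd_positions))  # -> [0, 0, 0, 0, 0]
```

The selected sub-sequence is maximally biased, so 0101... fails the Mises–Church criterion.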

Modern approaches

During the 20th century various technical approaches to defining random sequences were developed, and three distinct paradigms can now be identified. In the mid-1960s, A. N. Kolmogorov and D. W. Loveland independently proposed a more permissive selection rule.[7] [8] In their view Church's recursive-function definition was too restrictive, since it read the elements in order. Instead they proposed a rule based on a partially computable process which, having read any N elements of the sequence, decides whether to select another element which has not been read yet. This definition is often called Kolmogorov–Loveland stochasticity. Alexander Shen, however, considered this method too weak: he showed that there is a Kolmogorov–Loveland stochastic sequence which does not conform to the general notion of randomness.
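The non-monotonic character of Kolmogorov–Loveland selection can be sketched like this (the harness and the `backwards` strategy are my own illustrative constructions, not from the source): the strategy may query positions in any order, and may only select positions it has not yet read.

```python
# Sketch of Kolmogorov–Loveland style (non-monotonic) selection.

def kl_select(sequence, strategy):
    """Run a non-monotonic selection strategy over a finite sequence.

    `strategy` receives a dict {position: value} of everything read so
    far and returns (next_position, select_flag), or None to stop.
    """
    observed = {}
    chosen = []
    while True:
        move = strategy(dict(observed))
        if move is None:
            break
        pos, select_flag = move
        if pos in observed or not (0 <= pos < len(sequence)):
            break  # illegal move: re-reading a position or out of range
        observed[pos] = sequence[pos]
        if select_flag:
            chosen.append(sequence[pos])
    return chosen

# A strategy no monotone (Church-style) rule can express: scan the
# sequence from the back, selecting every element.
def backwards(observed, length=10):
    pos = length - 1 - len(observed)
    return (pos, True) if pos >= 0 else None

alternating = [i % 2 for i in range(10)]  # 0,1,0,1,...
print(kl_select(alternating, backwards))  # -> [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
```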

In 1966 Per Martin-Löf introduced a new notion which is now generally considered the most satisfactory notion of algorithmic randomness. His original definition involved measure theory, but it was later shown that it can be expressed in terms of Kolmogorov complexity. Kolmogorov defined a string to be random if it has no description shorter than itself via a universal Turing machine.[9]
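Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a crude, computable upper bound on description length, which makes the intuition easy to demonstrate (a sketch, not a substitute for the formal definition):

```python
# Sketch: a compressor as a crude upper bound on description length.
import os
import zlib

def compressed_length(s: bytes) -> int:
    """Length of a zlib description of s (an upper bound, up to
    additive constants, on how short a description can be)."""
    return len(zlib.compress(s, 9))

regular = b"01" * 5000              # 10,000 bytes, highly regular
noise = os.urandom(10_000)          # 10,000 bytes, almost surely near-incompressible

print(compressed_length(regular))   # far below 10,000
print(compressed_length(noise))     # close to (or slightly above) 10,000
```

A string whose compressed form is about as long as the string itself has, as far as this compressor can tell, no description shorter than itself; a highly regular string visibly does.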

Three basic paradigms for dealing with random sequences have now emerged:[10]

  1. Frequency / measure-theoretic: this approach began with the work of von Mises and Church. In the 1960s Per Martin-Löf noticed that the sets coding such frequency-based stochastic properties are a special kind of measure zero set, and that a more general and smooth definition can be obtained by considering all effectively measure zero sets.
  2. Complexity / compressibility: this paradigm was championed by A. N. Kolmogorov, along with contributions from Leonid Levin and Gregory Chaitin. For finite strings, randomness is identified with incompressibility: a string is random when its Kolmogorov complexity is close to its length.
  3. Predictability: this paradigm, championed by Claus P. Schnorr, uses a constructive notion of martingales: a sequence is random if no effective betting strategy succeeds on it.[11] Yongge Wang showed that Schnorr's randomness concept is different from recursive randomness.[12] [13]

In most cases, theorems relating the three paradigms (often equivalence) have been proven.[14]

Notes and References

  1. Philip J. Davis, "What is meant by the word Random", in Mathematics and Common Sense, 2006, pp. 180–182.
  2. József Beck, Inevitable Randomness in Discrete Mathematics, 2009, p. 44.
  3. Vladimir Andreevich Uspenskiĭ and Alekseĭ Lʹvovich Semenov, Algorithms: Main Ideas and Applications, Springer, 1993, p. 166.
  4. É. Borel, "Les probabilités dénombrables et leurs applications arithmétiques", Rend. Circ. Mat. Palermo 27 (1909), pp. 247–271.
  5. Laurent Bienvenu, "Kolmogorov–Loveland Stochasticity", in STACS 2007: 24th Annual Symposium on Theoretical Aspects of Computer Science, ed. Wolfgang Thomas, p. 260.
  6. Alonzo Church, "On the Concept of a Random Sequence", Bull. Amer. Math. Soc. 46 (2) (1940), pp. 130–136. doi:10.1090/S0002-9904-1940-07154-X.
  7. A. N. Kolmogorov, "Three approaches to the quantitative definition of information", Problems of Information Transmission 1 (1) (1965), pp. 1–7.
  8. D. W. Loveland, "A new interpretation of von Mises' concept of random sequence", Z. Math. Logik Grundlagen Math. 12 (1966), pp. 279–294.
  9. Ming Li and P. M. B. Vitányi, An Introduction to Kolmogorov Complexity and Its Applications, Springer, 1997, ISBN 0387948686, pp. 149–151.
  10. R. Downey, "Some Recent Progress in Algorithmic Randomness", in Mathematical Foundations of Computer Science 2004, eds. Jiří Fiala and Václav Koubek, 2004, p. 44.
  11. C. P. Schnorr, "A unified approach to the definition of a random sequence", Mathematical Systems Theory 5 (3) (1971), pp. 246–258. doi:10.1007/bf01694181.
  12. Yongge Wang, Randomness and Complexity, PhD thesis, 1996. http://webpages.uncc.edu/yonwang/papers/IPL97.pdf
  13. Yongge Wang, "A separation of two randomness concepts", Information Processing Letters 69 (3) (1999), pp. 115–118. doi:10.1016/S0020-0190(98)00202-6.
  14. Wolfgang Merkle, "Kolmogorov–Loveland Stochasticity", in Automata, Languages and Programming: 29th International Colloquium, ICALP 2002, ed. Peter Widmayer et al., p. 391.