Komlós' theorem is a theorem from probability theory and mathematical analysis about the Cesàro convergence of a subsequence of random variables (or functions), and of all further subsequences thereof, to an integrable random variable (or function). It is also an existence theorem for this integrable random variable (or function). There exist a probabilistic and an analytic version for finite measure spaces.
The theorem was proven in 1967 by János Komlós.[1] A generalization was given in 1970 by Srishti D. Chatterji.[2]
Let $(\Omega,\mathcal{F},P)$ be a probability space and $\xi_1,\xi_2,\dots$ be a sequence of real-valued random variables defined on this space with

$$\sup_n E[|\xi_n|]<\infty.$$

Then there exist a random variable $\psi\in L^1(P)$ and a subsequence $(\eta_k)=(\xi_{n_k})$ such that for every further subsequence $(\tilde{\eta}_n)=(\eta_{k_n})$, as $n\to\infty$,

$$\frac{\tilde{\eta}_1+\dots+\tilde{\eta}_n}{n}\to\psi\quad P\text{-almost surely.}$$
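In the special case where the $\xi_n$ are independent and identically distributed with $E[|\xi_1|]<\infty$, the strong law of large numbers already gives

$$\frac{\xi_1+\dots+\xi_n}{n}\to E[\xi_1]\quad P\text{-almost surely}$$

without passing to a subsequence. Komlós' theorem can therefore be read as a subsequence version of the strong law of large numbers: it requires neither independence nor identical distribution, only boundedness in $L^1(P)$.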
Similarly, in the analytic version, let $(E,\mathcal{A},\mu)$ be a finite measure space and $f_1,f_2,\dots$ be a sequence of real-valued functions in $L^1(\mu)$ with

$$\sup_n\int_E|f_n|\,\mathrm{d}\mu<\infty.$$

Then there exist a function $\upsilon\in L^1(\mu)$ and a subsequence $(g_k)=(f_{n_k})$ such that for every further subsequence $(\tilde{g}_n)=(g_{k_n})$, as $n\to\infty$,

$$\frac{\tilde{g}_1+\dots+\tilde{g}_n}{n}\to\upsilon\quad\mu\text{-almost everywhere.}$$
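A simple illustration: take $E=[0,1]$ with Lebesgue measure $\mu$ and $f_n=n\,\mathbf{1}_{[0,1/n]}$. Then

$$\sup_n\int_E|f_n|\,\mathrm{d}\mu=1<\infty,$$

although the sequence is not uniformly integrable and does not converge in $L^1(\mu)$. For every $x\in(0,1]$ only finitely many $f_n(x)$ are nonzero, so the Cesàro means of the whole sequence, and of every subsequence, converge $\mu$-almost everywhere to $\upsilon=0$, as the theorem asserts. Note that $\int_E f_n\,\mathrm{d}\mu=1$ for all $n$ while $\int_E\upsilon\,\mathrm{d}\mu=0$, so the Cesàro limit need not preserve the integral.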
So the theorem says that the sequence $(\eta_k)$ and all of its subsequences converge in the Cesàro sense to $\psi$.
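The extraction of a subsequence cannot in general be omitted. As an illustrative example, take the deterministic random variables $\xi_n\equiv 1$ for $2^{2k}\le n<2^{2k+1}$ and $\xi_n\equiv -1$ for $2^{2k+1}\le n<2^{2k+2}$, $k=0,1,2,\dots$. Then $\sup_n E[|\xi_n|]=1<\infty$, but the Cesàro means

$$\frac{\xi_1+\dots+\xi_n}{n}$$

oscillate (accumulating near $1/3$ and $-1/3$) and do not converge. Choosing as $(\eta_k)$ the terms with $\xi_n\equiv 1$, every further subsequence has Cesàro means identically equal to $1$, so the conclusion of the theorem holds with $\psi=1$.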