Uniform integrability explained

In mathematics, uniform integrability is an important concept in real analysis, functional analysis and measure theory, and plays a vital role in the theory of martingales.

Measure-theoretic definition

Uniform integrability is an extension of the notion of a family of functions being dominated in $L^1$, which is central in dominated convergence. Several textbooks on real analysis and measure theory use the following definition:[1] [2]

Definition A: Let $(X,\mathfrak{M},\mu)$ be a positive measure space. A set $\Phi\subset L^1(\mu)$ is called uniformly integrable if $\sup_{f\in\Phi}\|f\|_{L^1(\mu)}<\infty$, and to each $\varepsilon>0$ there corresponds a $\delta>0$ such that

$$\int_E |f|\,d\mu<\varepsilon$$

whenever $f\in\Phi$ and $\mu(E)<\delta$.
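
As a minimal illustration of how the $\varepsilon$–$\delta$ clause is usually verified (a standard truncation argument, not taken from the cited texts), consider a single $f\in L^1(\mu)$, so $\Phi=\{f\}$: given $\varepsilon>0$, choose $K$ large enough that $\int_{\{|f|>K\}}|f|\,d\mu<\varepsilon/2$ and set $\delta=\varepsilon/(2K)$. Then, for any measurable $E$ with $\mu(E)<\delta$,

$$\int_E |f|\,d\mu \;\le\; \int_{\{|f|>K\}}|f|\,d\mu + K\,\mu(E) \;<\; \frac{\varepsilon}{2}+\frac{\varepsilon}{2}\;=\;\varepsilon,$$

so in particular every finite subset of $L^1(\mu)$ is uniformly integrable in the sense of Definition A.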

Definition A is rather restrictive for infinite measure spaces. A more general definition[3] of uniform integrability that works well in general measure spaces was introduced by G. A. Hunt.

Definition H: Let $(X,\mathfrak{M},\mu)$ be a positive measure space. A set $\Phi\subset L^1(\mu)$ is called uniformly integrable if and only if

$$\inf_{g\in L^1_+(\mu)}\,\sup_{f\in\Phi}\int_{\{|f|>g\}}|f|\,d\mu=0,$$

where $L^1_+(\mu)=\{g\in L^1(\mu): g\geq 0\}$.
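
As a quick sanity check of Definition H (an illustration, not from the cited source), suppose every $f\in\Phi$ is dominated by a single $g_0\in L^1_+(\mu)$, that is, $|f|\leq g_0$ for all $f\in\Phi$. Then the set $\{|f|>g_0\}$ is empty for every $f$, so

$$\inf_{g\in L^1_+(\mu)}\,\sup_{f\in\Phi}\int_{\{|f|>g\}}|f|\,d\mu \;\le\; \sup_{f\in\Phi}\int_{\{|f|>g_0\}}|f|\,d\mu \;=\; 0,$$

and the family is uniformly integrable; this recovers the dominated families mentioned in the introduction.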

Since Hunt's definition is equivalent to Definition A when the underlying measure space is finite (see Theorem 2 below), Definition H is widely adopted in mathematics.

The following result[4] provides another notion equivalent to Hunt's. This equivalence is sometimes given as the definition of uniform integrability.

Theorem 1: If $(X,\mathfrak{M},\mu)$ is a (positive) finite measure space, then a set $\Phi\subset L^1(\mu)$ is uniformly integrable if and only if

$$\inf_{g\in L^1_+(\mu)}\,\sup_{f\in\Phi}\int (|f|-g)^+\,d\mu=0.$$

If in addition $\mu(X)<\infty$, then uniform integrability is equivalent to either of the following conditions:

1. $\inf_{a>0}\,\sup_{f\in\Phi}\int (|f|-a)^+\,d\mu=0$.

2. $\inf_{a>0}\,\sup_{f\in\Phi}\int_{\{|f|>a\}}|f|\,d\mu=0$.
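
To see why conditions 1 and 2 above are interchangeable (a short sketch, not taken from the cited reference), one can use the pointwise bounds, valid for every $a>0$,

$$(|f|-a)^+\;\le\;|f|\,1_{\{|f|>a\}} \qquad\text{and}\qquad |f|\,1_{\{|f|>2a\}}\;\le\;2\,(|f|-a)^+ :$$

the first holds because $(|f|-a)^+$ vanishes where $|f|\le a$ and is at most $|f|$ elsewhere, and the second because $|f|>2a$ implies $|f|-a>|f|/2$. Integrating and taking $\sup_{f\in\Phi}$ shows that the infimum in condition 1 is zero exactly when the infimum in condition 2 is zero.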

When the underlying space $(X,\mathfrak{M},\mu)$ is $\sigma$-finite, Hunt's definition is equivalent to the following:

Theorem 2: Let $(X,\mathfrak{M},\mu)$ be a $\sigma$-finite measure space, and let $h\in L^1(\mu)$ be such that $h>0$ almost everywhere. A set $\Phi\subset L^1(\mu)$ is uniformly integrable if and only if $\sup_{f\in\Phi}\|f\|_{L^1(\mu)}<\infty$, and for any $\varepsilon>0$ there exists $\delta>0$ such that

$$\sup_{f\in\Phi}\int_A |f|\,d\mu<\varepsilon$$

whenever $\int_A h\,d\mu<\delta$.
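
Such an $h$ always exists on a $\sigma$-finite space. For example (a standard construction, offered here as an illustration rather than as part of the cited statement), if $X=\bigcup_{n\geq 1} X_n$ with $\mu(X_n)<\infty$, one may take

$$h \;=\; \sum_{n=1}^{\infty} \frac{2^{-n}}{1+\mu(X_n)}\,1_{X_n},$$

which is strictly positive on $X$ and satisfies $\int_X h\,d\mu \le \sum_{n} 2^{-n} = 1 < \infty$.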

A consequence of Theorems 1 and 2 is that the equivalence of Definitions A and H for finite measures follows. Indeed, the statement in Definition A is obtained by taking $h\equiv 1$ in Theorem 2.

Probability definition

In the theory of probability, Definition A or the statement of Theorem 1 is often presented as the definition of uniform integrability, using the expectation of random variables,[5] [6] [7] that is:

1. A class $\mathcal{C}$ of random variables is called uniformly integrable if: (i) there exists a finite $M$ such that, for every $X$ in $\mathcal{C}$, $\operatorname{E}(|X|)\leq M$; and (ii) for every $\varepsilon>0$ there exists $\delta>0$ such that, for every measurable $A$ with $P(A)\leq\delta$ and every $X$ in $\mathcal{C}$, $\operatorname{E}(|X|\,I_A)\leq\varepsilon$.

or alternatively

2. A class $\mathcal{C}$ of random variables is called uniformly integrable (UI) if for every $\varepsilon>0$ there exists $K\in[0,\infty)$ such that

$$\operatorname{E}\big(|X|\,I_{\{|X|\geq K\}}\big)\le\varepsilon \quad\text{for all } X\in\mathcal{C},$$

where $I_{\{|X|\geq K\}}$ is the indicator function

$$I_{\{|X|\geq K\}}=\begin{cases}1 & \text{if } |X|\geq K,\\ 0 & \text{if } |X|<K.\end{cases}$$
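
As a concrete single-variable illustration of Definition 2 (a worked computation under the assumption $X\sim\operatorname{Exp}(1)$, chosen purely for illustration), one has

$$\operatorname{E}\big(|X|\,I_{\{|X|\geq K\}}\big)=\int_K^\infty x\,e^{-x}\,dx=(K+1)\,e^{-K},$$

which tends to $0$ as $K\to\infty$; given $\varepsilon>0$ it suffices to choose $K$ with $(K+1)e^{-K}\le\varepsilon$. For a class $\mathcal{C}$, the same bound must be achieved by one $K$ uniformly over all of its members.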

Tightness and uniform integrability

One consequence of uniform integrability of a class $\mathcal{C}$ of random variables is that the family of laws or distributions $\{P\circ |X|^{-1}: X\in\mathcal{C}\}$ is tight. That is, for each $\delta>0$, there exists $a>0$ such that $P(|X|>a)\leq\delta$ for all $X\in\mathcal{C}$.
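
This follows from Markov's inequality together with the uniform $L^1$ bound (a short justification, using the constant $M$ from Definition 1 above):

$$P(|X|>a)\;\le\;\frac{\operatorname{E}(|X|)}{a}\;\le\;\frac{M}{a}\;\le\;\delta \qquad\text{for all } X\in\mathcal{C},\ \text{once } a\geq M/\delta.$$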

This, however, does not mean that the family of measures

$$\mathcal{V}_{\mathcal{C}}:=\Big\{\mu_X: A\mapsto \int_A |X|\,dP,\ X\in\mathcal{C}\Big\}$$

is tight. (In any case, tightness would require a topology on $\Omega$ in order to be defined.)

Uniform absolute continuity

There is another notion of uniformity, slightly different from uniform integrability, which also has many applications in probability and measure theory, and which does not require random variables to have a finite integral.

Definition: Suppose $(\Omega,\mathcal{F},P)$ is a probability space. A class $\mathcal{C}$ of random variables is uniformly absolutely continuous with respect to $P$ if for any $\varepsilon>0$ there is $\delta>0$ such that, for every $X\in\mathcal{C}$,

$$\operatorname{E}[|X|\,I_A]<\varepsilon$$

whenever $P(A)<\delta$.

It is equivalent to uniform integrability if the measure is finite and has no atoms.
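
The atomless assumption matters. As a small illustration (not from the cited sources): if $\Omega=\{\omega\}$ is a single point, so that $P$ has an atom of full mass, then any event $A$ with $P(A)<1$ is empty, and every class of integrable random variables is trivially uniformly absolutely continuous; yet the class $\{X_n\equiv n: n\geq 1\}$ is not uniformly integrable, since $\sup_n \operatorname{E}(|X_n|)=\infty$.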

The term "uniform absolute continuity" is not standard, but is used by some authors.[8] [9]

Related corollaries

The following results apply to the probabilistic definition.

Definition 2 above can be rewritten as a limit statement:

$$\lim_{K\to\infty}\,\sup_{X\in\mathcal{C}}\operatorname{E}\big(|X|\,I_{\{|X|\geq K\}}\big)=0.$$

A non-UI sequence: let $\Omega=[0,1]\subset\mathbb{R}$ with Lebesgue measure, and define

$$X_n(\omega) = \begin{cases} n, & \omega\in (0,1/n), \\ 0, & \text{otherwise.} \end{cases}$$

Clearly $X_n\in L^1$, and indeed $\operatorname{E}(|X_n|)=1$ for all $n$. However,

$$\operatorname{E}\big(|X_n|\,I_{\{|X_n|\geq K\}}\big)= 1 \quad\text{for all } n \ge K,$$

and comparing with Definition 2, it is seen that the sequence is not uniformly integrable.

In this example, the first clause of Definition 1 holds: the $L^1$ norms of all the $X_n$ are $1$, i.e., bounded. But the second clause does not hold: given any positive $\delta$, there is an interval $(0,1/n)$ with measure less than $\delta$, and $\operatorname{E}\big[|X_m|\,I_{(0,1/n)}\big]=1$ for all $m\ge n$.

If $X$ is a UI random variable, then by splitting

$$\operatorname{E}(|X|) = \operatorname{E}\big(|X|\,I_{\{|X|\geq K\}}\big)+\operatorname{E}\big(|X|\,I_{\{|X|< K\}}\big)$$

and bounding each of the two terms, it can be seen that a uniformly integrable random variable is always bounded in $L^1$.
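
Spelling out the two bounds (a short sketch, taking $K$ as in Definition 2 for, say, $\varepsilon=1$):

$$\operatorname{E}(|X|)\;\le\;\operatorname{E}\big(|X|\,I_{\{|X|\geq K\}}\big)+K\;\le\;1+K\;<\;\infty.$$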

If a sequence of random variables $X_n$ is dominated by an integrable, non-negative $Y$: that is, for all $\omega$ and $n$,

$$|X_n(\omega)| \le Y(\omega),\quad Y(\omega)\ge 0,\quad \operatorname{E}(Y) < \infty,$$

then the class $\mathcal{C}$ of random variables $\{X_n\}$ is uniformly integrable.
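
A brief sketch of why domination suffices (not part of the statement above): on the event $\{|X_n|\geq K\}$ one also has $Y\geq K$, so

$$\operatorname{E}\big(|X_n|\,I_{\{|X_n|\geq K\}}\big)\;\le\;\operatorname{E}\big(Y\,I_{\{Y\geq K\}}\big)\;\xrightarrow[K\to\infty]{}\;0,$$

where the convergence holds by the dominated convergence theorem because $Y$ is integrable; the bound does not depend on $n$, giving the uniformity required by Definition 2.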

A class of random variables bounded in $L^p$ ($p>1$) is uniformly integrable.
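
This follows from a Markov-type estimate (a brief sketch, assuming $\sup_{X\in\mathcal{C}}\operatorname{E}(|X|^p)=C<\infty$): on $\{|X|\geq K\}$ one has $|X|\le |X|^p/K^{p-1}$, hence

$$\sup_{X\in\mathcal{C}}\operatorname{E}\big(|X|\,I_{\{|X|\geq K\}}\big)\;\le\;\frac{\sup_{X\in\mathcal{C}}\operatorname{E}(|X|^p)}{K^{p-1}}\;=\;\frac{C}{K^{p-1}}\;\xrightarrow[K\to\infty]{}\;0,$$

since $p>1$.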

Relevant theorems

In the following we use the probabilistic framework, but the results hold regardless of the finiteness of the measure, provided the boundedness condition is imposed on the chosen subset of $L^1(\mu)$.

Dunford–Pettis theorem:[10] [11] [12] A class of random variables $\{X_n\}\subset L^1(\mu)$ is uniformly integrable if and only if it is relatively compact for the weak topology $\sigma(L^1,L^\infty)$.

de la Vallée-Poussin theorem:[13] The family $\{X_\alpha\}_{\alpha\in\mathrm{A}}\subset L^1(\mu)$ is uniformly integrable if and only if there exists a non-negative increasing convex function $G(t)$ such that

$$\lim_{t\to\infty} \frac{G(t)}{t} = \infty \quad\text{and}\quad \sup_\alpha \operatorname{E}\big(G(|X_{\alpha}|)\big) < \infty.$$
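
As an illustration of how the criterion is applied (an example, not part of the cited statement): taking $G(t)=t^2$ recovers the fact that families bounded in $L^2$ are uniformly integrable, since $G(t)/t = t \to\infty$; the weaker choice $G(t)=t\log(1+t)$ shows that a uniform bound on $\operatorname{E}\big(|X_\alpha|\log(1+|X_\alpha|)\big)$ already suffices.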

Relation to convergence of random variables

See main article: Convergence of random variables. A sequence $\{X_n\}$ converges to $X$ in the $L^1$ norm if and only if it converges in measure to $X$ and it is uniformly integrable. In probability terms, a sequence of random variables converging in probability also converges in the mean if and only if it is uniformly integrable.[14] This is a generalization of Lebesgue's dominated convergence theorem; see the Vitali convergence theorem.
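
The non-UI sequence from the Related corollaries section illustrates the role of uniform integrability here (a remark connecting the two sections): $X_n = n\,I_{(0,1/n)}$ converges to $0$ in probability, since $P(|X_n|>\varepsilon)\le 1/n\to 0$, yet $\operatorname{E}(|X_n - 0|)=1$ for every $n$, so there is no $L^1$ convergence; consistently with the theorem, that sequence is not uniformly integrable.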


Notes and References

  1. Rudin, Walter (1987). Real and Complex Analysis (3rd ed.). McGraw–Hill Book Co., Singapore, p. 133. ISBN 0-07-054234-1.
  2. Royden, H. L.; Fitzpatrick, P. M. (2010). Real Analysis (4th ed.). Prentice Hall, Boston, p. 93. ISBN 978-0-13-143747-0.
  3. Hunt, G. A. (1966). Martingales et Processus de Markov. Dunod, Paris, p. 254.
  4. Klenke, A. (2008). Probability Theory: A Comprehensive Course. Springer Verlag, Berlin, pp. 134–137. ISBN 978-1-84800-047-6.
  5. Williams, David (1997). Probability with Martingales (Repr.). Cambridge Univ. Press, Cambridge, pp. 126–132. ISBN 978-0-521-40605-5.
  6. Gut, Allan (2005). Probability: A Graduate Course. Springer, pp. 214–218. ISBN 0-387-22833-0.
  7. Bass, Richard F. (2011). Stochastic Processes. Cambridge University Press, Cambridge, pp. 356–357. ISBN 978-1-107-00800-7.
  8. Benedetto, J. J. (1976). Real Variable and Integration. B. G. Teubner, Stuttgart, p. 89. ISBN 3-519-02209-5.
  9. Burrill, C. W. (1972). Measure, Integration, and Probability. McGraw-Hill, p. 180. ISBN 0-07-009223-0.
  10. Dunford, Nelson (1938). "Uniformity in linear spaces". Transactions of the American Mathematical Society 44 (2): 305–356. doi:10.1090/S0002-9947-1938-1501971-X.
  11. Dunford, Nelson (1939). "A mean ergodic theorem". Duke Mathematical Journal 5 (3): 635–646. doi:10.1215/S0012-7094-39-00552-1.
  12. Meyer, P. A.
  13. de la Vallée Poussin, C. (1915). "Sur l'intégrale de Lebesgue". Transactions of the American Mathematical Society 16 (4): 435–501. doi:10.2307/1988879. JSTOR 1988879. hdl:10338.dmlcz/127627.
  14. Bogachev, Vladimir I. (2007). "The spaces Lp and spaces of measures". Measure Theory, Volume I. Springer-Verlag, Berlin Heidelberg, p. 268. ISBN 978-3-540-34513-8. doi:10.1007/978-3-540-34514-5_4.