Convergence proof techniques explained

Convergence proof techniques are canonical components of mathematical proofs that sequences or functions converge to a finite limit when the argument tends to infinity.

There are many types of series and modes of convergence requiring different techniques. Below are some of the more common examples. This article is intended as an introduction aimed to help practitioners explore appropriate techniques. The links below give details of necessary conditions and generalizations to more abstract settings. The convergence of series is already covered in the article on convergence tests.

Convergence in $\mathbb{R}^n$

It is common to want to prove convergence of a sequence $f : \mathbb{N} \to \mathbb{R}^n$ or function $f : \mathbb{R} \to \mathbb{R}^n$, where $\mathbb{N}$ and $\mathbb{R}$ refer to the natural numbers and the real numbers, and convergence is with respect to the Euclidean norm, $\|\cdot\|_2$.

Useful approaches for this are as follows.

First principles

The analytic definition of convergence of $f$ to a limit $f_\infty$ is that[1] for all $\epsilon > 0$ there exists a $k_0$ such that for all $k > k_0$, $\|f(k) - f_\infty\| < \epsilon$. The most basic proof technique is to find such a $k_0$ and prove the required inequality. If the value of $f_\infty$ is not known in advance, the techniques below may be useful.

Contraction mappings

In many cases, the function whose convergence is of interest has the form $f(k+1) = T(f(k))$ for some transformation $T$. For example, $T$ could map $f(k)$ to $f(k+1) = Af(k)$ for some conformable matrix $A$. Alternatively, $T$ may be an element-wise operation, such as replacing each element of $f(k)$ by the square root of its magnitude.

In such cases, if the problem satisfies the conditions of the Banach fixed-point theorem (the domain is a non-empty complete metric space), then it is sufficient to prove that $\|T(x) - T(y)\| \le k\,\|x - y\|$ for some constant $k < 1$ which is fixed for all $x$ and $y$. Such a $T$ is called a contraction mapping, and the iterates converge to its unique fixed point. The composition of two contraction mappings is a contraction mapping, so if $T = T_1 \circ T_2$, then it is sufficient to show that $T_1$ and $T_2$ are both contraction mappings.
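The resulting fixed-point iteration can be sketched in a few lines. The specific map $T(x) = 0.5x + 1$ is an illustrative assumption: it is a contraction with constant $k = 0.5$ and fixed point $x^* = 2$:

```python
def iterate(T, x0, tol=1e-12, max_iter=10_000):
    # Repeatedly apply T. For a contraction with constant k < 1, the
    # Banach fixed-point theorem guarantees convergence to the unique
    # fixed point x* satisfying T(x*) = x*.
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Illustrative contraction: T(x) = 0.5*x + 1, with k = 0.5.
# Its unique fixed point is x* = 2, reached from any starting point.
T = lambda x: 0.5 * x + 1.0
x_star = iterate(T, x0=100.0)
```

The same loop works for any contraction; only the convergence rate (governed by $k$) changes.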

Example

A famous example of the use of this approach is the affine case: if $T$ has the form $T(x) = Ax + B$ for some matrices $A$ and $B$, then the iterates converge to $(I - A)^{-1}B$ if the magnitudes of all eigenvalues of $A$ are less than 1.
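A minimal numeric sketch of this example, assuming a small diagonal matrix $A$ with eigenvalues 0.5 and 0.25 (both less than 1 in magnitude), so the limit $(I - A)^{-1}B$ can be read off componentwise:

```python
def matvec(A, x):
    # Plain-Python matrix-vector product.
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def vadd(u, v):
    return [ui + vi for ui, vi in zip(u, v)]

# Illustrative A: diagonal with eigenvalues 0.5 and 0.25, so the
# iteration x <- A x + B converges to (I - A)^{-1} B.
A = [[0.5, 0.0], [0.0, 0.25]]
B = [1.0, 3.0]

x = [10.0, -10.0]
for _ in range(200):
    x = vadd(matvec(A, x), B)

# Componentwise, (I - A)^{-1} B = [1/(1-0.5), 3/(1-0.25)] = [2.0, 4.0].
```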

Non-expansion mappings

If the inequality above is weak ($k \le 1$), the mapping is a non-expansion mapping. It is not sufficient for $T$ to be a non-expansion mapping for convergence. For example, $T(x) = -x$ is a non-expansion mapping, but the sequence does not converge for any nonzero starting point. However, the composition of a contraction mapping and a non-expansion mapping (or vice versa) is a contraction mapping.

Contraction mappings on limited domains

If $T$ is not a contraction mapping on its entire domain, but it is on its codomain (the image of the domain), that is also sufficient for convergence. This also applies for decompositions. For example, consider $T(x) = \cos(\sin(x))$. The function $\cos$ is not a contraction mapping on all of $\mathbb{R}$, but it is on the restricted domain $[-1, 1]$, which is the codomain of $\sin$ for real arguments. Since $\sin$ is a non-expansion mapping, this implies $T$ is a contraction mapping.
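Iterating this particular $T$ numerically illustrates the convergence (the number of iterations and starting point are arbitrary choices for this sketch):

```python
import math

# T(x) = cos(sin(x)): sin is a non-expansion on R, and cos restricted
# to sin's codomain [-1, 1] is a contraction (|cos'(x)| = |sin(x)| is
# at most sin(1) < 1 there), so the composition T is a contraction.
def T(x):
    return math.cos(math.sin(x))

x = 5.0
for _ in range(100):
    x = T(x)

# x now approximates the unique fixed point of cos(sin(x)).
```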

Convergent subsequences

Every bounded sequence in $\mathbb{R}^n$ has a convergent subsequence, by the Bolzano–Weierstrass theorem. If it can be shown that all of the convergent subsequences of $f$ have the same limit, such as by showing that there is a unique fixed point of the transformation $T$, then the original sequence must also converge to that limit.

Monotonicity (Lyapunov functions)

Every bounded monotonic sequence in $\mathbb{R}^n$ converges to a limit.

This approach can also be applied to sequences that are not monotonic. Instead, it is possible to define a function $V : \mathbb{R}^n \to \mathbb{R}$ such that $V(f(n))$ is monotonic in $n$. If $V$ satisfies the conditions to be a Lyapunov function, then $f$ is convergent. Lyapunov's theorem is normally stated for ordinary differential equations, but can also be applied to sequences of iterates by replacing derivatives with discrete differences.

The basic requirements on $V$ are:

  1. $V(f(n+1)) - V(f(n)) < 0$ for $f(n) \ne 0$ (or $\dot{V}(x) < 0$ for $x \ne 0$ in the continuous case)
  2. $V(x) > 0$ for all $x \ne 0$ and $V(0) = 0$
  3. $V$ is "radially unbounded", so that $V(x)$ goes to infinity for any sequence with $\|x\|$ that tends to infinity.

In many cases, a Lyapunov function of the form $V(x) = x^{T}Ax$ can be found, although more complex forms are also used.

For delay differential equations, a similar approach applies, with Lyapunov functions replaced by Lyapunov functionals, also called Lyapunov–Krasovskii functionals.

If the inequality in condition 1 is weak (non-strict), LaSalle's invariance principle may be used.

Convergence of sequences of functions

To consider the convergence of sequences of functions,[2] it is necessary to define a distance between functions to replace the Euclidean norm. Common choices include:

Convergence in norm: $\|f_n - f_\infty\| \to 0$ for some function norm. For this case, all of the above techniques can be applied with this function norm.

Pointwise convergence: for each point $x$, $f_n(x) \to f_\infty(x)$. For this case, the above techniques can be applied for each point $x$ with the norm appropriate for $f(x)$.

Uniform convergence: $\lim_{n \to \infty} \sup\{\left|f_n(x) - f_\infty(x)\right| : x \in A\} = 0$, where $A$ is the domain of each $f_n$.
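The distinction can be explored numerically. Here $f_n(x) = x^n$ (an illustrative choice) converges pointwise to 0 on $[0, 1)$, and uniformly on the smaller interval $[0, 0.9]$, where the supremum gap is $0.9^n \to 0$; the sup is approximated by a maximum over a grid:

```python
# f_n(x) = x^n with limit f_inf = 0 on [0, 1). On [0, 0.9] the sup of
# |f_n(x) - 0| is 0.9^n, which tends to 0, so convergence is uniform
# there; on all of [0, 1) the sup stays 1 for every n, so it is not.
def sup_gap(n, grid):
    # Approximate sup{|f_n(x) - f_inf(x)| : x in A} over a finite grid.
    return max(abs(x ** n - 0.0) for x in grid)

grid = [i / 1000 * 0.9 for i in range(1001)]   # grid on [0, 0.9]
gaps = [sup_gap(n, grid) for n in (1, 10, 50)]
# gaps equals [0.9, 0.9**10, 0.9**50], decreasing toward 0.
```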

Convergence of random variables

Random variables[3] are more complicated than simple elements of $\mathbb{R}^n$. (Formally, a random variable is a mapping $x : \Omega \to V$ from an event space $\Omega$ to a value space $V$. The value space may be $\mathbb{R}^n$, such as the roll of a die, and such a random variable is often spoken of informally as being in $\mathbb{R}^n$, but convergence of a sequence of random variables corresponds to convergence of the sequence of functions, or the distributions, rather than the sequence of values.)

There are multiple types of convergence, depending on how the distance between functions is measured, such as convergence in distribution, convergence in probability, and almost-sure convergence (pointwise convergence of the mappings $x_n : \Omega \to V$ to the limit, except on a set in $\Omega$ with measure 0).

Each has its own proof techniques, which are beyond the current scope of this article.

Topological convergence

For all of the above techniques, some form of the basic analytic definition of convergence above applies. However, topology has its own definition of convergence. For example, in a non-Hausdorff space, it is possible for a sequence to converge to multiple different limits.

Notes and References

  1. Ross, Kenneth. Elementary Analysis: The Theory of Calculus. Springer.
  2. Haase, Markus. Functional Analysis: An Elementary Introduction. American Mathematical Society.
  3. Billingsley, Patrick. Probability and Measure. John Wiley & Sons, 1995.