Characteristic function (probability theory) explained

In probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution. If a random variable admits a probability density function, then the characteristic function is the Fourier transform (with sign reversal) of the probability density function. Thus it provides an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the characteristic functions of distributions defined by the weighted sums of random variables.

In addition to univariate distributions, characteristic functions can be defined for vector- or matrix-valued random variables, and can also be extended to more generic cases.

The characteristic function always exists when treated as a function of a real-valued argument, unlike the moment-generating function. There are relations between the behavior of the characteristic function of a distribution and properties of the distribution, such as the existence of moments and the existence of a density function.

Introduction

The characteristic function provides an alternative way to describe a random variable. The characteristic function

\varphi_X(t) = \operatorname{E}\left[e^{itX}\right],

a function of t, completely determines the behavior and properties of the probability distribution of the random variable X. It is equivalent to a probability density function or cumulative distribution function in the sense that knowing one of the functions it is always possible to find the others, yet they provide different insights into the features of the random variable. Moreover, in particular cases, there can be differences in whether these functions can be represented as expressions involving simple standard functions.

If a random variable admits a moment-generating function M_X(t), then the domain of the characteristic function can be extended to the complex plane, and

\varphi_X(-it) = M_X(t).

Note however that the characteristic function of a distribution is well defined for all real values of t, even when the moment-generating function is not well defined for all real values of t.
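As a small symbolic illustration (a minimal sketch, assuming SymPy is available; the standard normal example is ours, not from the text), the relation \varphi_X(-it) = M_X(t) can be checked directly, since for N(0, 1) the characteristic function is e^{-t^2/2} and the moment-generating function is e^{t^2/2}:

    import sympy as sp

    t = sp.symbols('t', real=True)
    phi = sp.exp(-t**2 / 2)   # characteristic function of N(0, 1)
    M = sp.exp(t**2 / 2)      # moment-generating function of N(0, 1)

    # phi(-i t) should equal M(t); the difference simplifies to 0.
    print(sp.simplify(phi.subs(t, -sp.I * t) - M))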

The characteristic function approach is particularly useful in analysis of linear combinations of independent random variables: a classical proof of the Central Limit Theorem uses characteristic functions and Lévy's continuity theorem. Another important application is to the theory of the decomposability of random variables.

Definition

For a scalar random variable X the characteristic function is defined as the expected value of e^{itX}, where i is the imaginary unit, and t \in \mathbb{R} is the argument of the characteristic function:

\begin{cases}
\varphi_X : \mathbb{R} \to \mathbb{C} \\
\varphi_X(t) = \operatorname{E}\left[e^{itX}\right] = \int_{\mathbb{R}} e^{itx}\,dF_X(x) = \int_{\mathbb{R}} e^{itx} f_X(x)\,dx = \int_0^1 e^{itQ_X(p)}\,dp
\end{cases}

Here F_X is the cumulative distribution function of X, f_X is the corresponding probability density function, Q_X(p) = F_X^{-1}(p) is the corresponding inverse cumulative distribution function, also called the quantile function,[1] and the integrals are of the Riemann–Stieltjes kind. If a random variable X has a probability density function, then the characteristic function is its Fourier transform with sign reversal in the complex exponential. This convention for the constants appearing in the definition of the characteristic function differs from the usual convention for the Fourier transform. For example, some authors define \varphi_X(t) = \operatorname{E}\left[e^{2\pi itX}\right], which is essentially a change of parameter. Other notation may be encountered in the literature: \hat{p} as the characteristic function for a probability measure p, or \hat{f} as the characteristic function corresponding to a density f.
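Since the characteristic function is simply an expectation, it can be approximated directly from samples. The following is a minimal numerical sketch (assuming NumPy is available; the sample size and the choice of a standard normal are illustrative choices of ours) comparing the Monte Carlo average of e^{itX} with the known closed form e^{-t^2/2}:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(100_000)   # samples of X ~ N(0, 1)

    def empirical_cf(t, samples):
        """Monte Carlo estimate of E[exp(itX)]."""
        return np.mean(np.exp(1j * t * samples))

    for t in (0.5, 1.0, 2.0):
        print(t, empirical_cf(t, x), np.exp(-t**2 / 2))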

Generalizations

The notion of characteristic functions generalizes to multivariate random variables and more complicated random elements. The argument of the characteristic function will always belong to the continuous dual of the space where the random variable X takes its values. For common cases such definitions are listed below:

If X is a k-dimensional random vector, then for t \in \mathbb{R}^k, \varphi_X(t) = \operatorname{E}\left[e^{i t^\mathrm{T} X}\right].

If X is a k \times p-dimensional random matrix, then for t \in \mathbb{R}^{k \times p}, \varphi_X(t) = \operatorname{E}\left[e^{i \operatorname{tr}(t^\mathrm{T} X)}\right].

Examples

Distribution and characteristic function \varphi(t):

Degenerate \delta_a:  e^{ita}
Bernoulli \operatorname{Bern}(p):  1 - p + pe^{it}
Binomial B(n, p):  (1 - p + pe^{it})^n
Negative binomial NB(r, p):  \left(\frac{p}{1 - e^{it} + pe^{it}}\right)^r
Poisson \operatorname{Pois}(\lambda):  e^{\lambda(e^{it} - 1)}
Uniform (continuous) U(a, b):  \frac{e^{itb} - e^{ita}}{it(b - a)}
Uniform (discrete) DU(a, b):  \frac{e^{ita} - e^{it(b + 1)}}{(1 - e^{it})(b - a + 1)}
Laplace L(\mu, b):  \frac{e^{it\mu}}{1 + b^2 t^2}
Logistic(\mu, s):  e^{i\mu t}\,\frac{\pi s t}{\sinh(\pi s t)}
Normal N(\mu, \sigma^2):  e^{it\mu - \frac{1}{2}\sigma^2 t^2}
Chi-squared \chi^2_k:  (1 - 2it)^{-k/2}
Noncentral chi-squared {\chi'}^2_k(\lambda):  e^{\frac{i\lambda t}{1 - 2it}}(1 - 2it)^{-k/2}
Generalized chi-squared \tilde{\chi}(\boldsymbol{w}, \boldsymbol{k}, \boldsymbol{\lambda}, s, m):  \frac{\exp\left[it\left(m + \sum_j \frac{w_j \lambda_j}{1 - 2i w_j t}\right) - \frac{s^2 t^2}{2}\right]}{\prod_j \left(1 - 2i w_j t\right)^{k_j / 2}}
Cauchy C(\mu, \theta):  e^{it\mu - \theta|t|}
Gamma \Gamma(k, \theta):  (1 - it\theta)^{-k}
Exponential \operatorname{Exp}(\lambda):  (1 - it\lambda^{-1})^{-1}
Geometric (number of failures):  \frac{p}{1 - e^{it}(1 - p)}
Geometric (number of trials):  \frac{p}{e^{-it} - (1 - p)}
Multivariate normal N(\boldsymbol{\mu}, \boldsymbol{\Sigma}):  e^{it^\mathrm{T}\boldsymbol{\mu} - \frac{1}{2} t^\mathrm{T}\boldsymbol{\Sigma} t}
Multivariate Cauchy \operatorname{MultiCauchy}(\boldsymbol{\mu}, \boldsymbol{\Sigma})[2]:  e^{it^\mathrm{T}\boldsymbol{\mu} - \sqrt{t^\mathrm{T}\boldsymbol{\Sigma} t}}

Oberhettinger (1973) provides extensive tables of characteristic functions.

Properties

Two random variables X_1 and X_2 have the same probability distribution if and only if

\varphi_{X_1} = \varphi_{X_2}.

Let X and Y be two random variables with characteristic functions \varphi_X and \varphi_Y. X and Y are independent if and only if

\varphi_{X,Y}(s, t) = \varphi_X(s)\,\varphi_Y(t) \quad \text{for all } (s, t) \in \mathbb{R}^2,

where \varphi_{X,Y}(s, t) = \operatorname{E}\left[e^{i(sX + tY)}\right] is the joint characteristic function.

Let Y = aX + b be a linear transformation of a random variable X. The characteristic function of Y is

\varphi_Y(t) = e^{itb}\varphi_X(at).

For random vectors X and Y = AX + B (where A is a constant matrix and B a constant vector), we have

\varphi_Y(t) = e^{i t^\mathrm{T} B}\varphi_X(A^\mathrm{T} t).[3]
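The scalar version of this rule is easy to check numerically. A small sketch (NumPy assumed; the parameter values a, b, t are arbitrary choices of ours):

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.standard_normal(200_000)
    a, b, t = 2.0, 3.0, 0.4
    y = a * x + b                        # Y = aX + b

    ecf = lambda t, z: np.mean(np.exp(1j * t * z))
    print(ecf(t, y))                           # empirical phi_Y(t)
    print(np.exp(1j * t * b) * ecf(a * t, x))  # e^{itb} * phi_X(at)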

Continuity

The bijection stated above between probability distributions and characteristic functions is sequentially continuous. That is, whenever a sequence of distribution functions F_n converges (weakly) to some distribution F, the corresponding sequence of characteristic functions \varphi_n will also converge, and the limit will correspond to the characteristic function of the law F. More formally, this is stated as

Lévy’s continuity theorem: A sequence {X_j} of n-variate random variables converges in distribution to a random variable X if and only if the sequence {\varphi_{X_j}} converges pointwise to a function \varphi which is continuous at the origin, in which case \varphi is the characteristic function of X.

This theorem can be used to prove the law of large numbers and the central limit theorem.
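As an illustration of the continuity theorem at work in the central limit theorem, the following sketch (NumPy assumed; the uniform summands and the grid of n values are our choices) shows the empirical characteristic function of a standardized sum approaching the N(0, 1) characteristic function e^{-t^2/2} pointwise:

    import numpy as np

    rng = np.random.default_rng(2)
    t = 1.2
    for n in (1, 4, 16, 64):
        # Sum of n iid U(-1, 1) variables, standardized: Var(U(-1, 1)) = 1/3.
        s = rng.uniform(-1, 1, size=(100_000, n)).sum(axis=1) / np.sqrt(n / 3)
        print(n, np.mean(np.exp(1j * t * s)).real, np.exp(-t**2 / 2))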

Inversion formula

There is a one-to-one correspondence between cumulative distribution functions and characteristic functions, so it is possible to find one of these functions if we know the other. The formula in the definition of characteristic function allows us to compute \varphi_X when we know the distribution function F_X (or density f_X). If, on the other hand, we know the characteristic function \varphi_X and want to find the corresponding distribution function, then one of the following inversion theorems can be used.

Theorem. If the characteristic function \varphi_X of a random variable X is integrable, then F_X is absolutely continuous, and therefore X has a probability density function. In the univariate case (i.e. when X is scalar-valued) the density function is given by

f_X(x) = F_X'(x) = \frac{1}{2\pi}\int_{\mathbb{R}} e^{-itx}\varphi_X(t)\,dt.

In the multivariate case it is

f_X(x) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} e^{-i(t \cdot x)}\varphi_X(t)\,\lambda(dt),

where t \cdot x is the dot product.

The density function is the Radon–Nikodym derivative of the distribution \mu_X with respect to the Lebesgue measure \lambda:

f_X(x) = \frac{d\mu_X}{d\lambda}(x).
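The univariate formula can be evaluated by ordinary quadrature when \varphi_X is integrable. A numerical sketch (SciPy assumed; recovering the N(0, 1) density from its characteristic function is our choice of example): since this \varphi is real and even, the imaginary part of the integrand integrates to zero, so it suffices to integrate the real part over [0, \infty) and double it.

    import numpy as np
    from scipy.integrate import quad

    phi = lambda t: np.exp(-t**2 / 2)    # characteristic function of N(0, 1)

    def density_from_cf(x):
        # (1/2pi) * Int_R e^{-itx} phi(t) dt, folded onto [0, inf).
        integrand = lambda t: (np.exp(-1j * t * x) * phi(t)).real
        val, _ = quad(integrand, 0, np.inf)
        return val / np.pi

    print(density_from_cf(0.0))   # approx 0.39894 = 1/sqrt(2*pi)
    print(density_from_cf(1.0))   # approx 0.24197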

Theorem (Lévy). If \varphi_X is the characteristic function of a distribution function F_X, and two points a < b are such that the interval (a, b) is a continuity set of \mu_X (in the univariate case this condition is equivalent to continuity of F_X at the points a and b), then

F_X(b) - F_X(a) = \frac{1}{2\pi} \lim_{T \to \infty} \int_{-T}^{T} \frac{e^{-ita} - e^{-itb}}{it}\,\varphi_X(t)\,dt.

If X is bounded from below, one can obtain F_X(b) by taking a such that F_X(a) = 0. Otherwise, if a random variable is not bounded from below, the limit for a \to -\infty gives F_X(b), but is numerically impractical.

Theorem. If a is (possibly) an atom of X (in the univariate case this means a point of discontinuity of F_X) then

F_X(a) - F_X(a - 0) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{+T} e^{-ita}\varphi_X(t)\,dt.

Theorem (Gil-Pelaez). For a univariate random variable X, if x is a continuity point of F_X then

F_X(x) = \frac{1}{2} - \frac{1}{\pi}\int_0^\infty \frac{\operatorname{Im}\left[e^{-itx}\varphi_X(t)\right]}{t}\,dt,

where the imaginary part of a complex number z is given by \operatorname{Im}(z) = (z - z^*)/2i.

And its density function is:

f_X(x) = \frac{1}{\pi}\int_0^\infty \operatorname{Re}\left[e^{-itx}\varphi_X(t)\right]\,dt.

The integral may not be Lebesgue-integrable; for example, when X is the discrete random variable that is always 0, it becomes the Dirichlet integral.
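A direct numerical rendering of the Gil-Pelaez formula (a sketch assuming SciPy; tested here on the standard normal, our choice of example):

    import numpy as np
    from scipy.integrate import quad

    phi = lambda t: np.exp(-t**2 / 2)    # characteristic function of N(0, 1)

    def cdf_from_cf(x):
        # F(x) = 1/2 - (1/pi) * Int_0^inf Im(e^{-itx} phi(t)) / t dt
        integrand = lambda t: (np.exp(-1j * t * x) * phi(t)).imag / t
        val, _ = quad(integrand, 0, np.inf)
        return 0.5 - val / np.pi

    print(cdf_from_cf(0.0))   # 0.5
    print(cdf_from_cf(1.0))   # approx 0.8413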

Inversion formulas for multivariate distributions are available.

Criteria for characteristic functions

The set of all characteristic functions is closed under certain operations: in particular, if \varphi is a characteristic function, then \bar{\varphi}, \operatorname{Re}(\varphi), and |\varphi|^2 are also characteristic functions.

It is well known that any non-decreasing càdlàg function F with limits F(-\infty) = 0 and F(+\infty) = 1 corresponds to a cumulative distribution function of some random variable. There is also interest in finding similar simple criteria for when a given function \varphi could be the characteristic function of some random variable. The central result here is Bochner’s theorem, although its usefulness is limited because the main condition of the theorem, non-negative definiteness, is very hard to verify. Other theorems also exist, such as Khinchine’s, Mathias’s, or Cramér’s, although their application is just as difficult. Pólya’s theorem, on the other hand, provides a very simple convexity condition which is sufficient but not necessary. Characteristic functions which satisfy this condition are called Pólya-type.

Bochner’s theorem. An arbitrary function \varphi : \mathbb{R}^n \to \mathbb{C} is the characteristic function of some random variable if and only if \varphi is positive definite, continuous at the origin, and if \varphi(0) = 1.

Khinchine’s criterion. A complex-valued, absolutely continuous function \varphi, with \varphi(0) = 1, is a characteristic function if and only if it admits the representation

\varphi(t) = \int_{\mathbb{R}} g(t + \theta)\overline{g(\theta)}\,d\theta.

Mathias’ theorem. A real-valued, even, continuous, absolutely integrable function \varphi, with \varphi(0) = 1, is a characteristic function if and only if

(-1)^n \left(\int_{\mathbb{R}} \varphi(pt)\, e^{-t^2/2}\, H_{2n}(t)\,dt\right) \geq 0

for n = 0, 1, 2, \ldots, and all p > 0. Here H_{2n} denotes the Hermite polynomial of degree 2n.

Pólya’s theorem. If \varphi is a real-valued, even, continuous function which satisfies the conditions

\varphi(0) = 1,
\varphi is convex for t > 0,
\varphi(\infty) = 0,

then \varphi is the characteristic function of an absolutely continuous distribution symmetric about 0. For example, \varphi(t) = e^{-|t|} satisfies these conditions; it is the characteristic function of the standard Cauchy distribution.

Uses

Because of the continuity theorem, characteristic functions are used in the most frequently seen proof of the central limit theorem. The main technique involved in making calculations with a characteristic function is recognizing the function as the characteristic function of a particular distribution.

Basic manipulations of distributions

Characteristic functions are particularly useful for dealing with linear functions of independent random variables. For example, if X_1, X_2, ..., X_n is a sequence of independent (and not necessarily identically distributed) random variables, and

S_n = \sum_{i=1}^{n} a_i X_i,

where the a_i are constants, then the characteristic function for S_n is given by

\varphi_{S_n}(t) = \varphi_{X_1}(a_1 t)\,\varphi_{X_2}(a_2 t) \cdots \varphi_{X_n}(a_n t).

In particular, \varphi_{X+Y}(t) = \varphi_X(t)\varphi_Y(t). To see this, write out the definition of characteristic function:

\varphi_{X+Y}(t) = \operatorname{E}\left[e^{it(X+Y)}\right] = \operatorname{E}\left[e^{itX} e^{itY}\right] = \operatorname{E}\left[e^{itX}\right]\operatorname{E}\left[e^{itY}\right] = \varphi_X(t)\varphi_Y(t).

The independence of X and Y is required to establish the equality of the third and fourth expressions.
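This factorization can be sanity-checked by simulation. A short sketch (NumPy assumed; the exponential and normal summands are arbitrary choices of ours):

    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.exponential(size=200_000)
    y = rng.standard_normal(200_000)

    ecf = lambda t, z: np.mean(np.exp(1j * t * z))
    t = 0.7
    print(ecf(t, x + y))           # empirical phi_{X+Y}(t)
    print(ecf(t, x) * ecf(t, y))   # phi_X(t) * phi_Y(t)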

Another special case of interest for identically distributed random variables is when a_i = 1/n and then S_n is the sample mean. In this case, writing \overline{X} for the mean,

\varphi_{\overline{X}}(t) = \varphi_X\!\left(\tfrac{t}{n}\right)^n.

Moments

Characteristic functions can also be used to find moments of a random variable. Provided that the n-th moment exists, the characteristic function can be differentiated n times:

\operatorname{E}\left[X^n\right] = i^{-n}\left[\frac{d^n}{dt^n}\varphi_X(t)\right]_{t=0} = i^{-n}\varphi_X^{(n)}(0).

This can be formally written using the derivatives of the Dirac delta function:

f_X(x) = \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\delta^{(n)}(x)\operatorname{E}[X^n],

which allows a formal solution to the moment problem. For example, suppose X has a standard Cauchy distribution. Then \varphi_X(t) = e^{-|t|}. This is not differentiable at t = 0, showing that the Cauchy distribution has no expectation. Also, the sample mean \overline{X} of n independent observations has characteristic function \varphi_{\overline{X}}(t) = \left(e^{-|t|/n}\right)^n = e^{-|t|}, using the result from the previous section. This is the characteristic function of the standard Cauchy distribution: thus, the sample mean has the same distribution as the population itself.

As a further example, suppose X follows a Gaussian distribution, i.e.

X \sim \mathcal{N}(\mu, \sigma^2).

Then

\varphi_X(t) = e^{i\mu t - \frac{1}{2}\sigma^2 t^2}

and

\operatorname{E}\left[X\right] = i^{-1}\left[\frac{d}{dt}\varphi_X(t)\right]_{t=0} = i^{-1}\left[(i\mu - \sigma^2 t)\varphi_X(t)\right]_{t=0} = \mu.

A similar calculation shows \operatorname{E}\left[X^2\right] = \mu^2 + \sigma^2 and is easier to carry out than applying the definition of expectation and using integration by parts to evaluate \operatorname{E}\left[X^2\right].
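The same differentiation can be delegated to a computer algebra system. A SymPy sketch (SymPy assumed) reproducing E[X] = \mu and E[X^2] = \mu^2 + \sigma^2 from the Gaussian characteristic function:

    import sympy as sp

    t, mu, sigma = sp.symbols('t mu sigma', real=True)
    phi = sp.exp(sp.I * mu * t - sigma**2 * t**2 / 2)

    def moment(n):
        """E[X^n] = i^{-n} * d^n/dt^n phi(t) evaluated at t = 0."""
        return sp.simplify(sp.diff(phi, t, n).subs(t, 0) / sp.I**n)

    print(moment(1))   # mu
    print(moment(2))   # mu**2 + sigma**2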

The logarithm of a characteristic function is a cumulant generating function, which is useful for finding cumulants; some instead define the cumulant generating function as the logarithm of the moment-generating function, and call the logarithm of the characteristic function the second cumulant generating function.

Data analysis

Characteristic functions can be used as part of procedures for fitting probability distributions to samples of data. Cases where this provides a practicable option compared to other possibilities include fitting the stable distributions, since closed-form expressions for the density are not available, which makes implementation of maximum likelihood estimation difficult. Estimation procedures are available which match the theoretical characteristic function to the empirical characteristic function, calculated from the data. Paulson et al. (1975) and Heathcote (1977) provide some theoretical background for such an estimation procedure. In addition, Yu (2004) describes applications of empirical characteristic functions to fit time series models where likelihood procedures are impractical. Empirical characteristic functions have also been used by Ansari et al. (2020) and Li et al. (2020) for training generative adversarial networks.
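The basic idea of such estimators can be sketched in a few lines (NumPy/SciPy assumed; the normal model, the grid of t values, and the squared-distance criterion are simplifying choices of ours, whereas the papers cited above treat stable laws and time series): choose the parameters that minimize a distance between the empirical characteristic function and the model characteristic function.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    data = rng.normal(loc=1.5, scale=0.8, size=5_000)

    tgrid = np.linspace(0.1, 2.0, 20)
    ecf = np.array([np.mean(np.exp(1j * t * data)) for t in tgrid])

    def loss(params):
        # Squared distance between the empirical CF and the model CF.
        mu, sigma = params
        model = np.exp(1j * mu * tgrid - 0.5 * sigma**2 * tgrid**2)
        return np.sum(np.abs(ecf - model)**2)

    print(minimize(loss, x0=[0.0, 1.0]).x)   # approximately [1.5, 0.8]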

Example

The gamma distribution with scale parameter \theta and shape parameter k has the characteristic function

(1 - \theta it)^{-k}.

Now suppose that we have

X \sim \Gamma(k_1, \theta) \text{ and } Y \sim \Gamma(k_2, \theta)

with X and Y independent from each other, and we wish to know what the distribution of X + Y is. The characteristic functions are

\varphi_X(t) = (1 - \theta it)^{-k_1}, \qquad \varphi_Y(t) = (1 - \theta it)^{-k_2},

which by independence and the basic properties of characteristic functions leads to

\varphi_{X+Y}(t) = \varphi_X(t)\varphi_Y(t) = (1 - \theta it)^{-k_1}(1 - \theta it)^{-k_2} = (1 - \theta it)^{-(k_1 + k_2)}.

This is the characteristic function of the gamma distribution with scale parameter \theta and shape parameter k_1 + k_2, and we therefore conclude

X + Y \sim \Gamma(k_1 + k_2, \theta).

The result can be expanded to n independent gamma distributed random variables with the same scale parameter and we get

\forall i \in \{1, \ldots, n\} : X_i \sim \Gamma(k_i, \theta) \quad \Longrightarrow \quad \sum_{i=1}^{n} X_i \sim \Gamma\left(\sum_{i=1}^{n} k_i, \theta\right).
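The algebra above can also be confirmed symbolically. A SymPy sketch (SymPy assumed) checking that the product of the two gamma characteristic functions equals a single gamma characteristic function with shape k_1 + k_2:

    import sympy as sp

    t = sp.symbols('t', real=True)
    theta, k1, k2 = sp.symbols('theta k1 k2', positive=True)

    phi = lambda k: (1 - sp.I * theta * t)**(-k)   # gamma CF with shape k

    lhs = sp.powsimp(phi(k1) * phi(k2))            # combine the exponents
    print(sp.simplify(lhs - phi(k1 + k2)))         # -> 0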

Entire characteristic functions

As defined above, the argument of the characteristic function is treated as a real number: however, certain aspects of the theory of characteristic functions are advanced by extending the definition into the complex plane by analytic continuation, in cases where this is possible.

Related concepts

Related concepts include the moment-generating function and the probability-generating function. The characteristic function exists for all probability distributions. This is not the case for the moment-generating function.

The characteristic function is closely related to the Fourier transform: the characteristic function of a distribution with density function p is the complex conjugate of the continuous Fourier transform of p (according to the usual convention; see continuous Fourier transform – other conventions).

\varphi_X(t) = \left\langle e^{itX} \right\rangle = \int_{\mathbb{R}} e^{itx} p(x)\,dx = \overline{\left(\int_{\mathbb{R}} e^{-itx} p(x)\,dx\right)} = \overline{P(t)},

where P(t) denotes the continuous Fourier transform of the probability density function p(x). Likewise, p(x) may be recovered from \varphi_X(t) through the inverse Fourier transform:

p(x) = \frac{1}{2\pi}\int_{\mathbb{R}} e^{itx} P(t)\,dt = \frac{1}{2\pi}\int_{\mathbb{R}} e^{itx}\overline{\varphi_X(t)}\,dt.

Indeed, even when the random variable does not have a density, the characteristic function may be seen as the Fourier transform of the measure corresponding to the random variable.
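For a concrete check of the conjugation relation (a numerical sketch, SciPy assumed; the standard normal density is our example), one can compute P(t) by direct integration and compare its conjugate with the known characteristic function:

    import numpy as np
    from scipy.integrate import quad

    p = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # N(0, 1) density

    def fourier_p(t):
        # P(t) = Int e^{-itx} p(x) dx, split into real and imaginary parts.
        re, _ = quad(lambda x: np.cos(t * x) * p(x), -np.inf, np.inf)
        im, _ = quad(lambda x: -np.sin(t * x) * p(x), -np.inf, np.inf)
        return re + 1j * im

    t = 0.9
    print(np.conj(fourier_p(t)))   # should match phi_X(t)
    print(np.exp(-t**2 / 2))       # phi_X(t) for N(0, 1)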

Another related concept is the representation of probability distributions as elements of a reproducing kernel Hilbert space via the kernel embedding of distributions. This framework may be viewed as a generalization of the characteristic function under specific choices of the kernel function.


Notes and References

  1. W. T. Shaw and J. McCabe, "Monte Carlo sampling given a Characteristic Function: Quantile Mechanics in Momentum Space", arXiv:0903.1592 [q-fin.CP], 2009.
  2. Using 1 as the number of degrees of freedom recovers the Cauchy distribution.
  3. "Joint characteristic function", www.statlect.com. Retrieved 7 April 2018.