Gaussian integral explained

The Gaussian integral, also known as the Euler–Poisson integral, is the integral of the Gaussian function f(x) = e^{-x^2} over the entire real line. Named after the German mathematician Carl Friedrich Gauss, the integral is\int_{-\infty}^\infty e^{-x^2}\,dx = \sqrt{\pi}.

Abraham de Moivre originally discovered this type of integral in 1733, while Gauss published the precise integral in 1809.[1] The integral has a wide range of applications. For example, with a slight change of variables it is used to compute the normalizing constant of the normal distribution. The same integral with finite limits is closely related to both the error function and the cumulative distribution function of the normal distribution. In physics this type of integral appears frequently, for example, in quantum mechanics, to find the probability density of the ground state of the harmonic oscillator. This integral is also used in the path integral formulation, to find the propagator of the harmonic oscillator, and in statistical mechanics, to find its partition function.

Although no elementary function exists for the error function, as can be proven by the Risch algorithm,[2] the Gaussian integral can be solved analytically through the methods of multivariable calculus. That is, there is no elementary indefinite integral for\int e^{-x^2}\,dx,but the definite integral\int_{-\infty}^\infty e^{-x^2}\,dxcan be evaluated. The definite integral of an arbitrary Gaussian function is\int_{-\infty}^{\infty} e^{-a(x+b)^2}\,dx = \sqrt{\frac{\pi}{a}}.

Computation

By polar coordinates

A standard way to compute the Gaussian integral, the idea of which goes back to Poisson,[3] is to make use of the property that:

\left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)^2 = \int_{-\infty}^{\infty} e^{-x^2}\,dx \int_{-\infty}^{\infty} e^{-y^2}\,dy = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-\left(x^2+y^2\right)}\, dx\,dy.

Consider the function e^{-\left(x^2+y^2\right)} = e^{-r^2} on the plane \mathbb{R}^2, and compute its integral two ways:
  1. on the one hand, by double integration in the Cartesian coordinate system, its integral is a square: \left(\int e^{-x^2}\,dx\right)^2;
  2. on the other hand, by shell integration (a case of double integration in polar coordinates), its integral is computed to be \pi.
Comparing these two computations yields the integral, though one should take care about the improper integrals involved.

\begin{align} \iint_{\mathbb{R}^2} e^{-\left(x^2+y^2\right)}\,dx\,dy &= \int_0^{2\pi} \int_0^\infty e^{-r^2}\,r\,dr\,d\theta\\[6pt] &= 2\pi \int_0^\infty re^{-r^2}\,dr\\[6pt] &= 2\pi \int_{-\infty}^0 \tfrac{1}{2} e^s\,ds && s = -r^2\\[6pt] &= \pi \int_{-\infty}^0 e^s\,ds \\[6pt] &= \lim_{x\to-\infty}\pi \left(e^0 - e^x\right) \\[6pt] &=\pi, \end{align}where the factor of r is the Jacobian determinant which appears because of the transform to polar coordinates (r\,dr\,d\theta is the standard measure on the plane, expressed in polar coordinates), and the substitution involves taking s = -r^2, so ds = -2r\,dr.

Combining these yields\left(\int_{-\infty}^\infty e^{-x^2}\,dx \right)^2=\pi,so\int_{-\infty}^\infty e^{-x^2} \, dx = \sqrt{\pi}.
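The closed form can be checked numerically. The following sketch (standard library only; the truncation at |x| ≤ 10 and the step count are arbitrary choices) approximates the integral by the midpoint rule:

```python
import math

def gauss_integral(a=10.0, n=100_000):
    """Midpoint-rule approximation of the integral of exp(-x^2) over [-a, a].

    The tails beyond |x| = 10 contribute less than 1e-43, so truncating
    there is harmless at double precision.
    """
    h = 2 * a / n
    return sum(math.exp(-((-a + (k + 0.5) * h) ** 2)) for k in range(n)) * h

print(gauss_integral(), math.sqrt(math.pi))  # both ≈ 1.7724538509
```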

Complete proof

To justify the improper double integrals and equating the two expressions, we begin with an approximating function:I(a) = \int_{-a}^a e^{-x^2}\,dx.

If the integral\int_{-\infty}^\infty e^{-x^2} \, dxwere absolutely convergent, we would have that its Cauchy principal value, that is, the limit\lim_{a\to\infty} I(a),would coincide with\int_{-\infty}^\infty e^{-x^2}\,dx.To see that this is the case, consider that

\int_{-\infty}^\infty \left|e^{-x^2}\right| dx < \int_{-\infty}^{-1} -x e^{-x^2}\, dx + \int_{-1}^1 e^{-x^2}\, dx + \int_{1}^{\infty} x e^{-x^2}\, dx < \infty .

So we can compute\int_{-\infty}^\infty e^{-x^2} \, dxby just taking the limit\lim_{a\to\infty} I(a).

Taking the square of I(a) yields

\begin{align}I(a)^2 & = \left(\int_{-a}^a e^{-x^2}\, dx \right) \left(\int_{-a}^a e^{-y^2}\, dy \right) \\[6pt]& = \int_{-a}^a \left(\int_{-a}^a e^{-y^2}\, dy \right)\,e^{-x^2}\, dx \\[6pt]& = \int_{-a}^a \int_{-a}^a e^{-\left(x^2+y^2\right)}\,dy\,dx.\end{align}

Using Fubini's theorem, the above double integral can be seen as an area integral\iint_{[-a,a]\times[-a,a]} e^{-\left(x^2+y^2\right)}\,d(x,y),taken over a square with vertices \{(-a, a), (a, a), (a, -a), (-a, -a)\} on the xy-plane.

Since the exponential function is greater than 0 for all real numbers, it then follows that the integral taken over the square's incircle must be less than I(a)^2, and similarly the integral taken over the square's circumcircle must be greater than I(a)^2. The integrals over the two disks can easily be computed by switching from Cartesian coordinates to polar coordinates:

\begin{align}x & = r \cos \theta \\y & = r \sin\theta\end{align}\mathbf{J}(r, \theta) = \begin{bmatrix} \dfrac{\partial x}{\partial r} & \dfrac{\partial x}{\partial \theta}\\[1em] \dfrac{\partial y}{\partial r} & \dfrac{\partial y}{\partial \theta} \end{bmatrix}= \begin{bmatrix} \cos\theta & - r\sin \theta \\ \sin\theta & r\cos \theta\end{bmatrix}d(x,y) = |J(r, \theta)|\,d(r,\theta) = r\, d(r,\theta).\int_0^{2\pi} \int_0^a re^{-r^2} \, dr \, d\theta < I^2(a) < \int_0^{2\pi} \int_0^{a\sqrt{2}} re^{-r^2} \, dr\, d\theta.

(See the article on polar coordinates for help with the transformation from Cartesian coordinates.)

Integrating,\pi \left(1-e^{-a^2}\right) < I^2(a) < \pi \left(1 - e^{-2a^2}\right).

By the squeeze theorem, this gives the Gaussian integral\int_{-\infty}^\infty e^{-x^2}\, dx = \sqrt{\pi}.
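The squeeze bounds can also be verified numerically. In this sketch, I(a) is approximated by the midpoint rule (the step count is an arbitrary choice):

```python
import math

def I(a, n=200_000):
    # Midpoint rule for the truncated integral I(a) = ∫_{-a}^{a} exp(-x^2) dx
    h = 2 * a / n
    return sum(math.exp(-((-a + (k + 0.5) * h) ** 2)) for k in range(n)) * h

for a in (1.0, 2.0, 3.0):
    lower = math.pi * (1 - math.exp(-a * a))      # disk of radius a (incircle)
    upper = math.pi * (1 - math.exp(-2 * a * a))  # disk of radius a·sqrt(2) (circumcircle)
    assert lower < I(a) ** 2 < upper
```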

By Cartesian coordinates

A different technique, which goes back to Laplace (1812), is the following. Let\begin{align}y & = xs \\dy & = x\,ds.\end{align}

Since the limits on s as y \to \pm\infty depend on the sign of x, it simplifies the calculation to use the fact that e^{-x^2} is an even function, and, therefore, the integral over all real numbers is just twice the integral from zero to infinity. That is,

\int_{-\infty}^{\infty} e^{-x^2} \, dx = 2\int_{0}^{\infty} e^{-x^2}\,dx.

Thus, over the range of integration, x \ge 0, and the variables y and s have the same limits. This yields:\begin{align}I^2 &= 4 \int_0^\infty \int_0^\infty e^{-\left(x^2 + y^2\right)} dy\,dx \\[6pt]&= 4 \int_0^\infty \left(\int_0^\infty e^{-\left(x^2 + y^2\right)} \, dy \right) \, dx \\[6pt]&= 4 \int_0^\infty \left(\int_0^\infty e^{-x^2\left(1+s^2\right)} x\,ds \right) \, dx \\[6pt]\end{align}Then, using Fubini's theorem to switch the order of integration:\begin{align}I^2 &= 4 \int_0^\infty \left(\int_0^\infty e^{-x^2\left(1+s^2\right)} x \, dx \right) \, ds \\[6pt]&= 4 \int_0^\infty \left[\frac{e^{-x^2\left(1+s^2\right)} }{-2 \left(1+s^2\right)} \right]_{x=0}^{x=\infty} \, ds \\[6pt]&= 4 \left(\frac{1}{2} \int_0^\infty \frac{ds}{1+s^2} \right) \\[6pt]&= 2 \arctan(s)\Big |_0^\infty \\[6pt]&= \pi.\end{align}

Therefore, I = \sqrt{\pi}, as expected.
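The key step in the derivation above, the inner integral \int_0^\infty e^{-x^2(1+s^2)}\,x\,dx = \frac{1}{2(1+s^2)}, is easy to confirm numerically (a sketch; the truncation point and step count are arbitrary choices):

```python
import math

def inner_integral(s, X=10.0, n=100_000):
    """Midpoint rule for ∫_0^X x·exp(-x^2(1+s^2)) dx; the tail beyond X = 10 is negligible."""
    h = X / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += x * math.exp(-x * x * (1 + s * s))
    return total * h

# The inner x-integral equals 1/(2(1+s^2)) for every s, which is what makes
# the remaining s-integral 2·arctan(s) evaluated from 0 to ∞, i.e. π.
for s in (0.0, 0.5, 2.0):
    assert abs(inner_integral(s) - 1.0 / (2.0 * (1.0 + s * s))) < 1e-8
```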

In Laplace approximation, we deal only with up to second-order terms in the Taylor expansion, so we consider e^{-x^2} \approx 1-x^2 \approx \left(1+x^2\right)^{-1}.

In fact, since (1+t)e^{-t} \leq 1 for all t, we have the exact bounds:1-x^2 \leq e^{-x^2} \leq \left(1+x^2\right)^{-1}Then we can do the bound at the Laplace approximation limit:\int_{[-1,1]} \left(1-x^2\right)^n dx \leq \int_{[-1,1]} e^{-nx^2}\, dx \leq \int_{[-1,1]} \left(1+x^2\right)^{-n} dx

That is, substituting x \mapsto x/\sqrt{n}, using evenness, and enlarging the domain of the right-hand integral to [0,\infty) (which preserves the upper bound),2\sqrt{n}\int_0^1 \left(1-x^2\right)^n dx \leq \int_{-\sqrt{n}}^{\sqrt{n}} e^{-x^2}\, dx \leq 2\sqrt{n}\int_0^\infty \left(1+x^2\right)^{-n} dx

By trigonometric substitution, we exactly compute those two bounds:2\sqrt{n}\,\frac{(2n)!!}{(2n+1)!!}and2\sqrt{n}\left(\frac{\pi}{2}\right)\frac{(2n-3)!!}{(2n-2)!!}.

By taking the square root of the Wallis formula,\frac{\pi}{2} = \prod_{n=1}^{\infty} \frac{(2n)(2n)}{(2n-1)(2n+1)},we have\sqrt{\pi} = \lim_{n\to\infty} 2\sqrt{n}\,\frac{(2n)!!}{(2n+1)!!},the desired lower bound limit. Similarly, we can get the desired upper bound limit. Conversely, if we first compute the integral with one of the other methods above, we would obtain a proof of the Wallis formula.
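The two double-factorial bounds can be evaluated for increasing n to watch them squeeze \sqrt{\pi} (a sketch; the ratios are accumulated as floating-point products to avoid huge integers, and the chosen values of n are arbitrary):

```python
import math

def wallis_bounds(n):
    # lower bound: 2·sqrt(n)·(2n)!!/(2n+1)!!  =  2·sqrt(n)·∏_{k=1}^{n} 2k/(2k+1)
    lower = 2 * math.sqrt(n) * math.prod((2 * k) / (2 * k + 1) for k in range(1, n + 1))
    # upper bound: 2·sqrt(n)·(π/2)·(2n-3)!!/(2n-2)!!  =  sqrt(n)·π·∏_{k=1}^{n-1} (2k-1)/(2k)
    upper = math.sqrt(n) * math.pi * math.prod((2 * k - 1) / (2 * k) for k in range(1, n))
    return lower, upper

for n in (10, 100, 10_000):
    lo, hi = wallis_bounds(n)
    assert lo < math.sqrt(math.pi) < hi  # both bounds converge to sqrt(π) ≈ 1.77245
```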

Relation to the gamma function

The integrand is an even function, so\int_{-\infty}^{\infty} e^{-x^2}\, dx = 2 \int_0^\infty e^{-x^2}\, dx.

Thus, after the change of variable x = \sqrt{t}, this turns into the Euler integral

2 \int_0^\infty e^{-x^2}\, dx = 2\int_0^\infty \frac{1}{2}\ e^{-t}\ t^{-\frac{1}{2}}\, dt = \Gamma\left(\frac{1}{2}\right) = \sqrt{\pi}

where \Gamma(z) = \int_0^\infty t^{z-1} e^{-t}\, dt is the gamma function. This shows why the factorial of a half-integer is a rational multiple of \sqrt{\pi}. More generally,\int_0^\infty x^n e^{-ax^b}\, dx = \frac{\Gamma\left(\frac{n+1}{b}\right)}{b\,a^{\frac{n+1}{b}}}, which can be obtained by substituting t = ax^b in the integrand of the gamma function to get \Gamma(z) = a^z b \int_0^{\infty} x^{bz-1} e^{-ax^b}\, dx.
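Both the identity \Gamma(1/2) = \sqrt{\pi} and the more general formula can be spot-checked numerically (a sketch; the chosen n, a, b and the quadrature parameters are arbitrary choices):

```python
import math

# Γ(1/2) = sqrt(π) is the Gaussian integral restated
assert math.isclose(math.gamma(0.5), math.sqrt(math.pi))

def lhs(n, a, b, X=10.0, steps=400_000):
    # Midpoint rule for ∫_0^X x^n exp(-a x^b) dx (the tail beyond X is negligible here)
    h = X / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        total += x ** n * math.exp(-a * x ** b)
    return total * h

n, a, b = 2, 1.5, 2
rhs = math.gamma((n + 1) / b) / (b * a ** ((n + 1) / b))
assert abs(lhs(n, a, b) - rhs) < 1e-6
```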

Generalizations

The integral of a Gaussian function

See main article: Integral of a Gaussian function. The integral of an arbitrary Gaussian function is\int_{-\infty}^{\infty} e^{-a(x+b)^2}\,dx = \sqrt{\frac{\pi}{a}}.

An alternative form is\int_{-\infty}^{\infty} e^{-ax^2+bx+c}\,dx = \sqrt{\frac{\pi}{a}}\,e^{\frac{b^2}{4a}+c}.

This form is useful for calculating expectations of some continuous probability distributions related to the normal distribution, such as the log-normal distribution, for example.
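The alternative form lends itself to a direct numerical check (a sketch; the coefficients a, b, c and the quadrature parameters are arbitrary choices):

```python
import math

def gauss_poly(a, b, c, L=12.0, n=400_000):
    """Midpoint rule for ∫ exp(-a x^2 + b x + c) dx over [mu - L/sqrt(a), mu + L/sqrt(a)],
    where mu = b/(2a) is the peak of the integrand; tails beyond that window are negligible."""
    mu = b / (2 * a)
    half = L / math.sqrt(a)
    h = 2 * half / n
    total = 0.0
    for k in range(n):
        x = mu - half + (k + 0.5) * h
        total += math.exp(-a * x * x + b * x + c)
    return total * h

a, b, c = 0.7, 1.3, -0.2
exact = math.sqrt(math.pi / a) * math.exp(b * b / (4 * a) + c)
assert abs(gauss_poly(a, b, c) - exact) < 1e-7
```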

Complex form

See main article: Fresnel integral. \int_{-\infty}^{\infty} e^{\frac{1}{2}it^2}\, dt = e^{\frac{i\pi}{4}} \sqrt{2\pi}and more generally,\int_{\mathbb{R}^N} e^{\frac{i}{2}x^\mathsf{T}Ax}\,dx = \det(A)^{-\frac{1}{2}} \left(e^{\frac{i\pi}{4}} \sqrt{2\pi}\right)^Nfor any positive-definite symmetric matrix A.

n-dimensional and functional generalization

See main article: multivariate normal distribution. Suppose A is a symmetric positive-definite (hence invertible) precision matrix, which is the matrix inverse of the covariance matrix. Then,

\int_{\mathbb{R}^n} \exp\left(-\frac{1}{2}\sum_{i,j=1}^{n} A_{ij} x_i x_j\right) \, d^n x = \int_{\mathbb{R}^n} \exp\left(-\frac{1}{2} x^\mathsf{T} A x\right) \, d^n x = \sqrt{\frac{(2\pi)^n}{\det A}} =\sqrt{\frac{1}{\det(A/2\pi)}} =\sqrt{\det\left(2\pi A^{-1}\right)}By completing the square, this generalizes to\int_{\mathbb{R}^n} \exp\left(-\frac{1}{2} x^\mathsf{T} A x + b^\mathsf{T} x\right) \, d^n x = \sqrt{\frac{(2\pi)^n}{\det A}}\, e^{\frac{1}{2} b^\mathsf{T} A^{-1} b}

This fact is applied in the study of the multivariate normal distribution.
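For a concrete instance, a two-dimensional midpoint-rule sum reproduces \sqrt{(2\pi)^n/\det A} (a sketch; the matrix A and the grid parameters are arbitrary choices):

```python
import math

# Toy positive-definite symmetric matrix; det A = 2·1 - 0.6² = 1.64
A = [[2.0, 0.6], [0.6, 1.0]]
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]

def quad2d(L=8.0, n=1_500):
    # 2-D midpoint rule for ∫∫ exp(-½ xᵀAx) dx dy over the square [-L, L]²;
    # the smallest eigenvalue of A is ≈ 0.72, so the tails beyond the square are negligible
    h = 2 * L / n
    total = 0.0
    for i in range(n):
        x = -L + (i + 0.5) * h
        for j in range(n):
            y = -L + (j + 0.5) * h
            q = A[0][0] * x * x + 2 * A[0][1] * x * y + A[1][1] * y * y
            total += math.exp(-0.5 * q)
    return total * h * h

exact = math.sqrt((2 * math.pi) ** 2 / det_A)
assert abs(quad2d() - exact) < 1e-3
```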

Also,\int x_{k_1}\cdots x_{k_{2N}} \, \exp\left(-\frac{1}{2}\sum_{i,j=1}^{n} A_{ij} x_i x_j\right) \, d^n x =\sqrt{\frac{(2\pi)^n}{\det A}} \, \frac{1}{2^N N!} \, \sum_{\sigma \in S_{2N}}\left(A^{-1}\right)_{k_{\sigma(1)}k_{\sigma(2)}} \cdots \left(A^{-1}\right)_{k_{\sigma(2N-1)}k_{\sigma(2N)}}where \sigma is a permutation of \{1, \dots, 2N\} and the extra factor on the right-hand side is the sum over all combinatorial pairings of \{1, \dots, 2N\} of N copies of A^{-1}.
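In one dimension, where A is the 1×1 matrix (\alpha), every pairing contributes \alpha^{-N} and there are (2N-1)!! pairings, so the formula reduces to \int x^{2N} e^{-\alpha x^2/2}\,dx = \sqrt{2\pi/\alpha}\,(2N-1)!!/\alpha^N. This special case can be checked numerically (a sketch; \alpha and the quadrature parameters are arbitrary choices):

```python
import math

def moment(N, alpha, L=12.0, n=400_000):
    # Midpoint rule for ∫ x^(2N) exp(-alpha x^2 / 2) dx over [-L/sqrt(alpha), L/sqrt(alpha)]
    half = L / math.sqrt(alpha)
    h = 2 * half / n
    total = 0.0
    for k in range(n):
        x = -half + (k + 0.5) * h
        total += x ** (2 * N) * math.exp(-alpha * x * x / 2)
    return total * h

alpha = 2.0
for N in (1, 2, 3):
    pairings = math.prod(range(2 * N - 1, 0, -2))  # (2N-1)!! perfect pairings of 2N points
    exact = math.sqrt(2 * math.pi / alpha) * pairings / alpha ** N
    assert abs(moment(N, alpha) - exact) < 1e-6
```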

Alternatively,[4]

\int f(\vec{x}) \exp\left(-\frac{1}{2}\sum_{i,j=1}^{n} A_{ij} x_i x_j\right) d^n x = \sqrt{\frac{(2\pi)^n}{\det A}} \, \left. \exp\left(\frac{1}{2}\sum_{i,j=1}^{n} \left(A^{-1}\right)_{ij} \frac{\partial}{\partial x_i} \frac{\partial}{\partial x_j}\right) f(\vec{x})\right|_{\vec{x}=0}

for some analytic function f, provided it satisfies some appropriate bounds on its growth and some other technical criteria. (It works for some functions and fails for others. Polynomials are fine.) The exponential over a differential operator is understood as a power series.

While functional integrals have no rigorous definition (or even a nonrigorous computational one in most cases), we can define a Gaussian functional integral in analogy to the finite-dimensional case. There is still the problem, though, that (2\pi)^\infty is infinite, and the functional determinant would also be infinite in general. This can be taken care of if we only consider ratios:

\begin{align}& \frac{\displaystyle\int f(x_1)\cdots f(x_{2N}) \exp\left[-\frac{1}{2}\iint A(x,y)f(x)f(y)\,d^dx\,d^dy\right] \mathcal{D}f}{\displaystyle\int \exp\left[-\frac{1}{2}\iint A(x,y)f(x)f(y)\,d^dx\,d^dy\right] \mathcal{D}f} \\[6pt]= {} & \frac{1}{2^N N!}\sum_{\sigma \in S_{2N}} A^{-1}(x_{\sigma(1)},x_{\sigma(2)})\cdots A^{-1}(x_{\sigma(2N-1)},x_{\sigma(2N)}).\end{align}

In the DeWitt notation, the equation looks identical to the finite-dimensional case.

n-dimensional with linear term

If A is again a symmetric positive-definite matrix, then (assuming all x_i are column vectors)\int \exp\left(-\frac{1}{2}\sum_{i,j=1}^{n} A_{ij} x_i x_j+\sum_{i=1}^{n} B_i x_i\right) d^n x=\int e^{-\frac{1}{2}\vec{x}^\mathsf{T}\mathbf{A}\vec{x}+\vec{B}^\mathsf{T}\vec{x}} \, d^n x= \sqrt{\frac{(2\pi)^n}{\det \mathbf{A}}}\,e^{\frac{1}{2}\vec{B}^\mathsf{T}\mathbf{A}^{-1}\vec{B}}.

Integrals of similar form

\int_0^\infty x^{2n} e^{-\frac{x^2}{a^2}}\,dx = \sqrt{\pi}\,\frac{a^{2n+1}(2n-1)!!}{2^{n+1}}\int_0^\infty x^{2n+1} e^{-\frac{x^2}{a^2}}\,dx = \frac{n!}{2}\, a^{2n+2}\int_0^\infty x^{2n} e^{-bx^2}\,dx = \frac{(2n-1)!!}{(2b)^n}\,\frac{1}{2} \sqrt{\frac{\pi}{b}}\int_0^\infty x^{2n+1} e^{-bx^2}\,dx = \frac{n!}{2b^{n+1}}\int_0^\infty x^{n} e^{-bx^2}\,dx = \frac{\Gamma\left(\frac{n+1}{2}\right)}{2b^{\frac{n+1}{2}}}where n is a positive integer.

An easy way to derive these is by differentiating under the integral sign.

\begin{align}\int_{-\infty}^\infty x^{2n} e^{-\alpha x^2}\,dx&= \left(-1\right)^n\int_{-\infty}^\infty \frac{\partial^n}{\partial \alpha^n} e^{-\alpha x^2}\,dx \\&= \left(-1\right)^n\frac{\partial^n}{\partial \alpha^n} \int_{-\infty}^\infty e^{-\alpha x^2}\,dx\\[6pt]&= \sqrt{\pi}\, \left(-1\right)^n\frac{\partial^n}{\partial \alpha^n}\alpha^{-\frac{1}{2}} \\&= \sqrt{\frac{\pi}{\alpha}}\,\frac{(2n-1)!!}{\left(2\alpha\right)^n}\end{align}

One could also integrate by parts and find a recurrence relation to solve this.
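The half-line formulas above can be spot-checked numerically as well (a sketch; b and the quadrature parameters are arbitrary choices):

```python
import math

def half_line(p, b, X=10.0, n=200_000):
    # Midpoint rule for ∫_0^X x^p exp(-b x^2) dx (the tail beyond X = 10 is negligible)
    h = X / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += x ** p * math.exp(-b * x * x)
    return total * h

b = 1.3
for m in (0, 1, 2):
    # odd powers:  ∫_0^∞ x^(2m+1) e^{-b x²} dx = m! / (2 b^(m+1))
    assert abs(half_line(2 * m + 1, b) - math.factorial(m) / (2 * b ** (m + 1))) < 1e-6
    # even powers: ∫_0^∞ x^(2m) e^{-b x²} dx = (2m-1)!!/(2b)^m · ½·sqrt(π/b)
    dfact = math.prod(range(2 * m - 1, 0, -2))  # empty product gives 1 when m = 0
    assert abs(half_line(2 * m, b) - dfact / (2 * b) ** m * 0.5 * math.sqrt(math.pi / b)) < 1e-6
```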

Higher-order polynomials

Applying a linear change of basis shows that the integral of the exponential of a homogeneous polynomial in n variables may depend only on SL(n)-invariants of the polynomial. One such invariant is the discriminant, the zeros of which mark the singularities of the integral. However, the integral may also depend on other invariants.[5]

Exponentials of other even polynomials can numerically be solved using series. These may be interpreted as formal calculations when there is no convergence. For example, the solution to the integral of the exponential of a quartic polynomial is

\int_{-\infty}^{\infty} e^{ax^4+bx^3+cx^2+dx+f}\,dx = \frac{1}{2} e^f \sum_{\substack{n,m,p \ge 0 \\ n+p=0 \bmod 2}}^{\infty} \frac{b^n}{n!} \frac{c^m}{m!} \frac{d^p}{p!} \frac{\Gamma\left(\frac{3n+2m+p+1}{4}\right)}{(-a)^{\frac{3n+2m+p+1}{4}}}.

The n+p=0 \bmod 2 requirement is because the integral from −∞ to 0 contributes a factor of (-1)^{n+p}/2 to each term, while the integral from 0 to +∞ contributes a factor of 1/2 to each term. These integrals turn up in subjects such as quantum field theory.


Notes and References

  1. Stahl, Saul (April 2006). "The Evolution of the Normal Distribution". MAA.org. Retrieved May 25, 2018.
  2. Cherry, G. W. (1985). "Integration in Finite Terms with Special Functions: the Error Function". Journal of Symbolic Computation. 1 (3): 283–302. doi:10.1016/S0747-7171(85)80037-7.
  3. Lee, Peter M. "The Probability Integral".
  4. "Reference for Multidimensional Gaussian Integral". March 30, 2012.
  5. Morozov, A.; Shakirov, Sh. (2009). "Introduction to integral discriminants". Journal of High Energy Physics. 2009 (12): 002. doi:10.1088/1126-6708/2009/12/002. arXiv:0903.2595.