Laplace's method explained

In mathematics, Laplace's method, named after Pierre-Simon Laplace, is a technique used to approximate integrals of the form

$$\int_a^b e^{Mf(x)}\,dx,$$

where $f(x)$ is a twice-differentiable function, $M$ is a large number, and the endpoints $a$ and $b$ could be infinite. The technique was originally presented by Laplace.

In Bayesian statistics, Laplace's approximation can refer to either approximating the posterior normalizing constant with Laplace's method or approximating the posterior distribution with a Gaussian centered at the maximum a posteriori estimate.[1][2] Laplace approximations are used in the integrated nested Laplace approximations method for fast approximations of Bayesian inference.
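As an illustration of the second usage, here is a minimal sketch (not from the article; the Beta shape parameters are hypothetical) that approximates the normalizing constant of a Beta posterior by a Gaussian integral centered at the maximum a posteriori (MAP) point, where the unnormalized log-density plays the role of $Mf(x)$:

```python
import math

# Hypothetical Beta(a, b) posterior; its unnormalized log-density plays
# the role of M*f(x) in Laplace's method.
a, b = 12.0, 5.0

log_p = lambda t: (a - 1) * math.log(t) + (b - 1) * math.log(1 - t)
t_map = (a - 1) / (a + b - 2)                             # MAP point (mode)
curv = -(a - 1) / t_map**2 - (b - 1) / (1 - t_map)**2     # log_p''(t_map) < 0

# Laplace estimate of the normalizing constant vs. the exact Beta function
laplace_Z = math.exp(log_p(t_map)) * math.sqrt(2 * math.pi / -curv)
exact_Z = math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))
print(laplace_Z, exact_Z)   # the two agree to a few percent here
```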

Concept

Suppose the function $f(x)$ has a unique global maximum at $x_0$. Let $M>0$ be a constant and consider the following two functions:

$$\begin{align} g(x)&=Mf(x),\\ h(x)&=e^{Mf(x)}. \end{align}$$

Note that $x_0$ will be the global maximum of $g$ and $h$ as well. Now observe:

$$\begin{align} \frac{g(x_0)}{g(x)} &= \frac{Mf(x_0)}{Mf(x)} = \frac{f(x_0)}{f(x)},\\[4pt] \frac{h(x_0)}{h(x)} &= \frac{e^{Mf(x_0)}}{e^{Mf(x)}} = e^{M(f(x_0)-f(x))}. \end{align}$$

As $M$ increases, the ratio for $h$ will grow exponentially, while the ratio for $g$ does not change. Thus, significant contributions to the integral of this function will come only from points $x$ in a neighbourhood of $x_0$, which can then be estimated.
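A minimal numerical sketch of this concentration effect (assuming $f(x)=-x^2$, so $x_0=0$): the fraction of the mass of $h(x)=e^{Mf(x)}$ lying in a fixed neighbourhood of the maximum tends to 1 as $M$ grows.

```python
import numpy as np

# f(x) = -x**2 has its maximum at x0 = 0.
f = lambda x: -x**2
x = np.linspace(-3, 3, 6001)
dx = x[1] - x[0]

for M in (1, 10, 100):
    h = np.exp(M * f(x))
    total = h.sum() * dx
    near = h[np.abs(x) < 0.5].sum() * dx   # mass within 0.5 of the maximum
    print(M, near / total)
# The printed fraction tends to 1: for large M essentially all of the
# integral comes from a shrinking neighbourhood of x0.
```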

General theory

To state and motivate the method, one must make several assumptions. It is assumed that $x_0$ is not an endpoint of the interval of integration and that the values $f(x)$ cannot be very close to $f(x_0)$ unless $x$ is close to $x_0$.

$f(x)$ can be expanded around $x_0$ by Taylor's theorem,

$$f(x)=f(x_0)+f'(x_0)(x-x_0)+\frac{1}{2}f''(x_0)(x-x_0)^2+R,$$

where $R=O\left((x-x_0)^3\right)$ (see: big O notation).

Since $f$ has a global maximum at $x_0$, and $x_0$ is not an endpoint, it is a stationary point, i.e. $f'(x_0)=0$. Therefore, the second-order Taylor polynomial approximating $f(x)$ is

$$f(x)\approx f(x_0)+\frac{1}{2}f''(x_0)(x-x_0)^2.$$

Then, just one more step is needed to get a Gaussian distribution. Since $x_0$ is a global maximum of the function $f$, it can be stated, by definition of the second derivative, that $f''(x_0)\le 0$, thus giving the relation

$$f(x)\approx f(x_0)-\frac{1}{2}|f''(x_0)|(x-x_0)^2$$

for $x$ close to $x_0$. The integral can then be approximated with:

$$\int_a^b e^{Mf(x)}\,dx \approx e^{Mf(x_0)}\int_a^b e^{-\frac{1}{2}M|f''(x_0)|(x-x_0)^2}\,dx.$$

If $f''(x_0)<0$, this latter integral becomes a Gaussian integral if we replace the limits of integration by $-\infty$ and $+\infty$; when $M$ is large this creates only a small error because the exponential decays very fast away from $x_0$. Computing this Gaussian integral we obtain:

$$\int_a^b e^{Mf(x)}\,dx \approx \sqrt{\frac{2\pi}{M|f''(x_0)|}}\,e^{Mf(x_0)} \quad \text{as } M\to\infty.$$

A generalization of this method and an extension to arbitrary precision are available in the literature.
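A minimal numerical sketch of the formula above (assuming $f(x)=\sin x$ on $[0,\pi]$, so $x_0=\pi/2$ and $f''(x_0)=-1$), comparing it with direct quadrature:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.sin(x)
x0, fpp = np.pi / 2, -1.0   # maximum of sin(x) on [0, pi] and f''(x0)

for M in (5, 50, 500):
    exact, _ = quad(lambda x: np.exp(M * f(x)), 0, np.pi)
    laplace = np.sqrt(2 * np.pi / (M * abs(fpp))) * np.exp(M * f(x0))
    print(M, laplace / exact)   # ratio tends to 1 as M grows
```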

Formal statement and proof

Suppose $f(x)$ is a twice continuously differentiable function on $[a,b]$, and there exists a unique point $x_0\in(a,b)$ such that:

$$f(x_0)=\max_{x\in[a,b]} f(x) \quad\text{and}\quad f''(x_0)<0.$$

Then:

$$\lim_{n\to\infty} \frac{\int_a^b e^{nf(x)}\,dx}{e^{nf(x_0)}\sqrt{\frac{2\pi}{n\left(-f''(x_0)\right)}}} = 1.$$

Lower bound: Let $\varepsilon>0$. Since $f''$ is continuous there exists $\delta>0$ such that if $|x_0-c|<\delta$ then $f''(c)\ge f''(x_0)-\varepsilon$. By Taylor's theorem, for any $x\in(x_0-\delta,x_0+\delta)$,

$$f(x)\ge f(x_0)+\frac{1}{2}(f''(x_0)-\varepsilon)(x-x_0)^2.$$
Then we have the following lower bound:

$$\begin{align} \int_a^b e^{nf(x)}\,dx &\ge \int_{x_0-\delta}^{x_0+\delta} e^{nf(x)}\,dx\\ &\ge e^{nf(x_0)} \int_{x_0-\delta}^{x_0+\delta} e^{\frac{n}{2}(f''(x_0)-\varepsilon)(x-x_0)^2}\,dx\\ &= e^{nf(x_0)} \sqrt{\frac{1}{n(-f''(x_0)+\varepsilon)}} \int_{-\delta\sqrt{n(-f''(x_0)+\varepsilon)}}^{\delta\sqrt{n(-f''(x_0)+\varepsilon)}} e^{-\frac{y^2}{2}}\,dy \end{align}$$

where the last equality was obtained by a change of variables

$$y=\sqrt{n(-f''(x_0)+\varepsilon)}\,(x-x_0).$$

Remember $f''(x_0)<0$, so we can take the square root of its negation.

If we divide both sides of the above inequality by

$$e^{nf(x_0)}\sqrt{\frac{2\pi}{n(-f''(x_0))}}$$

and take the limit we get:

$$\lim_{n\to\infty} \frac{\int_a^b e^{nf(x)}\,dx}{e^{nf(x_0)}\sqrt{\frac{2\pi}{n(-f''(x_0))}}} \ge \lim_{n\to\infty} \frac{1}{\sqrt{2\pi}} \int_{-\delta\sqrt{n(-f''(x_0)+\varepsilon)}}^{\delta\sqrt{n(-f''(x_0)+\varepsilon)}} e^{-\frac{y^2}{2}}\,dy \cdot \sqrt{\frac{-f''(x_0)}{-f''(x_0)+\varepsilon}} = \sqrt{\frac{-f''(x_0)}{-f''(x_0)+\varepsilon}}$$

Since this is true for arbitrary $\varepsilon$ we get the lower bound:

$$\lim_{n\to\infty} \frac{\int_a^b e^{nf(x)}\,dx}{e^{nf(x_0)}\sqrt{\frac{2\pi}{n(-f''(x_0))}}} \ge 1.$$

Note that this proof works also when $a=-\infty$ or $b=\infty$ (or both).

Upper bound: The proof is similar to that of the lower bound but there are a few inconveniences. Again we start by picking an $\varepsilon>0$, but in order for the proof to work we need $\varepsilon$ small enough so that $f''(x_0)+\varepsilon<0$. Then, as above, by continuity of $f''$ and Taylor's theorem we can find $\delta>0$ so that if $|x-x_0|<\delta$, then

$$f(x)\le f(x_0)+\frac{1}{2}(f''(x_0)+\varepsilon)(x-x_0)^2.$$

Lastly, by our assumptions (assuming $a,b$ are finite) there exists an $\eta>0$ such that if $|x-x_0|\ge\delta$, then $f(x)\le f(x_0)-\eta$.

Then we can calculate the following upper bound:

$$\begin{align} \int_a^b e^{nf(x)}\,dx &\le \int_a^{x_0-\delta} e^{nf(x)}\,dx + \int_{x_0-\delta}^{x_0+\delta} e^{nf(x)}\,dx + \int_{x_0+\delta}^b e^{nf(x)}\,dx\\ &\le (b-a)e^{n(f(x_0)-\eta)} + \int_{x_0-\delta}^{x_0+\delta} e^{nf(x)}\,dx\\ &\le (b-a)e^{n(f(x_0)-\eta)} + e^{nf(x_0)} \int_{x_0-\delta}^{x_0+\delta} e^{\frac{n}{2}(f''(x_0)+\varepsilon)(x-x_0)^2}\,dx\\ &\le (b-a)e^{n(f(x_0)-\eta)} + e^{nf(x_0)} \int_{-\infty}^{+\infty} e^{\frac{n}{2}(f''(x_0)+\varepsilon)(x-x_0)^2}\,dx\\ &\le (b-a)e^{n(f(x_0)-\eta)} + e^{nf(x_0)}\sqrt{\frac{2\pi}{n(-f''(x_0)-\varepsilon)}} \end{align}$$

If we divide both sides of the above inequality by

$$e^{nf(x_0)}\sqrt{\frac{2\pi}{n(-f''(x_0))}}$$

and take the limit we get:

$$\lim_{n\to\infty} \frac{\int_a^b e^{nf(x)}\,dx}{e^{nf(x_0)}\sqrt{\frac{2\pi}{n(-f''(x_0))}}} \le \lim_{n\to\infty} \left[(b-a)\,e^{-n\eta}\sqrt{\frac{n(-f''(x_0))}{2\pi}} + \sqrt{\frac{-f''(x_0)}{-f''(x_0)-\varepsilon}}\right] = \sqrt{\frac{-f''(x_0)}{-f''(x_0)-\varepsilon}}$$

Since $\varepsilon$ is arbitrary we get the upper bound:

$$\lim_{n\to\infty} \frac{\int_a^b e^{nf(x)}\,dx}{e^{nf(x_0)}\sqrt{\frac{2\pi}{n(-f''(x_0))}}} \le 1.$$

And combining this with the lower bound gives the result.

Note that the above proof obviously fails when $a=-\infty$ or $b=\infty$ (or both). To deal with these cases, we need some extra assumptions. A sufficient (not necessary) assumption is that for $n=1$,

$$\int_a^b e^{nf(x)}\,dx<\infty,$$

and that the number $\eta$ as above exists (note that this must be an assumption in the case when the interval $[a,b]$ is infinite). The proof proceeds otherwise as above, but with a slightly different approximation of integrals:
$$\int_a^{x_0-\delta} e^{nf(x)}\,dx + \int_{x_0+\delta}^b e^{nf(x)}\,dx \le \int_a^b e^{f(x)}\,e^{(n-1)(f(x_0)-\eta)}\,dx = e^{(n-1)(f(x_0)-\eta)} \int_a^b e^{f(x)}\,dx.$$

When we divide by

$$e^{nf(x_0)}\sqrt{\frac{2\pi}{n(-f''(x_0))}},$$

we get for this term

$$\frac{e^{(n-1)(f(x_0)-\eta)} \int_a^b e^{f(x)}\,dx}{e^{nf(x_0)}\sqrt{\frac{2\pi}{n(-f''(x_0))}}} = e^{-(n-1)\eta}\,\sqrt{n}\; e^{-f(x_0)} \int_a^b e^{f(x)}\,dx\; \sqrt{\frac{-f''(x_0)}{2\pi}},$$

whose limit as $n\to\infty$ is $0$. The rest of the proof (the analysis of the interesting term) proceeds as above.

The given condition in the infinite interval case is, as said above, sufficient but not necessary. However, the condition is fulfilled in many, if not most, applications: the condition simply says that the integral we are studying must be well-defined (not infinite) and that the maximum of the function at $x_0$ must be a "true" maximum (the number $\eta>0$ must exist). There is no need to demand that the integral is finite for $n=1$; it is enough to demand that the integral is finite for some $n=N$.
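A minimal numerical sketch of the infinite-interval case (assuming $f(x)=-\cosh x$ on $(-\infty,\infty)$, so $x_0=0$, $f(x_0)=-1$, $f''(x_0)=-1$): here the integral is known in closed form, $\int_{-\infty}^{\infty} e^{-n\cosh x}\,dx = 2K_0(n)$, a modified Bessel function, and the sufficient condition above holds for $n=1$.

```python
import numpy as np
from scipy.special import k0e   # scaled Bessel function: k0e(n) = e**n * K0(n)

for n in (1, 10, 100, 1000):
    # Laplace estimate divided by the exact value, with e**(-n) factored
    # out of both to avoid underflow for large n.
    ratio = np.sqrt(2 * np.pi / n) / (2 * k0e(n))
    print(n, ratio)   # tends to 1 as n -> infinity, as the theorem states
```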

This method relies on four basic concepts:

1. Relative error

The "approximation" in this method is related to the relative error and not the absolute error. Therefore, if we set

$$s=\sqrt{\frac{2\pi}{M\left|f''(x_0)\right|}},$$

the integral can be written as

$$\begin{align} \int_a^b e^{Mf(x)}\,dx &= se^{Mf(x_0)}\,\frac{1}{s}\int_a^b e^{M(f(x)-f(x_0))}\,dx\\ &= se^{Mf(x_0)} \int_{\frac{a-x_0}{s}}^{\frac{b-x_0}{s}} e^{M(f(sy+x_0)-f(x_0))}\,dy, \end{align}$$

where $s$ is a small number when $M$ is large, and the relative error is

$$\left|\int_{\frac{a-x_0}{s}}^{\frac{b-x_0}{s}} e^{M(f(sy+x_0)-f(x_0))}\,dy - 1\right|.$$

Now, let us separate this integral into two parts: the region $y\in[-D_y,D_y]$ and the rest.

2. $e^{M(f(sy+x_0)-f(x_0))} \to e^{-\pi y^2}$ around the stationary point when $M$ is large enough

Let us look at the Taylor expansion of $M(f(x)-f(x_0))$ around $x_0$ and translate $x$ to $y$, because we do the comparison in $y$-space:

$$M(f(x)-f(x_0)) = \frac{Mf''(x_0)}{2}s^2y^2 + \frac{Mf'''(x_0)}{6}s^3y^3 + \cdots = -\pi y^2 + O\left(\frac{1}{\sqrt{M}}\right).$$

Note that $f'(x_0)=0$ because $x_0$ is a stationary point. From this equation we find that the terms beyond the second derivative in this Taylor expansion are suppressed as the order of $\tfrac{1}{\sqrt{M}}$, so that $\exp(M(f(x)-f(x_0)))$ will get closer to the Gaussian function. Besides,

$$\int_{-\infty}^{\infty} e^{-\pi y^2}\,dy = 1.$$

3. The larger $M$ is, the smaller the related range of $x$

Because we do the comparison in $y$-space, $y$ is fixed in $y\in[-D_y,D_y]$, which corresponds to $x\in[-sD_y,sD_y]$; since $s$ is inversely proportional to $\sqrt{M}$, the chosen region of $x$ will be smaller when $M$ is increased.

4. If the integral in Laplace's method converges, the contribution of the region which is not around the stationary point to the integral of the relative error will tend to zero as $M$ grows.

Relying on the 3rd concept, even if we choose a very large $D_y$, $sD_y$ will finally be a very small number when $M$ is increased to a huge number. Then, how can we guarantee that the integral of the rest will tend to 0 when $M$ is large enough?

The basic idea is to find a function $m(x)$ such that $m(x)\ge f(x)$ on the rest region and the integral of $e^{Mm(x)}$ tends to zero when $M$ grows. Because the exponential function of $Mm(x)$ is always larger than zero as long as $m(x)$ is a real number, and this exponential function is increasing in $m(x)$, the integral of $e^{Mf(x)}$ will tend to zero as well. For simplicity, choose $m(x)$ as the tangent line through the point $x=sD_y$.

If the interval of integration of this method is finite, we will find that as long as $f(x)$ is continuous on the rest region, it will always be smaller than the $m(x)$ chosen above when $M$ is large enough. By the way, it will be proved later that the integral of $e^{Mm(x)}$ tends to zero when $M$ is large enough.

If the interval of integration of this method is infinite, $m(x)$ and $f(x)$ might always cross each other. If so, we cannot guarantee that the integral of $e^{Mf(x)}$ will tend to zero. For example, in the case of $f(x)=\tfrac{\sin(x)}{x}$,

$$\int_0^{\infty} e^{Mf(x)}\,dx$$

will always diverge. Therefore, we need to require that

$$\int_d^{\infty} e^{Mf(x)}\,dx$$

can converge for the infinite interval case. If so, this integral will tend to zero when $d$ is large enough and we can choose this $d$ as the crossing point of $m(x)$ and $f(x)$.

You might ask why not choose

$$\int_d^{\infty} e^{f(x)}\,dx$$

as the convergent integral? An example shows the reason. Suppose the rest part of $f(x)$ is $-\ln x$; then $e^{f(x)}=\tfrac{1}{x}$ and its integral diverges; however, when $M=2$, the integral of $e^{Mf(x)}=\tfrac{1}{x^2}$ converges. So, the integral of some functions will diverge when $M$ is not a large number, but will converge when $M$ is large enough.

Based on these four concepts, we can derive the relative error of this method.
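A minimal numerical sketch of this relative-error behaviour (assuming $f(x)=1-\cosh x$ on $[-2,2]$, so $x_0=0$, $f(x_0)=0$, $f''(0)=-1$):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: 1.0 - np.cosh(x)   # maximum at x0 = 0 with f''(0) = -1

for M in (2, 20, 200):
    exact, _ = quad(lambda x: np.exp(M * f(x)), -2, 2)
    laplace = np.sqrt(2 * np.pi / M)     # e^{M f(0)} = 1 here
    print(M, abs(laplace - exact) / exact)
# The printed relative errors shrink roughly like 1/M for this smooth,
# even f (the odd cubic term of the expansion integrates to zero).
```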

Other formulations

Laplace's approximation is sometimes written as

$$\int_a^b h(x)e^{Mg(x)}\,dx \approx \sqrt{\frac{2\pi}{M|g''(x_0)|}}\; h(x_0)\, e^{Mg(x_0)} \quad \text{as } M\to\infty,$$

where $h$ is positive.

Importantly, the accuracy of the approximation depends on the variable of integration, that is, on what stays in $g(x)$ and what goes into $h(x)$.[3]
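A minimal sketch of this formulation (assuming $h(x)=\cos x$ and $g(x)=-x^2$ on the real line, so $x_0=0$, $h(0)=1$, $g(0)=0$, $g''(0)=-2$); here the exact ratio can be worked out to be $e^{1/(4M)}$:

```python
import numpy as np
from scipy.integrate import quad

for M in (1, 10, 100):
    exact, _ = quad(lambda x: np.cos(x) * np.exp(-M * x**2), -np.inf, np.inf)
    # sqrt(2*pi/(M*|g''(0)|)) * h(0) * exp(M*g(0))
    laplace = np.sqrt(2 * np.pi / (M * 2.0))
    print(M, laplace / exact)   # tends to 1 as M grows
```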

First, use $x_0=0$ to denote the global maximum, which will simplify this derivation. We are interested in the relative error, written as $|R|$:

$$\int_a^b h(x)e^{Mg(x)}\,dx = h(0)\,e^{Mg(0)}\,s \underbrace{\int_{a/s}^{b/s} \frac{h(sy)}{h(0)}\, e^{M\left[g(sy)-g(0)\right]}\,dy}_{1+R},$$

where

$$s\equiv\sqrt{\frac{2\pi}{M\left|g''(0)\right|}}.$$

So, if we let

$$A\equiv \frac{h(sy)}{h(0)}\, e^{M\left[g(sy)-g(0)\right]} \quad\text{and}\quad A_0\equiv e^{-\pi y^2},$$

we can get

$$\left|R\right| = \left|\int_{a/s}^{b/s} A\,dy - \int_{-\infty}^{\infty} A_0\,dy\right|$$

since

$$\int_{-\infty}^{\infty} A_0\,dy = 1.$$

For the upper bound, note that $|A+B|\le|A|+|B|$, so we can separate this integration into 5 parts with 3 different types (a), (b) and (c), respectively. Therefore,

$$|R| < \underbrace{\left|\int_{D_y}^{\infty} A_0\,dy\right|}_{(a_1)} + \underbrace{\left|\int_{D_y}^{b/s} A\,dy\right|}_{(b_1)} + \underbrace{\left|\int_{-D_y}^{D_y} \left(A-A_0\right)dy\right|}_{(c)} + \underbrace{\left|\int_{a/s}^{-D_y} A\,dy\right|}_{(b_2)} + \underbrace{\left|\int_{-\infty}^{-D_y} A_0\,dy\right|}_{(a_2)}$$

where $(a_1)$ and $(a_2)$ are similar, so let us just calculate $(a_1)$; $(b_1)$ and $(b_2)$ are similar, too, so we'll just calculate $(b_1)$.

For $(a_1)$, after the translation of $z\equiv\pi y^2$, we can get

$$(a_1) = \left|\frac{1}{2\sqrt{\pi}}\int_{\pi D_y^2}^{\infty} e^{-z} z^{-1/2}\,dz\right| < \frac{e^{-\pi D_y^2}}{2\pi D_y}.$$

This means that as long as $D_y$ is large enough, it will tend to zero.

For $(b_1)$, we can get

$$(b_1) \le \left|\left[\frac{h(sy)}{h(0)}\right]_{\max} \int_{D_y}^{b/s} e^{Mm(sy)}\,dy\right|$$

where $m(x)\ge g(x)-g(0)$ as $x\in[sD_y,b]$, and $h(x)$ should have the same sign as $h(0)$ over this region. Let us choose $m(x)$ as the tangent through the point at $x=sD_y$, i.e.

$$m(sy)= g(sD_y)-g(0) + g'(sD_y)\left(sy-sD_y\right).$$

When $s$ or $D_y$ gets smaller, the region satisfying the above inequality gets larger. Therefore, if we want to find a suitable $m(x)$ to cover the whole $f(x)$ over the interval of $(b_1)$, $D_y$ will have an upper limit. Besides, because the integration of $e^{-\alpha x}$ is simple, let us use it to estimate the relative error contributed by this $(b_1)$.

Based on Taylor expansion, we can get

$$\begin{align} M\left[g(sD_y)-g(0)\right] &= M\left[\frac{g''(0)}{2}s^2D_y^2 + \frac{g'''(\xi)}{6}s^3D_y^3\right] && \text{as } \xi\in[0,sD_y]\\ &= -\pi D_y^2 + \frac{(2\pi)^{3/2}\,g'''(\xi)\,D_y^3}{6\sqrt{M}\,|g''(0)|^{3/2}}, \end{align}$$

and

$$\begin{align} Msg'(sD_y) &= Ms\left(g''(0)sD_y + \frac{g'''(\zeta)}{2}s^2D_y^2\right) && \text{as } \zeta\in[0,sD_y]\\ &= -2\pi D_y + \sqrt{\frac{2}{M}}\left(\frac{\pi}{|g''(0)|}\right)^{3/2} g'''(\zeta)\,D_y^2, \end{align}$$

and then substitute them back into the calculation of $(b_1)$; however, the remainders of these two expansions are both inversely proportional to the square root of $M$, so let us drop them to beautify the calculation. Keeping them is better, but it makes the formula uglier.

$$\begin{align} (b_1) &\le \left|\left[\frac{h(sy)}{h(0)}\right]_{\max} e^{-\pi D_y^2} \int_0^{b/s-D_y} e^{-2\pi D_y y}\,dy\right|\\ &\le \left|\left[\frac{h(sy)}{h(0)}\right]_{\max} e^{-\pi D_y^2}\, \frac{1}{2\pi D_y}\right|. \end{align}$$

Therefore, it will tend to zero when $D_y$ gets larger, but don't forget that the upper bound of $D_y$ should be considered during this calculation.

About the integration near $x=0$, we can also use Taylor's theorem to calculate it. When $h'(0)\ne 0$,

$$\begin{align} (c) &\le \int_{-D_y}^{D_y} e^{-\pi y^2} \left|\frac{sh'(\xi)}{h(0)}\,y\right| dy\\ &< \sqrt{\frac{2}{\pi M|g''(0)|}}\, \left|\frac{h'(\xi)}{h(0)}\right|_{\max} \left(1-e^{-\pi D_y^2}\right), \end{align}$$

and you can find that it is inversely proportional to the square root of $M$. In fact, $(c)$ will behave the same way when $h(x)$ is a constant.

Conclusively, the error contribution near the stationary point will get smaller as $\sqrt{M}$ gets larger, and the rest parts will tend to zero as long as $D_y$ is large enough; however, we need to remember that $D_y$ has an upper limit which is decided by whether the function $m(x)$ is always larger than $g(x)-g(0)$ in the rest region. However, as long as we can find one $m(x)$ satisfying this condition, the upper bound of $D_y$ can be chosen as directly proportional to $\sqrt{M}$ since $m(x)$ is a tangent across the point of $g(x)-g(0)$ at $x=sD_y$. So, the bigger $M$ is, the bigger $D_y$ can be.

In the multivariate case, where $\mathbf{x}$ is a $d$-dimensional vector and $f(\mathbf{x})$ is a scalar function of $\mathbf{x}$, Laplace's approximation is usually written as:

$$\int h(\mathbf{x})e^{Mf(\mathbf{x})}\,d\mathbf{x} \approx \left(\frac{2\pi}{M}\right)^{d/2} \frac{h(\mathbf{x}_0)\,e^{Mf(\mathbf{x}_0)}}{\left|-H(f)(\mathbf{x}_0)\right|^{1/2}} \quad \text{as } M\to\infty,$$

where $H(f)(\mathbf{x}_0)$ is the Hessian matrix of $f$ evaluated at $\mathbf{x}_0$ and where $|\cdot|$ denotes matrix determinant. Analogously to the univariate case, the Hessian is required to be negative definite.[4]

By the way, although $\mathbf{x}$ denotes a $d$-dimensional vector, the term $d\mathbf{x}$ denotes an infinitesimal volume here, i.e. $d\mathbf{x}:=dx_1\,dx_2\cdots dx_d$.
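A minimal sketch of the multivariate formula (assuming $d=2$, $h\equiv 1$ and $f(x,y)=2-\cosh x-\cosh y$, which is maximized at $\mathbf{x}_0=(0,0)$ with $f(\mathbf{x}_0)=0$ and Hessian $\operatorname{diag}(-1,-1)$), checked against brute-force integration on a grid:

```python
import numpy as np

M, d = 30.0, 2
g = np.linspace(-3.0, 3.0, 1201)
dx = g[1] - g[0]
X, Y = np.meshgrid(g, g)
f = 2.0 - np.cosh(X) - np.cosh(Y)           # maximum value 0 at the origin

grid_integral = np.sum(np.exp(M * f)) * dx * dx   # brute-force double integral
H = np.diag([-1.0, -1.0])                         # Hessian of f at the maximum
laplace = (2.0 * np.pi / M) ** (d / 2) / np.sqrt(np.linalg.det(-H))
print(laplace, grid_integral, laplace / grid_integral)   # ratio close to 1
```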

Steepest descent extension

See main article: Method of steepest descent.

In extensions of Laplace's method, complex analysis, and in particular Cauchy's integral formula, is used to find a contour of steepest descent for an (asymptotically with large M) equivalent integral, expressed as a line integral. In particular, if no point $x_0$ where the derivative of $f$ vanishes exists on the real line, it may be necessary to deform the integration contour to an optimal one, where the above analysis will be possible. Again, the main idea is to reduce, at least asymptotically, the calculation of the given integral to that of a simpler integral that can be explicitly evaluated. See the book of Erdelyi (1956) for a simple discussion (where the method is termed steepest descents).

The appropriate formulation for the complex z-plane is

$$\int_a^b e^{Mf(z)}\,dz \approx \sqrt{\frac{2\pi}{-Mf''(z_0)}}\,e^{Mf(z_0)} \quad \text{as } M\to\infty$$

for a path passing through the saddle point at $z_0$. Note the explicit appearance of a minus sign to indicate the direction of the second derivative: one must not take the modulus. Also note that if the integrand is meromorphic, one may have to add residues corresponding to poles traversed while deforming the contour (see for example section 3 of Okounkov's paper Symmetric functions and random partitions).

Further generalizations

An extension of the steepest descent method is the so-called nonlinear stationary phase/steepest descent method. Here, instead of integrals, one needs to evaluate asymptotically solutions of Riemann–Hilbert factorization problems.

Given a contour C in the complex sphere, a function $f$ defined on that contour and a special point, such as infinity, a holomorphic function M is sought away from C, with prescribed jump across C, and with a given normalization at infinity. If $f$ and hence M are matrices rather than scalars, this is a problem that in general does not admit an explicit solution.

An asymptotic evaluation is then possible along the lines of the linear stationary phase/steepest descent method. The idea is to reduce asymptotically the solution of the given Riemann–Hilbert problem to that of a simpler, explicitly solvable, Riemann–Hilbert problem. Cauchy's theorem is used to justify deformations of the jump contour.

The nonlinear stationary phase was introduced by Deift and Zhou in 1993, based on earlier work of Its. A (properly speaking) nonlinear steepest descent method was introduced by Kamvissis, K. McLaughlin and P. Miller in 2003, based on previous work of Lax, Levermore, Deift, Venakides and Zhou. As in the linear case, "steepest descent contours" solve a min-max problem. In the nonlinear case they turn out to be "S-curves" (defined in a different context back in the 80s by Stahl, Gonchar and Rakhmanov).

The nonlinear stationary phase/steepest descent method has applications to the theory of soliton equations and integrable models, random matrices and combinatorics.

Median-point approximation generalization

In the generalization, evaluation of the integral is considered equivalent to finding the norm of the distribution with density

$$e^{Mf(x)}.$$

Denoting the cumulative distribution $F(x)$, if there is a diffeomorphic Gaussian distribution with density

$$e^{-g-\frac{\gamma}{2}y^2},$$

the norm is given by

$$\sqrt{2\pi\gamma^{-1}}\,e^{-g}$$

and the corresponding diffeomorphism is

$$y(x)=\frac{1}{\sqrt{\gamma}}\,\Phi^{-1}\left(F(x)\right),$$

where $\Phi$ denotes the cumulative standard normal distribution function.

In general, any distribution diffeomorphic to the Gaussian distribution has density

$$e^{-g-\frac{\gamma}{2}y^2(x)}\,y'(x),$$

and the median point is mapped to the median of the Gaussian distribution. Matching the logarithm of the density functions and their derivatives at the median point up to a given order yields a system of equations that determine the approximate values of $\gamma$ and $g$.

The approximation was introduced in 2019 by D. Makogon and C. Morais Smith primarily in the context of partition function evaluation for a system of interacting fermions.[5]

Complex integrals

For complex integrals in the form:

$$\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} g(s)e^{st}\,ds$$

with $t\gg 1$, we make the substitution $t=iu$ and the change of variable $s=c+ix$ to get the bilateral Laplace transform:

$$\frac{1}{2\pi}\int_{-\infty}^{\infty} g(c+ix)e^{-ux}e^{icu}\,dx.$$

We then split $g(c+ix)$ into its real and complex parts, after which we recover $u=t/i$. This is useful for inverse Laplace transforms, the Perron formula and complex integration.
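A minimal numerical sketch (not from the article) of this kind of inversion: evaluating the Bromwich integral $f(t)=\frac{1}{2\pi}\int_{-\infty}^{\infty} g(c+ix)\,e^{(c+ix)t}\,dx$ on a truncated vertical line, for $g(s)=1/(s+1)$, whose inverse Laplace transform is $e^{-t}$.

```python
import numpy as np

def bromwich(g, t, c=1.0, X=2000.0, n=2_000_001):
    # Truncated vertical line Re(s) = c, sampled finely enough to resolve
    # the oscillation of e^{ixt}; the tail error is O(1/(X*t)).
    x = np.linspace(-X, X, n)
    vals = g(c + 1j * x) * np.exp((c + 1j * x) * t)
    return (np.sum(vals) * (x[1] - x[0])).real / (2 * np.pi)

g = lambda s: 1.0 / (s + 1.0)
for t in (0.5, 1.0, 2.0):
    print(t, bromwich(g, t), np.exp(-t))   # the two columns agree to ~1e-3
```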

Example: Stirling's approximation

Laplace's method can be used to derive Stirling's approximation

$$N!\approx \sqrt{2\pi N}\left(\frac{N}{e}\right)^N$$

for a large integer N.

From the definition of the Gamma function, we have

$$N!=\Gamma(N+1)=\int_0^{\infty} e^{-x}x^N\,dx.$$

Now we change variables, letting $x=Nz$ so that $dx=N\,dz$. Plug these values back in to obtain

$$\begin{align} N! &= \int_0^{\infty} e^{-Nz}(Nz)^N N\,dz\\ &= N^{N+1}\int_0^{\infty} e^{-Nz}z^N\,dz\\ &= N^{N+1}\int_0^{\infty} e^{-Nz}e^{N\ln z}\,dz\\ &= N^{N+1}\int_0^{\infty} e^{N(\ln z-z)}\,dz. \end{align}$$

This integral has the form necessary for Laplace's method with

$$f(z)=\ln{z}-z,$$

which is twice-differentiable:

$$f'(z)=\frac{1}{z}-1, \qquad f''(z)=-\frac{1}{z^2}.$$

The maximum of $f(z)$ lies at $z_0=1$, and the second derivative of $f(z)$ has the value $-1$ at this point. Therefore, we obtain

$$N!\approx N^{N+1}\sqrt{\frac{2\pi}{N}}\,e^{-N} = \sqrt{2\pi N}\,N^N e^{-N}.$$
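A quick numerical check of the derivation above, comparing Stirling's formula with the exact factorial:

```python
import math

for N in (5, 10, 50):
    stirling = math.sqrt(2 * math.pi * N) * (N / math.e) ** N
    print(N, stirling / math.factorial(N))   # ratio approaches 1 (error ~ 1/(12N))
```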


References

1. Tierney, Luke; Kadane, Joseph B. (1986). "Accurate Approximations for Posterior Moments and Marginal Densities". J. Amer. Statist. Assoc. 81 (393): 82–86. doi:10.1080/01621459.1986.10478240.
2. Amaral Turkman, M. Antónia; Paulino, Carlos Daniel; Müller, Peter (2019). "Methods Based on Analytic Approximations". Computational Bayesian Statistics: An Introduction. Cambridge University Press. pp. 150–171. ISBN 978-1-108-70374-1.
3. Butler, Ronald W. (2007). Saddlepoint Approximations and Applications. Cambridge University Press. ISBN 978-0-521-87250-8.
4. MacKay, David J. C. (September 2003). Information Theory, Inference and Learning Algorithms. Cambridge: Cambridge University Press. ISBN 9780521642989.
5. Makogon, D.; Morais Smith, C. (2022-05-03). "Median-point approximation and its application for the study of fermionic systems". Physical Review B. 105 (17): 174505. doi:10.1103/PhysRevB.105.174505.