Stationary phase approximation explained

In mathematics, the stationary phase approximation is a basic principle of asymptotic analysis, applying to functions given by integration against a rapidly varying complex exponential.

This method originates from the 19th century, and is due to George Gabriel Stokes and Lord Kelvin. It is closely related to Laplace's method and the method of steepest descent, but Laplace's contribution precedes the others.

Basics

The main idea of stationary phase methods relies on the cancellation of sinusoids with rapidly varying phase. If many sinusoids have the same phase and they are added together, they will add constructively. If, however, these same sinusoids have phases which change rapidly as the frequency changes, they will add incoherently, varying between constructive and destructive addition at different times.
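This cancellation can be observed numerically. The sketch below (a minimal illustration of my own; the phases x and x^2 are arbitrary example choices, not from the text) compares an oscillatory integral whose phase has no stationary point, which decays like 1/k, with one whose phase is stationary at x = 0, which decays only like k^{-1/2}:

```python
import cmath

def osc_integral(phase, k, a=-1.0, b=1.0, n=100_000):
    """Midpoint-rule approximation of the integral of e^{i k phase(x)} over [a, b]."""
    h = (b - a) / n
    total = 0j
    for j in range(n):
        x = a + (j + 0.5) * h
        total += cmath.exp(1j * k * phase(x))
    return total * h

# phase(x) = x: phase' = 1 never vanishes, so contributions add incoherently
# and the integral decays like 1/k.
# phase(x) = x^2: phase' = 2x vanishes at x = 0, so frequencies near 0 add
# coherently and the integral decays only like k^(-1/2).
for k in (10.0, 100.0, 1000.0):
    print(k, abs(osc_integral(lambda x: x, k)), abs(osc_integral(lambda x: x * x, k)))
```

At large k the stationary-point integral dominates by orders of magnitude, which is exactly the coherence effect described above.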

Formula

Letting \Sigma denote the set of critical points of the function f (i.e. points where \nabla f = 0), under the assumption that g is either compactly supported or has exponential decay, and that all critical points are nondegenerate (i.e. \det(\operatorname{Hess}(f(x_0))) \neq 0 for x_0 \in \Sigma), we have the following asymptotic formula, as k \to \infty:

\int_{\mathbb{R}^n} g(x) e^{ikf(x)} \, dx = \sum_{x_0 \in \Sigma} e^{ikf(x_0)} \left|\det(\operatorname{Hess}(f(x_0)))\right|^{-1/2} e^{\frac{i\pi}{4}\operatorname{sgn}(\operatorname{Hess}(f(x_0)))} \left(\frac{2\pi}{k}\right)^{n/2} g(x_0) + o(k^{-n/2})

Here \operatorname{Hess}(f) denotes the Hessian of f, and \operatorname{sgn}(\operatorname{Hess}(f)) denotes the signature of the Hessian, i.e. the number of positive eigenvalues minus the number of negative eigenvalues.
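As a numerical sanity check of the multidimensional formula, the sketch below uses an illustrative example of my own choosing: f(x, y) = x^2 - y^2 and g(x, y) = e^{-(x^2+y^2)}, with a single critical point at the origin where the Hessian is diag(2, -2), so |det| = 4 and the signature is 0, and the formula predicts \pi/k:

```python
import cmath
import math

def one_dim(k, sign, L=6.0, n=400_000):
    """Midpoint rule for the 1-D factor: integral of e^{-x^2} e^{i*sign*k*x^2} dx,
    truncated to [-L, L] (the Gaussian factor is ~1e-16 at |x| = 6)."""
    h = 2.0 * L / n
    total = 0j
    for j in range(n):
        x = -L + (j + 0.5) * h
        total += math.exp(-x * x) * cmath.exp(1j * sign * k * x * x)
    return total * h

k = 200.0
# f and g are separable, so the 2-D integral factorises into two 1-D integrals.
numeric = one_dim(k, +1.0) * one_dim(k, -1.0)
# Prediction: (2*pi/k)^(n/2) * |det Hess|^(-1/2) * e^{(i*pi/4)*sgn} * g(0)
# with n = 2, |det Hess| = 4, sgn = 0, g(0) = 1, i.e. pi/k.
predicted = math.pi / k
print(numeric, predicted)
```

The separable choice keeps the check cheap; a non-separable f would need a genuine 2-D quadrature but the formula is the same.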

For n = 1, this reduces to:

\int_{\mathbb{R}} g(x) e^{ikf(x)} \, dx = \sum_{x_0 \in \Sigma} g(x_0) e^{ikf(x_0) + \operatorname{sgn}(f''(x_0)) i\pi/4} \left(\frac{2\pi}{k\left|f''(x_0)\right|}\right)^{1/2} + o(k^{-1/2})

In this case the assumptions on f reduce to all the critical points being non-degenerate.

This is just the Wick-rotated version of the formula for the method of steepest descent.
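The n = 1 formula can be made concrete with a small numerical check (an illustrative example of my own: g(x) = e^{-x^2} and f(x) = x^2, so the single critical point is x_0 = 0 with f''(x_0) = 2 and g(x_0) = 1):

```python
import cmath
import math

def lhs(k, L=6.0, n=200_000):
    """Midpoint rule for the left-hand side with g(x) = e^{-x^2}, f(x) = x^2,
    truncated to [-L, L] (the Gaussian factor is ~1e-16 at |x| = 6)."""
    h = 2.0 * L / n
    total = 0j
    for j in range(n):
        x = -L + (j + 0.5) * h
        total += math.exp(-x * x) * cmath.exp(1j * k * x * x)
    return total * h

def rhs(k):
    """Stationary-phase prediction: x0 = 0, f(x0) = 0, f''(x0) = 2 > 0,
    g(x0) = 1, giving e^{i*pi/4} * sqrt(2*pi/(2k))."""
    return cmath.exp(1j * math.pi / 4) * math.sqrt(2.0 * math.pi / (2.0 * k))

k = 200.0
left, right = lhs(k), rhs(k)
print(left, right)
```

For this Gaussian amplitude the exact answer is \sqrt{\pi/(1 - ik)}, and the discrepancy from the prediction shrinks like the promised o(k^{-1/2}).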

An example

Consider a function

f(x,t) = \frac{1}{2\pi} \int_{\mathbb{R}} F(\omega) e^{i(k(\omega)x - \omega t)} \, d\omega.

The phase term in this function, \phi = k(\omega)x - \omega t, is stationary when

\frac{d}{d\omega}\left(k(\omega)x - \omega t\right) = 0,

or equivalently,

\left.\frac{dk(\omega)}{d\omega}\right|_{\omega=\omega_0} = \frac{t}{x}.

Solutions to this equation yield dominant frequencies \omega_0 for some x and t. If we expand \phi as a Taylor series about \omega_0 and neglect terms of order higher than (\omega - \omega_0)^2, we have

\phi = \left[k(\omega_0)x - \omega_0 t\right] + \frac{1}{2} x k''(\omega_0)(\omega - \omega_0)^2 + \cdots

where k'' denotes the second derivative of k. When x is relatively large, even a small difference (\omega - \omega_0) will generate rapid oscillations within the integral, leading to cancellation. Therefore we can extend the limits of integration beyond the region where the Taylor expansion is accurate. If we use the formula,

\int_{\mathbb{R}} e^{\frac{1}{2}icx^2} \, dx = \sqrt{\frac{2\pi i}{c}} = \sqrt{\frac{2\pi}{|c|}} \, e^{\pm i\pi/4},

with the sign of the exponent matching the sign of c, then

f(x,t) \approx \frac{1}{2\pi} e^{i\left[k(\omega_0)x - \omega_0 t\right]} \left|F(\omega_0)\right| \int_{\mathbb{R}} e^{\frac{1}{2} i x k''(\omega_0)(\omega - \omega_0)^2} \, d\omega.

This integrates to

f(x,t) \approx \frac{\left|F(\omega_0)\right|}{2\pi} \sqrt{\frac{2\pi}{x\left|k''(\omega_0)\right|}} \cos\left[k(\omega_0) x - \omega_0 t \pm \frac{\pi}{4}\right].
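This result can be checked numerically. The sketch below uses a hypothetical dispersion relation k(\omega) = \omega^2/2 and spectrum F(\omega) = e^{-(\omega - 5)^2}, both invented for illustration. Because this F is negligible at negative frequencies, f(x,t) is complex, so we compare moduli against the single-stationary-point amplitude \frac{|F(\omega_0)|}{2\pi}\sqrt{2\pi/(x|k''(\omega_0)|)} rather than the two-sided cosine form:

```python
import cmath
import math

# Hypothetical choices for illustration (not from the text above):
# k(w) = w^2 / 2, so k'(w) = w and k''(w) = 1;
# F(w) = e^{-(w - 5)^2}, concentrated near w = 5.
def f_xt(x, t, a=0.0, b=10.0, n=200_000):
    """Midpoint rule for f(x,t) = (1/2pi) * integral of F(w) e^{i(k(w)x - w t)} dw."""
    h = (b - a) / n
    total = 0j
    for j in range(n):
        w = a + (j + 0.5) * h
        total += math.exp(-(w - 5.0) ** 2) * cmath.exp(1j * (0.5 * w * w * x - w * t))
    return total * h / (2.0 * math.pi)

x, t = 100.0, 500.0
w0 = t / x  # stationary frequency: k'(w0) = t/x gives w0 = 5
# Modulus of the single-stationary-point contribution:
amplitude = math.exp(-(w0 - 5.0) ** 2) / (2.0 * math.pi) * math.sqrt(2.0 * math.pi / (x * 1.0))
value = abs(f_xt(x, t))
print(value, amplitude)
```

The agreement is very close here because the Gaussian-spectrum, quadratic-dispersion case is exactly solvable, so the stationary phase approximation captures the whole answer up to small corrections.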

Reduction steps

The first major general statement of the principle involved is that the asymptotic behaviour of I(k) = \int_{\mathbb{R}^n} g(x) e^{ikf(x)} \, dx depends only on the critical points of f. If by choice of g the integral is localised to a region of space where f has no critical point, the resulting integral tends to 0 as the frequency of oscillations is taken to infinity; see for example the Riemann–Lebesgue lemma.

The second statement is that when f is a Morse function, so that the singular points of f are non-degenerate and isolated, then the question can be reduced to the case n = 1. In fact, then, a choice of g can be made to split the integral into cases with just one critical point P in each. At that point, because the Hessian determinant at P is by assumption not 0, the Morse lemma applies. By a change of co-ordinates f may be replaced by

(x_1^2 + x_2^2 + \cdots + x_j^2) - (x_{j+1}^2 + x_{j+2}^2 + \cdots + x_n^2).

The value of j is given by the signature of the Hessian matrix of f at P. As for g, the essential case is that g is a product of bump functions of x_i. Assuming now without loss of generality that P is the origin, take a smooth bump function h with value 1 on the interval [-1, 1] and quickly tending to 0 outside it. Take

g(x) = \prod_i h(x_i),

then Fubini's theorem reduces I(k) to a product of integrals over the real line like

J(k) = \int h(x) e^{ikf(x)} \, dx

with f(x) = \pm x^2. The case with the minus sign is the complex conjugate of the case with the plus sign, so there is essentially one required asymptotic estimate.
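The reduction can be tried numerically. The sketch below (my own construction of a standard smooth bump from exponential glue functions) evaluates J(k) for the plus-sign case and compares it with the model answer \sqrt{\pi/k}\, e^{i\pi/4}; because h is smooth and identically 1 near the critical point, the cutoff contributes only lower-order corrections:

```python
import cmath
import math

def sigma(t):
    """Smooth glue: e^{-1/t} for t > 0, else 0."""
    return math.exp(-1.0 / t) if t > 0 else 0.0

def h(x):
    """Smooth bump function: identically 1 on [-1, 1], identically 0 outside [-2, 2]."""
    t = 2.0 - abs(x)
    return sigma(t) / (sigma(t) + sigma(1.0 - t))

def J(k, n=200_000):
    """Midpoint rule for J(k) = integral of h(x) e^{ikx^2} dx over [-2, 2]."""
    step = 4.0 / n
    total = 0j
    for j in range(n):
        x = -2.0 + (j + 0.5) * step
        total += h(x) * cmath.exp(1j * k * x * x)
    return total * step

k = 100.0
numeric = J(k)
model = cmath.exp(1j * math.pi / 4) * math.sqrt(math.pi / k)
print(numeric, model)
```

Replacing the sharp interval [-1, 1] with the smooth bump removes the O(1/k) endpoint contributions, which is the point of the reduction.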

In this way asymptotics can be found for oscillatory integrals for Morse functions. The degenerate case requires further techniques (see for example Airy function).

One-dimensional case

The essential statement is this one:

\int_{-1}^{1} e^{ikx^2} \, dx = \sqrt{\frac{\pi}{k}} \, e^{i\pi/4} + \mathcal{O}\!\left(\frac{1}{k}\right).

In fact by contour integration it can be shown that the main term on the right-hand side of the equation is the value of the integral on the left-hand side, extended over the range (-\infty, \infty) (for a proof see Fresnel integral). Therefore it is the question of estimating away the integral over, say, [1, \infty).[1]

This is the model for all one-dimensional integrals I(k) with f having a single non-degenerate critical point at which f has second derivative > 0. In fact the model case has second derivative 2 at 0. In order to scale using k, observe that replacing k by ck, where c is constant, is the same as scaling x by \sqrt{c}. It follows that for general values of f''(0) > 0, the factor \sqrt{\pi/k} becomes

\sqrt{\frac{2\pi}{k f''(0)}}.
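The scaling step can be written out explicitly. With the substitution y = x\sqrt{f''(0)/2}, the quadratic model integral transforms as:

```latex
\int_{\mathbb{R}} e^{\frac{i}{2} k f''(0) x^2}\,dx
  = \sqrt{\frac{2}{f''(0)}} \int_{\mathbb{R}} e^{iky^2}\,dy
  = \sqrt{\frac{2}{f''(0)}}\,\sqrt{\frac{\pi}{k}}\,e^{i\pi/4}
  = \sqrt{\frac{2\pi}{k\,f''(0)}}\,e^{i\pi/4}.
```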

For f''(0) < 0 one uses the complex conjugate formula, as mentioned before.

Lower-order terms

As can be seen from the formula, the stationary phase approximation is a first-order approximation of the asymptotic behavior of the integral. The lower-order terms can be understood as a sum over Feynman diagrams with various weighting factors, for well-behaved f.


Notes and References

  1. See for example Jean Dieudonné, Infinitesimal Calculus, p. 119, or Jean Dieudonné, Calcul Infinitésimal, p. 135.