In mathematics, the method of steepest descent or saddle-point method is an extension of Laplace's method for approximating an integral, where one deforms a contour integral in the complex plane to pass near a stationary point (saddle point), in roughly the direction of steepest descent or stationary phase. The saddle-point approximation is used with integrals in the complex plane, whereas Laplace’s method is used with real integrals.
The integral to be estimated is often of the form

\int_C f(z)\, e^{\lambda g(z)}\, dz,

where C is a contour in the complex plane and \lambda is a large parameter.
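A classical instance of this form (included here as a standard illustration, not taken from a particular source) is the Gamma function: substituting t = \lambda s in \Gamma(\lambda+1) = \int_0^\infty t^\lambda e^{-t}\, dt gives

\Gamma(\lambda+1) = \lambda^{\lambda+1} \int_0^\infty e^{\lambda(\ln s - s)}\, ds,

which has the above form with f \equiv 1, g(s) = \ln s - s, and C the positive real axis; the saddle point of g at s = 1 yields Stirling's formula \Gamma(\lambda+1) \sim \sqrt{2\pi\lambda}\,(\lambda/e)^\lambda.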
The method of steepest descent was first published by Debye (1909), who used it to estimate Bessel functions and pointed out that it occurred in an unpublished note by Riemann about hypergeometric functions. The contour of steepest descent has a minimax property. Siegel (1932) described some other unpublished notes of Riemann, where he used this method to derive the Riemann–Siegel formula.
The method of steepest descent is a method to approximate a complex integral of the form

I(\lambda) = \int_C f(z)\, e^{\lambda g(z)}\, dz

for large \lambda \to \infty, where f(z) and g(z) are analytic functions of z. Because the integrand is analytic, the contour C can be deformed into a new contour C' without changing the integral. In particular, one seeks a new contour on which the imaginary part \Im(\cdot) of g(z) = \Re[g(z)] + i\,\Im[g(z)] is constant, where \Re(\cdot) denotes the real part. Then

I(\lambda) = e^{i\lambda\,\Im[g(z)]} \int_{C'} f(z)\, e^{\lambda\,\Re[g(z)]}\, dz,

and the remaining integral can be approximated with other methods, such as Laplace's method.
The method is called the method of steepest descent because, for analytic g(z), constant-phase contours are equivalent to steepest-descent contours. If g(z) = X(z) + iY(z) with z = x + iy, then the contours of constant phase Y(z) are exactly the curves along which the real part X(z) changes most rapidly, so descending from a saddle point along such a contour keeps the integrand's oscillation frozen while its modulus decays as fast as possible.
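This equivalence is a direct consequence of the Cauchy–Riemann equations (a short verification, added here for completeness): since X_x = Y_y and X_y = -Y_x,

\nabla X \cdot \nabla Y = X_x Y_x + X_y Y_y = X_x Y_x - Y_x X_x = 0,

so the gradients of X and Y are orthogonal everywhere. A contour of constant phase Y is orthogonal to \nabla Y, hence parallel to \nabla X, which is exactly the direction of steepest ascent or descent of X.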
Let f, S : \mathbb{C}^n \to \mathbb{C} and C \subset \mathbb{C}^n. If

M = \sup_{x \in C} \Re(S(x)) < \infty,

where \Re(\cdot) denotes the real part, and there exists a positive real number \lambda_0 such that

\int_C \left| f(x)\, e^{\lambda_0 S(x)} \right| dx < \infty,

then the following estimate holds:[2]

\left| \int_C f(x)\, e^{\lambda S(x)}\, dx \right| \leqslant \text{const} \cdot e^{\lambda M}, \qquad \forall \lambda \in \mathbb{R},\ \lambda \geqslant \lambda_0.
Proof of the simple estimate:

\begin{align}
\left| \int_C f(x)\, e^{\lambda S(x)}\, dx \right|
&\leqslant \int_C |f(x)| \left| e^{\lambda S(x)} \right| dx \\
&\equiv \int_C |f(x)|\, e^{\lambda M} \left| e^{\lambda_0 (S(x)-M)} \right| \left| e^{(\lambda-\lambda_0)(S(x)-M)} \right| dx \\
&\leqslant \int_C |f(x)|\, e^{\lambda M} \left| e^{\lambda_0 (S(x)-M)} \right| dx && \left| e^{(\lambda-\lambda_0)(S(x)-M)} \right| \leqslant 1 \\
&= \underbrace{e^{-\lambda_0 M} \int_C \left| f(x)\, e^{\lambda_0 S(x)} \right| dx}_{\text{const}} \cdot\, e^{\lambda M}.
\end{align}
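For orientation, here is a minimal numerical sketch of this estimate (my own illustrative choice of f \equiv 1 and S(x) = ix - x^2 on C = \mathbb{R}, so that M = 0; NumPy and SciPy are assumed to be available):

import numpy as np
from scipy.integrate import quad

def I_abs(lam):
    # absolute value of the integral of exp(lam*(1j*x - x**2)) over the real line,
    # computed from its real and imaginary parts separately
    re, _ = quad(lambda x: np.exp(-lam * x**2) * np.cos(lam * x), -np.inf, np.inf)
    im, _ = quad(lambda x: np.exp(-lam * x**2) * np.sin(lam * x), -np.inf, np.inf)
    return abs(re + 1j * im)

lam0, M = 1.0, 0.0   # sup over the real line of Re(S(x)) = -x**2 is M = 0
const = np.exp(-lam0 * M) * quad(lambda x: np.exp(-lam0 * x**2), -np.inf, np.inf)[0]

for lam in (1, 5, 25, 100):
    print(lam, I_abs(lam), const * np.exp(lam * M))   # the left value stays below the right one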
Let x be a complex n-dimensional vector, and

S''_{xx}(x) \equiv \left( \frac{\partial^2 S(x)}{\partial x_i\, \partial x_j} \right), \qquad 1 \leqslant i, j \leqslant n,
denote the Hessian matrix for a function S(x). If
\boldsymbol{\varphi}(x) = (\varphi_1(x), \varphi_2(x), \ldots, \varphi_k(x))
is a vector function, then its Jacobian matrix is defined as
\boldsymbol{\varphi}_x'(x) \equiv \left( \frac{\partial \varphi_i(x)}{\partial x_j} \right), \qquad 1 \leqslant i \leqslant k,\ 1 \leqslant j \leqslant n.
A non-degenerate saddle point, z^0 \in \mathbb{C}^n, of a holomorphic function S(z) is a critical point of the function (i.e., \nabla S(z^0) = 0) where the function's Hessian matrix has a non-vanishing determinant (i.e., \det S''_{zz}(z^0) \neq 0).
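As a small computational illustration (the function S below is my own example, and SymPy is assumed to be available), one can verify the two defining conditions of a non-degenerate saddle point symbolically:

import sympy as sp

z1, z2 = sp.symbols('z1 z2')
S = sp.cos(z1) + z2**2 / 2              # sample holomorphic function

grad = [sp.diff(S, v) for v in (z1, z2)]
hess = sp.hessian(S, (z1, z2))          # the matrix S''_zz defined above

z0 = {z1: 0, z2: 0}
print([g.subs(z0) for g in grad])       # [0, 0]: z0 is a critical point
print(sp.det(hess.subs(z0)))            # -1: the Hessian determinant does not vanish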
The following is the main tool for constructing the asymptotics of integrals in the case of a non-degenerate saddle point:
The Morse lemma for real-valued functions generalizes as follows[3] for holomorphic functions: near a non-degenerate saddle point z^0 of a holomorphic function S(z), there exist coordinates in terms of which S(z) - S(z^0) is exactly quadratic. To make this precise, let S be a holomorphic function with domain W \subset \mathbb{C}^n, and let z^0 in W be a non-degenerate saddle point of S, that is, \nabla S(z^0) = 0 and \det S''_{zz}(z^0) \neq 0. Then there exist neighborhoods U \subset W of z^0 and V \subset \mathbb{C}^n of w = 0, and a bijective holomorphic function \boldsymbol{\varphi} : V \to U with \boldsymbol{\varphi}(0) = z^0, such that
\forall w \in V: \quad S(\boldsymbol{\varphi}(w)) = S(z^0) + \frac{1}{2} \sum_{j=1}^{n} \mu_j w_j^2, \qquad \det \boldsymbol{\varphi}_w'(0) = 1,
Here, the \mu_j are the eigenvalues of the matrix S''_{zz}(z^0).
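A concrete one-dimensional illustration (my own example, not taken from the sources cited here): take S(z) = \cos z and z^0 = 0, so that S'(0) = 0 and \mu_1 = S''(0) = -1. The holomorphic change of variables \varphi(w) = 2\arcsin(w/2) satisfies \varphi(0) = 0, \varphi'(0) = 1 and

S(\varphi(w)) = \cos\left( 2\arcsin\tfrac{w}{2} \right) = 1 - \tfrac{w^2}{2} = S(z^0) + \tfrac{1}{2}\mu_1 w^2,

so in the coordinate w the function is exactly quadratic, as the lemma asserts.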
Assume that f(x) and S(x) are holomorphic in an open, bounded, and simply connected set \Omega_x \subset \mathbb{C}^n such that I_x = \Omega_x \cap \mathbb{R}^n is connected; that \Re(S(z)) attains its maximum over I_x at exactly one point, \max_{z \in I_x} \Re(S(z)) = \Re(S(x^0)); and that x^0 is a non-degenerate saddle point, i.e., \nabla S(x^0) = 0 and \det S''_{xx}(x^0) \neq 0. Then the following asymptotic holds as \lambda \to \infty:

\int_{I_x} f(x)\, e^{\lambda S(x)}\, dx = \left( \frac{2\pi}{\lambda} \right)^{n/2} e^{\lambda S(x^0)} \left( \det\left( -S''_{xx}(x^0) \right) \right)^{-1/2} f(x^0) \left( 1 + O\!\left( \lambda^{-1} \right) \right), \qquad (8)

where the \mu_j are the eigenvalues of the Hessian S''_{xx}(x^0) and the square roots \sqrt{-\mu_j} defining \left( \det\left( -S''_{xx}(x^0) \right) \right)^{-1/2} = \prod_{j=1}^{n} (-\mu_j)^{-1/2} are taken with

\left| \arg \sqrt{-\mu_j} \right| \leqslant \tfrac{\pi}{4}.
This statement is a special case of more general results presented in Fedoryuk (1987).[4]
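The leading order of this asymptotic is easy to check numerically. The sketch below (my own one-dimensional example with f(x) = \cos x, S(x) = -x^2/2, saddle point x^0 = 0, n = 1; NumPy and SciPy assumed) compares the exact integral with the right-hand side of (8) without the O(\lambda^{-1}) correction:

import numpy as np
from scipy.integrate import quad

def exact(lam):
    val, _ = quad(lambda x: np.cos(x) * np.exp(-lam * x**2 / 2), -np.inf, np.inf)
    return val

def leading_order(lam):
    # (2*pi/lam)**(n/2) * exp(lam*S(x0)) * det(-S''(x0))**(-1/2) * f(x0), with n = 1,
    # S(x0) = 0, det(-S''(x0)) = 1 and f(x0) = cos(0) = 1
    return np.sqrt(2 * np.pi / lam)

for lam in (1, 10, 100):
    print(lam, exact(lam) / leading_order(lam))   # the ratio tends to 1 with an O(1/lam) error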
Equation (8) can also be written as

\int_{I_x} f(x)\, e^{\lambda S(x)}\, dx = \left( \frac{2\pi}{\lambda} \right)^{n/2} e^{\lambda S(x^0)} \left( \det\left( -S''_{xx}(x^0) \right) \right)^{-1/2} f(x^0) \left( 1 + O\!\left( \lambda^{-1} \right) \right), \qquad (13)

where the branch of

\sqrt{ \det\left( -S''_{xx}(x^0) \right) }

is selected as follows:

\begin{align}
\left( \det\left( -S''_{xx}(x^0) \right) \right)^{-1/2}
&= \exp\!\left( -i\, \operatorname{Ind}\left( -S''_{xx}(x^0) \right) \right) \prod_{j=1}^{n} \left| \mu_j \right|^{-1/2}, \\
\operatorname{Ind}\left( -S''_{xx}(x^0) \right) &= \tfrac{1}{2} \sum_{j=1}^{n} \arg(-\mu_j), && |\arg(-\mu_j)| < \tfrac{\pi}{2}.
\end{align}
Consider the important special cases:

- If S(x) is real valued for real x and x^0 \in \mathbb{R}^n (the multidimensional Laplace method), then all the \mu_j are real and negative, so \arg(-\mu_j) = 0 and \operatorname{Ind}\left( -S''_{xx}(x^0) \right) = 0.
- If S(x) is purely imaginary for real x (i.e., \Re(S(x)) = 0 for all real x) and x^0 \in \mathbb{R}^n (the multidimensional stationary phase method), then the eigenvalues \mu_j of S''_{xx}(x^0) are purely imaginary and \operatorname{Ind}\left( -S''_{xx}(x^0) \right) = -\tfrac{\pi}{4}\, \operatorname{sign} S''_{xx}(x^0), where \operatorname{sign} S''_{xx}(x^0) is understood as the number of eigenvalues \mu_j with positive imaginary part minus the number with negative imaginary part.
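As a sanity check on the second case (a standard one-dimensional computation, reproduced here rather than quoted from the cited sources), set n = 1 and S(x) = i\psi(x) with \psi real and a single non-degenerate stationary point x^0. Then \mu_1 = i\psi''(x^0), \arg(-\mu_1) = -\tfrac{\pi}{2}\operatorname{sgn}\psi''(x^0), hence \operatorname{Ind} = -\tfrac{\pi}{4}\operatorname{sgn}\psi''(x^0), and (13) reduces to the familiar stationary phase formula

\int f(x)\, e^{i\lambda\psi(x)}\, dx = \sqrt{\frac{2\pi}{\lambda\,|\psi''(x^0)|}}\, f(x^0)\, e^{i\lambda\psi(x^0) + i\frac{\pi}{4}\operatorname{sgn}\psi''(x^0)} \left( 1 + O\!\left( \lambda^{-1} \right) \right).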
If the function S(x) has multiple isolated non-degenerate saddle points, i.e.,

\nabla S\left( x^{(k)} \right) = 0, \qquad \det S''_{xx}\left( x^{(k)} \right) \neq 0, \qquad x^{(k)} \in \Omega_x^{(k)},
where

\left\{ \Omega_x^{(k)} \right\}_{k=1}^{K}

is an open cover of \Omega_x, then the calculation of the integral asymptotic is reduced to the case of a single saddle point by employing a partition of unity. The partition of unity allows us to construct a set of continuous functions \rho_k(x) : \Omega_x \to [0, 1], 1 \leqslant k \leqslant K, such that
\begin{align}
\sum_{k=1}^{K} \rho_k(x) &= 1, && \forall x \in \Omega_x, \\
\rho_k(x) &= 0, && \forall x \in \Omega_x \setminus \Omega_x^{(k)}.
\end{align}
Whence,

\int_{I_x \subset \Omega_x} f(x)\, e^{\lambda S(x)}\, dx \equiv \sum_{k=1}^{K} \int_{I_x \subset \Omega_x} \rho_k(x)\, f(x)\, e^{\lambda S(x)}\, dx.
Therefore as \lambda \to \infty we have:

\sum_{k=1}^{K} \int_{\text{a neighborhood of } x^{(k)}} f(x)\, e^{\lambda S(x)}\, dx = \left( \frac{2\pi}{\lambda} \right)^{n/2} \sum_{k=1}^{K} e^{\lambda S\left( x^{(k)} \right)} \left( \det\left( -S''_{xx}\left( x^{(k)} \right) \right) \right)^{-1/2} f\left( x^{(k)} \right),

where equation (13) was utilized at the last stage, and the pre-exponential function f(x) at least must be continuous.
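A quick numerical sketch of this sum over saddle points (my own example: f \equiv 1 and S(x) = -(x^2-1)^2, which has two non-degenerate maxima of \Re S at x^{(1,2)} = \pm 1 with S''(\pm 1) = -8; NumPy and SciPy assumed):

import numpy as np
from scipy.integrate import quad

def exact(lam):
    # the integrand peaks at x = -1 and x = +1; 'points' helps the quadrature resolve them
    val, _ = quad(lambda x: np.exp(-lam * (x**2 - 1)**2), -3.0, 3.0,
                  points=[-1.0, 1.0], limit=200)
    return val

def two_saddle_sum(lam):
    # each saddle contributes sqrt(2*pi/lam) * det(-S'')**(-1/2) = sqrt(2*pi/(8*lam))
    return 2.0 * np.sqrt(2.0 * np.pi / (8.0 * lam))

for lam in (1, 10, 100):
    print(lam, exact(lam) / two_saddle_sum(lam))   # the ratio approaches 1 as lam grows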
When \nabla S(z^0) = 0 and \det S''_{zz}(z^0) = 0, the point z^0 \in \mathbb{C}^n is called a degenerate saddle point of the function S(z).
Calculating the asymptotic of

\int f(x)\, e^{\lambda S(x)}\, dx,

when \lambda \to \infty, f(x) is continuous, and S(z) has a degenerate saddle point, is a very rich problem, whose solution heavily relies on catastrophe theory. Here, catastrophe theory replaces the Morse lemma, valid only in the non-degenerate case, to transform the function S(z) into one of a multitude of canonical representations. For further details see the literature on catastrophe theory.
Integrals with degenerate saddle points naturally appear in many applications including optical caustics and the multidimensional WKB approximation in quantum mechanics.
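The simplest degenerate case already appears here (a standard example, added for orientation): in the fold catastrophe the phase x^3/3 + \alpha x has two stationary points at x = \pm\sqrt{-\alpha} that coalesce into a single degenerate one as \alpha \to 0, and the corresponding canonical integral

\frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i\left( \frac{x^3}{3} + \alpha x \right)}\, dx = \operatorname{Ai}(\alpha)

is the Airy function, which describes the intensity pattern near a fold caustic.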
Other cases, such as when f(x) and/or S(x) are discontinuous or when an extremum of S(x) lies at the boundary of the integration region, require special care (see, e.g., Fedoryuk (1987)).
An extension of the steepest descent method is the so-called nonlinear stationary phase/steepest descent method. Here, instead of integrals, one needs to evaluate asymptotically solutions of Riemann–Hilbert factorization problems.
Given a contour C in the complex sphere, a function f defined on that contour, and a special point, say infinity, one seeks a function M holomorphic away from the contour C, with prescribed jump across C, and with a given normalization at infinity. If f, and hence M, are matrices rather than scalars, this is a problem that in general does not admit an explicit solution.
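Schematically (a standard normalization, written out here for concreteness rather than quoted from a specific source), the Riemann–Hilbert problem asks for M(z), holomorphic off the contour C, whose boundary values M_\pm from the two sides of C satisfy the jump relation

M_+(z) = M_-(z)\, J(z), \qquad z \in C,

for a prescribed jump matrix J(z) built from the data f, together with a normalization such as M(z) \to I as z \to \infty.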
An asymptotic evaluation is then possible along the lines of the linear stationary phase/steepest descent method. The idea is to reduce asymptotically the solution of the given Riemann–Hilbert problem to that of a simpler, explicitly solvable, Riemann–Hilbert problem. Cauchy's theorem is used to justify deformations of the jump contour.
The nonlinear stationary phase was introduced by Deift and Zhou in 1993, based on earlier work of the Russian mathematician Alexander Its. A (properly speaking) nonlinear steepest descent method was introduced by Kamvissis, K. McLaughlin and P. Miller in 2003, based on previous work of Lax, Levermore, Deift, Venakides and Zhou. As in the linear case, steepest descent contours solve a min-max problem. In the nonlinear case they turn out to be "S-curves" (defined in a different context back in the 80s by Stahl, Gonchar and Rakhmanov).
The nonlinear stationary phase/steepest descent method has applications to the theory of soliton equations and integrable models, random matrices and combinatorics.
Another extension is the method of Chester–Friedman–Ursell for coalescing saddle points and uniform asymptotic expansions.