In the mathematical field of analysis, the Nash–Moser theorem, discovered by mathematician John Forbes Nash and named for him and Jürgen Moser, is a generalization of the inverse function theorem on Banach spaces to settings in which the required solution mapping for the linearized problem is not bounded.
In contrast to the Banach space case, in which the invertibility of the derivative at a point is sufficient for a map to be locally invertible, the Nash–Moser theorem requires the derivative to be invertible in a neighborhood. The theorem is widely used to prove local existence for non-linear partial differential equations in spaces of smooth functions. It is particularly useful when the inverse to the derivative "loses" derivatives, and therefore the Banach space implicit function theorem cannot be used.
The Nash–Moser theorem traces back to Nash (1956), who proved the theorem in the special case of the isometric embedding problem. It is clear from his paper that his method can be generalized. Moser (1966), for instance, showed that Nash's methods could be successfully applied to solve problems on periodic orbits in celestial mechanics in KAM theory. However, it has proven quite difficult to find a suitable general formulation; there is, to date, no all-encompassing version; various versions due to Gromov, Hamilton, Hörmander, Saint-Raymond, Schwartz, and Sergeraert are given in the references below. Hamilton's version, quoted below, is particularly widely cited.
This will be introduced in the original setting of the Nash–Moser theorem, that of the isometric embedding problem. Let $\Omega$ be an open subset of $\mathbb{R}^n$. Consider the map
$$P : C^\infty(\Omega;\mathbb{R}^N) \to C^\infty(\Omega;\operatorname{Sym}_{n\times n}(\mathbb{R}))$$
given by
$$P(f)_{ij} = \frac{\partial f}{\partial u^i}\cdot\frac{\partial f}{\partial u^j}.$$
In Nash's solution of the isometric embedding problem (as would be expected in the solution of non-linear partial differential equations) a major step is a statement of the schematic form: if $f$ is such that $P(f)$ is positive-definite, then for any matrix-valued function $g$ which is close to $P(f)$, there exists $f_g$ with $P(f_g) = g$.
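As a concrete illustration of the map $P$, the following sketch (the unit-sphere immersion and the finite-difference scheme are illustrative choices, not from the source) computes the induced metric $P(f)_{ij} = \partial f/\partial u^i \cdot \partial f/\partial u^j$ numerically and recovers the round metric $\operatorname{diag}(1, \sin^2 u)$:

```python
import math

def sphere(u, v):
    # A standard immersion f : (u, v) -> R^3 of the unit sphere,
    # used here as a test case for the induced-metric operator P.
    return (math.sin(u) * math.cos(v),
            math.sin(u) * math.sin(v),
            math.cos(u))

def partial(f, u, v, i, h=1e-6):
    # Central finite-difference approximation of df/du^i.
    du, dv = (h, 0.0) if i == 0 else (0.0, h)
    fp, fm = f(u + du, v + dv), f(u - du, v - dv)
    return tuple((a - b) / (2 * h) for a, b in zip(fp, fm))

def induced_metric(f, u, v):
    # P(f)_{ij} = (df/du^i) . (df/du^j), the first fundamental form.
    d = [partial(f, u, v, i) for i in (0, 1)]
    return [[sum(a * b for a, b in zip(d[i], d[j])) for j in (0, 1)]
            for i in (0, 1)]

g = induced_metric(sphere, 0.7, 1.3)
# For the round sphere, g should approximate diag(1, sin^2 u).
```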
Following standard practice, one would expect to apply the Banach space inverse function theorem. So, for instance, one might expect to restrict $P$ to $C^5(\Omega;\mathbb{R}^N)$ and, for an immersion $f$ in this domain, to study the linearization $DP_f : C^5(\Omega;\mathbb{R}^N) \to C^4(\Omega;\operatorname{Sym}_{n\times n}(\mathbb{R}))$.
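The linearization in question can be written out explicitly (a short computation, assuming the coordinate formula $P(f)_{ij} = \partial f/\partial u^i \cdot \partial f/\partial u^j$ for the induced-metric operator): differentiating $P(f + th)_{ij}$ at $t = 0$ in the direction of a variation $h$ gives

```latex
\[
(DP_f h)_{ij}
  = \left.\frac{d}{dt}\right|_{t=0} P(f+th)_{ij}
  = \frac{\partial f}{\partial u^i}\cdot\frac{\partial h}{\partial u^j}
  + \frac{\partial h}{\partial u^i}\cdot\frac{\partial f}{\partial u^j}.
\]
```

Since this involves one derivative of $h$, it does define a map from $C^5$ variations to $C^4$ symmetric-matrix-valued functions.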
However, there is a deep reason that such a formulation cannot work. The issue is that there is a second-order differential operator of $P(f)$ which coincides with a second-order differential operator applied to $f$. To be precise: if $f$ is an immersion then the scalar curvature $R^{P(f)}$ of the Riemannian metric $P(f)$ satisfies
$$R^{P(f)} = \lVert H_f\rVert^2 - \lVert h_f\rVert^2,$$
where $H_f$ denotes the mean curvature of the immersion $f$ and $h_f$ denotes its second fundamental form; this is the Gauss equation from surface theory. The left-hand side involves two derivatives of the metric $P(f)$, while the right-hand side involves only two derivatives of $f$. So if $P(f)$ is $C^4$, then $R^{P(f)}$ is generally only $C^2$; were $f$ of class $C^5$, the right-hand side would be of class $C^3$. Hence $f$ generally can be no better than $C^4$: the regularity of $f$ is no better than that of $P(f)$ itself.
In context, the upshot is that the inverse to the linearization of $P$, even if it exists as a map between function spaces, cannot be bounded between the appropriate Banach spaces, and hence the Banach space implicit function theorem cannot be applied.
By exactly the same reasoning, one cannot directly apply the Banach space implicit function theorem even if one uses the Hölder spaces, the Sobolev spaces, or any of the $C^k$ spaces. In any of these settings, an inverse to the linearization of $P$ will fail to be bounded.
This is the problem of loss of derivatives. A very naive expectation is that, generally, if $P$ is an order $k$ differential operator, then if $P(f)$ is in $C^m$ then $f$ must be in $C^{m+k}$. The above analysis shows that this expectation fails for the map which sends an immersion $f$ to its induced Riemannian metric: although that map is of order one, one does not gain a derivative upon inverting it.
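The derivative count underlying this failure can be summarized as follows (assuming the Gauss-equation identity $R^{P(f)} = \lVert H_f\rVert^2 - \lVert h_f\rVert^2$ relating the scalar curvature of the induced metric to the mean curvature $H_f$ and second fundamental form $h_f$ of the immersion):

```latex
\[
P(f)\in C^{k}\ \Longrightarrow\ R^{P(f)}\in C^{k-2},
\qquad
f\in C^{k+1}\ \Longrightarrow\ \lVert H_f\rVert^2-\lVert h_f\rVert^2\in C^{k-1}.
\]
```

Since the Gauss equation identifies the two quantities, a generic $C^k$ metric in the image of $P$ cannot arise from a $C^{k+1}$ immersion; inverting the order-one operator $P$ therefore gains no derivative.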
This section only aims to describe an idea, and as such it is intentionally imprecise. For concreteness, suppose that $P$ is an order-one differential operator on some function spaces, so that it defines a map $P : C^{k+1} \to C^k$ for each $k$. Suppose that, at some $C^{k+1}$ function $f$, the linearization $DP_f : C^{k+1} \to C^k$ has a right inverse $S$, with the loss of a derivative typical of the situation described above: $S$ maps $C^k$ only into $C^k$, not into $C^{k+1}$. Then the naive Newton iteration $f_{n+1} = f_n - S(P(f_n) - g)$ cannot be iterated indefinitely: $f_1$ can be expected to be only $C^k$, then $f_2$ only $C^{k-1}$, and so on, so that after finitely many steps all regularity is exhausted.
Nash's solution is quite striking in its simplicity. Suppose that for each $n > 0$ one has a smoothing operator $\theta_n$ which takes a $C^k$ function, returns a smooth function, and approximates the identity when $n$ is large. Then the "smoothed" Newton iteration $f_{n+1} = f_n - \theta_n S(P(f_n) - g)$ transparently does not encounter the same difficulty as the previous "unsmoothed" version, since it is an iteration in the space of smooth functions which never loses regularity. So one has a well-defined sequence of functions; the major surprise of Nash's approach is that this sequence actually converges to a function $f_\infty$ with $P(f_\infty) = g$. For many mathematicians, this is rather surprising, since the "fix" of throwing in a smoothing operator seems too superficial to overcome the deep problem in the standard Newton method; Mikhael Gromov, for instance, has remarked on just how implausible the fix appears at first sight.
Remark. The true "smoothed Newton iteration" is a little more complicated than the above form, although there are a few inequivalent forms, depending on where one chooses to insert the smoothing operators. The primary difference is that one requires invertibility of $DP_f$ for an entire open neighborhood of choices of $f$, and then one uses the "true" Newton iteration, corresponding to (using single-variable notation) $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$, as opposed to $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_0)}$, the latter of which reflects the forms given above. This is rather important, since the improved quadratic convergence of the "true" Newton iteration is significantly used to combat the error of "smoothing", in order to obtain convergence. Certain approaches, in particular Nash's and Hamilton's, follow the solution of an ordinary differential equation in function space rather than an iteration in function space; the relation of the latter to the former is essentially that of the solution of Euler's method to that of a differential equation.
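The contrast between the two single-variable iterations described above can be observed numerically. A minimal sketch (the test function $f(x) = x^2 - 2$, the starting point, and the step count are illustrative choices, not from the source):

```python
import math

def true_newton(f, df, x0, steps):
    # x_{n+1} = x_n - f(x_n)/f'(x_n): the derivative is re-evaluated
    # at each step, giving quadratic convergence.
    x = x0
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

def chord_newton(f, df, x0, steps):
    # x_{n+1} = x_n - f(x_n)/f'(x_0): the derivative is frozen at the
    # initial point, giving only linear convergence.
    x, slope = x0, df(x0)
    for _ in range(steps):
        x = x - f(x) / slope
    return x

f = lambda x: x * x - 2.0          # root: sqrt(2)
df = lambda x: 2.0 * x

err_true = abs(true_newton(f, df, 2.0, 6) - math.sqrt(2.0))
err_chord = abs(chord_newton(f, df, 2.0, 6) - math.sqrt(2.0))
# After six steps the true iteration is accurate to machine precision,
# while the frozen-derivative iteration still carries a visible error.
```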
The following statement appears in Hamilton (1982):

Theorem. Let $F$ and $G$ be tame Fréchet spaces, let $U \subseteq F$ be open, and let $P : U \to G$ be a smooth tame map. Suppose that for each $f \in U$ the linearization $DP_f : F \to G$ is invertible, and that the family of inverses, as a map $U \times G \to F$, is smooth tame. Then $P$ is locally invertible, and each local inverse is a smooth tame map.

Similarly, if each linearization is only injective, and a family of left inverses is smooth tame, then $P$ is locally injective. And if each linearization is only surjective, and a family of right inverses is smooth tame, then $P$ is locally surjective with a smooth tame right inverse.
A graded Fréchet space consists of the following data:

- a vector space $F$;
- a countable collection of seminorms $\|\cdot\|_n : F \to \mathbb{R}$ such that $\|f\|_0 \le \|f\|_1 \le \|f\|_2 \le \cdots$ for every $f \in F$.

These are required to satisfy the following conditions:

- if $f \in F$ is such that $\|f\|_n = 0$ for all $n = 0,1,2,\ldots$, then $f = 0$;
- if $f_j \in F$ is a sequence such that, for each $n = 0,1,2,\ldots$ and every $\varepsilon > 0$, there exists $N_{n,\varepsilon}$ such that $j,k > N_{n,\varepsilon}$ implies $\|f_j - f_k\|_n < \varepsilon$, then there exists $f \in F$ such that, for each $n$, one has $\lim_{j\to\infty}\|f_j - f\|_n = 0$.
Such a graded Fréchet space is called a tame Fréchet space if there exist a Banach space $B$ and linear maps $L : F \to \Sigma(B)$ and $M : \Sigma(B) \to F$ such that $M \circ L : F \to F$ is the identity map, and such that there exist $r$ and $b$ with the following property: for each $n > b$ there is a number $C_n$ such that
$$\|L(f)\|_n \le C_n \|f\|_{n+r} \quad\text{for every } f \in F,$$
and
$$\|M(\{x_i\})\|_n \le C_n \|\{x_i\}\|_{n+r} \quad\text{for every } \left\{x_i\right\} \in \Sigma(B).$$
Here $\Sigma(B)$ denotes the space of exponentially decreasing sequences in $B$, that is, the space of sequences $\{x_i\}$ in $B$ such that $\|\{x_i\}\|_n = \sum_{i=0}^\infty e^{ni}\|x_i\|_B$ is finite for every $n$.
If $M$ is a compact smooth manifold, then $C^\infty(M)$ is a tame Fréchet space, where the grading $\|f\|_n$ may be taken to be the $C^n$-norm of $f$, the $C^{n,\alpha}$-norm of $f$ for some fixed $\alpha$, or the $W^{n,p}$-norm of $f$ for some fixed $p$. The analogous statements hold for the space $C^\infty_0(M)$ and for the space of smooth sections of a vector bundle $V \to M$ over a compact manifold $M$. To exhibit the tame structure of these examples, one topologically embeds $M$ in a Euclidean space, takes $B$ to be the space $L^1$ of integrable functions on that Euclidean space, and, roughly, defines the map $L$ by restricting the Fourier transform of a function to a sequence of exponentially growing annuli.
Presented directly as above, the meaning and naturality of the "tame" condition is rather obscure. The situation is clarified if one re-considers the basic examples given above, in which the relevant "exponentially decreasing" sequences in Banach spaces arise from restriction of a Fourier transform. Recall that smoothness of a function on Euclidean space is directly related to the rate of decay of its Fourier transform. "Tameness" is thus seen as a condition which allows an abstraction of the idea of a "smoothing operator" on a function space. Given a Banach space $B$, the space $\Sigma(B)$ of exponentially decreasing sequences in $B$ admits smoothing operators, defined as follows. Let $s : \mathbb{R} \to \mathbb{R}$ be a smooth function which vanishes on $(-\infty,0)$, is identically equal to $1$ on $(1,\infty)$, and takes values only in the interval $[0,1]$. Then for each real number $t$, define $\theta_t : \Sigma(B) \to \Sigma(B)$ by $(\theta_t x)_i = s(t-i)\,x_i$.
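These smoothing operators can be sketched numerically on $\Sigma(B)$ with $B = \mathbb{R}$. In the sketch below (an illustrative example, not from the source), the piecewise-linear cutoff stands in for a genuinely smooth one, and the particular exponentially decreasing sequence is an arbitrary choice; the point is that $\theta_t$ truncates the tail of a sequence, and $\|x - \theta_t x\|_n \to 0$ as $t \to \infty$:

```python
import math

def s(t):
    # Cutoff: 0 on (-inf, 0], 1 on [1, inf); a piecewise-linear
    # stand-in for the smooth cutoff used in the text.
    if t <= 0.0:
        return 0.0
    if t >= 1.0:
        return 1.0
    return t

def norm(x, n):
    # ||x||_n = sum_i e^{n i} |x_i|: the graded norm on Sigma(B), B = R.
    return sum(math.exp(n * i) * abs(xi) for i, xi in enumerate(x))

def smooth(x, t):
    # (theta_t x)_i = s(t - i) x_i: damps and then kills the tail.
    return [s(t - i) * xi for i, xi in enumerate(x)]

x = [math.exp(-5.0 * i) for i in range(40)]   # exponentially decreasing
# theta_t x has only finitely many nonzero terms, and the graded norms
# of the error x - theta_t x are tiny once t is moderately large.
err = norm([a - b for a, b in zip(x, smooth(x, 10.0))], 2)
```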
Let $F$ and $G$ be graded Fréchet spaces. Let $U$ be an open subset of $F$, meaning that for each $f \in U$ there are $n \in \mathbb{N}$ and $\varepsilon > 0$ such that $\|f - f_1\|_n < \varepsilon$ implies that $f_1$ is also contained in $U$. A smooth map $P : U \to G$ is called a smooth tame map if for all $k \in \mathbb{N}$ the derivative $D^kP : U \times F \times \cdots \times F \to G$ satisfies the following: there exist $r$ and $b$ such that $n > b$ implies
$$\|D^kP(f, h_1, \ldots, h_k)\|_n \le C_n\big(\|f\|_{n+r} + \|h_1\|_{n+r} + \cdots + \|h_k\|_{n+r} + 1\big)$$
for all $(f, h_1, \ldots, h_k) \in U \times F \times \cdots \times F$.
The fundamental example says that, on a compact smooth manifold, a nonlinear partial differential operator (possibly between sections of vector bundles over the manifold) is a smooth tame map; in this case, r can be taken to be the order of the operator.
Let $S$ denote the family of inverse mappings $U \times G \to F$. One may assume that $P(0) = 0$; the problem is then, given $g_\infty$ close to $0$, to find $f$ with $P(f) = g_\infty$. As noted above, Hamilton's approach follows the solution of an ordinary differential equation in function space: one constructs a flow starting at $f(0) = 0$ whose limit $f$ satisfies $P(f) = g_\infty$.