In real analysis, Fermat's theorem (also known as the interior extremum theorem) is a method to find local maxima and minima of differentiable functions on open sets by showing that every local extremum of the function is a stationary point (the function's derivative is zero at that point). Fermat's theorem is named after the French mathematician Pierre de Fermat.
By using Fermat's theorem, the potential extrema of a function f, with derivative f', are found by solving an equation in f'.
One way to state Fermat's theorem is that, if a function has a local extremum at some point and is differentiable there, then the function's derivative at that point must be zero. In precise mathematical language:
Let f : (a, b) → R be a function and suppose that x0 ∈ (a, b) is a point where f has a local extremum. If f is differentiable at x0, then f'(x0) = 0.
Another way to understand the theorem is via the contrapositive statement: if the derivative of a function at any point is not zero, then there is not a local extremum at that point. Formally:
If f is differentiable at x0 ∈ (a, b) and f'(x0) ≠ 0, then x0 is not a local extremum of f.
The global extrema of a function f on a domain A occur only at boundaries, non-differentiable points, and stationary points. If x0 is a global extremum of f, then one of the following holds: x0 is in the boundary of A, x0 is a point where f is not differentiable, or x0 is a stationary point of f.
In higher dimensions, exactly the same statement holds; however, the proof is slightly more complicated. The complication is that in 1 dimension, one can either move left or right from a point, while in higher dimensions, one can move in many directions. Thus, if the derivative does not vanish, one must argue that there is some direction in which the function increases – and thus in the opposite direction the function decreases. This is the only change to the proof or the analysis.
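As a small illustration of the higher-dimensional statement, the following sketch (assuming the SymPy library is available; the two-variable function h and its minimizer are hypothetical choices made only for this example) checks that both partial derivatives vanish at an interior minimum:

# Hypothetical illustration: the gradient of a differentiable function
# vanishes at an interior local extremum (the higher-dimensional statement).
import sympy as sp

x, y = sp.symbols('x y', real=True)
h = (x - 1)**2 + 3*(y + 2)**2                   # smooth function with a minimum at (1, -2)

grad = [sp.diff(h, v) for v in (x, y)]          # the two partial derivatives of h
critical = sp.solve(grad, [x, y], dict=True)    # solve grad = 0
print(critical)                                 # [{x: 1, y: -2}]

# Both partial derivatives are zero at the minimizer, as the theorem requires.
assert all(g.subs(critical[0]) == 0 for g in grad)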
The statement can also be extended to differentiable manifolds. If f : M → R is a differentiable function on a manifold M, then its local extrema must be critical points of f, in particular points where the exterior derivative df is zero.
Fermat's theorem is central to the calculus method of determining maxima and minima: in one dimension, one can find extrema by simply computing the stationary points (by computing the zeros of the derivative), the non-differentiable points, and the boundary points, and then investigating this set to determine the extrema.
One can do this either by evaluating the function at each point and taking the maximum, or by analyzing the derivatives further, using the first derivative test, the second derivative test, or the higher-order derivative test.
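As a minimal sketch of this procedure (assuming the SymPy library is available; the function f(x) = x^3 - 3x and the interval [-3, 2] are illustrative choices, not taken from the source), one might write:

# Find candidate extrema of f(x) = x**3 - 3*x on the closed interval [-3, 2]:
# stationary points (zeros of f'), plus the boundary points, then compare values.
import sympy as sp

x = sp.symbols('x', real=True)
f = x**3 - 3*x
a, b = -3, 2

stationary = sp.solveset(sp.diff(f, x), x, domain=sp.Interval(a, b))
candidates = sorted(set(stationary) | {a, b})   # f is differentiable, so no other candidates

values = {c: f.subs(x, c) for c in candidates}
print(values)                                   # {-3: -18, -1: 2, 1: -2, 2: 2}
print(max(values, key=values.get), min(values, key=values.get))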
Intuitively, a differentiable function is approximated by its tangent line: infinitesimally it behaves like a linear function a + bx, or more precisely like f(x0) + f'(x0)(x - x0). Thus, if the derivative at x0 is positive, the function is increasing near x0, while if the derivative is negative, the function is decreasing near x0; in either case it cannot attain a maximum or a minimum at x0, because its value is changing there. It can only attain a maximum or minimum where the derivative vanishes, where it is not differentiable, or where one runs into the boundary. However, making "behaves like a linear function" precise requires a careful analytic proof.
More precisely, the intuition can be stated as: if the derivative is positive, there is some point to the right of x0 where f is greater and some point to the left of x0 where f is lesser, so f attains neither a maximum nor a minimum at x0; conversely, if the derivative is negative, there is a point to the right which is lesser and a point to the left which is greater. Stated this way, the proof consists of translating this into inequalities and verifying how much greater or less f is at x0.
The intuition is based on the behavior of polynomial functions. Assume that the function f has a maximum at x0, the reasoning being similar for a minimum. If x0 ∈ (a, b) is a local maximum then, roughly, there is a (possibly small) neighborhood of x0 on which the function is increasing before x0 and decreasing after it. As the derivative is positive for an increasing function and negative for a decreasing function, f' is positive before x0 and negative after it. By Darboux's theorem, f' does not skip values, so it must be zero somewhere between its positive and negative values; the only point in the neighbourhood where f'(x) = 0 can hold is x0.
The theorem (and its proof below) is more general than the intuition in that it does not require the function to be differentiable on a neighbourhood of x0; it is enough for the function to be differentiable at the extreme point itself.
Suppose that f is differentiable at x0 ∈ (a, b) with derivative K, and assume without loss of generality that K > 0, so the tangent line at x0 has positive slope. Then there is a neighborhood of x0 on which the secant lines through x0 all have positive slope, and thus to the right of x0, f is greater than f(x0), while to the left of x0, f is lesser.
The schematic of the proof is: an infinitesimal statement about the derivative (the tangent line) at x0 implies a local statement about difference quotients (secant lines) near x0, which in turn implies a local statement about the values of f near x0.
Formally, by the definition of the derivative, f'(x0) = K means that

lim_{ε → 0} (f(x0 + ε) - f(x0)) / ε = K.

In particular, for sufficiently small ε (say, all ε with |ε| < ε0 for some ε0 > 0), the difference quotient must exceed K/2, by the definition of a limit. Thus on the interval (x0 - ε0, x0 + ε0) one has

(f(x0 + ε) - f(x0)) / ε > K/2;

the equality in the limit (an infinitesimal statement) has been replaced by an inequality on a neighborhood (a local statement). Thus, rearranging, if ε > 0 then

f(x0 + ε) > f(x0) + (K/2)ε > f(x0),

so on the interval to the right f is greater than f(x0), while if ε < 0 then

f(x0 + ε) < f(x0) + (K/2)ε < f(x0),

so on the interval to the left f is less than f(x0). Thus x0 is neither a local nor a global maximum or minimum of f.
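A quick numerical illustration of this step (plain Python; the function f(x) = x^2, the point x0 = 1, and the derivative K = 2 are illustrative choices, not from the source): for small ε the difference quotients stay above K/2, so f is larger just to the right of x0 and smaller just to the left.

# Illustrative check of the secant-slope bound for f(x) = x**2 at x0 = 1, where f'(x0) = K = 2.
f = lambda t: t**2
x0, K = 1.0, 2.0

for eps in (1e-1, 1e-3, -1e-3, -1e-1):
    quotient = (f(x0 + eps) - f(x0)) / eps
    assert quotient > K / 2                    # secant slopes stay above K/2 near x0
    assert (f(x0 + eps) > f(x0)) == (eps > 0)  # greater to the right, smaller to the left
print("difference quotients near x0 all exceed K/2")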
Alternatively, one can start by assuming that x0 is a local extremum and then prove that the derivative is 0.

Suppose that x0 is a local maximum (a similar argument applies if x0 is a local minimum). Then there exists δ > 0 such that (x0 - δ, x0 + δ) ⊂ (a, b) and such that f(x0) ≥ f(x) for all x with |x - x0| < δ. Hence for any h ∈ (0, δ) we have

(f(x0 + h) - f(x0)) / h ≤ 0.

Since the limit of this ratio as h approaches 0 from above exists and equals f'(x0), we conclude that f'(x0) ≤ 0. On the other hand, for h ∈ (-δ, 0) we notice that

(f(x0 + h) - f(x0)) / h ≥ 0,

but again the limit as h approaches 0 from below exists and equals f'(x0), so we also have f'(x0) ≥ 0.

Hence we conclude that f'(x0) = 0.
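The two one-sided inequalities can also be seen numerically (a plain-Python sketch; the function f(x) = 1 - x^2 and its maximizer x0 = 0 are illustrative choices): at a local maximum the right-hand difference quotients are ≤ 0 and the left-hand ones are ≥ 0, forcing f'(x0) = 0 in the limit.

# Illustrative check at the local maximum x0 = 0 of f(x) = 1 - x**2:
# right-hand difference quotients are <= 0, left-hand ones are >= 0.
f = lambda t: 1 - t**2
x0 = 0.0

right = [(f(x0 + h) - f(x0)) / h for h in (1e-1, 1e-2, 1e-3)]
left  = [(f(x0 + h) - f(x0)) / h for h in (-1e-1, -1e-2, -1e-3)]

assert all(q <= 0 for q in right)   # forces f'(x0) <= 0 in the limit
assert all(q >= 0 for q in left)    # forces f'(x0) >= 0 in the limit
print(right, left)                  # both sequences approach 0, so f'(x0) = 0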
A subtle misconception that is often held in the context of Fermat's theorem is to assume that it makes a stronger statement about local behavior than it does. Notably, Fermat's theorem does not say that functions (monotonically) "increase up to" or "decrease down from" a local maximum. This is very similar to the misconception that a limit means "monotonically getting closer to a point". For "well-behaved functions" (which here means continuously differentiable), some intuitions hold, but in general functions may be ill-behaved, as illustrated below. The moral is that derivatives determine infinitesimal behavior, and that continuous derivatives determine local behavior.
For continuously differentiable functions (C^1), some of the intuition does hold: if f is continuously differentiable on an open neighborhood of the point x0, then f'(x0) > 0 does mean that f is increasing on a neighborhood of x0, as follows.

If f'(x0) = K > 0 and f is C^1, then by continuity of the derivative there is some ε0 > 0 such that f'(x) > K/2 for all x ∈ (x0 - ε0, x0 + ε0). Then f is increasing on this interval by the mean value theorem: the slope of any secant line is at least K/2, since it equals the slope of some tangent line.
However, in the general statement of Fermat's theorem, where one is only given that the derivative at x0 is positive, one can only conclude that secant lines through x0 have positive slope, and only for secant lines between x0 and points sufficiently near it.
Conversely, if the derivative of f at a point is zero (x0 is a stationary point), one cannot in general conclude anything about the local behavior of f: it may increase to one side and decrease to the other (as x^3 does at 0), increase to both sides (as x^4 does), decrease to both sides (as -x^4 does), or behave in more complicated ways, such as oscillating (as x^2 sin(1/x) does, as discussed below).
One can analyze the infinitesimal behavior via the second derivative test and the higher-order derivative test, if the function is differentiable enough. If the first non-vanishing derivative at x0, say f^(k)(x0) ≠ 0, is continuous (so f is C^k), then one can conclude local behavior: near x0 the difference f(x) - f(x0) behaves approximately like f^(k)(x0)(x - x0)^k / k!, so f is locally close to a polynomial of degree k. If the k-th derivative is not continuous, however, one cannot draw such conclusions, and the function may behave rather differently.
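A short sketch of that classification (assuming the SymPy library; the helper classify and the example functions are hypothetical, chosen only for illustration) locates the first non-vanishing derivative at a stationary point and reads off the local behavior from its order and sign:

# Classify a stationary point x0 by the first non-vanishing derivative f^(k)(x0):
# k even with f^(k)(x0) > 0 -> local minimum, k even with f^(k)(x0) < 0 -> local maximum,
# k odd -> neither (the function behaves like a multiple of (x - x0)**k there).
import sympy as sp

x = sp.symbols('x', real=True)

def classify(f, x0, max_order=8):
    for k in range(1, max_order + 1):
        dk = sp.diff(f, x, k).subs(x, x0)
        if dk != 0:
            if k % 2 == 1:
                return f"order {k}: neither max nor min"
            return f"order {k}: local " + ("minimum" if dk > 0 else "maximum")
    return "inconclusive up to the requested order"

print(classify(x**3, 0))    # order 3: neither max nor min
print(classify(x**4, 0))    # order 4: local minimum
print(classify(-x**4, 0))   # order 4: local maximum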
The function sin(1/x) oscillates increasingly rapidly between -1 and 1 as x approaches 0. Consequently, the function f(x) = (1 + sin(1/x))x^2 oscillates increasingly rapidly between 0 and 2x^2 as x approaches 0. If one extends this function by defining f(0) = 0, then the extended function is continuous and everywhere differentiable, and 0 is a global minimum, since f(x) ≥ 0 everywhere; yet in every neighborhood of 0 the function attains the value 0 again (wherever sin(1/x) = -1) and climbs back up toward 2x^2, so it is not increasing away from 0 on either side.
Continuing in this vein, one may define g(x) = (2 + sin(1/x))x^2 (with g(0) = 0), which oscillates between x^2 and 3x^2. The function then has a strict local and global minimum at x = 0, yet on no neighborhood of 0 is it decreasing down to or increasing up from 0; it oscillates wildly near 0.
This pathology can be understood because, while the function g is everywhere differentiable, it is not continuously differentiable: the limit of g'(x) as x → 0 does not exist, so the derivative is not continuous at 0. This reflects the oscillation between increasing and decreasing behavior arbitrarily close to 0, exactly what continuous differentiability (C^1) would rule out.
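This oscillation can be observed directly (a plain-Python sketch; the sampling interval and step are arbitrary illustrative choices): even on a tiny interval next to 0 the slope of g keeps changing sign, while g itself stays strictly positive there.

# Sample g(x) = (2 + sin(1/x))*x**2 on the tiny interval (1e-4, 2e-4) and count how often
# consecutive differences change sign: g oscillates even this close to 0,
# although g(x) > 0 = g(0) for every x != 0.
import math

g = lambda t: (2 + math.sin(1 / t)) * t**2 if t else 0.0

xs = [1e-4 + i * 1e-8 for i in range(10_001)]
vals = [g(t) for t in xs]
diffs = [b - a for a, b in zip(vals, vals[1:])]
sign_changes = sum(1 for a, b in zip(diffs, diffs[1:]) if a * b < 0)

print(sign_changes)          # many sign changes: g is not monotone on this interval
assert min(vals) > 0         # yet g > 0 away from 0, so 0 is the strict minimum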