In mathematics, the mean value theorem (or Lagrange's mean value theorem) states, roughly, that for a given planar arc between two endpoints, there is at least one point at which the tangent to the arc is parallel to the secant through its endpoints. It is one of the most important results in real analysis. This theorem is used to prove statements about a function on an interval starting from local hypotheses about derivatives at points of the interval.
A special case of this theorem for inverse interpolation of the sine was first described by Parameshvara (1380–1460), from the Kerala School of Astronomy and Mathematics in India, in his commentaries on Govindasvāmi and Bhāskara II.[1] A restricted form of the theorem was proved by Michel Rolle in 1691; the result was what is now known as Rolle's theorem, and was proved only for polynomials, without the techniques of calculus. The mean value theorem in its modern form was stated and proved by Augustin Louis Cauchy in 1823.[2] Many variations of this theorem have been proved since then.[3][4]
Let f:[a,b]\to\R be a continuous function on the closed interval [a,b], and differentiable on the open interval (a,b), where a<b. Then there exists some c in (a,b) such that

f'(c)=\frac{f(b)-f(a)}{b-a}.
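As a quick numerical illustration (not part of the theorem itself), the sketch below locates such a point c by bisection for the sample choice f(x)=x^3 on [0,2], where the secant slope is 4 and c works out to 2/\sqrt{3}:

```python
import math

# Sample choice for illustration: f(x) = x**3 on [a, b] = [0, 2].
def f(x):
    return x ** 3

def fprime(x):
    return 3 * x ** 2

a, b = 0.0, 2.0
secant_slope = (f(b) - f(a)) / (b - a)   # (8 - 0) / 2 = 4.0

# fprime(x) - secant_slope changes sign on (a, b), so bisection finds a
# root, i.e. a point c whose tangent slope equals the secant slope.
lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    if fprime(mid) < secant_slope:
        lo = mid
    else:
        hi = mid

c = (lo + hi) / 2
print(c)   # ≈ 1.1547, i.e. 2 / sqrt(3)
```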
The mean value theorem is a generalization of Rolle's theorem, which assumes f(a)=f(b), so that the right-hand side above is zero.
The mean value theorem is still valid in a slightly more general setting. One only needs to assume that f:[a,b]\to\R is continuous on [a,b], and that for every x in (a,b) the limit

\lim_{h\to0}\frac{f(x+h)-f(x)}{h}

exists as a finite number or equals \infty or -\infty. If finite, that limit equals f'(x). An example where this version of the theorem applies is given by the real-valued cube root function x\mapsto x^{1/3}, whose derivative tends to infinity at the origin.
The expression \frac{f(b)-f(a)}{b-a} gives the slope of the line joining the points (a,f(a)) and (b,f(b)), which is a chord of the graph of f, while f'(x) gives the slope of the tangent to the curve at the point (x,f(x)). Thus the mean value theorem says that given any chord of a smooth curve, we can find a point on the curve lying between the endpoints of the chord such that the tangent of the curve at that point is parallel to the chord.
Define g(x)=f(x)-rx, where r is a constant. Since f is continuous on [a,b] and differentiable on (a,b), the same is true for g. We now want to choose r so that g satisfies the conditions of Rolle's theorem. Namely

\begin{align} g(a)=g(b)&\iff f(a)-ra=f(b)-rb\\ &\iff r(b-a)=f(b)-f(a)\\ &\iff r=\frac{f(b)-f(a)}{b-a}. \end{align}
By Rolle's theorem, since g is differentiable and g(a)=g(b), there is some c in (a,b) for which g'(c)=0, and it follows from the equality g(x)=f(x)-rx that

\begin{align} &g'(x)=f'(x)-r\\ &g'(c)=f'(c)-r=0\\ &\implies f'(c)=r=\frac{f(b)-f(a)}{b-a}, \end{align}

as required.
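The choice of r above can be checked numerically; here is a minimal sketch with the sample function f(x)=\sin(x) on [0,\pi/2] (an illustrative assumption, not part of the proof):

```python
import math

# Auxiliary function from the proof: g(x) = f(x) - r*x, with r the chord slope.
def f(x):
    return math.sin(x)

a, b = 0.0, math.pi / 2
r = (f(b) - f(a)) / (b - a)   # slope of the chord joining the endpoints

def g(x):
    return f(x) - r * x

# With this choice of r, g(a) == g(b), so Rolle's theorem applies to g.
print(abs(g(a) - g(b)) < 1e-12)   # True
```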
Theorem 1: Assume that f is a continuous, real-valued function, defined on an arbitrary interval I of the real line. If the derivative of f at every interior point of the interval I exists and is zero, then f is constant in the interior.

Proof: Assume the derivative of f at every interior point of the interval I exists and is zero. Let (a,b) be an arbitrary open interval in I. By the mean value theorem, there exists a point c in (a,b) such that

0=f'(c)=\frac{f(b)-f(a)}{b-a}.

This implies that f(a)=f(b). Thus, f is constant on the interior of I and, by continuity, is constant on all of I.
Remarks: Only continuity of f, not differentiability, is needed at the endpoints of the interval I. No hypothesis of continuity needs to be stated if I is an open interval, since the existence of a derivative at a point implies continuity at that point.

Theorem 2: If f'(x)=g'(x) for all x in an interval (a,b) of the domain of these functions, then f-g is constant, i.e. f=g+c where c is a constant on (a,b).
Proof: Let F(x)=f(x)-g(x). Then F'(x)=f'(x)-g'(x)=0 on the interval (a,b), so Theorem 1 above tells us that F(x)=f(x)-g(x) is a constant c, i.e. f=g+c.
Theorem 3: If F is an antiderivative of f on an interval I, then the most general antiderivative of f on I is F(x)+c, where c is a constant.

Proof: It directly follows from Theorem 2 above.
Cauchy's mean value theorem, also known as the extended mean value theorem, is a generalization of the mean value theorem.[5] It states: if the functions f and g are both continuous on the closed interval [a,b] and differentiable on the open interval (a,b), then there exists some c\in(a,b) such that

(f(b)-f(a))g'(c)=(g(b)-g(a))f'(c).

Of course, if g(a)\ne g(b) and g'(c)\ne0, this is equivalent to

\frac{f'(c)}{g'(c)}=\frac{f(b)-f(a)}{g(b)-g(a)}.
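A numerical sanity check of this identity, with the sample pair f(t)=t^2, g(t)=t^3 on [0,1] (chosen purely for illustration; the mean value point works out to c=2/3):

```python
# Sample pair for illustration: f(t) = t**2, g(t) = t**3 on [a, b] = [0, 1].
def f(t): return t ** 2
def g(t): return t ** 3
def fp(t): return 2 * t
def gp(t): return 3 * t ** 2

a, b = 0.0, 1.0
df, dg = f(b) - f(a), g(b) - g(a)

def hp(t):
    # Derivative of the auxiliary function h(t) = f(t) - (df/dg)*g(t).
    return fp(t) - (df / dg) * gp(t)

# hp is positive near 0.1 and negative near 1.0, so bisect for its root.
lo, hi = 0.1, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if hp(mid) > 0:
        lo = mid
    else:
        hi = mid

c = (lo + hi) / 2
print(abs(df * gp(c) - dg * fp(c)) < 1e-9)   # True; here c = 2/3
```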
Geometrically, this means that there is some tangent to the graph of the curve[6]

\begin{cases}[a,b]\to\R^2\\t\mapsto(f(t),g(t))\end{cases}

which is parallel to the line defined by the points (f(a),g(a)) and (f(b),g(b)). However, Cauchy's theorem does not claim the existence of such a tangent in all cases where (f(a),g(a)) and (f(b),g(b)) are distinct points, since it might be satisfied only for some value c with f'(c)=g'(c)=0, in other words a value for which the mentioned curve is stationary; in such points no tangent to the curve is likely to be defined at all. An example of this situation is the curve given by

t\mapsto\left(t^3,1-t^2\right),

which on the interval [-1,1] goes from the point (-1,0) to (1,0), yet never has a horizontal tangent; however, it has a stationary point (in fact a cusp) at t=0.
Cauchy's mean value theorem can be used to prove L'Hôpital's rule. The mean value theorem is the special case of Cauchy's mean value theorem when g(t)=t.
The proof of Cauchy's mean value theorem is based on the same idea as the proof of the mean value theorem. Suppose g(a)\ne g(b). Define h(x)=f(x)-rg(x), where r is fixed in such a way that h(a)=h(b), namely

\begin{align}h(a)=h(b)&\iff f(a)-rg(a)=f(b)-rg(b)\\ &\iff r(g(b)-g(a))=f(b)-f(a)\\ &\iff r=\frac{f(b)-f(a)}{g(b)-g(a)}.\end{align}
Since f and g are continuous on [a,b] and differentiable on (a,b), the same is true for h. All in all, h satisfies the conditions of Rolle's theorem: consequently, there is some c in (a,b) for which h'(c)=0. Now using the definition of h we have

0=h'(c)=f'(c)-rg'(c)=f'(c)-\left(\frac{f(b)-f(a)}{g(b)-g(a)}\right)g'(c),

and therefore

f'(c)=\frac{f(b)-f(a)}{g(b)-g(a)}\,g'(c),

as required.
If g(a)=g(b), then, applying Rolle's theorem to g, it follows that there exists c in (a,b) for which g'(c)=0. Using this choice of c, Cauchy's mean value theorem (trivially) holds.
The mean value theorem generalizes to real functions of multiple variables. The trick is to use parametrization to create a real function of one variable, and then apply the one-variable theorem.
Let G be an open subset of \R^n, and let f:G\to\R be a differentiable function. Fix points x,y\in G such that the line segment between x and y lies in G, and define g(t)=f((1-t)x+ty). Since g is a differentiable function in one variable, the mean value theorem gives

g(1)-g(0)=g'(c)

for some c between 0 and 1. But since g(1)=f(y) and g(0)=f(x), computing g'(c) explicitly we have

f(y)-f(x)=\nabla f((1-c)x+cy)\cdot(y-x),

where \nabla denotes a gradient and \cdot a dot product. This is an exact analog of the theorem in one variable (in the case n=1 this is the theorem in one variable). By the Cauchy–Schwarz inequality, the equation gives the estimate

\left|f(y)-f(x)\right|\le\left|\nabla f((1-c)x+cy)\right|\,\left|y-x\right|.
In particular, when G is convex and the partial derivatives of f are bounded, f is Lipschitz continuous (and therefore uniformly continuous).
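A small numeric sketch of the gradient form above, using the sample function f(x_1,x_2)=x_1^2+x_2^2 along the segment from (0,0) to (1,1) (all concrete choices here are illustrative assumptions):

```python
# Sample function and segment endpoints, chosen for illustration.
def f(p):
    return p[0] ** 2 + p[1] ** 2

def grad_f(p):
    return (2 * p[0], 2 * p[1])

x, y = (0.0, 0.0), (1.0, 1.0)

def point(t):
    # Parametrization (1 - t)*x + t*y of the segment.
    return tuple((1 - t) * xi + t * yi for xi, yi in zip(x, y))

diff = f(y) - f(x)
# Scan for c with f(y) - f(x) = grad f((1-c)x + cy) . (y - x).
for c in (k / 1000 for k in range(1, 1000)):
    grad = grad_f(point(c))
    dot = sum(gi * (yi - xi) for gi, xi, yi in zip(grad, x, y))
    if abs(dot - diff) < 1e-9:
        break

print(c)   # 0.5: for this f, the midpoint of the segment works
```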
As an application of the above, we prove that f is constant if the open subset G is connected and every partial derivative of f is 0. Pick some point x_0\in G, and let g(x)=f(x)-f(x_0). We want to show g(x)=0 for every x\in G. For that, let E=\{x\in G:g(x)=0\}. Then E is closed in G and nonempty. It is open too: for every x\in E,

|g(y)|=|g(y)-g(x)|\le(0)|y-x|=0

for every y in some ball centered at x and contained in G. Since G is connected, we conclude E=G.
The above arguments are made in a coordinate-free manner; hence, they generalize to the case when G is a subset of a Banach space.
There is no exact analog of the mean value theorem for vector-valued functions (see below). However, there is an inequality which can be applied to many of the same situations to which the mean value theorem is applicable in the one dimensional case:
Jean Dieudonné, in his classic treatise Foundations of Modern Analysis, discards the mean value theorem and replaces it by the mean inequality, since the proof is not constructive, one cannot find the mean value, and in applications one only needs the mean inequality. Serge Lang, in Analysis I, uses the mean value theorem in integral form as an instant reflex, but this use requires the continuity of the derivative. If one uses the Henstock–Kurzweil integral, one can have the mean value theorem in integral form without the additional assumption that the derivative should be continuous, as every derivative is Henstock–Kurzweil integrable.
The reason why there is no analog of mean value equality is the following: If f:G\to\R^m is a differentiable function (where G\subset\R^n is open) and if x+th, for t\in[0,1], is the line segment in question (lying inside G), then one can apply the above parametrization procedure to each of the component functions f_i (i=1,\ldots,m) of f (in the above notation set y=x+h). In doing so one finds points x+t_ih on the line segment satisfying

f_i(x+h)-f_i(x)=\nabla f_i(x+t_ih)\cdot h.

But generally there will not be a single point x+t^*h on the line segment satisfying

f_i(x+h)-f_i(x)=\nabla f_i(x+t^*h)\cdot h

for all i simultaneously. For example, define:
\begin{cases} f:[0,2\pi]\to\R^2\\ f(x)=(\cos(x),\sin(x)) \end{cases}
Then f(2\pi)-f(0)=0\in\R^2, but f_1'(x)=-\sin(x) and f_2'(x)=\cos(x) are never simultaneously zero, so the required equalities cannot hold at a single x in \left[0,2\pi\right].
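This failure is easy to confirm numerically; the sketch below scans [0,2\pi] and checks that \sin and \cos never come close to vanishing together (the grid and thresholds are arbitrary illustration choices):

```python
import math

# f1'(x) = -sin(x) and f2'(x) = cos(x) would both need to vanish at a
# common mean value point, but sin(x)**2 + cos(x)**2 == 1 forbids it.
found = False
for k in range(200001):
    x = 2 * math.pi * k / 200000
    if abs(math.sin(x)) < 1e-3 and abs(math.cos(x)) < 1e-3:
        found = True
        break

print(found)   # False: no common zero exists
```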
The above theorem implies the following mean value inequality: if f:[a,b]\to\R^k is continuous on [a,b] and differentiable on (a,b), with |f'(t)|\le M for all t\in(a,b), then

|f(b)-f(a)|\le M(b-a).

In fact, the above statement suffices for many applications and can be proved directly. (We shall write f instead of \mathbf{f} for readability.)
All conditions for the mean value theorem are necessary:

\boldsymbol{f(x)} must be differentiable on \boldsymbol{(a,b)}
\boldsymbol{f(x)} must be continuous on \boldsymbol{[a,b]}
\boldsymbol{f(x)} must be real-valued

When one of the above conditions is not satisfied, the mean value theorem is not valid in general, and so it cannot be applied.
The necessity of the first condition can be seen by the counterexample where the function f(x)=|x| on [-1,1] is not differentiable at x=0: the secant through the endpoints has slope 0, but f'(x)=\pm1 wherever it exists, so no suitable c exists.
The necessity of the second condition can be seen by the counterexample where the function

f(x)=\begin{cases}1&\text{at }x=0\\0&\text{on }(0,1]\end{cases}

satisfies criterion 1, since f'(x)=0 on (0,1). However, \frac{f(1)-f(0)}{1-0}=-1 while -1\ne0=f'(x) for all x\in(0,1), so no such c exists.
The theorem is false if a differentiable function is complex-valued instead of real-valued. For example, if f(x)=e^{xi} for all real x, then f(2\pi)-f(0)=0=0(2\pi-0), while f'(x)\ne0 for any real x.
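This counterexample can be checked numerically (a small sketch; the grid of sample points is an arbitrary choice):

```python
import cmath
import math

# f(x) = exp(i*x): one full turn around the unit circle returns to the
# start, yet f'(x) = i*exp(i*x) has modulus 1, so it never vanishes.
f = lambda x: cmath.exp(1j * x)
closed = abs(f(2 * math.pi) - f(0)) < 1e-12
never_zero = all(abs(1j * cmath.exp(1j * x)) > 0.999
                 for x in (2 * math.pi * k / 1000 for k in range(1000)))
print(closed, never_zero)   # True True
```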
Let f:[a,b]\to\R be a continuous function. Then there exists c in (a,b) such that

\int_a^b f(x)\,dx=f(c)(b-a).
This follows at once from the fundamental theorem of calculus, together with the mean value theorem for derivatives. Since the mean value of f on [a,b] is defined as

\frac{1}{b-a}\int_a^b f(x)\,dx,

we can interpret the conclusion as: f achieves its mean value at some c in (a,b).[7]
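A numeric sketch of this interpretation, with the sample function f(x)=x^2 on [0,1] (its mean value is 1/3, attained at c=1/\sqrt{3}; the choices are assumptions for illustration):

```python
import math

# Sample function for illustration: f(x) = x**2 on [a, b] = [0, 1].
def f(x):
    return x ** 2

a, b, n = 0.0, 1.0, 100000
h = (b - a) / n
# Midpoint-rule approximation of the integral of f over [a, b].
integral = sum(f(a + (k + 0.5) * h) for k in range(n)) * h
mean = integral / (b - a)
c = math.sqrt(mean)   # solve f(c) = mean for this particular f
print(a < c < b)                 # True
print(abs(f(c) - mean) < 1e-9)   # True
```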
In general, if f:[a,b]\to\R is continuous and g is an integrable function that does not change sign on [a,b], then there exists c in (a,b) such that

\int_a^b f(x)g(x)\,dx=f(c)\int_a^b g(x)\,dx.
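A numeric check of this weighted version, with the sample choices f(x)=x and g(x)=x^2 on [0,1]: the two integrals are 1/4 and 1/3, so f(c)=3/4 and c=3/4 indeed lies in (0,1).

```python
# Sample choices for illustration: f(x) = x, weight g(x) = x**2 on [0, 1].
def f(x): return x
def g(x): return x ** 2

a, b, n = 0.0, 1.0, 100000
h = (b - a) / n
xs = [a + (k + 0.5) * h for k in range(n)]   # midpoint-rule sample points
int_fg = sum(f(x) * g(x) for x in xs) * h    # ≈ 1/4
int_g = sum(g(x) for x in xs) * h            # ≈ 1/3
c = int_fg / int_g   # solve f(c) = int_fg / int_g, valid since f(x) = x
print(a < c < b)     # True; c ≈ 0.75
```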
There are various slightly different theorems called the second mean value theorem for definite integrals. A commonly found version is as follows:
If G:[a,b]\to\R is a positive monotonically decreasing function and \varphi:[a,b]\to\R is an integrable function, then there exists a number x in (a,b] such that

\int_a^b G(t)\varphi(t)\,dt=G(a^+)\int_a^x\varphi(t)\,dt.

Here G(a^+) stands for \lim_{t\to a^+}G(t), the existence of which follows from the conditions.
A variant not requiring positivity or decrease is the following: If G:[a,b]\to\R is a monotonic (not necessarily decreasing and positive) function and \varphi:[a,b]\to\R is an integrable function, then there exists a number x in (a,b) such that

\int_a^b G(t)\varphi(t)\,dt=G(a^+)\int_a^x\varphi(t)\,dt+G(b^-)\int_x^b\varphi(t)\,dt.
If the function G returns a multidimensional vector, then the mean value theorem for integration is not true, even if the domain of G is also multidimensional. For example, consider the following 2-dimensional function defined on an n-dimensional cube:

\begin{cases} G:[0,2\pi]^n\to\R^2\\ G(x_1,\dots,x_n)=\left(\sin(x_1+\dots+x_n),\cos(x_1+\dots+x_n)\right) \end{cases}
Then, by symmetry it is easy to see that the mean value of G over its domain is (0,0):

\int_{[0,2\pi]^n}G(x_1,\dots,x_n)\,dx_1\cdots dx_n=(0,0).

However, there is no point at which G=(0,0), because |G|=1 everywhere.
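The n=1 case of this example can be checked numerically (the grid size is an arbitrary illustration choice):

```python
import math

# n = 1 case of the example: G(x) = (sin x, cos x) on [0, 2*pi] has mean
# value (0, 0) over the interval, yet |G(x)| = 1 at every point.
n = 100000
h = 2 * math.pi / n
xs = [(k + 0.5) * h for k in range(n)]   # midpoint-rule sample points
mean = (sum(math.sin(x) for x in xs) * h / (2 * math.pi),
        sum(math.cos(x) for x in xs) * h / (2 * math.pi))
on_circle = all(abs(math.hypot(math.sin(x), math.cos(x)) - 1) < 1e-9
                for x in xs)
print(abs(mean[0]) < 1e-6 and abs(mean[1]) < 1e-6)   # True
print(on_circle)                                     # True
```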
Assume that f, g, and h are differentiable functions on (a,b) that are continuous on [a,b]. Define

D(x)=\begin{vmatrix} f(x)&g(x)&h(x)\\ f(a)&g(a)&h(a)\\ f(b)&g(b)&h(b) \end{vmatrix}.

There exists c\in(a,b) such that D'(c)=0. Notice that

D'(x)=\begin{vmatrix} f'(x)&g'(x)&h'(x)\\ f(a)&g(a)&h(a)\\ f(b)&g(b)&h(b) \end{vmatrix}.
If we place h(x)=1, we get Cauchy's mean value theorem. If we place h(x)=1 and g(x)=x, we get Lagrange's mean value theorem.
The proof of the generalization is quite simple: each of D(a) and D(b) is a determinant with two identical rows, hence D(a)=D(b)=0. Rolle's theorem then implies that there exists c\in(a,b) such that D'(c)=0.
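A numeric sketch of this argument with the sample functions f(x)=x^2, g(x)=x, h(x)=1 on [0,1] (illustrative assumptions; with these choices D'(c)=0 recovers Lagrange's theorem, and c=1/2):

```python
# Sample functions on [a, b] = [0, 1], an illustrative assumption.
def f(x): return x ** 2
def g(x): return x
def h(x): return 1.0

a, b = 0.0, 1.0

def det3(m):
    # Cofactor expansion of a 3x3 determinant along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def D(x):
    return det3([[f(x), g(x), h(x)],
                 [f(a), g(a), h(a)],
                 [f(b), g(b), h(b)]])

# Two identical rows force D(a) = D(b) = 0, so Rolle's theorem applies.
print(D(a) == 0.0 and D(b) == 0.0)   # True
# Central difference at c = 1/2, where D'(c) = 0 for these choices.
dD = (D(0.5 + 1e-6) - D(0.5 - 1e-6)) / 2e-6
print(abs(dD) < 1e-6)                # True
```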
Let X and Y be non-negative random variables such that E[X] < E[Y] < ∞ and X\leq_{st}Y (i.e. X is smaller than Y in the usual stochastic order). Then there exists an absolutely continuous non-negative random variable Z having probability density function

f_Z(x)=\frac{\Pr(Y>x)-\Pr(X>x)}{{\rm E}[Y]-{\rm E}[X]},\qquad x\geqslant0.
Let g be a measurable and differentiable function such that E[g(X)], E[g(Y)] < ∞, and let its derivative g′ be measurable and Riemann-integrable on the interval [x,y] for all y ≥ x ≥ 0. Then, E[g′(Z)] is finite and[9]
{\rm E}[g(Y)]-{\rm E}[g(X)]={\rm E}[g'(Z)]\,[{\rm E}(Y)-{\rm E}(X)].
See also: Voorhoeve index and Mean value problem. As noted above, the theorem does not hold for differentiable complex-valued functions. Instead, a generalization of the theorem is stated as follows:[10]
Let f : Ω → C be a holomorphic function on the open convex set Ω, and let a and b be distinct points in Ω. Then there exist points u, v on the interior of the line segment from a to b such that
\operatorname{Re}(f'(u))=\operatorname{Re}\left(\frac{f(b)-f(a)}{b-a}\right),

\operatorname{Im}(f'(v))=\operatorname{Im}\left(\frac{f(b)-f(a)}{b-a}\right),

where Re denotes the real part and Im the imaginary part of a complex number.
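A numeric sketch for the real-part statement, with the sample holomorphic function f(z)=z^2 on the segment from a=0 to b=1+i (all concrete choices are illustrative assumptions):

```python
# Sample holomorphic function and segment endpoints, for illustration only.
f = lambda z: z ** 2
fp = lambda z: 2 * z
a, b = 0j, 1 + 1j

target = ((f(b) - f(a)) / (b - a)).real   # Re of the difference quotient

# Scan the open segment for the point whose Re f' is closest to the target.
us = [a + t * (b - a) for t in (k / 1000 for k in range(1, 1000))]
u = min(us, key=lambda z: abs(fp(z).real - target))
print(abs(fp(u).real - target) < 1e-9)   # True; here u = 0.5 + 0.5j
```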