In statistics, the Lehmann–Scheffé theorem is a prominent statement tying together the ideas of completeness, sufficiency, uniqueness, and best unbiased estimation. The theorem states that any estimator that is unbiased for a given unknown quantity and that depends on the data only through a complete, sufficient statistic is the unique best unbiased estimator of that quantity. The theorem is named after Erich Leo Lehmann and Henry Scheffé, who established it in two early papers.
If T is a complete sufficient statistic for θ and E(g(T)) = τ(θ), then g(T) is the uniformly minimum-variance unbiased estimator (UMVUE) of τ(θ).
Let \vec{X} = X_1, X_2, \dots, X_n be a random sample from a distribution that has p.d.f. (or p.m.f. in the discrete case) f(x:\theta), where \theta \in \Omega is a parameter in the parameter space. Suppose Y = u(\vec{X}) is a sufficient statistic for \theta, and let \{ f_Y(y:\theta) : \theta \in \Omega \} be a complete family. If \varphi satisfies \operatorname{E}[\varphi(Y)] = \theta, then \varphi(Y) is the unique MVUE of \theta.
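As a concrete illustration (a standard textbook instance, not drawn from the sources cited here), consider an i.i.d. Poisson sample: the sum of the observations is a complete sufficient statistic, and the sample mean is an unbiased function of it, so the theorem identifies the sample mean as the UMVUE of the Poisson mean:

X_1, \dots, X_n \overset{\text{i.i.d.}}{\sim} \operatorname{Poisson}(\theta), \qquad T = \sum_{i=1}^n X_i \text{ complete and sufficient}, \qquad \operatorname{E}\!\left[\tfrac{T}{n}\right] = \theta \implies \bar{X} = \tfrac{T}{n} \text{ is the UMVUE of } \theta.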
By the Rao–Blackwell theorem, if Z is an unbiased estimator of \theta, then \varphi(Y) := \operatorname{E}[Z \mid Y] defines an unbiased estimator of \theta whose variance is not greater than that of Z.
Now we show that this function is unique. Suppose W is another candidate MVUE of \theta. Then \psi(Y) := \operatorname{E}[W \mid Y] again defines an unbiased estimator of \theta whose variance is not greater than that of W. Since both \varphi(Y) and \psi(Y) are unbiased,

\operatorname{E}[\varphi(Y) - \psi(Y)] = 0, \quad \theta \in \Omega.
Since \{ f_Y(y:\theta) : \theta \in \Omega \} is a complete family,

\operatorname{E}[\varphi(Y) - \psi(Y)] = 0 \implies \varphi(y) - \psi(y) = 0, \quad \theta \in \Omega,

and therefore the function \varphi is the unique function of Y that is unbiased and of minimum variance. We conclude that \varphi(Y) is the MVUE.
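The Rao–Blackwell step used above can be illustrated numerically. The following Python sketch (the Poisson model, sample size, and number of replications are illustrative choices, not from the original sources) Rao–Blackwellizes the crude unbiased estimator X_1 of a Poisson mean onto the complete sufficient statistic \sum_i X_i, for which \operatorname{E}[X_1 \mid \sum_i X_i] = \bar{X}, and compares the two variances.

import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 3.0, 10, 100_000    # true mean, sample size, Monte Carlo replications (illustrative)

samples = rng.poisson(theta, size=(reps, n))
crude = samples[:, 0]                 # Z = X_1: unbiased for theta, but crude
rao_blackwell = samples.mean(axis=1)  # E[X_1 | sum of X_i] = sample mean, a function of the complete sufficient statistic

print("crude:         mean ~", crude.mean(), " variance ~", crude.var())
print("Rao-Blackwell: mean ~", rao_blackwell.mean(), " variance ~", rao_blackwell.var())
# Both are unbiased for theta; the Rao-Blackwellized estimator has much smaller variance,
# and by the Lehmann-Scheffe theorem it is the UMVUE of theta.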
An example of an improvable Rao–Blackwell improvement, when using a minimal sufficient statistic that is not complete, was provided by Galili and Meilijson in 2016.[1] Let X_1, \ldots, X_n be a random sample from a scale-uniform distribution X \sim U((1-k)\theta, (1+k)\theta) with unknown mean \operatorname{E}[X] = \theta and known design parameter k \in (0,1). In the search for "best" possible unbiased estimators of \theta, it is natural to consider X_1 as an initial (crude) unbiased estimator for \theta and then try to improve it. Since X_1 is not a function of T = \left(X_{(1)}, X_{(n)}\right), the minimal sufficient statistic for \theta (where X_{(1)} = \min_i X_i and X_{(n)} = \max_i X_i), it may be improved using the Rao–Blackwell theorem as follows:
\hat{\theta}_{RB} = \operatorname{E}_\theta[X_1 \mid X_{(1)}, X_{(n)}] = \frac{X_{(1)} + X_{(n)}}{2}.
However, the following unbiased estimator can be shown to have lower variance:
\hat{\theta}_{LV} = \frac{1}{k^2 \frac{n-1}{n+1} + 1} \cdot \frac{(1-k)X_{(1)} + (1+k)X_{(n)}}{2}.
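A short check of unbiasedness, using the standard expectations of uniform order statistics (this derivation is added here for clarity and is not part of the cited source), shows where the normalizing constant comes from:

\operatorname{E}[X_{(1)}] = \theta\left(1 - k\tfrac{n-1}{n+1}\right), \qquad \operatorname{E}[X_{(n)}] = \theta\left(1 + k\tfrac{n-1}{n+1}\right) \implies \operatorname{E}\left[\frac{(1-k)X_{(1)} + (1+k)X_{(n)}}{2}\right] = \theta\left(1 + k^2\tfrac{n-1}{n+1}\right),

so dividing by k^2\tfrac{n-1}{n+1} + 1 yields an unbiased estimator of \theta.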
In fact, it can be improved even further by using the following estimator:
\hat{\theta}_{\text{BAYES}} = \frac{n+1}{n} \left[1 - \frac{\frac{X_{(1)}(1+k)}{X_{(n)}(1-k)} - 1}{\left(\frac{X_{(1)}(1+k)}{X_{(n)}(1-k)}\right)^{n+1} - 1}\right] \frac{X_{(n)}}{1+k}.
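A small simulation makes the variance comparison among the three estimators above concrete. The following Python sketch (the choices \theta = 1, k = 0.5, n = 10, and the number of replications are arbitrary illustrative values) computes \hat{\theta}_{RB}, \hat{\theta}_{LV}, and \hat{\theta}_{\text{BAYES}} on repeated scale-uniform samples and reports their Monte Carlo means and variances.

import numpy as np

rng = np.random.default_rng(1)
theta, k, n, reps = 1.0, 0.5, 10, 200_000   # illustrative parameter choices

x = rng.uniform((1 - k) * theta, (1 + k) * theta, size=(reps, n))
x1, xn = x.min(axis=1), x.max(axis=1)       # X_(1) and X_(n): the minimal sufficient statistic

theta_rb = (x1 + xn) / 2                                                        # Rao-Blackwellized estimator
theta_lv = ((1 - k) * x1 + (1 + k) * xn) / 2 / (k**2 * (n - 1) / (n + 1) + 1)   # lower-variance unbiased estimator
r = (x1 * (1 + k)) / (xn * (1 - k))
theta_bayes = (n + 1) / n * (1 - (r - 1) / (r**(n + 1) - 1)) * xn / (1 + k)     # generalized Bayes estimator

for name, est in [("RB", theta_rb), ("LV", theta_lv), ("BAYES", theta_bayes)]:
    print(name, "mean ~", round(est.mean(), 4), "variance ~", round(est.var(), 6))
# All three estimators should be unbiased for theta, with variance decreasing from RB to LV to BAYES,
# matching the claims in the text above.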
The model is a scale model, so optimal equivariant estimators can be derived for loss functions that are invariant under scaling.[2]