Uniformly most powerful test

In statistical hypothesis testing, a uniformly most powerful (UMP) test is a hypothesis test which has the greatest power $1-\beta$ among all possible tests of a given size $\alpha$. For example, according to the Neyman–Pearson lemma, the likelihood-ratio test is UMP for testing simple (point) hypotheses.

Setting

Let $X$ denote a random vector (corresponding to the measurements), drawn from a parametrized family of probability density functions or probability mass functions $f_\theta(x)$, which depends on the unknown deterministic parameter $\theta \in \Theta$. The parameter space $\Theta$ is partitioned into two disjoint sets $\Theta_0$ and $\Theta_1$. Let $H_0$ denote the hypothesis that $\theta \in \Theta_0$, and let $H_1$ denote the hypothesis that $\theta \in \Theta_1$.

The binary test of hypotheses is performed using a test function $\varphi(x)$ with a rejection region $R$ (a subset of the measurement space):

$$\varphi(x) = \begin{cases} 1 & \text{if } x \in R \\ 0 & \text{if } x \in R^c \end{cases}$$

meaning that $H_1$ is in force if the measurement $X \in R$ and that $H_0$ is in force if the measurement $X \in R^c$. Note that $R \cup R^c$ is a disjoint covering of the measurement space.

Formal definition

A test function $\varphi(x)$ is UMP of size $\alpha$ if for any other test function $\varphi'(x)$ satisfying

$$\sup_{\theta \in \Theta_0} \operatorname{E}[\varphi'(X) \mid \theta] = \alpha' \leq \alpha = \sup_{\theta \in \Theta_0} \operatorname{E}[\varphi(X) \mid \theta]$$

we have

$$\forall \theta \in \Theta_1, \quad \operatorname{E}[\varphi'(X) \mid \theta] = 1 - \beta'(\theta) \leq 1 - \beta(\theta) = \operatorname{E}[\varphi(X) \mid \theta].$$
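As a concrete illustration of this definition, consider a hypothetical single-observation setting (not taken from the article): $X \sim N(\theta, 1)$ with $H_0: \theta \leq 0$ vs. $H_1: \theta > 0$. Both the one-sided threshold test and a two-sided test can be calibrated to size $\alpha$, but the one-sided test has greater power at every $\theta > 0$, which is exactly the UMP property:

```python
# Power comparison illustrating the UMP definition for X ~ N(theta, 1),
# a hypothetical single-observation example (not from the article itself).
# H0: theta <= 0 vs. H1: theta > 0, size alpha = 0.05.
from statistics import NormalDist

std = NormalDist()          # standard normal, used to calibrate thresholds
alpha = 0.05

# One-sided (UMP) test: reject when X > c1.
c1 = std.inv_cdf(1 - alpha)

# Competing test of the same size: reject when |X| > c2 (two-sided).
c2 = std.inv_cdf(1 - alpha / 2)

def power_one_sided(theta):
    # P_theta(X > c1) where X ~ N(theta, 1)
    return 1 - NormalDist(mu=theta).cdf(c1)

def power_two_sided(theta):
    # P_theta(|X| > c2) where X ~ N(theta, 1)
    d = NormalDist(mu=theta)
    return (1 - d.cdf(c2)) + d.cdf(-c2)

# Both tests have size alpha at theta = 0 ...
assert abs(power_one_sided(0) - alpha) < 1e-9
assert abs(power_two_sided(0) - alpha) < 1e-9
# ... but the one-sided test dominates at every theta in Theta_1.
for theta in (0.5, 1.0, 2.0):
    assert power_one_sided(theta) > power_two_sided(theta)
```

The assertions hold because the two-sided test "spends" part of its size $\alpha$ on the lower tail, where the one-sided alternative places no probability gain.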

The Karlin–Rubin theorem

The Karlin–Rubin theorem can be regarded as an extension of the Neyman–Pearson lemma for composite hypotheses.[1] Consider a scalar measurement having a probability density function parameterized by a scalar parameter $\theta$, and define the likelihood ratio $l(x) = f_{\theta_1}(x) / f_{\theta_0}(x)$. If $l(x)$ is monotone non-decreasing in $x$ for any pair $\theta_1 \geq \theta_0$ (meaning that the greater $x$ is, the more likely $H_1$ is), then the threshold test

$$\varphi(x) = \begin{cases} 1 & \text{if } x > x_0 \\ 0 & \text{if } x < x_0 \end{cases}$$

where $x_0$ is chosen such that $\operatorname{E}_{\theta_0}[\varphi(X)] = \alpha$, is the UMP test of size $\alpha$ for testing

$$H_0: \theta \leq \theta_0 \quad \text{vs.} \quad H_1: \theta > \theta_0.$$

Note that exactly the same test is also UMP for testing

$$H_0: \theta = \theta_0 \quad \text{vs.} \quad H_1: \theta > \theta_0.$$
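To make the threshold choice concrete, here is a minimal sketch using the exponential scale family $f_\theta(x) = \theta^{-1} e^{-x/\theta}$, an illustrative choice not taken from the article; this family has a monotone likelihood ratio, and $x_0$ is chosen so that $P_{\theta_0}(X > x_0) = \alpha$:

```python
# Karlin-Rubin threshold test sketched for the exponential scale family
# f_theta(x) = (1/theta) * exp(-x/theta), x > 0 (illustrative values; the
# theorem applies to any family with monotone likelihood ratio).
import math

theta0, theta1 = 1.0, 2.0
alpha = 0.05

# The likelihood ratio l(x) = f_theta1(x) / f_theta0(x) is increasing in x
# whenever theta1 > theta0:
def likelihood_ratio(x):
    return (theta0 / theta1) * math.exp(x * (1 / theta0 - 1 / theta1))

xs = [0.1 * k for k in range(50)]
assert all(likelihood_ratio(a) < likelihood_ratio(b)
           for a, b in zip(xs, xs[1:]))

# Choose x0 so that E_theta0[phi(X)] = P_theta0(X > x0) = alpha.
# Here P_theta0(X > x) = exp(-x / theta0), hence x0 = -theta0 * ln(alpha).
x0 = -theta0 * math.log(alpha)
assert abs(math.exp(-x0 / theta0) - alpha) < 1e-12

def phi(x):
    # The threshold test: reject H0 (return 1) when x > x0.
    return 1 if x > x0 else 0
```

The same recipe works for any monotone-likelihood-ratio family; only the tail probability $P_{\theta_0}(X > x_0)$ used to solve for $x_0$ changes.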

Important case: exponential family

Although the Karlin–Rubin theorem may seem weak because of its restriction to a scalar parameter and a scalar measurement, it turns out that there exists a host of problems for which the theorem holds. In particular, the one-dimensional exponential family of probability density functions or probability mass functions with

$$f_\theta(x) = g(\theta) h(x) \exp(\eta(\theta) T(x))$$

has a monotone non-decreasing likelihood ratio in the sufficient statistic $T(x)$, provided that $\eta(\theta)$ is non-decreasing.
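As a quick illustration (not from the article), the Poisson family fits this one-parameter exponential form with $g(\lambda) = e^{-\lambda}$, $h(x) = 1/x!$, $\eta(\lambda) = \log\lambda$, and $T(x) = x$; since $\eta$ is increasing, the likelihood ratio is monotone in $x$:

```python
# Checking that the Poisson pmf factors into the exponential-family form
#   f_lambda(x) = g(lambda) * h(x) * exp(eta(lambda) * T(x))
# and that eta non-decreasing yields a monotone likelihood ratio.
import math

def poisson_pmf(x, lam):
    return math.exp(-lam) * lam**x / math.factorial(x)

def pmf_via_exponential_form(x, lam):
    g = math.exp(-lam)            # g(lambda) = exp(-lambda)
    h = 1.0 / math.factorial(x)   # h(x) = 1/x!
    eta = math.log(lam)           # eta(lambda) = log(lambda), increasing
    T = x                         # sufficient statistic T(x) = x
    return g * h * math.exp(eta * T)

# The two factorizations agree.
for x in range(10):
    assert abs(poisson_pmf(x, 3.0) - pmf_via_exponential_form(x, 3.0)) < 1e-12

# Because eta is increasing, l(x) = f_lam1(x)/f_lam0(x) is increasing in
# T(x) = x for lam1 > lam0, so the Karlin-Rubin threshold test applies.
lam0, lam1 = 2.0, 5.0
ratios = [poisson_pmf(x, lam1) / poisson_pmf(x, lam0) for x in range(15)]
assert all(a < b for a, b in zip(ratios, ratios[1:]))
```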

Example

Let $X = (X_0, \ldots, X_{M-1})$ denote i.i.d. normally distributed $N$-dimensional random vectors with mean $\theta m$ and covariance matrix $R$. We then have

$$\begin{align} f_\theta(X) ={}& (2\pi)^{-MN/2} |R|^{-M/2} \exp\left\{ -\frac{1}{2} \sum_{n=0}^{M-1} (X_n - \theta m)^T R^{-1} (X_n - \theta m) \right\} \\[4pt] ={}& (2\pi)^{-MN/2} |R|^{-M/2} \exp\left\{ -\frac{1}{2} \sum_{n=0}^{M-1} \left( \theta^2 m^T R^{-1} m \right) \right\} \\[4pt] & \exp\left\{ -\frac{1}{2} \sum_{n=0}^{M-1} X_n^T R^{-1} X_n \right\} \exp\left\{ \theta m^T R^{-1} \sum_{n=0}^{M-1} X_n \right\} \end{align}$$

which is exactly in the form of the exponential family shown in the previous section, with the sufficient statistic being

$$T(X) = m^T R^{-1} \sum_{n=0}^{M-1} X_n.$$

Thus, we conclude that the test

$$\varphi(T) = \begin{cases} 1 & T > t_0 \\ 0 & T < t_0 \end{cases} \qquad \operatorname{E}_{\theta_0}[\varphi(T)] = \alpha$$

is the UMP test of size $\alpha$ for testing $H_0: \theta \leq \theta_0$ vs. $H_1: \theta > \theta_0$.
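A numerical sketch of this test, with hypothetical values for $M$, $m$, $R$, and $\theta_0$ (the article only gives the general form): the statistic $T(X) = m^T R^{-1} \sum_n X_n$ is a linear function of Gaussian vectors, so under $\theta = \theta_0$ it is $N(M\theta_0 q,\, Mq)$ with $q = m^T R^{-1} m$, which gives the threshold $t_0$ in closed form:

```python
# Computing T(X) and the threshold t0 for the Gaussian example, using
# hypothetical values of M, m, R, theta0 (illustrative, not from the text).
# Under theta = theta0:  T ~ N(M * theta0 * q, M * q),  q = m^T R^{-1} m.
from statistics import NormalDist

M = 8                            # number of i.i.d. 2-dimensional vectors
m = [1.0, 0.5]                   # known mean direction (mean is theta * m)
R = [[2.0, 0.3], [0.3, 1.0]]     # covariance matrix
theta0, alpha = 0.0, 0.05

# Inverse of the 2x2 covariance matrix, written out by hand.
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
Rinv = [[ R[1][1] / det, -R[0][1] / det],
        [-R[1][0] / det,  R[0][0] / det]]

def quad(u, A, v):
    # u^T A v for 2-vectors
    return sum(u[i] * A[i][j] * v[j] for i in range(2) for j in range(2))

q = quad(m, Rinv, m)             # m^T R^{-1} m > 0

def T(samples):
    # Sufficient statistic T(X) = m^T R^{-1} sum_n X_n
    s = [sum(x[0] for x in samples), sum(x[1] for x in samples)]
    return quad(m, Rinv, s)

# Threshold: under theta0, (T - M*theta0*q) / sqrt(M*q) is standard normal,
# so E_theta0[phi(T)] = alpha is solved by a normal quantile.
t0 = M * theta0 * q + (M * q) ** 0.5 * NormalDist().inv_cdf(1 - alpha)

samples = [[0.9, 0.4]] * M       # toy measurements with mean close to m
decision = 1 if T(samples) > t0 else 0
```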

Further discussion

Finally, we note that in general, UMP tests do not exist for vector parameters or for two-sided tests (a test in which one hypothesis lies on both sides of the alternative). The reason is that in these situations, the most powerful test of a given size for one possible value of the parameter (e.g. for $\theta_1$ where $\theta_1 > \theta_0$) is different from the most powerful test of the same size for a different value of the parameter (e.g. for $\theta_2$ where $\theta_2 < \theta_0$). As a result, no test is uniformly most powerful in these situations.

Notes and References

  1. Casella, G.; Berger, R.L. (2008), Statistical Inference, Brooks/Cole. (Theorem 8.3.17)