In statistical hypothesis testing, the error exponent of a hypothesis testing procedure is the rate at which the probabilities of type 1 and type 2 errors decay exponentially with the size of the sample used in the test. For example, if the probability of error $P_{\mathrm{error}}$ of a test decays as $e^{-n\beta}$, where $n$ is the sample size, the error exponent is $\beta$.
Formally, the error exponent of a test is defined as the limiting value of the ratio of the negative logarithm of the error probability to the sample size for large sample sizes:
$$\lim_{n \to \infty} \frac{-\ln P_{\mathrm{error}}}{n}.$$
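As a minimal numerical sketch of this definition, the ratio $-\ln P_{\mathrm{error}}/n$ can be evaluated for growing $n$ and watched as it converges. The concrete test below (Gaussian observations, threshold $1/2$) is an illustrative assumption, not part of the general theory.

```python
# A minimal sketch, assuming a concrete test: H0: Y_i ~ N(0,1) versus
# H1: Y_i ~ N(1,1), with H0 rejected when the sample mean exceeds 1/2
# (all of these choices are illustrative assumptions).
import math
from scipy.stats import norm

for n in [10, 100, 1000, 10000]:
    # Under H0 the sample mean is N(0, 1/n); logsf returns the log of the
    # upper-tail probability, which stays accurate even when the
    # probability itself underflows to zero.
    log_p_error = norm.logsf(0.5, loc=0.0, scale=1.0 / math.sqrt(n))
    print(n, -log_p_error / n)  # approaches the exponent 1/8 for this test
```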
Consider a binary hypothesis testing problem in which observations are modeled as independent and identically distributed random variables under each hypothesis. Let
$Y_1, Y_2, \ldots, Y_n$ denote the observations. Let $f_0$ denote the probability density function of each observation $Y_i$ under the null hypothesis $H_0$, and let $f_1$ denote the probability density function of each observation $Y_i$ under the alternative hypothesis $H_1$.
In this case there are two possible error events. A type 1 error, also called a false positive, occurs when the null hypothesis is true and is wrongly rejected. A type 2 error, also called a false negative, occurs when the alternative hypothesis is true and the null hypothesis is not rejected. The probability of type 1 error is denoted $P(\mathrm{error} \mid H_0)$ and the probability of type 2 error is denoted $P(\mathrm{error} \mid H_1)$.
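As a small sketch of these two error events, both probabilities can be computed exactly for a test that rejects $H_0$ when the sample mean exceeds a threshold. The Gaussian model and the threshold are again assumed for illustration.

```python
# A small sketch (illustrative setup, not from the article's general theory)
# computing both error probabilities for a threshold test on the sample
# mean, with H0: Y_i ~ N(0,1) and H1: Y_i ~ N(1,1).
import math
from scipy.stats import norm

n, threshold = 25, 0.5
scale = 1.0 / math.sqrt(n)  # standard deviation of the sample mean
# Type 1 (false positive): H0 is true but the mean exceeds the threshold.
p_type1 = norm.sf(threshold, loc=0.0, scale=scale)
# Type 2 (false negative): H1 is true but the mean stays below the threshold.
p_type2 = norm.cdf(threshold, loc=1.0, scale=scale)
print(p_type1, p_type2)
```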
In the Neyman–Pearson version of binary hypothesis testing, one is interested in minimizing the probability of type 2 error $P(\mathrm{error} \mid H_1)$ subject to the constraint that the probability of type 1 error $P(\mathrm{error} \mid H_0)$ is less than or equal to a pre-specified level $\alpha$. In this setting, the optimal type 2 error probability decays exponentially in the sample size $n$ at the rate
$$\lim_{n \to \infty} \frac{-\ln P(\mathrm{error} \mid H_1)}{n} = D(f_0 \parallel f_1),$$
where $D(f_0 \parallel f_1)$ is the Kullback–Leibler divergence between the distributions of the observations under the two hypotheses.
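A brief sketch, again using the assumed Gaussian pair, evaluates $D(f_0 \parallel f_1) = \int f_0(x) \ln \frac{f_0(x)}{f_1(x)} \, dx$ numerically; for $f_0 = N(0,1)$ and $f_1 = N(1,1)$ the closed form is $(\mu_0 - \mu_1)^2 / (2\sigma^2) = 1/2$.

```python
# A hedged sketch (the Gaussian pair is an assumed example): evaluating the
# Neyman–Pearson type 2 error exponent D(f0 || f1) by numerical integration.
from scipy.stats import norm
from scipy.integrate import quad

def integrand(x):
    # f0(x) * ln(f0(x)/f1(x)); log densities avoid underflow in the ratio.
    return norm.pdf(x, 0.0, 1.0) * (
        norm.logpdf(x, 0.0, 1.0) - norm.logpdf(x, 1.0, 1.0)
    )

kl, _ = quad(integrand, -10, 10)  # f0 is negligible outside this range
print(kl)  # ~0.5, matching the closed form for N(0,1) vs N(1,1)
```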
In the Bayesian version of binary hypothesis testing, one is interested in minimizing the average error probability under both hypotheses, assuming a prior probability of occurrence for each hypothesis. Let $\pi_0$ denote the prior probability of hypothesis $H_0$. The average error probability is then
$$P_{\mathrm{ave}} = \pi_0 P(\mathrm{error} \mid H_0) + (1 - \pi_0) P(\mathrm{error} \mid H_1).$$
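Continuing the illustrative Gaussian threshold test from above (the prior $\pi_0 = 1/2$ is likewise an assumption), the average error probability combines the two conditional error probabilities.

```python
# A tiny sketch: the Bayesian average error probability P_ave for the
# assumed N(0,1) vs N(1,1) threshold test with an assumed prior pi0 = 1/2.
import math
from scipy.stats import norm

n, threshold, pi0 = 25, 0.5, 0.5
scale = 1.0 / math.sqrt(n)
p_err_h0 = norm.sf(threshold, loc=0.0, scale=scale)   # type 1 error
p_err_h1 = norm.cdf(threshold, loc=1.0, scale=scale)  # type 2 error
p_ave = pi0 * p_err_h0 + (1 - pi0) * p_err_h1
print(p_ave)
```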
In this setting, the minimum average error probability decays exponentially in the sample size at the rate
$$\lim_{n \to \infty} \frac{-\ln P_{\mathrm{ave}}}{n} = C(f_0, f_1),$$
where $C(f_0, f_1)$ is the Chernoff information between the two distributions, defined as
$$C(f_0, f_1) = \max_{0 \le \lambda \le 1} \left[ -\ln \int (f_0(x))^{\lambda} (f_1(x))^{1-\lambda} \, dx \right].$$
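As a final sketch, the Chernoff information can be approximated by a grid search over $\lambda$, again using the assumed Gaussian pair; for equal-variance Gaussians the closed form is $(\mu_0 - \mu_1)^2 / (8\sigma^2) = 1/8$, attained at $\lambda = 1/2$.

```python
# A minimal sketch (illustrative Gaussian example, assumed): computing the
# Chernoff information C(f0, f1) by maximizing
#   -ln \int f0(x)^lambda f1(x)^(1-lambda) dx
# over lambda in (0, 1) for f0 = N(0,1) and f1 = N(1,1).
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def neg_log_moment(lam):
    integrand = lambda x: (
        norm.pdf(x, 0.0, 1.0) ** lam * norm.pdf(x, 1.0, 1.0) ** (1 - lam)
    )
    val, _ = quad(integrand, -10, 11)  # both densities negligible outside
    return -np.log(val)

# Coarse grid search; a 1-D optimizer would also work here.
lambdas = np.linspace(0.01, 0.99, 99)
chernoff = max(neg_log_moment(lam) for lam in lambdas)
print(chernoff)  # ~0.125, the Bayesian error exponent for this pair
```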