In statistics, Cochran's theorem, devised by William G. Cochran,[1] is a theorem used to justify results relating to the probability distributions of statistics that are used in the analysis of variance.[2]
If X1, ..., Xn are independent normally distributed random variables with mean μ and standard deviation σ then
U_i = \frac{X_i - \mu}{\sigma}
is standard normal for each i. Note that the total Q is equal to the sum of the squared Us, as shown here:
\sum_i Q_i = \sum_{jik} U_j B_{jk}^{(i)} U_k = \sum_{jk} U_j U_k \sum_i B_{jk}^{(i)} = \sum_{jk} U_j U_k \delta_{jk} = \sum_j U_j^2
which stems from the original assumption that B_1 + B_2 + \ldots = I. It is possible to write

\sum_{i=1}^n U_i^2 = \sum_{i=1}^n\left(\frac{X_i - \overline{X}}{\sigma}\right)^2 + n\left(\frac{\overline{X} - \mu}{\sigma}\right)^2
(here \overline{X} is the sample mean). To see this identity, multiply throughout by \sigma^2 and note that

\sum(X_i - \mu)^2 = \sum(X_i - \overline{X} + \overline{X} - \mu)^2
and expand to give
\sum(X_i - \overline{X})^2 + \sum(\overline{X} - \mu)^2 + 2\sum(X_i - \overline{X})(\overline{X} - \mu).
The third term is zero because it is equal to a constant times
\sum(\overline{X} - X_i) = 0,
and the second term has just n identical terms added together. Thus
\sum(X_i - \mu)^2 = \sum(X_i - \overline{X})^2 + n(\overline{X} - \mu)^2,
and hence
\sum\left(\frac{X_i - \mu}{\sigma}\right)^2 = \sum\left(\frac{X_i - \overline{X}}{\sigma}\right)^2 + n\left(\frac{\overline{X} - \mu}{\sigma}\right)^2
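This decomposition can be checked numerically. The following is a minimal sketch using NumPy (not part of the original text); the sample size, mean, standard deviation, and seed are arbitrary choices.

```python
import numpy as np

# Minimal sketch: verify the sum-of-squares decomposition above.
# All parameters below are arbitrary illustrative choices.
rng = np.random.default_rng(0)
n, mu, sigma = 10, 2.0, 3.0
x = rng.normal(mu, sigma, size=n)
xbar = x.mean()

lhs = np.sum((x - mu) ** 2)
rhs = np.sum((x - xbar) ** 2) + n * (xbar - mu) ** 2
print(np.isclose(lhs, rhs))  # True: the cross term vanishes
```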
Now B^{(2)} = \frac{J_n}{n}, where J_n is the n \times n matrix of ones; this matrix has rank 1. In turn, B^{(1)} = I_n - \frac{J_n}{n}, given that I_n = B^{(1)} + B^{(2)}; this matrix has rank n - 1 and is the matrix of the quadratic form Q_1.
Cochran's theorem then states that Q1 and Q2 are independent, with chi-squared distributions with n - 1 and 1 degree of freedom respectively. This shows that the sample mean and sample variance are independent. This can also be shown by Basu's theorem, and in fact this property characterizes the normal distribution – for no other distribution are the sample mean and sample variance independent.[3]
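The claimed degrees of freedom and the independence of Q1 and Q2 can be illustrated by simulation. The NumPy sketch below is only an illustration (all parameters are arbitrary), and near-zero correlation is of course a weaker statement than independence.

```python
import numpy as np

# Sketch: simulate Q1 and Q2 and compare their sample moments with the
# chi-squared distributions predicted by Cochran's theorem.
rng = np.random.default_rng(1)
n, mu, sigma, reps = 8, 5.0, 2.0, 200_000
x = rng.normal(mu, sigma, size=(reps, n))
xbar = x.mean(axis=1)

q1 = np.sum((x - xbar[:, None]) ** 2, axis=1) / sigma**2  # should be chi^2_{n-1}
q2 = n * (xbar - mu) ** 2 / sigma**2                      # should be chi^2_1

print(q1.mean(), q1.var())  # approx n-1 and 2(n-1)
print(q2.mean(), q2.var())  # approx 1 and 2
# Near-zero correlation is consistent with (though weaker than) independence.
print(np.corrcoef(q1, q2)[0, 1])
```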
The result for the distributions is written symbolically as
\sum\left(X_i - \overline{X}\right)^2 \sim \sigma^2\chi^2_{n-1}.
n(\overline{X} - \mu)^2 \sim \sigma^2\chi^2_1,
Both these random variables are proportional to the true but unknown variance σ2. Thus their ratio does not depend on σ2, and, because they are statistically independent, the distribution of their ratio is given by
\frac{n\left(\overline{X} - \mu\right)^2}{\frac{1}{n-1}\sum\left(X_i - \overline{X}\right)^2} \sim \frac{\chi^2_1}{\frac{1}{n-1}\chi^2_{n-1}} \sim F_{1,n-1}
where F1,n - 1 is the F-distribution with 1 and n - 1 degrees of freedom (see also Student's t-distribution). The final step here is effectively the definition of a random variable having the F-distribution.
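A simulation sketch comparing the ratio with SciPy's F distribution follows; it is an illustration only, and the particular parameters and the Kolmogorov-Smirnov check are arbitrary choices, not part of the original text.

```python
import numpy as np
from scipy import stats

# Sketch: the ratio above should follow an F(1, n-1) distribution.
# Parameters and seed are arbitrary; sigma cancels out of the ratio.
rng = np.random.default_rng(2)
n, mu, sigma, reps = 6, 0.0, 1.5, 100_000
x = rng.normal(mu, sigma, size=(reps, n))
xbar = x.mean(axis=1)

s2 = np.sum((x - xbar[:, None]) ** 2, axis=1) / (n - 1)  # sample variance
ratio = n * (xbar - mu) ** 2 / s2

# Kolmogorov-Smirnov comparison against the F(1, n-1) CDF; a large
# p-value indicates consistency with the claimed distribution.
print(stats.kstest(ratio, stats.f(1, n - 1).cdf))
```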
To estimate the variance σ2, one estimator that is sometimes used is the maximum likelihood estimator of the variance of a normal distribution
\widehat{\sigma}^2 = \frac{1}{n}\sum\left(X_i - \overline{X}\right)^2.
Cochran's theorem shows that
\frac{n\widehat{\sigma}^2}{\sigma^2} \sim \chi^2_{n-1}
and the properties of the chi-squared distribution show that
\begin{align}
E\left(\frac{n\widehat{\sigma}^2}{\sigma^2}\right) &= E\left(\chi^2_{n-1}\right)\\
\frac{n}{\sigma^2}E\left(\widehat{\sigma}^2\right) &= (n-1)\\
E\left(\widehat{\sigma}^2\right) &= \frac{\sigma^2(n-1)}{n}
\end{align}
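This expectation, and hence the bias of the maximum likelihood estimator, is easy to confirm by simulation; below is a minimal sketch with arbitrary parameters (not from the original text).

```python
import numpy as np

# Sketch: the maximum likelihood estimator (dividing by n) is biased
# downward by the factor (n-1)/n derived above. Parameters are arbitrary.
rng = np.random.default_rng(3)
n, mu, sigma, reps = 5, 1.0, 2.0, 500_000
x = rng.normal(mu, sigma, size=(reps, n))

sigma2_mle = np.mean((x - x.mean(axis=1, keepdims=True)) ** 2, axis=1)
print(sigma2_mle.mean())       # approx sigma^2 * (n-1)/n
print(sigma**2 * (n - 1) / n)  # 3.2 with these choices
```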
The following version is often seen when considering linear regression.[4] Suppose that
Y \sim N_n(0, \sigma^2 I_n) is a standard multivariate normal random vector (here I_n denotes the n \times n identity matrix), and that A_1, \ldots, A_k are n \times n symmetric matrices with

\sum_{i=1}^k A_i = I_n.

Then, defining r_i = \operatorname{Rank}(A_i), any one of the following conditions implies the other two:

\sum_{i=1}^k r_i = n,

Y^T A_i Y \sim \sigma^2\chi^2_{r_i} (thus the A_i are positive semidefinite),

Y^T A_i Y is independent of Y^T A_j Y for all i ≠ j.
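For the sample-mean and sample-variance example above, the two matrices are A_1 = J_n/n and A_2 = I_n - J_n/n. The following sketch checks the rank condition for them (an illustration with an arbitrary n; the matrix names follow the example, not the source).

```python
import numpy as np

# Sketch: the projection matrices from the sample-mean example are
# symmetric, sum to the identity, and their ranks sum to n.
n = 7
J = np.ones((n, n))
A1 = J / n               # projects onto the all-ones direction (rank 1)
A2 = np.eye(n) - J / n   # projects onto its orthogonal complement (rank n-1)

print(np.allclose(A1 + A2, np.eye(n)))                           # True
print(np.linalg.matrix_rank(A1) + np.linalg.matrix_rank(A2), n)  # 7 7
```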
Let U1, ..., UN be i.i.d. standard normally distributed random variables, and
U = [U_1, \ldots, U_N]^T.
Let B^{(1)}, B^{(2)}, \ldots, B^{(k)} be symmetric matrices, and define r_i to be the rank of B^{(i)}. Define the quadratic forms

Q_i = U^T B^{(i)} U,

and further assume \sum_i Q_i = U^T U.
Cochran's theorem states that the following are equivalent:
1. r_1 + \cdots + r_k = N,
2. the Q_i are independent,
3. each Q_i has a chi-squared distribution with r_i degrees of freedom.
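As a concrete instance of this setup, one can again take the two matrices from the sample-mean example; the sketch below checks the assumption \sum_i Q_i = U^T U for them (illustrative only, with an arbitrary N and seed).

```python
import numpy as np

# Sketch: with B1 = I - J/N and B2 = J/N (the matrices from the
# sample-mean example), the quadratic forms satisfy Q1 + Q2 = U^T U.
rng = np.random.default_rng(4)
N = 6
U = rng.standard_normal(N)
B2 = np.ones((N, N)) / N
B1 = np.eye(N) - B2

Q1 = U @ B1 @ U
Q2 = U @ B2 @ U
print(np.isclose(Q1 + Q2, U @ U))  # True, since B1 + B2 = I
```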
Often it is stated as \sum_i A_i = A, where A is idempotent, and \sum_i r_i = N is replaced by \sum_i r_i = \operatorname{rank}(A). But after an orthogonal transform, A = \operatorname{diag}(I_M, 0), and so we reduce to the above theorem.
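The reduction rests on the fact that a symmetric idempotent matrix is orthogonally similar to \operatorname{diag}(I_M, 0); a small numerical sketch of this follows (the particular matrix is an arbitrary choice, not from the source).

```python
import numpy as np

# Sketch: a symmetric idempotent matrix has eigenvalues 0 and 1, so an
# orthogonal change of basis brings it to a diagonal of ones and zeros,
# i.e. diag(I_M, 0) up to ordering. The centering matrix is used as an
# arbitrary example.
n = 5
A = np.eye(n) - np.ones((n, n)) / n  # symmetric, idempotent: A @ A = A
eigvals, P = np.linalg.eigh(A)       # P has orthonormal eigenvector columns

D = P.T @ A @ P                      # diagonal with entries 0 and 1
print(np.round(D, 10))
print(np.linalg.matrix_rank(A))      # M = n - 1
```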
Claim: Let X be a standard Gaussian in \R^n. Then for any symmetric matrices Q, Q', if X^T Q X and X^T Q' X have the same distribution, then Q and Q' have the same eigenvalues (up to multiplicity).
Claim: I = \sum_i B_i.
Lemma: If \sum_i M_i = I, with all M_i symmetric and with eigenvalues in \{0, 1\}, then the M_i are simultaneously diagonalizable.
Now we prove the original theorem. We prove that the three cases are equivalent by proving that each case implies the next one in a cycle (1 \to 2 \to 3 \to 1).