In probability theory and statistics, the chi-squared distribution (also chi-square or $\chi^2$-distribution) with $k$ degrees of freedom is the distribution of a sum of the squares of $k$ independent standard normal random variables.

The chi-squared distribution $\chi^2_k$ is a special case of the gamma distribution and of the univariate Wishart distribution. Specifically, if $X \sim \chi^2_k$ then $X \sim \operatorname{Gamma}(\alpha = \tfrac{k}{2}, \theta = 2)$ (where $\alpha$ is the shape parameter and $\theta$ the scale parameter of the gamma distribution) and $X \sim \mathrm{W}_1(1, k)$.

The scaled chi-squared distribution $s^2\chi^2_k$ is a reparametrization of the gamma distribution and of the univariate Wishart distribution. Specifically, if $X \sim s^2\chi^2_k$ then $X \sim \operatorname{Gamma}(\alpha = \tfrac{k}{2}, \theta = 2s^2)$ and $X \sim \mathrm{W}_1(s^2, k)$.
The chi-squared distribution is one of the most widely used probability distributions in inferential statistics, notably in hypothesis testing and in construction of confidence intervals.[1] [2] [3] This distribution is sometimes called the central chi-squared distribution, a special case of the more general noncentral chi-squared distribution.
The chi-squared distribution is used in the common chi-squared tests for goodness of fit of an observed distribution to a theoretical one, the independence of two criteria of classification of qualitative data, and in finding the confidence interval for estimating the population standard deviation of a normal distribution from a sample standard deviation. Many other statistical tests also use this distribution, such as Friedman's analysis of variance by ranks.
If $Z_1, \ldots, Z_k$ are independent, standard normal random variables, then the sum of their squares,
$$Q = \sum_{i=1}^k Z_i^2,$$
is distributed according to the chi-squared distribution with $k$ degrees of freedom. This is usually denoted as
$$Q \sim \chi^2(k) \quad \text{or} \quad Q \sim \chi^2_k.$$
The chi-squared distribution has one parameter: a positive integer $k$ that specifies the number of degrees of freedom (the number of random variables $Z_i$ being summed).
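As an illustrative sketch (not part of the original text), the defining property can be checked numerically with NumPy and SciPy; the degrees of freedom, sample size, and seed below are arbitrary choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k, n_samples = 5, 100_000

# Sum of squares of k independent standard normal variables.
Z = rng.standard_normal((n_samples, k))
Q = (Z ** 2).sum(axis=1)

# Compare the empirical distribution of Q with chi2(k) via a KS test.
ks = stats.kstest(Q, "chi2", args=(k,))
print(f"sample mean ≈ {Q.mean():.2f} (theory: {k})")
print(f"sample var  ≈ {Q.var():.2f} (theory: {2 * k})")
print(f"KS statistic = {ks.statistic:.4f}, p-value = {ks.pvalue:.3f}")
```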
The chi-squared distribution is used primarily in hypothesis testing, and to a lesser extent for confidence intervals for the population variance when the underlying distribution is normal. Unlike more widely known distributions such as the normal distribution and the exponential distribution, the chi-squared distribution is not as often applied in the direct modeling of natural phenomena. It arises, among others, in the chi-squared tests for goodness of fit and independence and in the likelihood-ratio test.
It is also a component of the definition of the t-distribution and the F-distribution used in t-tests, analysis of variance, and regression analysis.
The primary reason the chi-squared distribution is extensively used in hypothesis testing is its relationship to the normal distribution. Many hypothesis tests use a test statistic, such as the t-statistic in a t-test. For these hypothesis tests, as the sample size $n$ increases, the sampling distribution of the test statistic approaches the normal distribution (central limit theorem). Because the test statistic (such as $t$) is asymptotically normally distributed, provided the sample size is sufficiently large, the distribution used for hypothesis testing may be approximated by a normal distribution. Testing hypotheses using a normal distribution is well understood and relatively easy. The simplest chi-squared distribution is the square of a standard normal distribution. So wherever a normal distribution could be used for a hypothesis test, a chi-squared distribution could be used.
Suppose that $Z$ is a random variable sampled from the standard normal distribution, with mean $0$ and variance $1$: $Z \sim N(0,1)$. Now consider the random variable $Q = Z^2$. The distribution of $Q$ is an example of a chi-squared distribution: $Q \sim \chi^2_1$. The subscript 1 indicates that this particular chi-squared distribution is constructed from only one standard normal distribution.
An additional reason that the chi-squared distribution is widely used is that it turns up as the large sample distribution of generalized likelihood ratio tests (LRT).[4] LRTs have several desirable properties; in particular, simple LRTs commonly provide the highest power to reject the null hypothesis (Neyman–Pearson lemma) and this leads also to optimality properties of generalised LRTs. However, the normal and chi-squared approximations are only valid asymptotically. For this reason, it is preferable to use the t distribution rather than the normal approximation or the chi-squared approximation for a small sample size. Similarly, in analyses of contingency tables, the chi-squared approximation will be poor for a small sample size, and it is preferable to use Fisher's exact test. Ramsey shows that the exact binomial test is always more powerful than the normal approximation.[5]
Lancaster shows the connections among the binomial, normal, and chi-squared distributions, as follows. De Moivre and Laplace established that a binomial distribution could be approximated by a normal distribution. Specifically they showed the asymptotic normality of the random variable
$$\chi = \frac{m - Np}{\sqrt{Npq}},$$
where $m$ is the observed number of successes in $N$ independent trials, $p$ is the probability of success, and $q = 1 - p$.
Squaring both sides of the equation gives
$$\chi^2 = \frac{(m - Np)^2}{Npq}.$$
Using $N = Np + N(1-p)$, $N = m + (N - m)$, and $q = 1 - p$, the equation can be rewritten as
$$\chi^2 = \frac{(m - Np)^2}{Np} + \frac{(N - m - Nq)^2}{Nq}.$$
The expression on the right is of the form that Karl Pearson would generalize to
$$\chi^2 = \sum_{i=1}^{n} \frac{(O_i - E_i)^2}{E_i},$$
where $\chi^2$ is Pearson's cumulative test statistic, which asymptotically approaches a $\chi^2$ distribution; $O_i$ is the number of observations of type $i$; $E_i = N p_i$ is the expected (theoretical) frequency of type $i$, asserted by the null hypothesis that the fraction of type $i$ in the population is $p_i$; and $n$ is the number of cells.
In the case of a binomial outcome (flipping a coin), the binomial distribution may be approximated by a normal distribution (for sufficiently large $n$). Because the square of a standard normal distribution is the chi-squared distribution with one degree of freedom, a probability such as that of a given number of heads in $n$ trials can be approximated either by using the normal distribution directly, or by using the chi-squared distribution for the normalized, squared difference between observed and expected counts.
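To make the algebra above concrete, the following sketch (illustrative only, with made-up counts) computes the normal-approximation statistic $\chi = (m - Np)/\sqrt{Npq}$ for a coin-flip experiment and checks that its square equals Pearson's two-cell statistic:

```python
import numpy as np
from scipy import stats

N, p = 100, 0.5          # number of flips and hypothesized probability of heads
m = 58                   # hypothetical observed number of heads
q = 1 - p

# Normal-approximation statistic and its square.
chi = (m - N * p) / np.sqrt(N * p * q)
chi_squared = chi ** 2

# Pearson's statistic over the two cells (heads, tails).
observed = np.array([m, N - m])
expected = np.array([N * p, N * q])
pearson = ((observed - expected) ** 2 / expected).sum()

print(chi_squared, pearson)              # identical, by the algebra above
print(stats.chi2.sf(pearson, df=1))      # p-value with 1 degree of freedom
```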
The probability density function (pdf) of the chi-squared distribution is
$$f(x;k) = \begin{cases} \dfrac{x^{k/2-1} e^{-x/2}}{2^{k/2}\,\Gamma\!\left(\frac{k}{2}\right)}, & x > 0, \\[4pt] 0, & \text{otherwise}, \end{cases}$$
where $\Gamma(k/2)$ denotes the gamma function, which has closed-form values for integer $k$. For derivations of the pdf in the cases of one, two and $k$ degrees of freedom, see the proofs related to the chi-squared distribution.
Its cumulative distribution function is
$$F(x;k) = \frac{\gamma\!\left(\frac{k}{2}, \frac{x}{2}\right)}{\Gamma\!\left(\frac{k}{2}\right)} = P\!\left(\frac{k}{2}, \frac{x}{2}\right),$$
where $\gamma(s,t)$ is the lower incomplete gamma function and $P(s,t)$ is the regularized gamma function.
In the special case of $k = 2$ this function has the simple form
$$F(x;2) = 1 - e^{-x/2},$$
which can be derived directly by integrating the corresponding density $f(x;2) = \tfrac{1}{2} e^{-x/2}$. An integer recurrence of the gamma function makes it easy to compute $F(x;k)$ for other small, even $k$.
Tables of the chi-squared cumulative distribution function are widely available and the function is included in many spreadsheets and all statistical packages.
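As a quick illustrative check (not from the original article), the closed-form density and CDF above can be evaluated directly with SciPy's gamma-function routines and compared against the library's built-in chi-squared distribution:

```python
import numpy as np
from scipy import stats, special

def chi2_pdf(x, k):
    """Density x^(k/2 - 1) e^(-x/2) / (2^(k/2) Gamma(k/2)) for x > 0."""
    return np.where(
        x > 0,
        x ** (k / 2 - 1) * np.exp(-x / 2) / (2 ** (k / 2) * special.gamma(k / 2)),
        0.0,
    )

def chi2_cdf(x, k):
    """Regularized lower incomplete gamma function P(k/2, x/2)."""
    return special.gammainc(k / 2, x / 2)

x = np.linspace(0.1, 20, 5)
k = 4
print(np.allclose(chi2_pdf(x, k), stats.chi2.pdf(x, df=k)))  # True
print(np.allclose(chi2_cdf(x, k), stats.chi2.cdf(x, df=k)))  # True
```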
Letting $z \equiv x/k$, Chernoff bounds on the lower and upper tails of the CDF may be obtained. For the cases when $0 < z < 1$ (which include all of the cases when this CDF is less than half),
$$F(zk; k) \leq \left(z e^{1-z}\right)^{k/2}.$$
The tail bound for the cases when $z > 1$ is, similarly,
$$1 - F(zk; k) \leq \left(z e^{1-z}\right)^{k/2}.$$
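A brief numerical sketch (illustrative only; the choices of $k$ and $z$ are arbitrary) confirms that both Chernoff bounds hold:

```python
import numpy as np
from scipy import stats

k = 10

def bound(z):
    """Chernoff bound (z e^(1-z))^(k/2)."""
    return (z * np.exp(1 - z)) ** (k / 2)

# Lower tail: F(zk; k) <= bound(z) for 0 < z < 1.
for z in (0.2, 0.5, 0.8):
    print(stats.chi2.cdf(z * k, df=k) <= bound(z))   # True

# Upper tail: 1 - F(zk; k) <= bound(z) for z > 1.
for z in (1.5, 2.0, 4.0):
    print(stats.chi2.sf(z * k, df=k) <= bound(z))    # True
```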
For another approximation for the CDF modeled after the cube of a Gaussian, see under Noncentral chi-squared distribution.
See main article: Cochran's theorem. The following is a special case of Cochran's theorem.
Theorem. If $Z_1, \ldots, Z_n$ are independent identically distributed (i.i.d.) standard normal random variables, then
$$\sum_{t=1}^n \left(Z_t - \bar{Z}\right)^2 \sim \chi^2_{n-1},$$
where
$$\bar{Z} = \frac{1}{n} \sum_{t=1}^n Z_t.$$
Proof. Let $Z \sim \mathcal{N}(\bar{0}, I)$ be a vector of $n$ independent standard normal random variables and $\bar{Z}$ their average. Then
$$\sum_{t=1}^n \left(Z_t - \bar{Z}\right)^2 = \sum_{t=1}^n Z_t^2 - n\bar{Z}^2 = Z^{\top}\!\left[I - \tfrac{1}{n}\mathbf{1}\mathbf{1}^{\top}\right]Z =: Z^{\top} M Z,$$
where $I$ is the identity matrix and $\mathbf{1}$ is the vector of ones. The matrix $M$ has one eigenvector $b_1 = \tfrac{1}{\sqrt{n}}\mathbf{1}$ with eigenvalue $0$, and $n-1$ eigenvectors $b_2, \ldots, b_n$ (all orthogonal to $b_1$) with eigenvalue $1$, which can be chosen to form an orthonormal basis. Hence $Q := (b_1, \ldots, b_n)$ is an orthogonal matrix, so that $X := Q^{\top} Z \sim \mathcal{N}(\bar{0}, Q^{\top} I Q) = \mathcal{N}(\bar{0}, I)$, and
$$\sum_{t=1}^n \left(Z_t - \bar{Z}\right)^2 = Z^{\top} M Z = X^{\top} Q^{\top} M Q\, X = \sum_{t=2}^n X_t^2 \sim \chi^2_{n-1},$$
which completes the proof.
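This special case can also be checked by simulation; the following sketch (illustrative, with arbitrary $n$ and seed) compares the empirical distribution of $\sum_t (Z_t - \bar{Z})^2$ with $\chi^2_{n-1}$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_reps = 8, 50_000

# For each replication, compute the sum of squared deviations from the mean.
Z = rng.standard_normal((n_reps, n))
S = ((Z - Z.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

# S should follow a chi-squared distribution with n - 1 degrees of freedom.
ks = stats.kstest(S, "chi2", args=(n - 1,))
print(f"mean ≈ {S.mean():.2f} (theory: {n - 1}), KS p-value = {ks.pvalue:.3f}")
```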
It follows from the definition of the chi-squared distribution that the sum of independent chi-squared variables is also chi-squared distributed. Specifically, if
$X_i$, $i = 1, \ldots, n$, are independent chi-squared variables with $k_i$, $i = 1, \ldots, n$, degrees of freedom, respectively, then $Y = X_1 + \cdots + X_n$ is chi-squared distributed with $k_1 + \cdots + k_n$ degrees of freedom.
The sample mean of $n$ i.i.d. chi-squared variables of degree $k$ is distributed according to a gamma distribution with shape $\alpha$ and scale $\theta$ parameters:
$$\overline{X} = \frac{1}{n} \sum_{i=1}^n X_i \sim \operatorname{Gamma}\!\left(\alpha = \tfrac{nk}{2},\ \theta = \tfrac{2}{n}\right), \qquad \text{where } X_i \sim \chi^2(k).$$
Asymptotically, given that for a shape parameter $\alpha$ going to infinity a gamma distribution converges towards a normal distribution with expectation $\mu = \alpha\theta$ and variance $\sigma^2 = \alpha\theta^2$, the sample mean converges towards
$$\overline{X} \xrightarrow{n \to \infty} N\!\left(\mu = k,\ \sigma^2 = \tfrac{2k}{n}\right).$$
Note that we would have obtained the same result by invoking instead the central limit theorem, noting that for each chi-squared variable of degree $k$ the expectation is $k$ and its variance $2k$ (and hence the variance of the sample mean $\overline{X}$ is $\sigma^2 = \tfrac{2k}{n}$).
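A short simulation sketch (illustrative; $k$, $n$, and the seed are arbitrary choices) makes the gamma and normal descriptions of the sample mean concrete. The gamma law is exact for finite $n$, while the normal law is only asymptotic, so its KS p-value may be noticeably smaller:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
k, n, n_reps = 4, 30, 20_000

# Sample means of n i.i.d. chi-squared(k) variables.
xbar = rng.chisquare(k, size=(n_reps, n)).mean(axis=1)

# Exact finite-sample law: Gamma(shape = n*k/2, scale = 2/n).
ks_gamma = stats.kstest(xbar, "gamma", args=(n * k / 2, 0, 2 / n))
# Asymptotic law: Normal(mean = k, sd = sqrt(2k/n)).
ks_norm = stats.kstest(xbar, "norm", args=(k, np.sqrt(2 * k / n)))

print(f"Gamma fit:  KS p-value = {ks_gamma.pvalue:.3f}")
print(f"Normal fit: KS p-value = {ks_norm.pvalue:.3f}")
```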
The differential entropy is given by
$$h = -\int_0^{\infty} f(x;k)\,\ln f(x;k)\,dx = \frac{k}{2} + \ln\!\left[2\,\Gamma\!\left(\frac{k}{2}\right)\right] + \left(1 - \frac{k}{2}\right)\psi\!\left(\frac{k}{2}\right),$$
where $\psi(x)$ is the digamma function.
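As an illustrative check (not from the original text), the closed-form entropy can be compared with SciPy's numerically computed value:

```python
import numpy as np
from scipy import stats, special

def chi2_entropy(k):
    """Differential entropy k/2 + ln(2*Gamma(k/2)) + (1 - k/2)*digamma(k/2)."""
    return k / 2 + np.log(2 * special.gamma(k / 2)) + (1 - k / 2) * special.digamma(k / 2)

for k in (1, 2, 5, 10):
    print(k, chi2_entropy(k), stats.chi2.entropy(df=k))  # the two values agree
```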
The chi-squared distribution is the maximum entropy probability distribution for a random variate $X$ for which $\operatorname{E}(X) = k$ and $\operatorname{E}(\ln X) = \psi(k/2) + \ln 2$ are fixed.
The moments about zero of a chi-squared distribution with $k$ degrees of freedom are given by
$$\operatorname{E}(X^m) = k(k+2)(k+4)\cdots(k+2m-2) = 2^m \frac{\Gamma\!\left(m + \frac{k}{2}\right)}{\Gamma\!\left(\frac{k}{2}\right)}.$$
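The product formula for the raw moments can be verified against SciPy's moment routine; a minimal sketch with an arbitrary $k$:

```python
from math import prod
from scipy import stats

k = 5
for m in range(1, 5):
    formula = prod(k + 2 * j for j in range(m))      # k(k+2)...(k+2m-2)
    print(m, formula, stats.chi2.moment(m, df=k))    # the two values agree
```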
The cumulants are readily obtained by a power series expansion of the logarithm of the characteristic function:
$$\kappa_n = 2^{n-1}(n-1)!\,k.$$
The chi-squared distribution exhibits strong concentration around its mean. The standard Laurent-Massart[9] bounds are:
$$\operatorname{P}\!\left(X - k \ge 2\sqrt{kx} + 2x\right) \le \exp(-x),$$
$$\operatorname{P}\!\left(k - X \ge 2\sqrt{kx}\right) \le \exp(-x).$$
One consequence is that, if $v \sim N(0,1)^n$ is a standard Gaussian random vector in $\mathbb{R}^n$, then as the dimension $n$ grows, the squared length $\|v\|^2$ is tightly concentrated around $n$: deviations of order $n^{1/2+\alpha}$ occur with probability tending to zero for any $\alpha \in (0, 1/2)$.
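The Laurent–Massart bounds can be evaluated numerically; the sketch below (illustrative values of $k$ and $x$) checks both inequalities against the exact tail probabilities:

```python
import numpy as np
from scipy import stats

k = 20
for x in (0.5, 1.0, 2.0, 5.0):
    upper = stats.chi2.sf(k + 2 * np.sqrt(k * x) + 2 * x, df=k)  # P(X - k >= 2*sqrt(kx) + 2x)
    lower = stats.chi2.cdf(k - 2 * np.sqrt(k * x), df=k)         # P(k - X >= 2*sqrt(kx))
    print(x, upper <= np.exp(-x), lower <= np.exp(-x))           # both True
```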
By the central limit theorem, because the chi-squared distribution is the sum of $k$ independent random variables with finite mean and variance, it converges to a normal distribution for large $k$. For many practical purposes, for $k > 50$ the distribution is sufficiently close to a normal distribution for the difference to be ignored. Specifically, if $X \sim \chi^2(k)$, then as $k$ tends to infinity, the distribution of $(X - k)/\sqrt{2k}$ tends to a standard normal distribution. However, convergence is slow, as the skewness is $\sqrt{8/k}$ and the excess kurtosis is $12/k$.
The sampling distribution of $\ln(\chi^2)$ converges to normality much faster than the sampling distribution of $\chi^2$, as the logarithmic transform removes much of the asymmetry.
Other functions of the chi-squared distribution converge more rapidly to a normal distribution. Some examples are:
If $X \sim \chi^2(k)$, then $\sqrt{2X}$ is approximately normally distributed with mean $\sqrt{2k-1}$ and unit variance.

If $X \sim \chi^2(k)$, then $\sqrt[3]{X/k}$ is approximately normally distributed with mean $1 - \tfrac{2}{9k}$ and variance $\tfrac{2}{9k}$ (the Wilson–Hilferty transformation). This transformation leads directly to the commonly used median approximation $k\left(1 - \tfrac{2}{9k}\right)^3$.
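A short sketch (illustrative; $k = 10$ and the evaluation points are arbitrary) compares the exact upper-tail probability with the plain normal approximation and the Wilson–Hilferty cube-root approximation:

```python
import numpy as np
from scipy import stats

k = 10
x = np.array([15.0, 20.0, 25.0])

exact = stats.chi2.sf(x, df=k)

# Plain normal approximation: (X - k)/sqrt(2k) ~ N(0, 1).
plain = stats.norm.sf((x - k) / np.sqrt(2 * k))

# Wilson–Hilferty: (X/k)^(1/3) approximately N(1 - 2/(9k), 2/(9k)).
wh = stats.norm.sf(((x / k) ** (1 / 3) - (1 - 2 / (9 * k))) / np.sqrt(2 / (9 * k)))

# The Wilson–Hilferty column tracks the exact values much more closely.
print(np.column_stack([exact, plain, wh]))
```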
* As $k \to \infty$, $\left(\chi^2_k - k\right)/\sqrt{2k} \xrightarrow{d} N(0,1)$ (normal distribution).
* $\chi^2_k \sim {\chi'}^2_k(0)$ (noncentral chi-squared distribution with noncentrality parameter $\lambda = 0$).
* If $Y \sim \mathrm{F}(\nu_1, \nu_2)$, then $X = \lim_{\nu_2 \to \infty} \nu_1 Y$ has the chi-squared distribution $\chi^2_{\nu_1}$.
* As a special case, if $Y \sim \mathrm{F}(1, \nu_2)$, then $X = \lim_{\nu_2 \to \infty} Y$ has the chi-squared distribution $\chi^2_1$.
* $\left\| \boldsymbol{N}_{i=1,\ldots,k}(0,1) \right\|^2 \sim \chi^2_k$ (the squared norm of $k$ standard normally distributed variables is a chi-squared distribution with $k$ degrees of freedom).
* If $X \sim \chi^2_\nu$ and $c > 0$, then $cX \sim \Gamma(k = \nu/2, \theta = 2c)$ (gamma distribution).
* If $X \sim \chi^2_k$, then $\sqrt{X} \sim \chi_k$ (chi distribution).
* If $X \sim \chi^2_2$, then $X \sim \operatorname{Exp}(1/2)$ (exponential distribution).
* If $X \sim \chi^2_{2k}$, then $X \sim \operatorname{Erlang}(k, 1/2)$ (Erlang distribution).
* If $X \sim \operatorname{Erlang}(k, \lambda)$, then $2\lambda X \sim \chi^2_{2k}$.
* If $X \sim \operatorname{Rayleigh}(1)$ (Rayleigh distribution), then $X^2 \sim \chi^2_2$.
* If $X \sim \operatorname{Maxwell}(1)$ (Maxwell distribution), then $X^2 \sim \chi^2_3$.
* If $X \sim \chi^2_\nu$, then $\tfrac{1}{X} \sim \operatorname{Inv-}\chi^2_\nu$ (inverse-chi-squared distribution).
* If $X \sim \chi^2_{\nu_1}$ and $Y \sim \chi^2_{\nu_2}$ are independent, then $\tfrac{X}{X+Y} \sim \operatorname{Beta}\!\left(\tfrac{\nu_1}{2}, \tfrac{\nu_2}{2}\right)$ (beta distribution).
* If $X \sim \operatorname{U}(0,1)$ (uniform distribution), then $-2\log(X) \sim \chi^2_2$.
* If $X_i \sim \operatorname{Laplace}(\mu,\beta)$, then $\sum_{i=1}^n \frac{2|X_i - \mu|}{\beta} \sim \chi^2_{2n}$.
* If $X_i$ follows the generalized normal distribution (version 1) with parameters $\mu, \alpha, \beta$, then $\sum_{i=1}^n \frac{2|X_i - \mu|^{\beta}}{\alpha^{\beta}} \sim \chi^2_{2n/\beta}$.
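A few of these relationships can be spot-checked numerically; the sketch below (illustrative, with arbitrary degrees of freedom and seed) tests the chi, exponential, and beta relations with Kolmogorov–Smirnov tests, whose p-values should be large under the stated relations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 50_000

# sqrt of a chi-squared(k) variable follows a chi(k) distribution.
k = 7
print(stats.kstest(np.sqrt(rng.chisquare(k, n)), "chi", args=(k,)).pvalue)

# chi-squared(2) is exponential with rate 1/2, i.e. scale 2.
print(stats.kstest(rng.chisquare(2, n), "expon", args=(0, 2)).pvalue)

# X/(X+Y) with independent chi-squared(v1), chi-squared(v2) is Beta(v1/2, v2/2).
v1, v2 = 4, 6
X, Y = rng.chisquare(v1, n), rng.chisquare(v2, n)
print(stats.kstest(X / (X + Y), "beta", args=(v1 / 2, v2 / 2)).pvalue)
```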
A chi-squared variable with $k$ degrees of freedom is defined as the sum of the squares of $k$ independent standard normal random variables.
If $Y$ is a $k$-dimensional Gaussian random vector with mean vector $\mu$ and rank-$k$ covariance matrix $C$, then $X = (Y - \mu)^{\mathsf{T}} C^{-1} (Y - \mu)$ is chi-squared distributed with $k$ degrees of freedom.
The sum of squares of statistically independent unit-variance Gaussian variables which do not have mean zero yields a generalization of the chi-squared distribution called the noncentral chi-squared distribution.
If $Y$ is a vector of $k$ i.i.d. standard normal random variables and $A$ is a $k \times k$ symmetric, idempotent matrix with rank $k - n$, then the quadratic form $Y^{\mathsf{T}} A Y$ is chi-squared distributed with $k - n$ degrees of freedom.
If $\Sigma$ is a $p \times p$ positive-semidefinite covariance matrix with strictly positive diagonal entries, then for $X \sim N(0, \Sigma)$ and $w$ a random $p$-vector independent of $X$ such that $w_1 + \cdots + w_p = 1$ and $w_i \geq 0$, $i = 1, \ldots, p$, it holds that
$$\frac{1}{\left(\frac{w_1}{X_1}, \ldots, \frac{w_p}{X_p}\right)\Sigma\left(\frac{w_1}{X_1}, \ldots, \frac{w_p}{X_p}\right)^{\mathsf{T}}} \sim \chi^2_1.$$
The chi-squared distribution is also naturally related to other distributions arising from the Gaussian. In particular,
$Y$ is F-distributed, $Y \sim F(k_1, k_2)$, if
$$Y = \frac{X_1/k_1}{X_2/k_2},$$
where $X_1 \sim \chi^2_{k_1}$ and $X_2 \sim \chi^2_{k_2}$ are statistically independent.

If $X_1 \sim \chi^2_{k_1}$ and $X_2 \sim \chi^2_{k_2}$ are statistically independent, then $X_1 + X_2 \sim \chi^2_{k_1 + k_2}$. If $X_1$ and $X_2$ are not independent, then $X_1 + X_2$ is not, in general, chi-squared distributed.
The chi-squared distribution is obtained as the sum of the squares of independent, zero-mean, unit-variance Gaussian random variables. Generalizations of this distribution can be obtained by summing the squares of other types of Gaussian random variables. Several such distributions are described below.
If $X_1, \ldots, X_n$ are chi-squared random variables and $a_1, \ldots, a_n \in \mathbb{R}_{>0}$, then the distribution of the linear combination $X = \sum_{i=1}^n a_i X_i$ is a special case of the generalized chi-squared distribution; a closed expression for it is not known, but it can be approximated efficiently using the characteristic functions of chi-squared random variables.
See main article: Noncentral chi-squared distribution. The noncentral chi-squared distribution is obtained from the sum of the squares of independent Gaussian random variables having unit variance and nonzero means.
See main article: Generalized chi-squared distribution. The generalized chi-squared distribution is obtained from the quadratic form $z^{\mathsf{T}} A z$, where $z$ is a zero-mean Gaussian vector having an arbitrary covariance matrix, and $A$ is an arbitrary matrix.
The chi-squared distribution $X \sim \chi^2_k$ is a special case of the gamma distribution, in that $X \sim \Gamma\!\left(\tfrac{k}{2}, \tfrac{1}{2}\right)$ using the rate parameterization of the gamma distribution (or $X \sim \Gamma\!\left(\tfrac{k}{2}, 2\right)$ using the scale parameterization), where $k$ is an integer.
Because the exponential distribution is also a special case of the gamma distribution, we also have that if $X \sim \chi^2_2$, then $X \sim \operatorname{Exp}\!\left(\tfrac{1}{2}\right)$ is an exponential distribution.
The Erlang distribution is also a special case of the gamma distribution, and thus we also have that if $X \sim \chi^2_k$ with even $k$, then $X$ is Erlang distributed with shape parameter $k/2$ and rate parameter $1/2$.
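These special cases can be seen directly by comparing density functions; a brief illustrative sketch (arbitrary grid and degrees of freedom):

```python
import numpy as np
from scipy import stats

x = np.linspace(0.1, 15, 50)
k = 6

# chi-squared(k) equals Gamma(shape = k/2, scale = 2).
print(np.allclose(stats.chi2.pdf(x, df=k), stats.gamma.pdf(x, a=k / 2, scale=2)))

# chi-squared(2) equals Exp(rate 1/2), i.e. exponential with scale 2.
print(np.allclose(stats.chi2.pdf(x, df=2), stats.expon.pdf(x, scale=2)))

# chi-squared(k), k even, equals Erlang(shape k/2, rate 1/2), i.e. scale 2.
print(np.allclose(stats.chi2.pdf(x, df=k), stats.erlang.pdf(x, a=k // 2, scale=2)))
```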
The chi-squared distribution has numerous applications in inferential statistics, for instance in chi-squared tests and in estimating variances. It enters the problem of estimating the mean of a normally distributed population and the problem of estimating the slope of a regression line via its role in Student's t-distribution. It enters all analysis of variance problems via its role in the F-distribution, which is the distribution of the ratio of two independent chi-squared random variables, each divided by their respective degrees of freedom.
Following are some of the most common situations in which the chi-squared distribution arises from a Gaussian-distributed sample.
If $X_1, \ldots, X_n$ are i.i.d. $N(\mu, \sigma^2)$ random variables, then
$$\sum_{i=1}^n \left(X_i - \overline{X}\right)^2 \sim \sigma^2 \chi^2_{n-1},$$
where
$$\overline{X} = \frac{1}{n}\sum_{i=1}^n X_i.$$
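This fact underlies the standard chi-squared confidence interval for a normal variance; the sketch below is illustrative (simulated data, arbitrary confidence level), not a prescription from the original text:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
sample = rng.normal(loc=10.0, scale=3.0, size=25)   # hypothetical data
n = len(sample)
s2 = sample.var(ddof=1)                             # unbiased sample variance

# 95% confidence interval for sigma^2: (n-1)s^2 divided by chi-squared quantiles, n-1 df.
alpha = 0.05
lower = (n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, df=n - 1)
upper = (n - 1) * s2 / stats.chi2.ppf(alpha / 2, df=n - 1)
print(f"s^2 = {s2:.2f}, 95% CI for sigma^2: ({lower:.2f}, {upper:.2f})")
```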
The box below shows some statistics based on independent random variables $X_i \sim N(\mu_i, \sigma_i^2)$, $i = 1, \ldots, k$, that have probability distributions related to the chi-squared distribution:
Name | Statistic
---|---
chi-squared distribution | $\sum_{i=1}^k \left(\frac{X_i - \mu_i}{\sigma_i}\right)^2$
noncentral chi-squared distribution | $\sum_{i=1}^k \left(\frac{X_i}{\sigma_i}\right)^2$
chi distribution | $\sqrt{\sum_{i=1}^k \left(\frac{X_i - \mu_i}{\sigma_i}\right)^2}$
noncentral chi distribution | $\sqrt{\sum_{i=1}^k \left(\frac{X_i}{\sigma_i}\right)^2}$
The p-value is the probability of observing a test statistic at least as extreme in a chi-squared distribution. Accordingly, since the cumulative distribution function (CDF) for the appropriate degrees of freedom (df) gives the probability of having obtained a value less extreme than this point, subtracting the CDF value from 1 gives the p-value. A low p-value, below the chosen significance level, indicates statistical significance, i.e., sufficient evidence to reject the null hypothesis. A significance level of 0.05 is often used as the cutoff between significant and non-significant results.
The table below gives, for the first ten degrees of freedom, the $\chi^2$ values corresponding to a number of p-values; each cell contains the $\chi^2$ value whose upper-tail probability equals the p-value at the head of its column.
Degrees of freedom (df) \ p-value | 0.95 | 0.90 | 0.80 | 0.70 | 0.50 | 0.30 | 0.20 | 0.10 | 0.05 | 0.01 | 0.001
---|---|---|---|---|---|---|---|---|---|---|---
1 | 0.004 | 0.02 | 0.06 | 0.15 | 0.46 | 1.07 | 1.64 | 2.71 | 3.84 | 6.63 | 10.83
2 | 0.10 | 0.21 | 0.45 | 0.71 | 1.39 | 2.41 | 3.22 | 4.61 | 5.99 | 9.21 | 13.82
3 | 0.35 | 0.58 | 1.01 | 1.42 | 2.37 | 3.66 | 4.64 | 6.25 | 7.81 | 11.34 | 16.27
4 | 0.71 | 1.06 | 1.65 | 2.20 | 3.36 | 4.88 | 5.99 | 7.78 | 9.49 | 13.28 | 18.47
5 | 1.14 | 1.61 | 2.34 | 3.00 | 4.35 | 6.06 | 7.29 | 9.24 | 11.07 | 15.09 | 20.52
6 | 1.63 | 2.20 | 3.07 | 3.83 | 5.35 | 7.23 | 8.56 | 10.64 | 12.59 | 16.81 | 22.46
7 | 2.17 | 2.83 | 3.82 | 4.67 | 6.35 | 8.38 | 9.80 | 12.02 | 14.07 | 18.48 | 24.32
8 | 2.73 | 3.49 | 4.59 | 5.53 | 7.34 | 9.52 | 11.03 | 13.36 | 15.51 | 20.09 | 26.12
9 | 3.32 | 4.17 | 5.38 | 6.39 | 8.34 | 10.66 | 12.24 | 14.68 | 16.92 | 21.67 | 27.88
10 | 3.94 | 4.87 | 6.18 | 7.27 | 9.34 | 11.78 | 13.44 | 15.99 | 18.31 | 23.21 | 29.59
These values can be calculated by evaluating the quantile function (also known as the "inverse CDF" or "ICDF") of the chi-squared distribution;[18] e.g., the $\chi^2$ ICDF for $p = 0.05$ and $\mathrm{df} = 7$ yields approximately $2.17$, as in the table above, noticing that $1 - p$ is the p-value from the table.
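A minimal sketch (illustrative) reproduces a row of the table with SciPy's quantile functions:

```python
from scipy import stats

df = 7
p_values = [0.95, 0.90, 0.80, 0.70, 0.50, 0.30, 0.20, 0.10, 0.05, 0.01, 0.001]

# isf(p, df) returns the chi-squared value whose upper-tail probability is p;
# equivalently ppf(1 - p, df), the inverse CDF evaluated at 1 - p.
row = [round(stats.chi2.isf(p, df), 2) for p in p_values]
print(row)   # matches the df = 7 row of the table above
```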
This distribution was first described by the German geodesist and statistician Friedrich Robert Helmert in papers of 1875–6,[19] where he computed the sampling distribution of the sample variance of a normal population. Thus in German this was traditionally known as the Helmert'sche ("Helmertian") or "Helmert distribution".
The distribution was independently rediscovered by the English mathematician Karl Pearson in the context of goodness of fit, for which he developed his Pearson's chi-squared test, published in 1900, with a computed table of values published shortly afterwards. The name "chi-square" ultimately derives from Pearson's shorthand for the exponent in a multivariate normal distribution with the Greek letter Chi, writing $-\tfrac{1}{2}\chi^2$ for what would appear in modern notation as $-\tfrac{1}{2}\mathbf{x}^{\mathsf{T}}\Sigma^{-1}\mathbf{x}$ (Σ being the covariance matrix).[20] The idea of a family of "chi-squared distributions", however, is not due to Pearson but arose as a further development due to Fisher in the 1920s.
The density, defined on the support $(0, \infty)$, is given by
$$f(x) = \frac{2\beta^{\alpha/2} x^{\alpha-1} \exp\!\left(-\beta x^2 + \gamma x\right)}{\Psi\!\left(\frac{\alpha}{2}, \frac{\gamma}{\sqrt{\beta}}\right)},$$
where
$$\Psi(\alpha, z) = {}_1\Psi_1\!\left(\begin{matrix} \left(\alpha, \tfrac{1}{2}\right) \\ (1, 0) \end{matrix}; z\right)$$
denotes the Fox–Wright Psi function.