The Kolmogorov–Smirnov test (K–S test or KS test) is a nonparametric test of the equality of continuous (or discontinuous, see Section 2.2), one-dimensional probability distributions that can be used to test whether a sample came from a given reference probability distribution (one-sample K–S test), or to test whether two samples came from the same distribution (two-sample K–S test). Intuitively, the test provides a method to qualitatively answer the question "How likely is it that we would see a collection of samples like this if they were drawn from that probability distribution?" or, in the second case, "How likely is it that we would see two sets of samples like this if they were drawn from the same (but unknown) probability distribution?". It is named after Andrey Kolmogorov and Nikolai Smirnov.
The Kolmogorov–Smirnov statistic quantifies a distance between the empirical distribution function of the sample and the cumulative distribution function of the reference distribution, or between the empirical distribution functions of two samples. The null distribution of this statistic is calculated under the null hypothesis that the sample is drawn from the reference distribution (in the one-sample case) or that the samples are drawn from the same distribution (in the two-sample case). In the one-sample case, the distribution considered under the null hypothesis may be continuous (see Section 2), purely discrete or mixed (see Section 2.2). In the two-sample case (see Section 3), the distribution considered under the null hypothesis is a continuous distribution but is otherwise unrestricted. However, the two-sample test can also be performed under more general conditions that allow for discontinuity, heterogeneity and dependence across samples.[1]
The two-sample K–S test is one of the most useful and general nonparametric methods for comparing two samples, as it is sensitive to differences in both location and shape of the empirical cumulative distribution functions of the two samples.
The Kolmogorov–Smirnov test can be modified to serve as a goodness of fit test. In the special case of testing for normality of the distribution, samples are standardized and compared with a standard normal distribution. This is equivalent to setting the mean and variance of the reference distribution equal to the sample estimates, and it is known that using these to define the specific reference distribution changes the null distribution of the test statistic (see Test with estimated parameters). Various studies have found that, even in this corrected form, the test is less powerful for testing normality than the Shapiro–Wilk test or Anderson–Darling test.[2] However, these other tests have their own disadvantages. For instance the Shapiro–Wilk test is known not to work well in samples with many identical values.
The empirical distribution function Fn for n independent and identically distributed (i.i.d.) ordered observations Xi is defined as
F_n(x) = \frac{\text{number of (elements in the sample} \leq x)}{n} = \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}_{(-\infty,x]}(X_i),
where \mathbf{1}_{(-\infty,x]}(X_i) is the indicator function, equal to 1 if X_i \leq x and equal to 0 otherwise.
The Kolmogorov–Smirnov statistic for a given cumulative distribution function F(x) is
D_n = \sup_x |F_n(x) - F(x)|
where supx is the supremum of the set of distances. Intuitively, the statistic takes the largest absolute difference between the two distribution functions across all x values.
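As an illustration, the following sketch (using NumPy and SciPy, with an arbitrary sample of 200 standard-normal draws tested against the standard normal CDF) computes D_n directly from the order statistics and compares it with scipy.stats.kstest; since F_n jumps only at the observations, the supremum is attained at one of them.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.sort(rng.normal(loc=0.0, scale=1.0, size=200))  # illustrative sample

# F_n steps from (i-1)/n to i/n at the i-th order statistic, so the supremum
# of |F_n - F| over all x is attained at one of the observations.
n = len(x)
cdf = stats.norm.cdf(x)                      # reference CDF F at the order statistics
d_plus = np.max(np.arange(1, n + 1) / n - cdf)
d_minus = np.max(cdf - np.arange(0, n) / n)
D_n = max(d_plus, d_minus)

print("manual D_n :", D_n)
print("scipy D_n  :", stats.kstest(x, "norm").statistic)  # should agree
```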
By the Glivenko–Cantelli theorem, if the sample comes from distribution F(x), then Dn converges to 0 almost surely in the limit as n goes to infinity.
In practice, the statistic requires a relatively large number of data points (in comparison to other goodness of fit criteria such as the Anderson–Darling test statistic) to properly reject the null hypothesis.
The Kolmogorov distribution is the distribution of the random variable
K = \sup_{t\in[0,1]} |B(t)|
where B(t) is the Brownian bridge. The cumulative distribution function of K is given by[3]
\operatorname{Pr}(K\leq x) = 1 - 2\sum_{k=1}^{\infty} (-1)^{k-1} e^{-2k^{2}x^{2}} = \frac{\sqrt{2\pi}}{x}\sum_{k=1}^{\infty} e^{-(2k-1)^{2}\pi^{2}/(8x^{2})},
which can also be expressed by the Jacobi theta function \vartheta_{01}(z=0;\tau=2ix^{2}/\pi).
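For illustration, the series above is straightforward to evaluate numerically; the sketch below (truncating the sum at 100 terms, an arbitrary but more than sufficient cutoff) cross-checks it against SciPy's kstwobign, which implements the same limiting distribution.

```python
import numpy as np
from scipy import stats

def kolmogorov_cdf(x, terms=100):
    """Pr(K <= x) via the alternating series 1 - 2*sum_k (-1)^(k-1) exp(-2 k^2 x^2)."""
    if x <= 0:
        return 0.0
    k = np.arange(1, terms + 1)
    return 1.0 - 2.0 * np.sum((-1.0) ** (k - 1) * np.exp(-2.0 * k**2 * x**2))

# Cross-check against SciPy's implementation of the limiting distribution of sqrt(n)*D_n.
for x in (0.5, 1.0, 1.358, 2.0):
    print(x, kolmogorov_cdf(x), stats.kstwobign.cdf(x))

alpha = 0.05
print("K_alpha:", stats.kstwobign.ppf(1 - alpha))  # about 1.358 for alpha = 0.05
```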
Under the null hypothesis that the sample comes from the hypothesized distribution F(x),
\sqrt{n}D_n \xrightarrow{n\to\infty} \sup_t |B(F(t))|
in distribution, where B(t) is the Brownian bridge. If F is continuous, then under the null hypothesis \sqrt{n}D_n converges to the Kolmogorov distribution, which does not depend on F.
The accuracy of this limit as an approximation to the exact cdf of K when n is finite is not very impressive: even when n = 1000, the corresponding maximum error is about 0.9~\%; this error increases to 2.6~\% when n = 100 and to 7~\% when n = 10. However, the simple expedient of replacing x by
x + \tfrac{1}{6\sqrt{n}}
in the argument of the Jacobi theta function reduces these errors to 0.003~\%, 0.027~\% and 0.27~\% respectively.
The goodness-of-fit test or the Kolmogorov–Smirnov test can be constructed by using the critical values of the Kolmogorov distribution. This test is asymptotically valid when
n \to \infty. It rejects the null hypothesis at level \alpha if
\sqrt{n}D_n > K_\alpha,
where K_\alpha is found from
\operatorname{Pr}(K\leq K_\alpha) = 1 - \alpha.
The asymptotic power of this test is 1.
Fast and accurate algorithms to compute the cdf \operatorname{Pr}(D_n \leq x) for arbitrary n and x are available in the literature.
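As a hedged illustration of what such routines provide, recent versions of SciPy expose the exact finite-n distribution of D_n under a continuous null as scipy.stats.kstwo, which can be compared with the asymptotic Kolmogorov approximation; the values of n and x below are arbitrary.

```python
import numpy as np
from scipy import stats

n = 100
x = 0.12

# Exact Pr(D_n <= x) for a continuous null distribution (scipy.stats.kstwo).
print("exact     :", stats.kstwo.cdf(x, n))
# Asymptotic approximation Pr(K <= sqrt(n) * x) from the Kolmogorov distribution.
print("asymptotic:", stats.kstwobign.cdf(np.sqrt(n) * x))
```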
If either the form or the parameters of F(x) are determined from the data Xi the critical values determined in this way are invalid. In such cases, Monte Carlo or other methods may be required, but tables have been prepared for some cases. Details for the required modifications to the test statistic and for the critical values for the normal distribution and the exponential distribution have been published,[11] and later publications also include the Gumbel distribution.[12] The Lilliefors test represents a special case of this for the normal distribution. The logarithm transformation may help to overcome cases where the Kolmogorov test data does not seem to fit the assumption that it came from the normal distribution.
When parameters are estimated, the question arises of which estimation method should be used. Usually this is the maximum likelihood method, but, for example, for the normal distribution the MLE of sigma has a large bias. Using a moment fit or KS minimization instead strongly affects the critical values, and also has some effect on test power. If we need to decide, via a KS test, whether Student's t data with df = 2 could be normal or not, then an ML estimate based on H0 (the data are normal, so using the standard deviation for scale) would give a much larger KS distance than a fit with minimum KS. In this case we should reject H0, which is often the case with MLE, because the sample standard deviation may be very large for t(2) data, but with KS minimization we may still get a KS value too low to reject H0. In the Student's t case, a modified KS test with a KS estimate instead of the MLE actually makes the KS test slightly worse; in other cases, however, such a modified KS test leads to slightly better test power.
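A common practical workaround for the estimated-parameter problem is to calibrate the null distribution of the statistic by simulation, in the spirit of the Lilliefors test. The sketch below is only illustrative: the use of the sample mean and standard deviation as estimates, the Student's t(2) data, the sample size and the number of replications are all assumptions made for the example.

```python
import numpy as np
from scipy import stats

def ks_stat_estimated_normal(x):
    """KS distance to a normal whose mean and sd are estimated from the data itself."""
    x = np.sort(x)
    n = len(x)
    cdf = stats.norm.cdf(x, loc=x.mean(), scale=x.std(ddof=1))
    return max(np.max(np.arange(1, n + 1) / n - cdf),
               np.max(cdf - np.arange(0, n) / n))

def lilliefors_pvalue(x, n_sim=5000, seed=0):
    """Monte Carlo p-value: simulate the statistic under H0, re-estimating parameters each time."""
    rng = np.random.default_rng(seed)
    d_obs = ks_stat_estimated_normal(x)
    sims = np.array([ks_stat_estimated_normal(rng.normal(size=len(x)))
                     for _ in range(n_sim)])
    return (np.sum(sims >= d_obs) + 1) / (n_sim + 1)

rng = np.random.default_rng(1)
data = rng.standard_t(df=2, size=100)   # heavy-tailed data, as in the example above
print("Monte Carlo p-value:", lilliefors_pvalue(data))
```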
Under the assumption that F(x) is non-decreasing and right-continuous, with a countable (possibly infinite) number of jumps, the KS test statistic can be expressed as
D_n = \sup_x |F_n(x) - F(x)| = \sup_{0 \leq t \leq 1} |F_n(F^{-1}(t)) - F(F^{-1}(t))|.
From the right-continuity of F(x), it follows that F(F^{-1}(t)) \geq t and F^{-1}(F(x)) \leq x, and hence the distribution of D_n depends on the null distribution F(x), i.e., it is no longer distribution-free as in the continuous case. A fast and accurate method has been developed to compute the exact and asymptotic distribution of D_n when F(x) is purely discrete or mixed, implemented in C++ and in the KSgeneral package of the R language, whose functions disc_ks_test, mixed_ks_test and cont_ks_test also compute the KS test statistic and p-values for purely discrete, mixed or continuous null distributions and arbitrary sample sizes. The KS test and its p-values for discrete null distributions and small sample sizes are also computed in [13] as part of the dgof package of the R language. Major statistical packages, among which SAS PROC NPAR1WAY[14] and Stata ksmirnov,[15] implement the KS test under the assumption that F(x) is continuous.
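For users without access to the R packages mentioned above, one simple (if brute-force) option is to simulate the null distribution of D_n for the specific discrete F under test. The following sketch is an assumption-laden illustration: the Poisson(3) null, the evaluation grid and the replication count are arbitrary choices, and the exact methods cited above are preferable for small samples.

```python
import numpy as np
from scipy import stats

def ks_stat_vs_cdf(sample, cdf, points):
    """sup_x |F_n(x) - F(x)| for step functions, evaluated at the candidate jump points."""
    ecdf = np.searchsorted(np.sort(sample), points, side="right") / len(sample)
    return np.max(np.abs(ecdf - cdf(points)))

def ks_discrete_pvalue(sample, n_sim=2000, seed=0):
    """Monte Carlo p-value of the KS test against an illustrative Poisson(3) null."""
    rng = np.random.default_rng(seed)
    null_cdf = stats.poisson(3.0).cdf
    points = np.arange(0, 31)            # covers essentially all of the Poisson(3) mass
    d_obs = ks_stat_vs_cdf(sample, null_cdf, points)
    sims = np.array([ks_stat_vs_cdf(rng.poisson(3.0, size=len(sample)), null_cdf, points)
                     for _ in range(n_sim)])
    return (np.sum(sims >= d_obs) + 1) / (n_sim + 1)

data = np.random.default_rng(2).poisson(3.0, size=50)
print("Monte Carlo p-value:", ks_discrete_pvalue(data))
```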
The Kolmogorov–Smirnov test may also be used to test whether two underlying one-dimensional probability distributions differ. In this case, the Kolmogorov–Smirnov statistic is
D_{n,m} = \sup_x |F_{1,n}(x) - F_{2,m}(x)|,
where F_{1,n} and F_{2,m} are the empirical distribution functions of the first and the second sample respectively, and \sup is the supremum function.
For large samples, the null hypothesis is rejected at level \alpha if
D_{n,m} > c(\alpha)\sqrt{\frac{n+m}{n\cdot m}},
where n and m are the sizes of the first and second sample respectively. The value of c(\alpha) is given in the table below for the most common levels of \alpha:
\alpha    | 0.20  | 0.15  | 0.10  | 0.05  | 0.025 | 0.01  | 0.005 | 0.001
c(\alpha) | 1.073 | 1.138 | 1.224 | 1.358 | 1.48  | 1.628 | 1.731 | 1.949
and in general[19] by
c\left(\alpha\right)=\sqrt{-\ln\left(\tfrac{\alpha}{2}\right)\cdot\tfrac{1}{2}},
so that the condition reads
D_{n,m}>\sqrt{-\ln\left(\tfrac{\alpha}{2}\right)\cdot\tfrac{1+\tfrac{m}{n}}{2m}}.
Here, again, the larger the sample sizes, the more sensitive the minimal bound: for a given ratio of sample sizes (e.g. m = n), the minimal bound scales with the size of either of the samples according to its inverse square root.
Note that the two-sample test checks whether the two data samples come from the same distribution. This does not specify what that common distribution is (e.g. whether it's normal or not normal). Again, tables of critical values have been published. A shortcoming of the univariate Kolmogorov–Smirnov test is that it is not very powerful because it is devised to be sensitive against all possible types of differences between two distribution functions. Some argue[20] [21] that the Cucconi test, originally proposed for simultaneously comparing location and scale, can be much more powerful than the Kolmogorov–Smirnov test when comparing two distribution functions.
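A minimal two-sample sketch, assuming SciPy's ks_2samp and the large-sample rejection rule derived above; the two normal samples (one shifted in location) and the level alpha are illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=300)   # first sample
y = rng.normal(0.3, 1.0, size=200)   # second sample, shifted in location

res = stats.ks_2samp(x, y)
n, m = len(x), len(y)

alpha = 0.05
c_alpha = np.sqrt(-np.log(alpha / 2.0) / 2.0)      # c(alpha) from the formula above
critical = c_alpha * np.sqrt((n + m) / (n * m))    # large-sample rejection threshold

print(f"D_nm = {res.statistic:.4f}, p = {res.pvalue:.4f}")
print(f"reject at alpha={alpha}?", res.statistic > critical)
```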
Two-sample KS tests have been applied in economics to detect asymmetric effects and to study natural experiments.[22]
See main article: Dvoretzky–Kiefer–Wolfowitz inequality.
While the Kolmogorov–Smirnov test is usually used to test whether a given F(x) is the underlying probability distribution of Fn(x), the procedure may be inverted to give confidence limits on F(x) itself. If one chooses a critical value of the test statistic Dα such that P(Dn > Dα) = α, then a band of width ±Dα around Fn(x) will entirely contain F(x) with probability 1 − α.
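A minimal sketch of such a confidence band, assuming the asymptotic critical value from the Kolmogorov distribution (the Dvoretzky–Kiefer–Wolfowitz inequality gives a similar, slightly more conservative half-width); the sample and the level are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.sort(rng.normal(size=500))
n = len(x)
alpha = 0.05

# Critical value D_alpha with P(D_n > D_alpha) = alpha, taken here from the
# asymptotic Kolmogorov distribution of sqrt(n) * D_n.
d_alpha = stats.kstwobign.ppf(1.0 - alpha) / np.sqrt(n)

ecdf = np.arange(1, n + 1) / n
lower = np.clip(ecdf - d_alpha, 0.0, 1.0)
upper = np.clip(ecdf + d_alpha, 0.0, 1.0)
# With probability about 1 - alpha, the true F lies between `lower` and `upper`
# everywhere (the band is evaluated here at the order statistics).
print("band half-width:", d_alpha)
```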
A distribution-free multivariate Kolmogorov–Smirnov goodness of fit test has been proposed by Justel, Peña and Zamar (1997).[23] The test uses a statistic which is built using Rosenblatt's transformation, and an algorithm is developed to compute it in the bivariate case. An approximate test that can be easily computed in any dimension is also presented.
The Kolmogorov–Smirnov test statistic needs to be modified if a similar test is to be applied to multivariate data. This is not straightforward because the maximum difference between two joint cumulative distribution functions is not generally the same as the maximum difference of any of the complementary distribution functions. Thus the maximum difference will differ depending on which of \Pr(X<x \land Y<y) or \Pr(X<x \land Y>y) or any of the other two possible arrangements is used.
One approach to generalizing the Kolmogorov–Smirnov statistic to higher dimensions which meets the above concern is to compare the cdfs of the two samples with all possible orderings, and take the largest of the set of resulting KS statistics. In d dimensions, there are 2^d − 1 such orderings. One such variation is due to Peacock[24] (see also Gosset[25] for a 3D version) and another to Fasano and Franceschini[26] (see Lopes et al. for a comparison and computational details).[27] Critical values for the test statistic can be obtained by simulations, but depend on the dependence structure in the joint distribution.
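A naive, purely illustrative sketch of this idea for two samples in two dimensions: it scans all four quadrant orientations anchored at every observed point and takes the largest discrepancy (one orientation is redundant, which is why the text counts 2^d − 1 orderings). This is a simplified variant in the spirit of the constructions above, not a faithful implementation of Peacock's or Fasano–Franceschini's procedures, and its critical values would still have to come from simulation.

```python
import numpy as np

def ks_2d_2samp(a, b):
    """Largest difference between the two samples' quadrant frequencies over the
    four orderings, anchored at every point of both samples (naive O(N^2) scan)."""
    pts = np.vstack([a, b])

    def frac(sample, px, py, sx, sy):
        # Fraction of `sample` in the quadrant {sx*x <= sx*px, sy*y <= sy*py};
        # sx, sy in {+1, -1} select the ordering (<= or >=) on each coordinate.
        return np.mean((sx * sample[:, 0] <= sx * px) & (sy * sample[:, 1] <= sy * py))

    d = 0.0
    for px, py in pts:
        for sx in (+1, -1):
            for sy in (+1, -1):
                d = max(d, abs(frac(a, px, py, sx, sy) - frac(b, px, py, sx, sy)))
    return d

rng = np.random.default_rng(0)
a = rng.normal(size=(150, 2))
b = rng.normal(loc=[0.4, 0.0], size=(150, 2))   # second sample shifted in the first coordinate
print("2D KS-type statistic:", ks_2d_2samp(a, b))
```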
In one dimension, the Kolmogorov–Smirnov statistic is identical to the so-called star discrepancy D, so another native KS extension to higher dimensions would be simply to use D also for higher dimensions. Unfortunately, the star discrepancy is hard to calculate in high dimensions.
In 2021 a functional form of the multivariate KS test statistic was proposed, which simplifies the problem of estimating the tail probabilities of the multivariate KS test statistic that are needed for the statistical test. For the multivariate case, if F_i is the i-th continuous marginal from a probability distribution with k variables, then
\sqrt{n}D_n \xrightarrow{n\to\infty} \max_{1\le i\le k} \sup_t |B(F_i(t))|
The Kolmogorov–Smirnov test is implemented in many software programs. Most of these implement both the one-sample and two-sample test.