Student's t-test is a statistical test used to assess whether the difference between the responses of two groups is statistically significant. It is any statistical hypothesis test in which the test statistic follows a Student's t-distribution under the null hypothesis. It is most commonly applied when the test statistic would follow a normal distribution if the value of a scaling term in the test statistic were known (typically, the scaling term is unknown and is therefore a nuisance parameter). When the scaling term is estimated based on the data, the test statistic, under certain conditions, follows a Student's t-distribution. The t-test's most common application is to test whether the means of two populations are significantly different. In many cases, a Z-test will yield very similar results to a t-test, because the latter converges to the former as the size of the dataset increases.
The term "t-statistic" is abbreviated from "hypothesis test statistic".[1] In statistics, the t-distribution was first derived as a posterior distribution in 1876 by Helmert[2] [3] [4] and Lüroth.[5] [6] [7] The t-distribution also appeared in a more general form as the Pearson type IV distribution in Karl Pearson's 1895 paper.[8] However, the t-distribution, also known as Student's t-distribution, gets its name from William Sealy Gosset, who first published it in English in 1908 in the scientific journal Biometrika using the pseudonym "Student"[9] because his employer preferred staff to use pen names when publishing scientific papers.[10] Gosset worked at the Guinness Brewery in Dublin, Ireland, and was interested in the problems of small samples, for example the chemical properties of barley when sample sizes were small. Hence a second version of the etymology of the term "Student" is that Guinness did not want their competitors to know that they were using the t-test to determine the quality of raw material. Although it was William Gosset who wrote under the pen name "Student", it was actually through the work of Ronald Fisher that the distribution became well known as "Student's distribution"[11] and "Student's t-test".
Gosset devised the t-test as an economical way to monitor the quality of stout. The t-test work was submitted to and accepted in the journal Biometrika and published in 1908.[12]
Guinness had a policy of allowing technical staff leave for study (so-called "study leave"), which Gosset used during the first two terms of the 1906–1907 academic year in Professor Karl Pearson's Biometric Laboratory at University College London.[13] Gosset's identity was then known to fellow statisticians and to editor-in-chief Karl Pearson.[14]
A one-sample Student's t-test is a location test of whether the mean of a population has a value specified in a null hypothesis. In testing the null hypothesis that the population mean is equal to a specified value, one uses the statistic
t = \frac{\bar{x} - \mu_0}{s/\sqrt{n}},
where x̄ is the sample mean, s is the sample standard deviation, and n is the sample size. The degrees of freedom used in this test are n − 1. Although the parent population does not need to be normally distributed, the distribution of the population of sample means x̄ is assumed to be normal.
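For illustration, the statistic can be computed directly and checked against R's built-in t.test function. This is a minimal sketch; the data vector and hypothesised mean below are invented.

```r
# One-sample t-test computed by hand and with t.test();
# x and mu0 are illustrative values, not data from this article.
x   <- c(5.1, 4.9, 5.4, 5.0, 5.3, 4.8, 5.2)
mu0 <- 5

n      <- length(x)
t_stat <- (mean(x) - mu0) / (sd(x) / sqrt(n))   # (x-bar - mu0) / (s / sqrt(n))
p_val  <- 2 * pt(-abs(t_stat), df = n - 1)      # two-tailed p-value on n - 1 df

c(t = t_stat, p = p_val)
t.test(x, mu = mu0)   # should report the same statistic and p-value
```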
By the central limit theorem, if the observations are independent and the second moment exists, then t will be approximately normal, N(0, 1).
A two-sample location test of the null hypothesis that the means of two populations are equal. All such tests are usually called Student's t-tests, though strictly speaking that name should only be used if the variances of the two populations are also assumed to be equal; the form of the test used when this assumption is dropped is sometimes called Welch's t-test. These tests are often referred to as unpaired or independent samples t-tests, as they are typically applied when the statistical units underlying the two samples being compared are non-overlapping.[15]
Two-sample t-tests for a difference in means involve independent samples (unpaired samples) or paired samples. Paired t-tests are a form of blocking, and have greater power (probability of avoiding a type II error, also known as a false negative) than unpaired tests when the paired units are similar with respect to "noise factors" (see confounder) that are independent of membership in the two groups being compared.[16] In a different context, paired t-tests can be used to reduce the effects of confounding factors in an observational study.
The independent samples t-test is used when two separate sets of independent and identically distributed samples are obtained, and one variable from each of the two populations is compared. For example, suppose we are evaluating the effect of a medical treatment, and we enroll 100 subjects into our study, then randomly assign 50 subjects to the treatment group and 50 subjects to the control group. In this case, we have two independent samples and would use the unpaired form of the t-test.
See main article: Paired difference test.
Paired samples t-tests typically consist of a sample of matched pairs of similar units, or one group of units that has been tested twice (a "repeated measures" t-test).
A typical example of the repeated measures t-test would be where subjects are tested prior to a treatment, say for high blood pressure, and the same subjects are tested again after treatment with a blood-pressure-lowering medication. By comparing the same patient's numbers before and after treatment, we are effectively using each patient as their own control. That way the correct rejection of the null hypothesis (here: of no difference made by the treatment) can become much more likely, with statistical power increasing simply because the random interpatient variation has now been eliminated. However, an increase of statistical power comes at a price: more tests are required, each subject having to be tested twice. Because half of the sample now depends on the other half, the paired version of Student's t-test has only n/2 − 1 degrees of freedom (with n being the total number of observations). Pairs become individual test units, and the sample has to be doubled to achieve the same number of degrees of freedom. Normally, there are n − 1 degrees of freedom (with n being the total number of observations).[17]
A paired samples t-test based on a "matched-pairs sample" results from an unpaired sample that is subsequently used to form a paired sample, by using additional variables that were measured along with the variable of interest.[18] The matching is carried out by identifying pairs of values consisting of one observation from each of the two samples, where the pair is similar in terms of other measured variables. This approach is sometimes used in observational studies to reduce or eliminate the effects of confounding factors.
Paired samples t-tests are often referred to as "dependent samples t-tests".
Most test statistics have the form t = Z/s, where Z and s are functions of the data.
Z may be sensitive to the alternative hypothesis (i.e., its magnitude tends to be larger when the alternative hypothesis is true), whereas s is a scaling parameter that allows the distribution of t to be determined.
As an example, in the one-sample t-test
t = \frac{Z}{s} = \frac{\bar{X} - \mu}{\hat\sigma/\sqrt{n}},
where X̄ is the sample mean from a sample X_1, X_2, \ldots, X_n of size n, s is the standard error of the mean,
\hat\sigma = \sqrt{\frac{1}{n-1}\sum_i (X_i - \bar{X})^2}
is the estimate of the standard deviation of the population, and μ is the population mean.
The assumptions underlying a t-test in the simplest form above are that:
* X̄ follows a normal distribution with mean μ and variance σ²/n;
* s²(n − 1)/σ² follows a χ² distribution with n − 1 degrees of freedom (this assumption is met when the observations used for estimating s² come from a normal distribution and are i.i.d. within each group);
* Z and s are independent.
In the t-test comparing the means of two independent samples, the following assumptions should be met:
* the means of the two populations being compared should follow normal distributions (in large samples this follows, under weak assumptions, from the central limit theorem even when the observations themselves are non-normal);
* if using Student's original definition of the t-test, the two populations being compared should have the same variance (when this assumption is dropped, Welch's t-test, discussed below, can be used);
* the data used to carry out the test should either be sampled independently from the two populations being compared or be fully paired.
Most two-sample t-tests are robust to all but large deviations from the assumptions.[22]
For exactness, the t-test and Z-test require normality of the sample means, and the t-test additionally requires that the sample variance follows a scaled χ² distribution, and that the sample mean and sample variance be statistically independent. Normality of the individual data values is not required if these conditions are met. By the central limit theorem, sample means of moderately large samples are often well-approximated by a normal distribution even if the data are not normally distributed. However, the sample size required for the sample means to converge to normality depends on the skewness of the distribution of the original data. The required sample size can vary from 30 to 100 or more, depending on the skewness.[23] [24]
For non-normal data, the distribution of the sample variance may deviate substantially from a χ² distribution.
However, if the sample size is large, Slutsky's theorem implies that the distribution of the sample variance has little effect on the distribution of the test statistic. That is, as the sample size n → ∞:
\sqrt{n}(\bar{X} - \mu) \xrightarrow{d} N(0, \sigma^2) by the central limit theorem,
s^2 \xrightarrow{p} \sigma^2 by the law of large numbers,
\therefore \frac{\sqrt{n}(\bar{X} - \mu)}{s} \xrightarrow{d} N(0, 1) by Slutsky's theorem.
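A rough simulation sketch of this asymptotic behaviour is shown below; the exponential population, sample size, and number of replications are arbitrary choices for illustration.

```r
# Simulate the one-sample t statistic for skewed (exponential) data:
# for large n it is close to standard normal despite the non-normal population.
set.seed(1)
n    <- 200
tsim <- replicate(10000, {
  x <- rexp(n, rate = 1)                 # skewed population with true mean 1
  (mean(x) - 1) / (sd(x) / sqrt(n))      # t statistic evaluated at the true mean
})
c(mean(tsim), sd(tsim))   # approximately 0 and 1, as the normal approximation predicts
```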
Explicit expressions that can be used to carry out various t-tests are given below. In each case, the formula for a test statistic that either exactly follows or closely approximates a t-distribution under the null hypothesis is given. Also, the appropriate degrees of freedom are given in each case. Each of these statistics can be used to carry out either a one-tailed or two-tailed test.
Once the t value and degrees of freedom are determined, a p-value can be found using a table of values from Student's t-distribution. If the calculated p-value is below the threshold chosen for statistical significance (usually at the 0.10, 0.05, or 0.01 level), then the null hypothesis is rejected in favor of the alternative hypothesis.
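In software, the table lookup is replaced by the t-distribution CDF. For example, in R (the observed statistic and degrees of freedom below are placeholder values):

```r
# p-values for an observed t statistic; t_obs and df are illustrative placeholders.
t_obs <- 2.3
df    <- 15
2 * pt(-abs(t_obs), df = df)              # two-tailed p-value
pt(t_obs, df = df, lower.tail = FALSE)    # one-tailed (upper) p-value
```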
Suppose one is fitting the model
Y = \alpha + \beta x + \varepsilon,
where x is known, α and β are unknown, ε is a normally distributed random variable with mean 0 and unknown variance σ², and Y is the outcome of interest. We want to test the null hypothesis that the slope β is equal to some specified value β₀ (often taken to be 0, in which case the null hypothesis is that x and Y are uncorrelated).
Let
\begin{align} \hat\alpha, \hat\beta &= \text{least-squares estimators},\\ SE_{\hat\alpha}, SE_{\hat\beta} &= \text{the standard errors of the least-squares estimators}. \end{align}
Then
t_{score} = \frac{\hat\beta - \beta_0}{SE_{\hat\beta}} \sim \mathcal{T}_{n-2}
has a t-distribution with n − 2 degrees of freedom if the null hypothesis is true. The standard error of the slope coefficient
SE_{\hat\beta} = \frac{\sqrt{\dfrac{1}{n-2}\sum_{i=1}^n (y_i - \hat{y}_i)^2}}{\sqrt{\sum_{i=1}^n (x_i - \bar{x})^2}}
can be written in terms of the residuals. Let
\begin{align} \hat\varepsilon_i &= y_i - \hat{y}_i = y_i - (\hat\alpha + \hat\beta x_i) = \text{residuals} = \text{estimated errors},\\ SSR &= \sum_{i=1}^n \hat\varepsilon_i^{\,2} = \text{sum of squares of residuals}. \end{align}
Then t_score is given by
t_{score} = \frac{(\hat\beta - \beta_0)\sqrt{n-2}}{\sqrt{SSR / \sum_{i=1}^n (x_i - \bar{x})^2}}.
Another way to determine t_score is
t_{score} = \frac{r\sqrt{n-2}}{\sqrt{1-r^2}},
where r is the Pearson correlation coefficient.
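Both expressions can be checked against the regression output in R. A minimal sketch, with invented x and y data:

```r
# Slope t-score computed by hand, via the correlation formula, and from summary(lm()).
set.seed(2)
x <- 1:20
y <- 3 + 0.5 * x + rnorm(20)

fit  <- lm(y ~ x)
beta <- coef(fit)["x"]
n    <- length(x)
SSR  <- sum(resid(fit)^2)

t_manual <- (beta - 0) * sqrt(n - 2) / sqrt(SSR / sum((x - mean(x))^2))
r        <- cor(x, y)
t_from_r <- r * sqrt(n - 2) / sqrt(1 - r^2)

c(t_manual, t_from_r, summary(fit)$coefficients["x", "t value"])  # all three agree
```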
The t-score for the intercept can be determined from the t-score for the slope:
t_{score,intercept} = \frac{\alpha}{\beta}\,\frac{t_{score,slope}}{\sqrt{s_x^2 + \bar{x}^2}},
where s_x² is the sample variance.
Given two groups (1, 2), this test is only applicable when:
* the two sample sizes are equal;
* it can be assumed that the two distributions have the same variance.
Violations of these assumptions are discussed below.
The statistic to test whether the means are different can be calculated as follows:
t = \frac{\bar{X}_1 - \bar{X}_2}{s_p\sqrt{\frac{2}{n}}},
where
s_p = \sqrt{\frac{s_{X_1}^2 + s_{X_2}^2}{2}}.
Here sp is the pooled standard deviation for n = n₁ = n₂, and s²_{X₁} and s²_{X₂} are the unbiased estimators of the population variance. The denominator of t is the standard error of the difference between two means.
For significance testing, the degrees of freedom for this test is 2n − 2, where n is the sample size in each group.
This test is used only when it can be assumed that the two distributions have the same variance (when this assumption is violated, see below). The previous formulae are a special case of the formulae below; one recovers them when both samples are equal in size: n = n₁ = n₂.
The statistic to test whether the means are different can be calculated as follows:
t = \frac{\bar{X}_1 - \bar{X}_2}{s_p \cdot \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}},
where
s_p = \sqrt{\frac{(n_1 - 1)s_{X_1}^2 + (n_2 - 1)s_{X_2}^2}{n_1 + n_2 - 2}}
is the pooled standard deviation of the two samples: it is defined in this way so that its square is an unbiased estimator of the common variance, whether or not the population means are the same. In these formulae, nᵢ − 1 is the number of degrees of freedom for each group, and the total sample size minus two (that is, n₁ + n₂ − 2) is the total number of degrees of freedom, which is used in significance testing.
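A minimal R sketch of the pooled test with unequal group sizes (the two samples below are invented for illustration):

```r
# Pooled two-sample t statistic with unequal group sizes.
x1 <- c(20.1, 19.8, 21.2, 20.5, 19.9, 20.8, 20.3)
x2 <- c(18.9, 19.5, 19.2, 20.0, 19.1)
n1 <- length(x1); n2 <- length(x2)

sp <- sqrt(((n1 - 1) * var(x1) + (n2 - 1) * var(x2)) / (n1 + n2 - 2))
t  <- (mean(x1) - mean(x2)) / (sp * sqrt(1/n1 + 1/n2))
df <- n1 + n2 - 2

c(t = t, df = df, p = 2 * pt(-abs(t), df))
t.test(x1, x2, var.equal = TRUE)   # should match the hand computation
```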
This test, also known as Welch's t-test, is used only when the two population variances are not assumed to be equal (the two sample sizes may or may not be equal) and hence must be estimated separately. The statistic to test whether the population means are different is calculated as
t = \frac{\bar{X}_1 - \bar{X}_2}{s_{\bar\Delta}},
where
s_{\bar\Delta} = \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}.
Here sᵢ² is the unbiased estimator of the variance of each of the two samples, with nᵢ = number of participants in group i (i = 1 or 2). In this case (s_{\bar\Delta})² is not a pooled variance. For use in significance testing, the distribution of the test statistic is approximated as an ordinary Student's t-distribution with the degrees of freedom calculated as
d.f. = \frac{\left(\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}\right)^2}{\frac{(s_1^2/n_1)^2}{n_1 - 1} + \frac{(s_2^2/n_2)^2}{n_2 - 1}}.
This is known as the Welch–Satterthwaite equation. The true distribution of the test statistic actually depends (slightly) on the two unknown population variances (see Behrens–Fisher problem).
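The Welch statistic and its approximate degrees of freedom can be computed directly and compared with R's default t.test. A minimal sketch with invented data:

```r
# Welch's t statistic and Welch–Satterthwaite degrees of freedom by hand.
x1 <- c(14.2, 15.1, 13.8, 14.9, 15.4, 14.0)
x2 <- c(12.9, 16.2, 11.8, 15.5, 17.0, 13.1, 12.4, 16.8)
n1 <- length(x1); n2 <- length(x2)
v1 <- var(x1);    v2 <- var(x2)

se <- sqrt(v1/n1 + v2/n2)
t  <- (mean(x1) - mean(x2)) / se
df <- (v1/n1 + v2/n2)^2 / ((v1/n1)^2/(n1 - 1) + (v2/n2)^2/(n2 - 1))

c(t = t, df = df, p = 2 * pt(-abs(t), df))
t.test(x1, x2)   # Welch's test is R's default; should match
```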
The test[25] deals with the famous Behrens–Fisher problem, i.e., comparing the difference between the means of two normally distributed populations when the variances of the two populations are not assumed to be equal, based on two independent samples.
The test was developed as an exact test that allows for unequal sample sizes and unequal variances of two populations. The exact property still holds even with extremely small and unbalanced sample sizes (e.g. n₁ = 5, n₂ = 50).
The statistic to test whether the means are different can be calculated as follows:
Let
X = [X_1, X_2, \ldots, X_m]^T
and
Y = [Y_1, Y_2, \ldots, Y_n]^T
be i.i.d. sample vectors, with m ≥ n, drawn from N(\mu_1, \sigma_1^2) and N(\mu_2, \sigma_2^2) respectively.
Let (P^T)_{n \times n} be an n × n orthogonal matrix whose entries of the first row are all 1/\sqrt{n}, and let (Q^T)_{n \times m} be the first n rows of an m × m orthogonal matrix whose entries of the first row are all 1/\sqrt{m}.
Then
Z := (Q^T)_{n \times m}\, X/\sqrt{m} - (P^T)_{n \times n}\, Y/\sqrt{n}
is an n-dimensional normal random vector:
Z \sim N\!\left((\mu_1 - \mu_2, 0, \ldots, 0)^T,\ \left(\tfrac{\sigma_1^2}{m} + \tfrac{\sigma_2^2}{n}\right) I_n\right).
From the above distribution we see that
Z_1 = \bar{X} - \bar{Y} = \frac{1}{m}\sum_{i=1}^m X_i - \frac{1}{n}\sum_{j=1}^n Y_j,
Z_1 - (\mu_1 - \mu_2) \sim N\!\left(0,\ \tfrac{\sigma_1^2}{m} + \tfrac{\sigma_2^2}{n}\right),
\frac{\sum_{i=2}^n Z_i^2}{n-1} \sim \frac{\chi^2_{n-1}}{n-1} \times \left(\frac{\sigma_1^2}{m} + \frac{\sigma_2^2}{n}\right),
and
Z_1 - (\mu_1 - \mu_2) \perp \sum_{i=2}^n Z_i^2.
Therefore
T_e := \frac{Z_1 - (\mu_1 - \mu_2)}{\sqrt{\dfrac{\sum_{i=2}^n Z_i^2}{n-1}}} \sim t_{n-1}.
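The construction above can be sketched in R. This is our own illustrative sketch, not reference code for the published test: the helper builds the required orthogonal matrices via a QR decomposition, and the example data are invented.

```r
# Sketch of the exact Z-based statistic for unequal variances and sample sizes.
orth_t <- function(k) {                       # k x k orthogonal matrix, first ROW = 1/sqrt(k)
  A <- diag(k); A[, 1] <- 1 / sqrt(k)
  Q <- qr.Q(qr(A))
  if (Q[1, 1] < 0) Q <- -Q                    # fix sign so the ones direction is positive
  t(Q)
}

exact_t <- function(x, y, delta0 = 0) {       # tests H0: mu1 - mu2 = delta0; needs length(x) >= length(y)
  m <- length(x); n <- length(y)
  QT <- orth_t(m)[1:n, , drop = FALSE]        # first n rows of an m x m orthogonal matrix
  PT <- orth_t(n)
  Z  <- as.vector(QT %*% x / sqrt(m) - PT %*% y / sqrt(n))
  Te <- (Z[1] - delta0) / sqrt(sum(Z[-1]^2) / (n - 1))
  c(Te = Te, df = n - 1, p = 2 * pt(-abs(Te), n - 1))
}

set.seed(3)
exact_t(rnorm(50, mean = 1, sd = 2), rnorm(5, mean = 0, sd = 0.5))
```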
This test is used when the samples are dependent; that is, when there is only one sample that has been tested twice (repeated measures) or when there are two samples that have been matched or "paired". This is an example of a paired difference test. The t statistic is calculated as
t = \frac{\bar{X}_D - \mu_0}{s_D/\sqrt{n}},
where X̄_D and s_D are the average and the standard deviation of the differences between all pairs, and n is the number of pairs. The constant μ₀ is zero if we want to test whether the average of the difference is significantly different from zero. The degrees of freedom used are n − 1.
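A minimal R sketch of the paired statistic, computed from the pairwise differences (the pre/post measurements below are invented for illustration):

```r
# Paired t statistic from the differences between paired measurements.
pre  <- c(152, 148, 160, 155, 163, 149, 158, 151)
post <- c(147, 145, 151, 153, 155, 148, 150, 149)

d  <- post - pre
t  <- mean(d) / (sd(d) / sqrt(length(d)))   # mu0 = 0
df <- length(d) - 1

c(t = t, df = df, p = 2 * pt(-abs(t), df))
t.test(post, pre, paired = TRUE)   # should match
```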
Let A1 denote a set obtained by drawing a random sample of six measurements:
A_1 = \{30.02,\ 29.99,\ 30.11,\ 29.97,\ 30.01,\ 29.99\}
and let A2 denote a second set obtained similarly:
A_2 = \{29.89,\ 29.93,\ 29.72,\ 29.98,\ 30.02,\ 29.98\}
These could be, for example, the weights of screws that were manufactured by two different machines.
We will carry out tests of the null hypothesis that the means of the populations from which the two samples were taken are equal.
The difference between the two sample means, each denoted by X̄ᵢ, which appears in the numerator for all the two-sample testing approaches discussed above, is
\bar{X}_1 - \bar{X}_2 = 0.095.
The sample standard deviations for the two samples are approximately 0.05 and 0.11, respectively. For such small samples, a test of equality between the two population variances would not be very powerful. Since the sample sizes are equal, the two forms of the two-sample t-test will perform similarly in this example.
If the approach for unequal variances (discussed above) is followed, the results are
s_{\bar\Delta} = \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}} \approx 0.0485
and the degrees of freedom
d.f. ≈ 7.031.
The test statistic is approximately 1.959, which gives a two-tailed test p-value of 0.09077.
If the approach for equal variances (discussed above) is followed, the results are
sp ≈ 0.08399
and the degrees of freedom
d.f.=10.
The test statistic is approximately equal to 1.959, which gives a two-tailed p-value of 0.07857.
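The example can be reproduced in R directly from the data given above:

```r
# Reproducing the worked example.
A1 <- c(30.02, 29.99, 30.11, 29.97, 30.01, 29.99)
A2 <- c(29.89, 29.93, 29.72, 29.98, 30.02, 29.98)

mean(A1) - mean(A2)                 # 0.095
t.test(A1, A2)                      # unequal variances: t ~ 1.959, df ~ 7.03, p ~ 0.091
t.test(A1, A2, var.equal = TRUE)    # equal variances:   t ~ 1.959, df = 10,   p ~ 0.079
```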
The t-test provides an exact test for the equality of the means of two i.i.d. normal populations with unknown, but equal, variances. (Welch's t-test is a nearly exact test for the case where the data are normal but the variances may differ.) For moderately large samples and a one-tailed test, the t-test is relatively robust to moderate violations of the normality assumption.[26] In large enough samples, the t-test asymptotically approaches the z-test, and becomes robust even to large deviations from normality.
If the data are substantially non-normal and the sample size is small, the t-test can give misleading results. See Location test for Gaussian scale mixture distributions for some theory related to one particular family of non-normal distributions.
When the normality assumption does not hold, a non-parametric alternative to the t-test may have better statistical power. However, when data are non-normal with differing variances between groups, a t-test may have better type I error control than some non-parametric alternatives.[27] Furthermore, non-parametric methods, such as the Mann–Whitney U test discussed below, typically do not test for a difference of means, so they should be used carefully if a difference of means is of primary scientific interest.[19] For example, the Mann–Whitney U test will keep the type I error at the desired level alpha if both groups have the same distribution. It will also have power in detecting an alternative under which group B has the same distribution as group A but shifted by a constant (in which case there would indeed be a difference in the means of the two groups). However, there could be cases where groups A and B have different distributions but the same means (such as two distributions, one with positive skewness and the other with negative skewness, shifted so as to have the same means). In such cases, the Mann–Whitney test could have greater than alpha-level power in rejecting the null hypothesis, but attributing the interpretation of a difference in means to such a result would be incorrect.
In the presence of an outlier, the t-test is not robust. For example, for two independent samples when the data distributions are asymmetric (that is, the distributions are skewed) or the distributions have large tails, then the Wilcoxon rank-sum test (also known as the Mann–Whitney U test) can have three to four times higher power than the t-test.[26] [28] [29] The nonparametric counterpart to the paired samples t-test is the Wilcoxon signed-rank test for paired samples. For a discussion on choosing between the t-test and nonparametric alternatives, see Lumley, et al. (2002).
One-way analysis of variance (ANOVA) generalizes the two-sample t-test when the data belong to more than two groups.
When both paired observations and independent observations are present in the two sample design, assuming data are missing completely at random (MCAR), the paired observations or independent observations may be discarded in order to proceed with the standard tests above. Alternatively making use of all of the available data, assuming normality and MCAR, the generalized partially overlapping samples t-test could be used.[30]
See main article: Hotelling's T-squared distribution. A generalization of Student's t statistic, called Hotelling's t-squared statistic, allows for the testing of hypotheses on multiple (often correlated) measures within the same sample. For instance, a researcher might submit a number of subjects to a personality test consisting of multiple personality scales (e.g. the Minnesota Multiphasic Personality Inventory). Because measures of this type are usually positively correlated, it is not advisable to conduct separate univariate t-tests to test hypotheses, as these would neglect the covariance among measures and inflate the chance of falsely rejecting at least one hypothesis (Type I error). In this case a single multivariate test is preferable for hypothesis testing. One approach is Fisher's method for combining multiple tests, with alpha reduced for positive correlation among tests. Another is the use of Hotelling's T² statistic, which follows a T² distribution. However, in practice the T² distribution is rarely used, since tabulated values for T² are hard to find. Usually, T² is converted instead to an F statistic.
For a one-sample multivariate test, the hypothesis is that the mean vector μ is equal to a given vector μ₀. The test statistic is Hotelling's t²:
t^2 = n(\bar{\mathbf{x}} - \boldsymbol{\mu}_0)'\,\mathbf{S}^{-1}(\bar{\mathbf{x}} - \boldsymbol{\mu}_0),
where n is the sample size, x̄ is the vector of column means and S is the sample covariance matrix.
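A minimal R sketch of the one-sample statistic, including the usual conversion to an F statistic (the bivariate data and hypothesised mean vector are invented for illustration):

```r
# One-sample Hotelling's t-squared statistic computed directly.
set.seed(4)
X   <- matrix(rnorm(30 * 2, mean = c(0.2, 0.1)), ncol = 2, byrow = TRUE)
mu0 <- c(0, 0)

n    <- nrow(X); p <- ncol(X)
xbar <- colMeans(X)
S    <- cov(X)

t2 <- drop(n * t(xbar - mu0) %*% solve(S) %*% (xbar - mu0))
F  <- (n - p) / (p * (n - 1)) * t2          # T^2 converted to an F statistic
pf(F, p, n - p, lower.tail = FALSE)         # p-value
```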
For a two-sample multivariate test, the hypothesis is that the mean vectors of two samples are equal. The test statistic is Hotelling's two-sample t:
t^2 = \frac{n_1 n_2}{n_1 + n_2}\left(\bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2\right)'\,\mathbf{S}_{\text{pooled}}^{-1}\left(\bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2\right),
where S_pooled is the pooled sample covariance matrix.
The two-sample t-test is a special case of simple linear regression as illustrated by the following example.
A clinical trial examines 6 patients given drug or placebo. Three (3) patients get 0 units of drug (the placebo group). Three (3) patients get 1 unit of drug (the active treatment group). At the end of treatment, the researchers measure the change from baseline in the number of words that each patient can recall in a memory test.
A table of the patients' word recall and drug dose values are shown below.
| Patient | drug.dose | word.recall |
|---|---|---|
| 1 | 0 | 1 |
| 2 | 0 | 2 |
| 3 | 0 | 3 |
| 4 | 1 | 5 |
| 5 | 1 | 6 |
| 6 | 1 | 7 |
Data and code are given for the analysis using the R programming language with the `t.test` and `lm` functions for the t-test and linear regression. Here are the same (fictitious) data above generated in R.
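A minimal reconstruction of the data setup (the data frame name `d` is our own choice):

```r
# The fictitious data from the table above, assembled as a data frame.
d <- data.frame(
  patient     = 1:6,
  drug.dose   = c(0, 0, 0, 1, 1, 1),
  word.recall = c(1, 2, 3, 5, 6, 7)
)
```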
Perform the t-test. Notice that the assumption of equal variance, `var.equal=T`, is required to make the analysis exactly equivalent to simple linear regression.
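A sketch of the call, assuming the data frame `d` defined above:

```r
# Equal-variance t-test of word recall between the two dose groups.
t.test(word.recall ~ drug.dose, data = d, var.equal = T)
```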
Running the R code gives a difference in group means of 4 (placebo mean 2, treatment mean 6), a t statistic of −4.899 on 4 degrees of freedom, and a two-tailed p-value of 0.00805.
Perform a linear regression of the same data. Calculations may be performed using the R function `lm` for a linear model.
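A sketch of the call, again assuming the data frame `d` defined above:

```r
# Simple linear regression of word recall on drug dose.
summary(lm(word.recall ~ drug.dose, data = d))
```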
The linear regression provides a table of coefficients and p-values.
| Coefficient | Estimate | Std. Error | t value | P-value |
|---|---|---|---|---|
| Intercept | 2 | 0.5774 | 3.464 | 0.02572 |
| drug.dose | 4 | 0.8165 | 4.899 | 0.00805 |
The table of coefficients gives the following results: the intercept estimate is 2 and the drug.dose (slope) estimate is 4. These coefficients specify the intercept and slope of the line that joins the two group means.
Comparing the result from the linear regression to the result from the t-test: the drug.dose coefficient of 4 equals the difference between the group means, the t value of 4.899 is the same in magnitude, and the p-value of 0.00805 is identical.
This example shows that, for the special case of a simple linear regression where there is a single x-variable that has values 0 and 1, the t-test gives the same results as the linear regression. The relationship can also be shown algebraically.
Recognizing this relationship between the t-test and linear regression facilitates the use of multiple linear regression and multi-way analysis of variance. These alternatives to t-tests allow for the inclusion of additional explanatory variables that are associated with the response. Including such additional explanatory variables using regression or ANOVA reduces the otherwise unexplained variance, and commonly yields greater power to detect differences than do two-sample t-tests.
Many spreadsheet programs and statistics packages, such as QtiPlot, LibreOffice Calc, Microsoft Excel, SAS, SPSS, Stata, DAP, gretl, R, Python, PSPP, Wolfram Mathematica, MATLAB and Minitab, include implementations of Student's t-test.