In statistics, simple linear regression (SLR) is a linear regression model with a single explanatory variable.[1][2][3][4][5] That is, it concerns two-dimensional sample points with one independent variable and one dependent variable (conventionally, the x and y coordinates in a Cartesian coordinate system) and finds a linear function (a non-vertical straight line) that, as accurately as possible, predicts the dependent variable values as a function of the independent variable. The adjective simple refers to the fact that the outcome variable is related to a single predictor.
It is common to make the additional stipulation that the ordinary least squares (OLS) method should be used: the accuracy of each predicted value is measured by its squared residual (vertical distance between the point of the data set and the fitted line), and the goal is to make the sum of these squared deviations as small as possible. In this case, the slope of the fitted line is equal to the correlation between $x$ and $y$, corrected by the ratio of standard deviations of these variables. The intercept of the fitted line is such that the line passes through the center of mass $(\bar{x}, \bar{y})$ of the data points.
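As a quick numerical illustration of these two facts, here is a minimal sketch using NumPy (the data values are made up for illustration and are not from this article): the least-squares slope coincides with $r_{xy}\,s_y/s_x$, and the fitted line passes through the point of averages.

```python
import numpy as np

# Made-up illustrative data.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])

# Ordinary least-squares fit of a degree-1 polynomial: returns (slope, intercept).
slope, intercept = np.polyfit(x, y, 1)

# Slope expressed as correlation times the ratio of standard deviations.
r = np.corrcoef(x, y)[0, 1]
slope_from_r = r * y.std() / x.std()   # same ddof in both std calls, so the ratio is unaffected

print(np.isclose(slope, slope_from_r))                      # True
print(np.isclose(y.mean(), intercept + slope * x.mean()))   # True: line passes through the centroid
```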
Consider the model function

$$ y = \alpha + \beta x, $$

which describes a line with slope $\beta$ and $y$-intercept $\alpha$. In general such a relationship need not hold exactly for the observed data; suppose we observe $n$ data pairs $(x_i, y_i)$, $i = 1, \ldots, n$. We describe the underlying relationship between $y_i$ and $x_i$, involving an error term $\varepsilon_i$, by

$$ y_i = \alpha + \beta x_i + \varepsilon_i. $$

This relationship between the true (but unobserved) underlying parameters $\alpha$ and $\beta$ and the data points is called a linear regression model.

The goal is to find estimated values $\widehat\alpha$ and $\widehat\beta$ for the parameters $\alpha$ and $\beta$ which would provide the "best" fit in some sense for the data points. Here the "best" fit is understood in the least-squares sense: the fitted line minimizes the sum of squared residuals $\widehat\varepsilon_i$ (differences between actual and predicted values of the dependent variable $y$), each of which is given, for any candidate parameter values $\alpha$ and $\beta$, by

$$ \widehat\varepsilon_i = y_i - \alpha - \beta x_i. $$
In other words, $\widehat\alpha$ and $\widehat\beta$ solve the following minimization problem:

$$ (\widehat\alpha, \widehat\beta) = \operatorname{argmin}\left(Q(\alpha,\beta)\right), \qquad \text{where} \qquad Q(\alpha,\beta) = \sum_{i=1}^n \widehat\varepsilon_i^{\,2} = \sum_{i=1}^n \left(y_i - \alpha - \beta x_i\right)^2 . $$
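The minimization can also be carried out numerically. The following sketch (SciPy, made-up data, purely illustrative) minimizes $Q$ directly and recovers the same coefficients as a standard least-squares routine.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up illustrative data.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])

def Q(params):
    """Sum of squared residuals for candidate parameters (alpha, beta)."""
    alpha, beta = params
    return np.sum((y - alpha - beta * x) ** 2)

# Numerically search for the minimizer of Q.
result = minimize(Q, x0=[0.0, 0.0])
alpha_hat, beta_hat = result.x

# Compare with the closed-form least-squares fit.
slope, intercept = np.polyfit(x, y, 1)
print(np.allclose([alpha_hat, beta_hat], [intercept, slope], atol=1e-6))   # True
```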
By expanding $Q(\alpha, \beta)$ to get a quadratic expression in $\alpha$ and $\beta$, we can derive the minimizing values of the function arguments, denoted $\widehat{\alpha}$ and $\widehat{\beta}$:

$$ \widehat\alpha = \bar{y} - \widehat\beta\,\bar{x}, $$

$$ \widehat\beta = \frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^n (x_i - \bar{x})^2}. $$

Here we have introduced $\bar{x}$ and $\bar{y}$ as the averages of the $x_i$ and $y_i$, respectively.
The above equations are efficient to use if the means of the x and y variables ($\bar{x}$ and $\bar{y}$) are known. If the means are not known at the time of calculation, it may be more efficient to use the expanded versions of the $\widehat\alpha$ and $\widehat\beta$ equations, obtained by solving the normal equations

$$ \begin{bmatrix} n & \sum_i x_i \\ \sum_i x_i & \sum_i x_i^2 \end{bmatrix} \begin{bmatrix} \widehat\alpha \\ \widehat\beta \end{bmatrix} = \begin{bmatrix} \sum_i y_i \\ \sum_i x_i y_i \end{bmatrix}. $$
The above system of linear equations may be solved directly, or stand-alone equations for $\widehat\alpha$ and $\widehat\beta$ may be obtained by expanding the matrix equation above:

$$ \begin{align} \widehat\alpha &= \frac{\sum_i y_i \sum_i x_i^2 - \sum_i x_i \sum_i x_i y_i}{n \sum_i x_i^2 - \left(\sum_i x_i\right)^2}, \\[5pt] \widehat\beta &= \frac{n \sum_i x_i y_i - \sum_i x_i \sum_i y_i}{n \sum_i x_i^2 - \left(\sum_i x_i\right)^2}. \end{align} $$
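Both routes give the same answer, as the following sketch shows (NumPy, made-up data): solving the 2×2 normal equations with a linear solver reproduces the stand-alone formulas.

```python
import numpy as np

# Made-up illustrative data.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
n = len(x)

# Solve the normal equations directly.
A = np.array([[n,        x.sum()],
              [x.sum(), (x ** 2).sum()]])
b = np.array([y.sum(), (x * y).sum()])
alpha_hat, beta_hat = np.linalg.solve(A, b)

# Stand-alone equations for the same quantities.
d = n * (x ** 2).sum() - x.sum() ** 2
alpha_direct = (y.sum() * (x ** 2).sum() - x.sum() * (x * y).sum()) / d
beta_direct = (n * (x * y).sum() - x.sum() * y.sum()) / d

print(np.allclose([alpha_hat, beta_hat], [alpha_direct, beta_direct]))   # True
```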
The solution can be reformulated using elements of the covariance matrix:

$$ \widehat\beta = \frac{s_{x,y}}{s_x^2} = r_{xy}\,\frac{s_y}{s_x}, $$

where $s_{x,y}$ is the sample covariance of $x$ and $y$, $s_x^2$ is the sample variance of $x$, $s_x$ and $s_y$ are the (uncorrected) sample standard deviations, and $r_{xy}$ is the sample correlation coefficient between $x$ and $y$.

Substituting the above expressions for $\widehat{\alpha}$ and $\widehat{\beta}$ into the fitted equation $y = \widehat\alpha + \widehat\beta x$ yields

$$ \frac{y - \bar{y}}{s_y} = r_{xy}\,\frac{x - \bar{x}}{s_x}. $$

This shows that $r_{xy}$ is the slope of the regression line of the standardized data points (and that this line passes through the origin). Since $-1 \leq r_{xy} \leq 1$, the slope of this standardized regression line always lies between $-1$ and $1$.
Generalizing the $\bar{x}$ notation, we can write a horizontal bar over an expression to indicate the average value of that expression over the set of samples. For example:

$$ \overline{xy} = \frac{1}{n}\sum_{i=1}^n x_i y_i. $$
This notation allows us a concise formula for $r_{xy}$:

$$ r_{xy} = \frac{\overline{xy} - \bar{x}\bar{y}}{\sqrt{\left(\overline{x^2} - \bar{x}^2\right)\left(\overline{y^2} - \bar{y}^2\right)}}. $$
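A short check of this formula against a library routine (NumPy, made-up data):

```python
import numpy as np

# Made-up illustrative data.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])

# Bars denote sample averages, e.g. xy_bar is the mean of the products x_i * y_i.
xy_bar = np.mean(x * y)
x_bar, y_bar = x.mean(), y.mean()
x2_bar, y2_bar = np.mean(x ** 2), np.mean(y ** 2)

r_xy = (xy_bar - x_bar * y_bar) / np.sqrt((x2_bar - x_bar ** 2) * (y2_bar - y_bar ** 2))
print(np.isclose(r_xy, np.corrcoef(x, y)[0, 1]))   # True
```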
The coefficient of determination ("R squared") is equal to $r_{xy}^2$ in simple linear regression.
By multiplying all members of the summation in the numerator by $\frac{x_i - \bar{x}}{x_i - \bar{x}} = 1$ (thereby not changing it):

$$ \widehat\beta = \frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^n (x_i - \bar{x})^2} = \sum_{i=1}^n \frac{(x_i - \bar{x})^2}{\sum_{j=1}^n (x_j - \bar{x})^2}\;\frac{y_i - \bar{y}}{x_i - \bar{x}}. $$
We can see that the slope (tangent of angle) of the regression line is the weighted average of the slopes $\frac{y_i - \bar{y}}{x_i - \bar{x}}$ of the lines joining each data point to the point of averages $(\bar{x}, \bar{y})$, with weights proportional to $(x_i - \bar{x})^2$: points whose $x_i$ lies farther from $\bar{x}$ receive more weight.
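The weighted-average identity can be verified numerically, for example with the following sketch (NumPy, made-up data; the data are chosen so that no $x_i$ equals $\bar{x}$, since the pairwise slope is undefined in that case):

```python
import numpy as np

# Made-up illustrative data; no x value coincides with the sample mean (4.2).
x = np.array([1.0, 2.0, 4.0, 6.0, 8.0])
y = np.array([2.0, 2.5, 4.1, 5.8, 8.2])
x_bar, y_bar = x.mean(), y.mean()

# Weights proportional to (x_i - x_bar)^2, and slopes of the lines to the point of averages.
weights = (x - x_bar) ** 2 / np.sum((x - x_bar) ** 2)
pair_slopes = (y - y_bar) / (x - x_bar)

beta_weighted = np.sum(weights * pair_slopes)
beta_ols = np.polyfit(x, y, 1)[0]
print(np.isclose(beta_weighted, beta_ols))   # True
```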
$$ \widehat\alpha = \bar{y} - \widehat\beta\,\bar{x}. $$

Given $\widehat\beta = \tan(\theta) = \mathrm{d}y/\mathrm{d}x$, so that $\mathrm{d}y = \mathrm{d}x \times \widehat\beta$, where $\theta$ is the angle the regression line makes with the positive $x$ axis, the intercept is

$$ y_{\rm intersection} = \bar{y} - \mathrm{d}x \times \widehat\beta = \bar{y} - \mathrm{d}y, $$

where $\mathrm{d}x = \bar{x}$ is the horizontal distance from the $y$ axis to the point of averages.
In the above formulation, notice that each $x_i$ is a constant ("known upfront") value, while the $y_i$ are random variables that depend on the linear function of $x_i$ and on the random term $\varepsilon_i$. This assumption is used when deriving the standard error of the slope and showing that it is unbiased.
In this framing, when $x_i$ is not actually a random variable, what type of parameter does the empirical correlation $r_{xy}$ estimate? The issue is that for each value $i$ we have $E(x_i) = x_i$ and $\operatorname{Var}(x_i) = 0$. Under this fixed-design view, $r_{xy}$ is therefore best read as a descriptive statistic of the particular $x_i$ values and the observed $y_i$, rather than as an estimate of a population correlation between two random variables.
Description of the statistical properties of the simple linear regression estimators requires the use of a statistical model. The following is based on assuming the validity of a model under which the estimates are optimal. It is also possible to evaluate the properties under other assumptions, such as inhomogeneity, but this is discussed elsewhere.
The estimators $\widehat{\alpha}$ and $\widehat{\beta}$ are unbiased.

To formalize this assertion we must define a framework in which these estimators are random variables. We consider the residuals $\varepsilon_i$ as random variables drawn independently from some distribution with mean zero. In other words, for each value of $x$, the corresponding value of $y$ is generated as a mean response $\alpha + \beta x$ plus an additional random variable $\varepsilon$ called the error term, equal to zero on average. Under such an interpretation, the least-squares estimators $\widehat\alpha$ and $\widehat\beta$ will themselves be random variables whose means will equal the "true values" $\alpha$ and $\beta$. This is the definition of an unbiased estimator.
The formulas given in the previous section allow one to calculate the point estimates of $\alpha$ and $\beta$, that is, the coefficients of the regression line for the given set of data. However, those formulas do not tell us how precise the estimates are, i.e., how much the estimators $\widehat{\alpha}$ and $\widehat{\beta}$ vary from sample to sample for the specified sample size.
The standard method of constructing confidence intervals for linear regression coefficients relies on the normality assumption, which is justified if either:
1. the errors in the regression are normally distributed (the so-called classic regression assumption), or
2. the number of observations $n$ is sufficiently large, in which case the estimator is approximately normally distributed.
The latter case is justified by the central limit theorem.
Under the first assumption above, that of the normality of the error terms, the estimator of the slope coefficient will itself be normally distributed with mean $\beta$ and variance $\sigma^2 / \sum (x_i - \bar{x})^2$, where $\sigma^2$ is the variance of the error terms (see Proofs involving ordinary least squares). At the same time the sum of squared residuals is distributed proportionally to $\chi^2$ with $n - 2$ degrees of freedom, and independently from $\widehat{\beta}$. This allows us to construct a $t$-value

$$ t = \frac{\widehat\beta - \beta}{s_{\widehat\beta}} \sim t_{n-2}, $$

where

$$ s_{\widehat\beta} = \sqrt{\frac{\frac{1}{n-2}\sum_{i=1}^n \widehat\varepsilon_i^{\,2}}{\sum_{i=1}^n (x_i - \bar{x})^2}} $$

is the unbiased standard error estimator of the estimator $\widehat{\beta}$.

This $t$-value has a Student's $t$-distribution with $n - 2$ degrees of freedom. Using it we can construct a confidence interval for $\beta$:
$$ \beta \in \left[\,\widehat\beta - s_{\widehat\beta}\, t^*_{n-2},\ \widehat\beta + s_{\widehat\beta}\, t^*_{n-2}\,\right], $$

at confidence level $(1 - \gamma)$, where $t^*_{n-2}$ is the $\left(1 - \frac{\gamma}{2}\right)$-th quantile of the $t_{n-2}$ distribution. For example, if $\gamma = 0.05$ then the confidence level is 95%.
Similarly, the confidence interval for the intercept coefficient $\alpha$ is given by

$$ \alpha \in \left[\,\widehat\alpha - s_{\widehat\alpha}\, t^*_{n-2},\ \widehat\alpha + s_{\widehat\alpha}\, t^*_{n-2}\,\right], $$

at confidence level (1 − γ), where

$$ s_{\widehat\alpha} = s_{\widehat\beta}\sqrt{\frac{1}{n}\sum_{i=1}^n x_i^2} = \sqrt{\frac{1}{n(n-2)}\left(\sum_{i=1}^n \widehat{\varepsilon}_i^{\,2}\right)\frac{\sum_{i=1}^n x_i^2}{\sum_{i=1}^n (x_i - \bar{x})^2}}. $$
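A sketch of these interval calculations (NumPy/SciPy, made-up data; the variable names are purely illustrative):

```python
import numpy as np
from scipy import stats

# Made-up illustrative data.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
n = len(x)

# Point estimates and residuals.
beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()
resid = y - alpha_hat - beta_hat * x

# Standard errors from the formulas above.
s_beta = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum((x - x.mean()) ** 2))
s_alpha = s_beta * np.sqrt(np.mean(x ** 2))

# Confidence intervals at level 1 - gamma, using the Student t quantile with n - 2 d.o.f.
gamma = 0.05
t_star = stats.t.ppf(1 - gamma / 2, df=n - 2)
print("beta: ", (beta_hat - t_star * s_beta, beta_hat + t_star * s_beta))
print("alpha:", (alpha_hat - t_star * s_alpha, alpha_hat + t_star * s_alpha))
```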
The confidence intervals for $\alpha$ and $\beta$ give us the general idea where these regression coefficients are most likely to be. For example, in the Okun's law regression example the point estimates are

$$ \widehat{\alpha} = 0.859, \qquad \widehat{\beta} = -1.817. $$

The 95% confidence intervals for these estimates are

$$ \alpha \in \left[0.76,\ 0.96\right], \qquad \beta \in \left[-2.06,\ -1.58\right]. $$
In order to represent this information graphically, in the form of the confidence bands around the regression line, one has to proceed carefully and account for the joint distribution of the estimators. It can be shown[10] that at confidence level (1 − γ) the confidence band has hyperbolic form given by the equation
$$ (\alpha + \beta\xi) \in \left[\,\widehat{\alpha} + \widehat{\beta}\xi \pm t^*_{n-2}\sqrt{\left(\frac{1}{n-2}\sum \widehat{\varepsilon}_i^{\,2}\right)\cdot\left(\frac{1}{n} + \frac{(\xi - \bar{x})^2}{\sum (x_i - \bar{x})^2}\right)}\,\right]. $$
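A sketch of how the band can be evaluated at a grid of points $\xi$ (NumPy/SciPy, made-up data):

```python
import numpy as np
from scipy import stats

# Made-up illustrative data.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
n = len(x)

beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()
resid = y - alpha_hat - beta_hat * x
t_star = stats.t.ppf(0.975, df=n - 2)   # 95% band (gamma = 0.05)

def half_width(xi):
    """Half-width of the confidence band for the mean response at xi."""
    return t_star * np.sqrt(np.sum(resid ** 2) / (n - 2)
                            * (1.0 / n + (xi - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)))

xi = np.linspace(x.min(), x.max(), 5)
lower = alpha_hat + beta_hat * xi - half_width(xi)
upper = alpha_hat + beta_hat * xi + half_width(xi)
print(np.column_stack([xi, lower, upper]))
```

The band is narrowest at $\xi = \bar{x}$ and widens hyperbolically as $\xi$ moves away from the centre of the data.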
When the model assumes that the intercept is fixed and equal to 0 ($\alpha = 0$), the standard error of the slope becomes

$$ s_{\widehat{\beta}} = \sqrt{\frac{\frac{1}{n-1}\sum_{i=1}^n \widehat{\varepsilon}_i^{\,2}}{\sum_{i=1}^n x_i^2}}, $$

with

$$ \widehat{\varepsilon}_i = y_i - \widehat{y}_i. $$
The alternative second assumption states that when the number of points in the dataset is "large enough", the law of large numbers and the central limit theorem become applicable, and then the distribution of the estimators is approximately normal. Under this assumption all formulas derived in the previous section remain valid, with the only exception that the quantile $t^*_{n-2}$ of Student's t distribution is replaced with the quantile $q^*$ of the standard normal distribution. Occasionally the fraction $\frac{1}{n-2}$ is replaced with $\frac{1}{n}$. When $n$ is large such a change does not alter the results appreciably.
This data set gives average masses for women as a function of their height in a sample of American women of age 30–39. Although the OLS article argues that it would be more appropriate to run a quadratic regression for this data, the simple linear regression model is applied here instead.
Height (m), $x_i$ | 1.47 | 1.50 | 1.52 | 1.55 | 1.57 | 1.60 | 1.63 | 1.65 | 1.68 | 1.70 | 1.73 | 1.75 | 1.78 | 1.80 | 1.83
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Mass (kg), $y_i$ | 52.21 | 53.12 | 54.48 | 55.84 | 57.20 | 58.57 | 59.93 | 61.29 | 63.11 | 64.47 | 66.28 | 68.10 | 69.92 | 72.19 | 74.46
$i$ | $x_i$ | $y_i$ | $x_i^2$ | $x_iy_i$ | $y_i^2$
---|---|---|---|---|---
1 | 1.47 | 52.21 | 2.1609 | 76.7487 | 2725.8841
2 | 1.50 | 53.12 | 2.2500 | 79.6800 | 2821.7344
3 | 1.52 | 54.48 | 2.3104 | 82.8096 | 2968.0704
4 | 1.55 | 55.84 | 2.4025 | 86.5520 | 3118.1056
5 | 1.57 | 57.20 | 2.4649 | 89.8040 | 3271.8400
6 | 1.60 | 58.57 | 2.5600 | 93.7120 | 3430.4449
7 | 1.63 | 59.93 | 2.6569 | 97.6859 | 3591.6049
8 | 1.65 | 61.29 | 2.7225 | 101.1285 | 3756.4641
9 | 1.68 | 63.11 | 2.8224 | 106.0248 | 3982.8721
10 | 1.70 | 64.47 | 2.8900 | 109.5990 | 4156.3809
11 | 1.73 | 66.28 | 2.9929 | 114.6644 | 4393.0384
12 | 1.75 | 68.10 | 3.0625 | 119.1750 | 4637.6100
13 | 1.78 | 69.92 | 3.1684 | 124.4576 | 4888.8064
14 | 1.80 | 72.19 | 3.2400 | 129.9420 | 5211.3961
15 | 1.83 | 74.46 | 3.3489 | 136.2618 | 5544.2916
$\Sigma$ | 24.76 | 931.17 | 41.0532 | 1548.2453 | 58498.5439
$$ \begin{align}
S_x &= \sum x_i = 24.76, & S_y &= \sum y_i = 931.17, \\[5pt]
S_{xx} &= \sum x_i^2 = 41.0532, & S_{yy} &= \sum y_i^2 = 58498.5439, \\[5pt]
S_{xy} &= \sum x_i y_i = 1548.2453. &&
\end{align} $$
These quantities are used to calculate the estimates of the regression coefficients and their standard errors.
$$ \begin{align}
\widehat\beta &= \frac{n S_{xy} - S_x S_y}{n S_{xx} - S_x^2} = 61.272 \\[8pt]
\widehat\alpha &= \frac{1}{n} S_y - \widehat{\beta}\,\frac{1}{n} S_x = -39.062 \\[8pt]
s_\varepsilon^2 &= \frac{1}{n(n-2)}\left[ n S_{yy} - S_y^2 - \widehat\beta^2 \left(n S_{xx} - S_x^2\right) \right] = 0.5762 \\[8pt]
s_{\widehat{\beta}}^2 &= \frac{n\, s_\varepsilon^2}{n S_{xx} - S_x^2} = 3.1539 \\[8pt]
s_{\widehat{\alpha}}^2 &= s_{\widehat{\beta}}^2\,\frac{1}{n} S_{xx} = 8.63185
\end{align} $$
The 0.975 quantile of Student's t-distribution with 13 degrees of freedom is $t^*_{13} = 2.1604$, and thus the 95% confidence intervals for $\alpha$ and $\beta$ are
$$ \begin{align}
&\alpha \in \left[\,\widehat{\alpha} \mp t^*_{13}\, s_{\widehat\alpha}\,\right] = [{-45.4},\ {-32.7}] \\[5pt]
&\beta \in \left[\,\widehat{\beta} \mp t^*_{13}\, s_{\widehat\beta}\,\right] = [57.4,\ 65.1]
\end{align} $$
The product-moment correlation coefficient might also be calculated:
$$ \widehat{r} = \frac{n S_{xy} - S_x S_y}{\sqrt{\left(n S_{xx} - S_x^2\right)\left(n S_{yy} - S_y^2\right)}} = 0.9946. $$
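The example can be reproduced with a few lines of NumPy (a sketch, using the height and mass values from the table above):

```python
import numpy as np

# Height (m) and mass (kg) data from the table above.
x = np.array([1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
              1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83])
y = np.array([52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
              63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46])
n = len(x)

Sx, Sy = x.sum(), y.sum()
Sxx, Syy, Sxy = (x ** 2).sum(), (y ** 2).sum(), (x * y).sum()

beta_hat = (n * Sxy - Sx * Sy) / (n * Sxx - Sx ** 2)    # approximately 61.27
alpha_hat = Sy / n - beta_hat * Sx / n                  # approximately -39.06
r_hat = (n * Sxy - Sx * Sy) / np.sqrt((n * Sxx - Sx ** 2) * (n * Syy - Sy ** 2))
print(beta_hat, alpha_hat, r_hat)
```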
In SLR, there is an underlying assumption that only the dependent variable contains measurement error; if the explanatory variable is also measured with error, then simple regression is not appropriate for estimating the underlying relationship because it will be biased due to regression dilution.
Other estimation methods that can be used in place of ordinary least squares include least absolute deviations (minimizing the sum of absolute values of residuals) and the Theil–Sen estimator (which chooses a line whose slope is the median of the slopes determined by pairs of sample points).
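As an illustration of the Theil–Sen idea, the following sketch (NumPy/SciPy, made-up data with a deliberate outlier) computes the median of the pairwise slopes by hand and compares it with SciPy's `theilslopes` and with the OLS slope:

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Made-up illustrative data; the last point is a deliberate outlier.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 20.0])

# Theil-Sen slope: the median of the slopes over all pairs of sample points.
pair_slopes = [(y[j] - y[i]) / (x[j] - x[i]) for i, j in combinations(range(len(x)), 2)]
slope_ts = np.median(pair_slopes)

slope_scipy = stats.theilslopes(y, x)[0]   # library implementation of the same estimator
slope_ols = np.polyfit(x, y, 1)[0]         # ordinary least squares, pulled up by the outlier
print(slope_ts, slope_scipy, slope_ols)
```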
Deming regression (total least squares) also finds a line that fits a set of two-dimensional sample points, but (unlike ordinary least squares, least absolute deviations, and median slope regression) it is not really an instance of simple linear regression, because it does not separate the coordinates into one dependent and one independent variable and could potentially return a vertical line as its fit. Because squared residuals give large deviations disproportionate weight, ordinary least squares can lead to a model that attempts to fit the outliers more than the data.
Sometimes it is appropriate to force the regression line to pass through the origin, because $x$ and $y$ are assumed to be proportional. For the model without the intercept term, $y = \beta x$, the OLS estimator for $\beta$ simplifies to

$$ \widehat{\beta} = \frac{\sum_{i=1}^n x_i y_i}{\sum_{i=1}^n x_i^2} = \frac{\overline{xy}}{\overline{x^2}}. $$
Substituting $(x - h,\ y - k)$ in place of $(x,\ y)$ gives the regression through $(h, k)$:

$$ \begin{align}
\widehat\beta &= \frac{\sum_{i=1}^n (x_i - h)(y_i - k)}{\sum_{i=1}^n (x_i - h)^2} = \frac{\overline{(x - h)(y - k)}}{\overline{(x - h)^2}} \\[6pt]
&= \frac{\operatorname{Cov}(x, y) + (\bar{x} - h)(\bar{y} - k)}{\operatorname{Var}(x) + (\bar{x} - h)^2},
\end{align} $$

where Cov and Var refer to the covariance and variance of the sample data (uncorrected for bias). The last form above demonstrates how moving the line away from the center of mass of the data points affects the slope.
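A small sketch of these no-intercept and fixed-point variants (NumPy, made-up data), including a check of the covariance/variance form of the slope:

```python
import numpy as np

# Made-up illustrative data.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])

# Slope of the no-intercept model y = beta * x.
beta_origin = np.sum(x * y) / np.sum(x ** 2)

# Slope of the line forced through an arbitrary point (h, k).
h, k = 0.5, 1.0
beta_hk = np.sum((x - h) * (y - k)) / np.sum((x - h) ** 2)

# Equivalent form using the (uncorrected) sample covariance and variance.
cov_xy = np.mean(x * y) - x.mean() * y.mean()
var_x = np.mean(x ** 2) - x.mean() ** 2
beta_hk_alt = (cov_xy + (x.mean() - h) * (y.mean() - k)) / (var_x + (x.mean() - h) ** 2)

print(beta_origin)
print(np.isclose(beta_hk, beta_hk_alt))   # True
```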