Linear regression explained
In statistics, linear regression is a statistical model which estimates the linear relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables). The case of one explanatory variable is called simple linear regression; for more than one, the process is called multiple linear regression.[1] This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.[2] If the explanatory variables are measured with error then errors-in-variables models are required, also known as measurement error models.
In linear regression, the relationships are modeled using linear predictor functions whose unknown model parameters are estimated from the data. Such models are called linear models.[3] Most commonly, the conditional mean of the response given the values of the explanatory variables (or predictors) is assumed to be an affine function of those values; less commonly, the conditional median or some other quantile is used. Like all forms of regression analysis, linear regression focuses on the conditional probability distribution of the response given the values of the predictors, rather than on the joint probability distribution of all of these variables, which is the domain of multivariate analysis.
Linear regression was the first type of regression analysis to be studied rigorously and to be used extensively in practical applications. This is because models which depend linearly on their unknown parameters are easier to fit than models which are related non-linearly to their parameters, and because the statistical properties of the resulting estimators are easier to determine.
Linear regression has many practical uses. Most applications fall into one of the following two broad categories:
- If the goal is error (i.e., variance) reduction in prediction or forecasting, linear regression can be used to fit a predictive model to an observed data set of values of the response and explanatory variables. After developing such a model, if additional values of the explanatory variables are collected without an accompanying response value, the fitted model can be used to make a prediction of the response.
- If the goal is to explain variation in the response variable that can be attributed to variation in the explanatory variables, linear regression analysis can be applied to quantify the strength of the relationship between the response and the explanatory variables, and in particular to determine whether some explanatory variables may have no linear relationship with the response at all, or to identify which subsets of explanatory variables may contain redundant information about the response.
Linear regression models are often fitted using the least squares approach, but they may also be fitted in other ways, such as by minimizing the "lack of fit" in some other norm (as with least absolute deviations regression), or by minimizing a penalized version of the least squares cost function as in ridge regression (L2-norm penalty) and lasso (L1-norm penalty). Using the mean squared error (MSE) as the cost function on a dataset that has many large outliers can result in a model that fits the outliers more than the true data, because MSE assigns greater importance to large errors. Cost functions that are robust to outliers should therefore be used when the dataset has many large outliers. Conversely, the least squares approach can be used to fit models that are not linear models. Thus, although the terms "least squares" and "linear model" are closely linked, they are not synonymous.
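As a minimal sketch of the contrast between these cost functions (assuming NumPy; the data are synthetic and invented purely for illustration), ordinary least squares can be solved directly, while ridge regression's L2 penalty has a simple closed-form solution; the lasso's L1 penalty has no closed form and is typically fitted iteratively.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data, invented purely for illustration: 100 observations, 3 regressors.
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Ordinary least squares: minimize ||y - X b||_2^2.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Ridge regression: minimize ||y - X b||_2^2 + lam * ||b||_2^2,
# which has the closed-form solution (X'X + lam*I)^{-1} X'y.
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# The L1-penalized lasso has no closed form and is usually fitted by an
# iterative method such as coordinate descent; it is omitted here.
print(beta_ols)
print(beta_ridge)  # shrunk toward zero relative to the OLS coefficients
```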
Formulation
Given a data set \{y_i,\, x_{i1}, \ldots, x_{ip}\}_{i=1}^{n} of n statistical units, a linear regression model assumes that the relationship between the dependent variable y and the vector of regressors x is linear. This relationship is modeled through a disturbance term or error variable ε, an unobserved random variable that adds "noise" to the linear relationship between the dependent variable and regressors. Thus the model takes the form
y_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip} + \varepsilon_i = x_i^{\mathsf T}\boldsymbol\beta + \varepsilon_i, \qquad i = 1, \ldots, n,
where ^{\mathsf T} denotes the transpose, so that x_i^{\mathsf T}\boldsymbol\beta is the inner product between the vectors x_i and β.
Often these n equations are stacked together and written in matrix notation as
y=X\boldsymbol\beta+\boldsymbol\varepsilon,
where
y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}, \quad
X = \begin{bmatrix} x_1^{\mathsf T} \\ x_2^{\mathsf T} \\ \vdots \\ x_n^{\mathsf T} \end{bmatrix}
  = \begin{bmatrix} 1 & x_{11} & \cdots & x_{1p} \\ 1 & x_{21} & \cdots & x_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n1} & \cdots & x_{np} \end{bmatrix}, \quad
\boldsymbol\beta = \begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \vdots \\ \beta_p \end{bmatrix}, \quad
\boldsymbol\varepsilon = \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{bmatrix}.
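To make the stacked notation concrete, here is a minimal sketch (assuming NumPy; all values are invented for illustration) that builds the design matrix X with a leading column of ones and generates y = Xβ + ε:

```python
import numpy as np

# Illustrative values (invented): n = 4 statistical units, p = 2 regressors each.
x = np.array([[0.5, 1.2],
              [1.0, 0.7],
              [1.5, 2.3],
              [2.0, 1.9]])
n = x.shape[0]

# Design matrix X: a leading column of ones (the constant regressor x_i0 = 1)
# followed by the observed regressor values, one row per statistical unit.
X = np.column_stack([np.ones(n), x])

beta = np.array([0.3, 1.0, -0.5])            # (p + 1)-dimensional parameter vector
eps = np.array([0.01, -0.02, 0.00, 0.03])    # disturbance (error) term
y = X @ beta + eps                           # stacked form y = X beta + epsilon
```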
Notation and terminology
- y is a vector of observed values y_i (i = 1, \ldots, n) of the variable called the regressand, endogenous variable, response variable, target variable, measured variable, criterion variable, or dependent variable. This variable is also sometimes known as the predicted variable, but this should not be confused with predicted values, which are denoted \hat{y}. The decision as to which variable in a data set is modeled as the dependent variable and which are modeled as the independent variables may be based on a presumption that the value of one of the variables is caused by, or directly influenced by, the other variables. Alternatively, there may be an operational reason to model one of the variables in terms of the others, in which case there need be no presumption of causality.
- X may be seen as a matrix of row-vectors x_i or of n-dimensional column-vectors, which are known as regressors, exogenous variables, explanatory variables, covariates, input variables, predictor variables, or independent variables (not to be confused with the concept of independent random variables). The matrix X is sometimes called the design matrix.
- Usually a constant is included as one of the regressors. In particular, x_{i0} = 1 for i = 1, \ldots, n. The corresponding element of β is called the intercept. Many statistical inference procedures for linear models require an intercept to be present, so it is often included even if theoretical considerations suggest that its value should be zero.
- Sometimes one of the regressors can be a non-linear function of another regressor or of the data values, as in polynomial regression and segmented regression. The model remains linear as long as it is linear in the parameter vector β.
- The values x_{ij} may be viewed as either observed values of random variables X_j or as fixed values chosen prior to observing the dependent variable. Both interpretations may be appropriate in different cases, and they generally lead to the same estimation procedures; however, different approaches to asymptotic analysis are used in these two situations.
- \boldsymbol\beta is a (p + 1)-dimensional parameter vector, where \beta_0 is the intercept term (if one is included in the model; otherwise \boldsymbol\beta is p-dimensional). Its elements are known as effects or regression coefficients (although the latter term is sometimes reserved for the estimated effects). In simple linear regression, p = 1, and the coefficient is known as the regression slope. Statistical estimation and inference in linear regression focuses on β. The elements of this parameter vector are interpreted as the partial derivatives of the dependent variable with respect to the various independent variables.
- \boldsymbol\varepsilon is a vector of values \varepsilon_i. This part of the model is called the error term, disturbance term, or sometimes noise (in contrast with the "signal" provided by the rest of the model). This variable captures all other factors which influence the dependent variable y other than the regressors x. The relationship between the error term and the regressors, for example their correlation, is a crucial consideration in formulating a linear regression model, as it will determine the appropriate estimation method.
Fitting a linear model to a given data set usually requires estimating the regression coefficients \boldsymbol\beta such that the error term
\boldsymbol\varepsilon = y - X\boldsymbol\beta
is minimized. For example, it is common to use the sum of squared errors
\|\boldsymbol\varepsilon\|_2^2
as a measure of \boldsymbol\varepsilon for minimization.
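A minimal sketch of this least-squares criterion (assuming NumPy; the data and the helper name fit_least_squares are illustrative, not part of the text above):

```python
import numpy as np

def fit_least_squares(X, y):
    """Return the coefficient vector minimizing the sum of squared errors ||y - X beta||_2^2."""
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta_hat

# Tiny invented data set: a column of ones for the intercept plus one regressor.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([0.1, 1.9, 4.2, 5.8])

beta_hat = fit_least_squares(X, y)
residuals = y - X @ beta_hat   # the estimated error term epsilon
sse = residuals @ residuals    # the squared L2 norm being minimized
```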
Example
Consider a situation where a small ball is being tossed up in the air and then we measure its heights of ascent h_i at various moments in time t_i. Physics tells us that, ignoring the drag, the relationship can be modeled as
h_i = \beta_1 t_i + \beta_2 t_i^2 + \varepsilon_i,
where β_1 determines the initial velocity of the ball, β_2 is proportional to the standard gravity, and ε_i is due to measurement errors. Linear regression can be used to estimate the values of β_1 and β_2 from the measured data. This model is non-linear in the time variable, but it is linear in the parameters β_1 and β_2; if we take regressors x_i = (x_{i1}, x_{i2}) = (t_i, t_i^2), the model takes on the standard form
h_i = x_i^{\mathsf T}\boldsymbol\beta + \varepsilon_i.
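A brief sketch of how such a fit could be carried out numerically (assuming NumPy; the simulated measurements and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated measurements (invented): true initial velocity 10 m/s, g = 9.81 m/s^2,
# plus small measurement noise standing in for epsilon_i.
t = np.linspace(0.1, 1.0, 10)
h = 10.0 * t - 0.5 * 9.81 * t**2 + rng.normal(scale=0.05, size=t.size)

# Regressors x_i = (x_i1, x_i2) = (t_i, t_i^2); this model has no intercept term.
X = np.column_stack([t, t**2])

# Least-squares estimates: beta_1 ~ initial velocity, beta_2 ~ -g/2.
beta_hat, *_ = np.linalg.lstsq(X, h, rcond=None)
v0_hat, neg_half_g_hat = beta_hat
```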
Assumptions
Standard linear regression models with standard estimation techniques make a number of assumptions about the predictor variables, the response variables and their relationship. Numerous extensions have been developed that allow each of these assumptions to be relaxed (i.e. reduced to a weaker form), and in some cases eliminated entirely. Generally these extensions make the estimation procedure more complex and time-consuming, and may also require more data in order to produce an equally precise model.
Notes and References
- Freedman, David A. (2009). Statistical Models: Theory and Practice. Cambridge University Press. p. 26. "A simple regression equation has on the right hand side an intercept and an explanatory variable with a slope coefficient. A multiple regression equation has two or more explanatory variables on the right hand side, each with its own slope coefficient."
- .
- Seal, Hilary L. (1967). "The historical development of the Gauss linear model". Biometrika. 54 (1/2): 1–24. doi:10.1093/biomet/54.1-2.1. JSTOR 2333849.