In statistics, polynomial regression is a form of regression analysis in which the relationship between the independent variable x and the dependent variable y is modeled as an nth degree polynomial in x. Polynomial regression fits a nonlinear relationship between the value of x and the corresponding conditional mean of y, denoted E(y | x). Although polynomial regression fits a nonlinear model to the data, as a statistical estimation problem it is linear, in the sense that the regression function E(y | x) is linear in the unknown parameters that are estimated from the data. For this reason, polynomial regression is considered to be a special case of multiple linear regression.
The explanatory (independent) variables resulting from the polynomial expansion of the "baseline" variables are known as higher-degree terms. Such variables are also used in classification settings.[1]
Polynomial regression models are usually fit using the method of least squares. The least-squares method minimizes the variance of the unbiased estimators of the coefficients, under the conditions of the Gauss–Markov theorem. The least-squares method was published in 1805 by Legendre and in 1809 by Gauss. The first design of an experiment for polynomial regression appeared in an 1815 paper of Gergonne.[2][3] In the twentieth century, polynomial regression played an important role in the development of regression analysis, with a greater emphasis on issues of design and inference.[4] More recently, the use of polynomial models has been complemented by other methods, with non-polynomial models having advantages for some classes of problems.
The goal of regression analysis is to model the expected value of a dependent variable y in terms of the value of an independent variable (or vector of independent variables) x. In simple linear regression, the model
y = \beta_0 + \beta_1 x + \varepsilon,
is used, where ε is an unobserved random error with mean zero conditioned on a scalar variable x. In this model, for each unit increase in the value of x, the conditional expectation of y increases by β_1 units.
In many settings, such a linear relationship may not hold. For example, if we are modeling the yield of a chemical synthesis in terms of the temperature at which the synthesis takes place, we may find that the yield improves by increasing amounts for each unit increase in temperature. In this case, we might propose a quadratic model of the form
y = \beta_0 + \beta_1 x + \beta_2 x^2 + \varepsilon.
In this model, when the temperature is increased from x to x + 1 units, the expected yield changes by
\beta_1 + \beta_2 (2x + 1)
(this can be seen by replacing x with x + 1 in the model and subtracting). For infinitesimal changes in x, the effect on y is given by the derivative with respect to x, namely
\beta_1 + 2\beta_2 x.
The fact that the change in yield depends on x is what makes the relationship between x and y nonlinear, even though the model is linear in the parameters to be estimated.
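As a worked check of these two expressions, expanding the quadratic model at x + 1 and subtracting it at x gives the finite change, while differentiating gives the infinitesimal rate:
\begin{aligned}
\bigl[\beta_0 + \beta_1 (x+1) + \beta_2 (x+1)^2\bigr] - \bigl[\beta_0 + \beta_1 x + \beta_2 x^2\bigr] &= \beta_1 + \beta_2 (2x + 1), \\
\frac{d}{dx}\bigl(\beta_0 + \beta_1 x + \beta_2 x^2\bigr) &= \beta_1 + 2\beta_2 x.
\end{aligned}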
In general, we can model the expected value of y as an nth degree polynomial, yielding the general polynomial regression model
y = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 x^3 + \cdots + \beta_n x^n + \varepsilon.
Conveniently, these models are all linear from the point of view of estimation, since the regression function is linear in terms of the unknown parameters β_0, β_1, .... Therefore, for least squares analysis, the computational and inferential problems of polynomial regression can be completely addressed using the techniques of multiple regression. This is done by treating x, x^2, ... as distinct independent variables in a multiple regression model.
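For example, a quadratic fit can be computed with ordinary multiple-regression tooling once the expanded columns have been built. The following sketch assumes NumPy and scikit-learn are available; the data and variable names are illustrative only.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Illustrative data: a noisy quadratic relationship.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=50)
y = 2.0 + 1.5 * x + 0.3 * x**2 + rng.normal(0.0, 1.0, size=50)

# Expand x into the monomial columns x, x^2 (the intercept is handled by the model).
X_poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(x.reshape(-1, 1))

# Ordinary multiple linear regression on the expanded design matrix.
model = LinearRegression().fit(X_poly, y)
print(model.intercept_, model.coef_)  # estimates of beta_0 and (beta_1, beta_2)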
The polynomial regression model
y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \cdots + \beta_m x_i^m + \varepsilon_i \qquad (i = 1, 2, \dots, n)
can be expressed in matrix form in terms of a design matrix X, a response vector \vec{y}, a parameter vector \vec{\beta}, and a vector \vec{\varepsilon} of random errors. The i-th row of X and \vec{y} contains the x and y values for the i-th data sample. Then the model can be written as a system of linear equations:
\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_n \end{bmatrix}
=
\begin{bmatrix}
1 & x_1 & x_1^2 & \dots & x_1^m \\
1 & x_2 & x_2^2 & \dots & x_2^m \\
1 & x_3 & x_3^2 & \dots & x_3^m \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_n & x_n^2 & \dots & x_n^m
\end{bmatrix}
\begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \vdots \\ \beta_m \end{bmatrix}
+
\begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \vdots \\ \varepsilon_n \end{bmatrix},
which, in pure matrix notation, is written as
\vec{y} = X \vec{\beta} + \vec{\varepsilon}.
The vector of estimated polynomial regression coefficients (using ordinary least squares estimation) is
\widehat{\vec{\beta}} = (X^{\mathsf{T}} X)^{-1} X^{\mathsf{T}} \vec{y},
assuming m < n, which is required for the matrix to be invertible; then, since X is a Vandermonde matrix, the invertibility condition is guaranteed to hold if all the x_i values are distinct. This is the unique least-squares solution.
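A minimal NumPy sketch of this estimator, assuming the data vectors x and y are given (illustrative only; the normal equations are solved directly rather than forming the explicit inverse, which is numerically preferable but algebraically equivalent):

import numpy as np

def fit_polynomial_ols(x, y, m):
    """Least-squares estimate of the coefficients of an m-th degree polynomial."""
    # Vandermonde design matrix with columns 1, x, x^2, ..., x^m.
    X = np.vander(np.asarray(x, dtype=float), N=m + 1, increasing=True)
    # Solve X^T X beta = X^T y, equivalent to beta = (X^T X)^{-1} X^T y.
    return np.linalg.solve(X.T @ X, X.T @ np.asarray(y, dtype=float))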
The matrix equations above describe the behavior of polynomial regression in general. However, to implement polynomial regression for a given set of (x, y) data pairs, more detail is useful. The matrix equations below for the polynomial coefficients are expanded from regression theory without derivation and are easily implemented.[5][6][7]
\begin{bmatrix}
\sum_{i=1}^{n} x_i^0 & \sum_{i=1}^{n} x_i^1 & \sum_{i=1}^{n} x_i^2 & \cdots & \sum_{i=1}^{n} x_i^m \\
\sum_{i=1}^{n} x_i^1 & \sum_{i=1}^{n} x_i^2 & \sum_{i=1}^{n} x_i^3 & \cdots & \sum_{i=1}^{n} x_i^{m+1} \\
\sum_{i=1}^{n} x_i^2 & \sum_{i=1}^{n} x_i^3 & \sum_{i=1}^{n} x_i^4 & \cdots & \sum_{i=1}^{n} x_i^{m+2} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\sum_{i=1}^{n} x_i^m & \sum_{i=1}^{n} x_i^{m+1} & \sum_{i=1}^{n} x_i^{m+2} & \cdots & \sum_{i=1}^{n} x_i^{2m}
\end{bmatrix}
\begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \vdots \\ \beta_m \end{bmatrix}
=
\begin{bmatrix}
\sum_{i=1}^{n} y_i x_i^0 \\
\sum_{i=1}^{n} y_i x_i^1 \\
\sum_{i=1}^{n} y_i x_i^2 \\
\vdots \\
\sum_{i=1}^{n} y_i x_i^m
\end{bmatrix}
After solving the above system of linear equations for \beta_0 through \beta_m, the regression polynomial can be constructed as follows:
\widehat{y} = \beta_0 x^0 + \beta_1 x^1 + \beta_2 x^2 + \cdots + \beta_m x^m,
where:
n = number of x_i, y_i data pairs,
m = order of the polynomial to be used for the regression,
\beta_0, \dots, \beta_m = polynomial coefficient for each corresponding power x^0, \dots, x^m,
\widehat{y} = estimated value of the y variable based on the polynomial regression calculation.
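As a sketch of how directly this system translates into code (illustrative, assuming NumPy; the function names are hypothetical), the matrix of power sums and the right-hand side can be built term by term and then solved:

import numpy as np

def polyfit_normal_equations(x, y, m):
    """Solve the power-sum normal equations for the coefficients beta_0, ..., beta_m."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Left-hand matrix: entry (j, k) is the sum over i of x_i^(j + k).
    A = np.array([[np.sum(x ** (j + k)) for k in range(m + 1)] for j in range(m + 1)])
    # Right-hand side: entry j is the sum over i of y_i * x_i^j.
    b = np.array([np.sum(y * x ** j) for j in range(m + 1)])
    return np.linalg.solve(A, b)

def predict(beta, x):
    """Evaluate the fitted polynomial y_hat = sum_j beta_j * x^j."""
    x = np.asarray(x, dtype=float)
    return sum(b_j * x ** j for j, b_j in enumerate(beta))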
Although polynomial regression is technically a special case of multiple linear regression, the interpretation of a fitted polynomial regression model requires a somewhat different perspective. It is often difficult to interpret the individual coefficients in a polynomial regression fit, since the underlying monomials can be highly correlated. For example, x and x^2 have correlation around 0.97 when x is uniformly distributed on the interval (0, 1). Although the correlation can be reduced by using orthogonal polynomials, it is generally more informative to consider the fitted regression function as a whole. Point-wise or simultaneous confidence bands can then be used to provide a sense of the uncertainty in the estimate of the regression function.
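The quoted correlation is easy to check numerically; this sketch (illustrative, assuming NumPy) draws uniform samples on (0, 1) and computes the sample correlation between x and x^2, which comes out near 0.97:

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=100_000)
# Sample correlation between the monomials x and x^2.
print(np.corrcoef(x, x**2)[0, 1])  # approximately 0.97 (the exact value is sqrt(15)/4)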
Polynomial regression is one example of regression analysis using basis functions to model a functional relationship between two quantities. More specifically, it replaces x \in \mathbb{R}^{d_x} in linear regression with a polynomial basis \varphi(x) \in \mathbb{R}^{d_\varphi}, e.g.
[1, x] \stackrel{\varphi}{\longrightarrow} [1, x, x^2, \ldots, x^d].
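A small sketch of this basis map (illustrative, assuming NumPy; the function name phi is hypothetical):

import numpy as np

def phi(x, d):
    """Polynomial basis map: x -> [1, x, x^2, ..., x^d]."""
    return np.array([x ** k for k in range(d + 1)])

print(phi(2.0, d=3))  # [1. 2. 4. 8.]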
The goal of polynomial regression is to model a non-linear relationship between the independent and dependent variables (technically, between the independent variable and the conditional mean of the dependent variable). This is similar to the goal of nonparametric regression, which aims to capture non-linear regression relationships. Therefore, non-parametric regression approaches such as smoothing can be useful alternatives to polynomial regression. Some of these methods make use of a localized form of classical polynomial regression.[9] An advantage of traditional polynomial regression is that the inferential framework of multiple regression can be used (this also holds when using other families of basis functions such as splines).
A final alternative is to use kernelized models such as support vector regression with a polynomial kernel.
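A minimal scikit-learn sketch of this idea (illustrative only; the data are synthetic and hyperparameters such as degree and C would need tuning in practice):

import numpy as np
from sklearn.svm import SVR

# Illustrative data: a noisy cubic relationship.
rng = np.random.default_rng(0)
x = rng.uniform(-3.0, 3.0, size=200).reshape(-1, 1)
y = 1.0 - 2.0 * x.ravel() + 0.5 * x.ravel() ** 3 + rng.normal(0.0, 0.5, size=200)

# Support vector regression with a degree-3 polynomial kernel.
model = SVR(kernel="poly", degree=3, C=10.0).fit(x, y)
y_hat = model.predict(x)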
If residuals have unequal variance, a weighted least squares estimator may be used to account for that.[10]
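A minimal weighted-least-squares sketch in NumPy (illustrative; the weight vector w is assumed to be supplied by the analyst, e.g. as inverse residual variances):

import numpy as np

def fit_polynomial_wls(x, y, w, m):
    """Weighted least squares: beta_hat = (X^T W X)^{-1} X^T W y with W = diag(w)."""
    X = np.vander(np.asarray(x, dtype=float), N=m + 1, increasing=True)
    W = np.diag(np.asarray(w, dtype=float))  # e.g. w_i = 1 / Var(epsilon_i)
    y = np.asarray(y, dtype=float)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)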