Proofs involving ordinary least squares

The purpose of this page is to provide supplementary materials for the ordinary least squares article, reducing the mathematical load of the main article and improving its accessibility, while at the same time retaining completeness of exposition.

Derivation of the normal equations

Define the i-th residual to be

r_i = y_i - \sum_{j=1}^{n} X_{ij}\beta_j.

Then the objective S can be rewritten as

S = \sum_{i=1}^{m} r_i^2.

Given that S is convex, it is minimized when its gradient vector is zero (this follows by definition: if the gradient vector is not zero, there is a direction in which we can move to decrease it further; see maxima and minima). The elements of the gradient vector are the partial derivatives of S with respect to the parameters:

\frac{\partial S}{\partial \beta_j} = 2\sum_{i=1}^{m} r_i\,\frac{\partial r_i}{\partial \beta_j} \qquad (j = 1, 2, \dots, n).

The derivatives are

\frac{\partial r_i}{\partial \beta_j} = -X_{ij}.

Substitution of the expressions for the residuals and the derivatives into the gradient equations gives

\frac{\partial S}{\partial \beta_j} = 2\sum_{i=1}^{m} \left( y_i - \sum_{k=1}^{n} X_{ik}\beta_k \right)(-X_{ij}) \qquad (j = 1, 2, \dots, n).

Thus if

\widehat\beta

minimizes S, we have
2\sum_{i=1}^{m} \left( y_i - \sum_{k=1}^{n} X_{ik}\widehat\beta_k \right)(-X_{ij}) = 0 \qquad (j = 1, 2, \dots, n).

Upon rearrangement, we obtain the normal equations:

\sum_{i=1}^{m} \sum_{k=1}^{n} X_{ij} X_{ik} \widehat\beta_k = \sum_{i=1}^{m} X_{ij} y_i \qquad (j = 1, 2, \dots, n).

The normal equations are written in matrix notation as

(X^\mathrm{T} X)\widehat{\boldsymbol\beta} = X^\mathrm{T} y

(where X^\mathrm{T} is the matrix transpose of X).

The solution of the normal equations yields the vector

\widehat{\boldsymbol{\beta}}

of the optimal parameter values.
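
As an illustrative check (not part of the original article), the normal equations can be solved numerically; the sketch below assumes NumPy and synthetic data, and compares the result with a generic least-squares solver.

```python
# Minimal sketch (assumes NumPy): solve the normal equations (X^T X) b = X^T y
# and compare with a library least-squares solver.
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 3                      # m observations, n parameters
X = rng.normal(size=(m, n))       # design matrix
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=m)

# Normal equations: (X^T X) beta_hat = X^T y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Reference solution from NumPy's least-squares routine
beta_ref, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(beta_hat, beta_ref))   # True: both give the same minimizer
```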

Derivation directly in terms of matrices

The normal equations can be derived directly from a matrix representation of the problem as follows. The objective is to minimize

S(\boldsymbol\beta) = \bigl\| y - X\boldsymbol\beta \bigr\|^2 = (y - X\boldsymbol\beta)^\mathrm{T}(y - X\boldsymbol\beta) = y^\mathrm{T} y - \boldsymbol\beta^\mathrm{T} X^\mathrm{T} y - y^\mathrm{T} X\boldsymbol\beta + \boldsymbol\beta^\mathrm{T} X^\mathrm{T} X\boldsymbol\beta.

Here

(\boldsymbol\beta^\mathrm{T} X^\mathrm{T} y)^\mathrm{T} = y^\mathrm{T} X \boldsymbol\beta

has dimension 1×1 (the number of columns of y), so it is a scalar and equal to its own transpose, hence

\boldsymbol\beta^\mathrm{T} X^\mathrm{T} y = y^\mathrm{T} X \boldsymbol\beta

and the quantity to minimize becomes

S(\boldsymbol\beta) = y^\mathrm{T} y - 2\boldsymbol\beta^\mathrm{T} X^\mathrm{T} y + \boldsymbol\beta^\mathrm{T} X^\mathrm{T} X \boldsymbol\beta.

Differentiating this with respect to

\boldsymbol\beta

and equating to zero to satisfy the first-order conditions gives

-X^\mathrm{T} y + (X^\mathrm{T} X)\boldsymbol\beta = 0,

which is equivalent to the above-given normal equations. A sufficient condition for satisfaction of the second-order conditions for a minimum is that

X

have full column rank, in which case

X^\mathrm{T} X

is positive definite.
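
A quick numerical illustration of this sufficient condition (a sketch assuming NumPy and a synthetic full-column-rank design, not from the original article):

```python
# Minimal sketch (assumes NumPy): when X has full column rank, X^T X is
# positive definite, so the second-order condition for a minimum holds.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))

XtX = X.T @ X
print(np.linalg.matrix_rank(X) == X.shape[1])    # full column rank
print(np.all(np.linalg.eigvalsh(XtX) > 0))       # all eigenvalues positive
```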

Derivation without calculus

When

X^\mathrm{T} X

is positive definite, the formula for the minimizing value of

\boldsymbol\beta

can be derived without the use of derivatives. The quantity

S(\boldsymbol\beta) = y^\mathrm{T} y - 2\boldsymbol\beta^\mathrm{T} X^\mathrm{T} y + \boldsymbol\beta^\mathrm{T} X^\mathrm{T} X \boldsymbol\beta

can be written as

\langle \boldsymbol\beta, \boldsymbol\beta \rangle - 2\left\langle \boldsymbol\beta, (X^\mathrm{T} X)^{-1} X^\mathrm{T} y \right\rangle + \left\langle (X^\mathrm{T} X)^{-1} X^\mathrm{T} y, (X^\mathrm{T} X)^{-1} X^\mathrm{T} y \right\rangle + C,

where

C

depends only on

y

and

X

, and

\langle \cdot, \cdot \rangle

is the inner product defined by

\langle x, y \rangle = x^\mathrm{T} (X^\mathrm{T} X) y.

It follows that

S(\boldsymbol{\beta})

is equal to

\left\langle \boldsymbol\beta - (X^\mathrm{T} X)^{-1} X^\mathrm{T} y,\ \boldsymbol\beta - (X^\mathrm{T} X)^{-1} X^\mathrm{T} y \right\rangle + C

and therefore minimized exactly when

\boldsymbol\beta - (X^\mathrm{T} X)^{-1} X^\mathrm{T} y = 0.
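
As a numerical sanity check of this completing-the-square argument (a sketch assuming NumPy and synthetic data, not part of the original article), no perturbation of the closed-form minimizer reduces S:

```python
# Minimal sketch (assumes NumPy): S(beta) is never smaller than S(beta_hat),
# illustrating the derivation without calculus given above.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 3))
y = rng.normal(size=40)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

def S(b):
    r = y - X @ b
    return r @ r

# Random perturbations never beat the closed-form minimizer.
print(all(S(beta_hat + rng.normal(size=3)) >= S(beta_hat) for _ in range(1000)))
```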

Generalization for complex equations

In general, the coefficients of the matrices

X,\boldsymbol{\beta}

and

y

can be complex. By using a Hermitian transpose instead of a simple transpose, it is possible to find a vector

\boldsymbol{\widehat{\beta}}

which minimizes

S(\boldsymbol{\beta})

, just as for the real matrix case. In order to get the normal equations we follow a similar path as in previous derivations:

S(\boldsymbol\beta) = \langle y - X\boldsymbol\beta,\ y - X\boldsymbol\beta \rangle = \langle y, y \rangle - \overline{\langle X\boldsymbol\beta, y \rangle} - \overline{\langle y, X\boldsymbol\beta \rangle} + \langle X\boldsymbol\beta, X\boldsymbol\beta \rangle = y^\mathrm{T}\overline{y} - \boldsymbol\beta^\dagger X^\dagger y - y^\dagger X\boldsymbol\beta + \boldsymbol\beta^\mathrm{T} X^\mathrm{T}\overline{X}\,\overline{\boldsymbol\beta},

where \dagger stands for the Hermitian transpose.

We should now take derivatives of

S(\boldsymbol\beta)

with respect to each of the coefficients

\beta_j

, but first we separate real and imaginary parts to deal with the conjugate factors in the above expression. For the

\beta_j

we have

\beta_j = \beta_j^{\mathrm R} + i\beta_j^{\mathrm I}

and the derivatives change into

\frac{\partial S}{\partial \beta_j} = \frac{\partial S}{\partial \beta_j^{\mathrm R}}\frac{\partial \beta_j^{\mathrm R}}{\partial \beta_j} + \frac{\partial S}{\partial \beta_j^{\mathrm I}}\frac{\partial \beta_j^{\mathrm I}}{\partial \beta_j} = \frac{\partial S}{\partial \beta_j^{\mathrm R}} - i\frac{\partial S}{\partial \beta_j^{\mathrm I}} \qquad (j = 1, 2, 3, \ldots, n).

After rewriting

S(\boldsymbol\beta)

in summation form and writing

\beta_j

explicitly, we can calculate both partial derivatives, with the result:
\begin{align}
\frac{\partial S}{\partial \beta_j^{\mathrm R}} ={}& -\sum_{i=1}^m \left(\overline{X}_{ij} y_i + \overline{y}_i X_{ij}\right) + 2\sum_{i=1}^m X_{ij}\overline{X}_{ij}\beta_j^{\mathrm R} + \sum_{i=1}^m \sum_{k\neq j}^n \left(X_{ij}\overline{X}_{ik}\overline{\beta}_k + \beta_k X_{ik}\overline{X}_{ij}\right),\\[8pt]
-i\frac{\partial S}{\partial \beta_j^{\mathrm I}} ={}& \sum_{i=1}^m \left(\overline{X}_{ij} y_i - \overline{y}_i X_{ij}\right) - 2i\sum_{i=1}^m X_{ij}\overline{X}_{ij}\beta_j^{\mathrm I} + \sum_{i=1}^m \sum_{k\neq j}^n \left(X_{ij}\overline{X}_{ik}\overline{\beta}_k - \beta_k X_{ik}\overline{X}_{ij}\right),
\end{align}

which, after adding them together and comparing to zero (the minimization condition for

\boldsymbol{\widehat{\beta}}

) yields
\sum_{i=1}^m X_{ij}\overline{y}_i = \sum_{i=1}^m \sum_{k=1}^n X_{ij}\overline{X}_{ik}\overline{\widehat\beta}_k \qquad (j = 1, 2, 3, \ldots, n).

In matrix form:

\mathbf{X}^\mathrm{T}\overline{\mathbf{y}} = \mathbf{X}^\mathrm{T}\overline{\mathbf{X}}\,\overline{\widehat{\boldsymbol\beta}} \quad \text{or} \quad \left(\mathbf{X}^\dagger \mathbf{X}\right)\widehat{\boldsymbol\beta} = \mathbf{X}^\dagger \mathbf{y}.
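
A small numerical illustration of the complex case (a sketch assuming NumPy and synthetic complex data; the variable names are illustrative): the estimator built from the Hermitian transpose minimizes the squared residual norm.

```python
# Minimal sketch (assumes NumPy): with complex data, solving
# (X^dagger X) beta = X^dagger y minimizes ||y - X beta||^2, as derived above.
import numpy as np

rng = np.random.default_rng(3)
m, n = 30, 2
X = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
y = rng.normal(size=m) + 1j * rng.normal(size=m)

Xh = X.conj().T                                   # Hermitian transpose
beta_hat = np.linalg.solve(Xh @ X, Xh @ y)        # (X^dagger X) beta = X^dagger y

S_hat = np.linalg.norm(y - X @ beta_hat) ** 2
for _ in range(1000):
    b = beta_hat + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))
    assert np.linalg.norm(y - X @ b) ** 2 >= S_hat - 1e-12
print("beta_hat minimizes S over the sampled perturbations")
```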

Least squares estimator for β

Using matrix notation, the sum of squared residuals is given by

S(\beta) = (y - X\beta)^\mathrm{T}(y - X\beta).

Since this is a quadratic expression, the vector which gives the global minimum may be found via matrix calculus by differentiating with respect to the vector 

\beta

(using denominator layout) and setting equal to zero:

0 = \frac{dS}{d\beta}(\widehat\beta) = \frac{d}{d\beta}\left( y^\mathrm{T} y - \beta^\mathrm{T} X^\mathrm{T} y - y^\mathrm{T} X\beta + \beta^\mathrm{T} X^\mathrm{T} X\beta \right)\Big|_{\beta = \widehat\beta} = -2X^\mathrm{T} y + 2X^\mathrm{T} X\widehat\beta

By assumption, the matrix X has full column rank, and therefore X^\mathrm{T} X is invertible and the least squares estimator for β is given by

\widehat\beta = (X^\mathrm{T} X)^{-1} X^\mathrm{T} y
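
As a hedged illustration of the matrix-calculus step (a sketch assuming NumPy; the finite-difference check is an addition, not part of the original derivation):

```python
# Minimal sketch (assumes NumPy): finite-difference check of
# dS/dbeta = -2 X^T y + 2 X^T X beta (denominator layout), at an arbitrary beta.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(25, 3))
y = rng.normal(size=25)
beta = rng.normal(size=3)

S = lambda b: (y - X @ b) @ (y - X @ b)
grad_analytic = -2 * X.T @ y + 2 * X.T @ X @ beta

eps = 1e-6
grad_numeric = np.array([
    (S(beta + eps * e) - S(beta - eps * e)) / (2 * eps)
    for e in np.eye(3)
])
print(np.allclose(grad_analytic, grad_numeric, atol=1e-4))   # True
```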

Unbiasedness and variance of

\widehat\beta

Plug y = Xβ + ε into the formula for

\widehat\beta

and then use the law of total expectation:

\begin{align}
\operatorname{E}[\widehat\beta] &= \operatorname{E}\!\left[(X^\mathrm{T} X)^{-1}X^\mathrm{T}(X\beta + \varepsilon)\right] \\
&= \beta + \operatorname{E}\!\left[(X^\mathrm{T} X)^{-1}X^\mathrm{T}\varepsilon\right] \\
&= \beta + \operatorname{E}\!\left[\operatorname{E}\!\left[(X^\mathrm{T} X)^{-1}X^\mathrm{T}\varepsilon \mid X\right]\right] \\
&= \beta + \operatorname{E}\!\left[(X^\mathrm{T} X)^{-1}X^\mathrm{T}\operatorname{E}[\varepsilon \mid X]\right] \\
&= \beta,
\end{align}

where E[ε | X] = 0 by the assumptions of the model. Since the expected value of

\widehat{\beta}

equals the parameter it estimates,

\beta

, it is an unbiased estimator of

\beta

.

For the variance, let the covariance matrix of

\varepsilon

be

\operatorname{E}[\varepsilon\varepsilon^\mathrm{T}] = \sigma^2 I

(where

I

is the m \times m identity matrix), and let X be a known constant matrix. Then,

\begin{align}
\operatorname{E}\!\left[(\widehat\beta - \beta)(\widehat\beta - \beta)^\mathrm{T}\right] &= \operatorname{E}\!\left[\left((X^\mathrm{T} X)^{-1}X^\mathrm{T}\varepsilon\right)\left((X^\mathrm{T} X)^{-1}X^\mathrm{T}\varepsilon\right)^{\!\mathrm{T}}\right] \\
&= \operatorname{E}\!\left[(X^\mathrm{T} X)^{-1}X^\mathrm{T}\varepsilon\varepsilon^\mathrm{T} X(X^\mathrm{T} X)^{-1}\right] \\
&= (X^\mathrm{T} X)^{-1}X^\mathrm{T}\operatorname{E}\!\left[\varepsilon\varepsilon^\mathrm{T}\right]X(X^\mathrm{T} X)^{-1} \\
&= (X^\mathrm{T} X)^{-1}X^\mathrm{T}\sigma^2 X(X^\mathrm{T} X)^{-1} \\
&= \sigma^2(X^\mathrm{T} X)^{-1}X^\mathrm{T} X(X^\mathrm{T} X)^{-1} \\
&= \sigma^2(X^\mathrm{T} X)^{-1},
\end{align}

where we used the fact that

\widehat{\beta}-\beta

is just an affine transformation of

\varepsilon

by the matrix

(X^\mathrm{T} X)^{-1}X^\mathrm{T}

.
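
A Monte Carlo illustration of these two results (a sketch assuming NumPy; sample sizes, seeds, and parameter values are arbitrary):

```python
# Minimal sketch (assumes NumPy): Monte Carlo check that E[beta_hat] = beta and
# Var(beta_hat) = sigma^2 (X^T X)^{-1} when errors are i.i.d. with variance sigma^2.
import numpy as np

rng = np.random.default_rng(5)
m, sigma = 30, 0.7
X = rng.normal(size=(m, 2))                     # fixed (known) design
beta = np.array([2.0, -1.0])
theory_cov = sigma**2 * np.linalg.inv(X.T @ X)

draws = []
for _ in range(20000):
    y = X @ beta + sigma * rng.normal(size=m)
    draws.append(np.linalg.solve(X.T @ X, X.T @ y))
draws = np.array(draws)

print(draws.mean(axis=0))          # close to beta (unbiasedness)
print(np.cov(draws.T))             # close to theory_cov
print(theory_cov)
```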

For a simple linear regression model, where

\beta = [\beta_0, \beta_1]^\mathrm{T}

(

\beta_0

is the y-intercept and

\beta_1

is the slope), one obtains

\begin{align}
\sigma^2(X^\mathrm{T} X)^{-1} &= \sigma^2\left(\begin{pmatrix}1 & 1 & \cdots \\ x_1 & x_2 & \cdots\end{pmatrix}\begin{pmatrix}1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots\end{pmatrix}\right)^{-1}\\[6pt]
&= \sigma^2\left(\sum_{i=1}^m \begin{pmatrix}1 & x_i \\ x_i & x_i^2\end{pmatrix}\right)^{-1}\\[6pt]
&= \sigma^2\begin{pmatrix}m & \sum x_i \\ \sum x_i & \sum x_i^2\end{pmatrix}^{-1}\\[6pt]
&= \sigma^2 \cdot \frac{1}{m\sum x_i^2 - \left(\sum x_i\right)^2}\begin{pmatrix}\sum x_i^2 & -\sum x_i \\ -\sum x_i & m\end{pmatrix}\\[6pt]
&= \sigma^2 \cdot \frac{1}{m\sum (x_i - \bar{x})^2}\begin{pmatrix}\sum x_i^2 & -\sum x_i \\ -\sum x_i & m\end{pmatrix}\\[8pt]
\operatorname{Var}(\widehat\beta_1) &= \frac{\sigma^2}{\sum_{i=1}^m (x_i - \bar{x})^2}.
\end{align}
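
For the simple-regression case specifically, the slope-variance formula can be checked by simulation (a sketch assuming NumPy; the data-generating values are illustrative):

```python
# Minimal sketch (assumes NumPy): for simple linear regression, the sampling
# variance of the slope matches sigma^2 / sum((x_i - x_bar)^2) from the formula above.
import numpy as np

rng = np.random.default_rng(6)
m, sigma = 40, 0.5
x = rng.uniform(0, 10, size=m)
X = np.column_stack([np.ones(m), x])            # intercept and slope columns
beta0, beta1 = 1.0, 3.0

slopes = []
for _ in range(20000):
    y = beta0 + beta1 * x + sigma * rng.normal(size=m)
    slopes.append(np.linalg.solve(X.T @ X, X.T @ y)[1])

print(np.var(slopes))                           # empirical Var(beta1_hat)
print(sigma**2 / np.sum((x - x.mean())**2))     # theoretical value
```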

Expected value and biasedness of

\widehat\sigma^2

First we will plug in the expression for y into the estimator, and use the fact that X'M = MX = 0 (the matrix M projects onto the space orthogonal to X):

\widehat\sigma^2 = \tfrac{1}{n}y'My = \tfrac{1}{n}(X\beta + \varepsilon)'M(X\beta + \varepsilon) = \tfrac{1}{n}\varepsilon'M\varepsilon

Now we can recognize ε'Mε as a 1×1 matrix; such a matrix is equal to its own trace. This is useful because, by the properties of the trace operator, tr(AB) = tr(BA), and we can use this to separate the disturbance ε from the matrix M, which is a function of the regressors X:

\operatorname{E}[\widehat\sigma^2] = \tfrac{1}{n}\operatorname{E}\!\left[\operatorname{tr}(\varepsilon'M\varepsilon)\right] = \tfrac{1}{n}\operatorname{tr}\!\left(\operatorname{E}[M\varepsilon\varepsilon']\right)

Using the law of iterated expectations, this can be written as

\operatorname{E}[\widehat\sigma^2] = \tfrac{1}{n}\operatorname{tr}\!\left(\operatorname{E}\!\left[M\operatorname{E}[\varepsilon\varepsilon'|X]\right]\right) = \tfrac{1}{n}\operatorname{tr}\!\left(\operatorname{E}[\sigma^2 MI]\right) = \tfrac{1}{n}\sigma^2\operatorname{E}[\operatorname{tr} M]

Recall that M = I − P where P is the projection onto the linear space spanned by the columns of the matrix X. By the properties of a projection matrix, it has p = rank(X) eigenvalues equal to 1, and all other eigenvalues equal to 0. The trace of a matrix is equal to the sum of its characteristic values, thus tr(P) = p and tr(M) = n − p. Therefore,

\operatorname{E}[\widehat\sigma^2] = \frac{n - p}{n}\sigma^2.

Since the expected value of

\widehat\sigma^2

does not equal the parameter it estimates,

\sigma^2

, it is a biased estimator of

\sigma^2

. Note that in the later section “Maximum likelihood” we show that, under the additional assumption that the errors are distributed normally, the estimator

\widehat\sigma^2

is proportional to a chi-squared distribution with n − p degrees of freedom, from which the formula for the expected value would immediately follow. However, the result we have shown in this section is valid regardless of the distribution of the errors, and thus is important in its own right.
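
A simulation illustrating the bias factor (n − p)/n (a sketch assuming NumPy; the chosen n, p and σ are arbitrary):

```python
# Minimal sketch (assumes NumPy): the divide-by-n variance estimator is biased,
# with E[sigma2_hat] = (n - p)/n * sigma^2; rescaling by n/(n - p) removes the bias.
import numpy as np

rng = np.random.default_rng(7)
n, p, sigma = 20, 3, 1.5
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)

vals = []
for _ in range(20000):
    y = X @ beta + sigma * rng.normal(size=n)
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ beta_hat
    vals.append(resid @ resid / n)              # sigma2_hat = (1/n) * SSR

print(np.mean(vals))                            # close to (n - p)/n * sigma^2
print((n - p) / n * sigma**2)
print(np.mean(vals) * n / (n - p))              # close to sigma^2 after rescaling
```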

Consistency and asymptotic normality of

\widehat\beta

The estimator

\widehat\beta

can be written as

\widehat\beta = \left(\tfrac{1}{n}X'X\right)^{-1}\tfrac{1}{n}X'y = \beta + \left(\tfrac{1}{n}X'X\right)^{-1}\tfrac{1}{n}X'\varepsilon = \beta + \left(\frac{1}{n}\sum_{i=1}^n x_i x_i'\right)^{\!-1}\left(\frac{1}{n}\sum_{i=1}^n x_i\varepsilon_i\right)

We can use the law of large numbers to establish that
\frac{1}{n}\sum_{i=1}^n x_i x_i' \ \xrightarrow{p}\ \operatorname{E}[x_i x_i'] = \frac{Q_{xx}}{n}, \qquad \frac{1}{n}\sum_{i=1}^n x_i\varepsilon_i \ \xrightarrow{p}\ \operatorname{E}[x_i\varepsilon_i] = 0

By Slutsky's theorem and continuous mapping theorem these results can be combined to establish consistency of estimator

\widehat\beta

:

\widehat\beta\ \xrightarrow{p}\ \beta + nQ_{xx}^{-1}\cdot 0 = \beta

The central limit theorem tells us that

\frac{1}{\sqrt{n}}\sum_{i=1}^n x_i\varepsilon_i\ \xrightarrow{d}\ \mathcal{N}\!\left(0,\, V\right),

where

V = \operatorname{Var}[x_i\varepsilon_i] = \operatorname{E}\!\left[\varepsilon_i^2 x_i x_i'\right] = \operatorname{E}\!\left[\operatorname{E}[\varepsilon_i^2 \mid x_i]\, x_i x_i'\right] = \sigma^2\frac{Q_{xx}}{n}
Applying Slutsky's theorem again we'll have

\sqrt{n}(\widehat\beta - \beta) = \left(\frac{1}{n}\sum_{i=1}^n x_i x_i'\right)^{\!-1}\left(\frac{1}{\sqrt{n}}\sum_{i=1}^n x_i\varepsilon_i\right)\ \xrightarrow{d}\ Q_{xx}^{-1} n\cdot\mathcal{N}\!\left(0,\ \sigma^2\frac{Q_{xx}}{n}\right) = \mathcal{N}\!\left(0,\ \sigma^2 n Q_{xx}^{-1}\right)
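
A simulation illustrating consistency and the stabilization of the scaled errors (a sketch assuming NumPy and standard-normal regressors, so that E[x_i x_i'] is the identity; not part of the original article):

```python
# Minimal sketch (assumes NumPy): as the sample size grows the estimator
# concentrates around beta (consistency), and the spread of sqrt(n)*(beta_hat - beta)
# is consistent with the normal limit derived above.
import numpy as np

rng = np.random.default_rng(8)
beta, sigma = np.array([1.0, -0.5]), 1.0

def beta_hat(n):
    X = rng.normal(size=(n, 2))
    y = X @ beta + sigma * rng.normal(size=n)
    return np.linalg.solve(X.T @ X, X.T @ y)

for n in (100, 10_000, 1_000_000):
    print(n, np.abs(beta_hat(n) - beta).max())   # error shrinks as n grows

# Scaled errors: the sample variance of sqrt(n)*(beta_hat - beta)[0] stabilizes near
# the asymptotic variance sigma^2 * (E[x_i x_i'])^{-1}[0, 0] = 1 for this design.
n = 500
z = np.array([np.sqrt(n) * (beta_hat(n) - beta)[0] for _ in range(5000)])
print(z.var())    # close to 1
```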

Maximum likelihood approach

Maximum likelihood estimation is a generic technique for estimating the unknown parameters in a statistical model by constructing a log-likelihood function corresponding to the joint distribution of the data, then maximizing this function over all possible parameter values. In order to apply this method, we have to make an assumption about the distribution of y given X so that the log-likelihood function can be constructed. The connection of maximum likelihood estimation to OLS arises when this distribution is modeled as a multivariate normal.

Specifically, assume that the errors ε have a multivariate normal distribution with mean 0 and variance matrix σ²I. Then the distribution of y conditionally on X is

y \mid X \sim \mathcal{N}(X\beta,\ \sigma^2 I)

and the log-likelihood function of the data will be

\begin{align}
\mathcal{L}(\beta, \sigma^2 \mid X) &= \ln\!\left(\frac{1}{(2\pi)^{n/2}(\sigma^2)^{n/2}}\,e^{-\frac{1}{2}(y - X\beta)'(\sigma^2 I)^{-1}(y - X\beta)}\right)\\[6pt]
&= -\frac{n}{2}\ln 2\pi - \frac{n}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}(y - X\beta)'(y - X\beta)
\end{align}

Differentiating this expression with respect to β and σ² we'll find the ML estimates of these parameters:

\begin{align}
\frac{\partial\mathcal{L}}{\partial\beta} &= -\frac{1}{2\sigma^2}\left(-2X'y + 2X'X\beta\right) = 0 \quad\Rightarrow\quad \widehat\beta = (X'X)^{-1}X'y \\[6pt]
\frac{\partial\mathcal{L}}{\partial\sigma^2} &= -\frac{n}{2}\frac{1}{\sigma^2} + \frac{1}{2\sigma^4}(y - X\beta)'(y - X\beta) = 0 \quad\Rightarrow\quad \widehat\sigma^2 = \frac{1}{n}(y - X\widehat\beta)'(y - X\widehat\beta) = \frac{1}{n}S(\widehat\beta)
\end{align}

We can check that this is indeed a maximum by looking at the Hessian matrix of the log-likelihood function.
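
As an illustration that the closed forms above indeed maximize the log-likelihood (a sketch assuming NumPy; the perturbation check is an addition, not a proof):

```python
# Minimal sketch (assumes NumPy): the closed-form ML estimates coincide with the
# OLS solution, and the log-likelihood is no larger at nearby perturbed parameters.
import numpy as np

rng = np.random.default_rng(9)
n, p, sigma = 60, 2, 0.8
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 2.0]) + sigma * rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)      # ML estimate of beta (= OLS)
sigma2_hat = np.sum((y - X @ beta_hat)**2) / n    # ML estimate of sigma^2

def loglik(beta, sigma2):
    r = y - X @ beta
    return -n/2 * np.log(2*np.pi) - n/2 * np.log(sigma2) - r @ r / (2*sigma2)

L_hat = loglik(beta_hat, sigma2_hat)
ok = all(
    loglik(beta_hat + 0.05*rng.normal(size=p), sigma2_hat*np.exp(0.05*rng.normal())) <= L_hat
    for _ in range(1000)
)
print(ok)   # True: (beta_hat, sigma2_hat) maximizes the log-likelihood
```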

Finite-sample distribution

Since we have assumed in this section that the distribution of error terms is known to be normal, it becomes possible to derive the explicit expressions for the distributions of estimators

\widehat\beta

and

\widehat\sigma^2

:

\widehat\beta = (X'X)^{-1}X'y = (X'X)^{-1}X'(X\beta + \varepsilon) = \beta + (X'X)^{-1}X'\,\mathcal{N}(0,\ \sigma^2 I)

so that by the affine transformation properties of multivariate normal distribution

\widehat\beta \mid X \sim \mathcal{N}\!\left(\beta,\ \sigma^2(X'X)^{-1}\right).

Similarly the distribution of

\widehat\sigma^2

follows from

\begin{align}
\widehat\sigma^2 &= \tfrac{1}{n}\left(y - X(X'X)^{-1}X'y\right)'\left(y - X(X'X)^{-1}X'y\right)\\[5pt]
&= \tfrac{1}{n}(My)'My\\[5pt]
&= \tfrac{1}{n}(X\beta + \varepsilon)'M(X\beta + \varepsilon)\\[5pt]
&= \tfrac{1}{n}\varepsilon'M\varepsilon,
\end{align}

where

M = I - X(X'X)^{-1}X'

is the symmetric projection matrix onto the subspace orthogonal to X, and thus MX = XM = 0. We have argued before that this matrix has rank n − p, and thus by the properties of the chi-squared distribution,

\tfrac{n}{\sigma^2}\widehat\sigma^2 \mid X = (\varepsilon/\sigma)'M(\varepsilon/\sigma) \sim \chi^2_{n-p}

Moreover, the estimators

\widehat\beta

and

\widehat\sigma^2

turn out to be independent (conditional on X), a fact which is fundamental for the construction of the classical t- and F-tests. The independence can be easily seen from the following: the estimator

\widehat\beta

represents coefficients of the vector decomposition of

\widehat{y} = X\widehat\beta = Py = X\beta + P\varepsilon

by the basis of the columns of X, and as such

\widehat\beta

is a function of Pε. At the same time, the estimator

\widehat\sigma^2

is the squared norm of the vector Mε divided by n, and thus this estimator is a function of Mε. Now, the random variables (Pε, Mε) are jointly normal as a linear transformation of ε, and they are also uncorrelated because PM = 0. By the properties of the multivariate normal distribution, this means that Pε and Mε are independent, and therefore the estimators

\widehat\beta

and

\widehat\sigma^2

will be independent as well.
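
A simulation illustrating the chi-squared behaviour of n·σ̂²/σ² and the (near-)independence of the two estimators (a sketch assuming NumPy; replication counts are arbitrary):

```python
# Minimal sketch (assumes NumPy): with normal errors, n*sigma2_hat/sigma^2 behaves
# like a chi-squared variable with n - p degrees of freedom, and beta_hat is
# essentially uncorrelated with sigma2_hat across replications.
import numpy as np

rng = np.random.default_rng(10)
n, p, sigma = 25, 3, 1.0
X = rng.normal(size=(n, p))                      # fixed design across replications
beta = rng.normal(size=p)

stats, coefs = [], []
for _ in range(20000):
    y = X @ beta + sigma * rng.normal(size=n)
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    sigma2_hat = np.sum((y - X @ beta_hat)**2) / n
    stats.append(n * sigma2_hat / sigma**2)
    coefs.append(beta_hat[0])                    # first coefficient

stats = np.array(stats)
print(stats.mean(), n - p)                       # chi^2_{n-p} has mean n - p
print(stats.var(), 2 * (n - p))                  # ... and variance 2(n - p)
print(np.corrcoef(stats, coefs)[0, 1])           # near 0, consistent with independence
```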

Derivation of simple linear regression estimators

We look for

\widehat{\alpha}

and

\widehat{\beta}

that minimize the sum of squared errors (SSE):

\min_{\widehat\alpha,\,\widehat\beta}\operatorname{SSE}\left(\widehat\alpha, \widehat\beta\right) \equiv \min_{\widehat\alpha,\,\widehat\beta}\sum_{i=1}^n \left(y_i - \widehat\alpha - \widehat\beta x_i\right)^2

To find a minimum take partial derivatives with respect to

\widehat{\alpha}

and

\widehat{\beta}

\begin{align}
&\frac{\partial}{\partial\widehat\alpha}\left(\operatorname{SSE}\left(\widehat\alpha, \widehat\beta\right)\right) = -2\sum_{i=1}^n \left(y_i - \widehat\alpha - \widehat\beta x_i\right) = 0 \\[4pt]
\Rightarrow\ &\sum_{i=1}^n \left(y_i - \widehat\alpha - \widehat\beta x_i\right) = 0 \\[4pt]
\Rightarrow\ &\sum_{i=1}^n y_i = \sum_{i=1}^n \widehat\alpha + \widehat\beta\sum_{i=1}^n x_i \\[4pt]
\Rightarrow\ &\sum_{i=1}^n y_i = n\widehat\alpha + \widehat\beta\sum_{i=1}^n x_i \\[4pt]
\Rightarrow\ &\frac{1}{n}\sum_{i=1}^n y_i = \widehat\alpha + \frac{1}{n}\widehat\beta\sum_{i=1}^n x_i \\[4pt]
\Rightarrow\ &\bar{y} = \widehat\alpha + \widehat\beta\bar{x}
\end{align}

Before taking the partial derivative with respect to

\widehat{\beta}

, substitute the previous result for

\widehat{\alpha}.

\min_{\widehat\alpha,\,\widehat\beta}\sum_{i=1}^n \left[y_i - \left(\bar{y} - \widehat\beta\bar{x}\right) - \widehat\beta x_i\right]^2 = \min_{\widehat\alpha,\,\widehat\beta}\sum_{i=1}^n \left[\left(y_i - \bar{y}\right) - \widehat\beta\left(x_i - \bar{x}\right)\right]^2

Now, take the derivative with respect to

\widehat{\beta}

:

\begin{align}
&\frac{\partial}{\partial\widehat\beta}\left(\operatorname{SSE}\left(\widehat\alpha, \widehat\beta\right)\right) = -2\sum_{i=1}^n \left[\left(y_i - \bar{y}\right) - \widehat\beta\left(x_i - \bar{x}\right)\right]\left(x_i - \bar{x}\right) = 0 \\
\Rightarrow\ &\sum_{i=1}^n \left(y_i - \bar{y}\right)\left(x_i - \bar{x}\right) - \widehat\beta\sum_{i=1}^n \left(x_i - \bar{x}\right)^2 = 0 \\
\Rightarrow\ &\widehat\beta = \frac{\sum_{i=1}^n \left(y_i - \bar{y}\right)\left(x_i - \bar{x}\right)}{\sum_{i=1}^n \left(x_i - \bar{x}\right)^2} = \frac{\operatorname{Cov}(x, y)}{\operatorname{Var}(x)}
\end{align}

And finally substitute

\widehat{\beta}

to determine

\widehat{\alpha}

\widehat{\alpha}=\bar{y}-\widehat{\beta}\bar{x}
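
As a final check, the derived formulas can be compared with a standard library fit (a sketch assuming NumPy; the simulated data are illustrative):

```python
# Minimal sketch (assumes NumPy): the derived formulas for the intercept and slope
# agree with a standard library fit of a degree-1 polynomial.
import numpy as np

rng = np.random.default_rng(11)
x = rng.uniform(0, 5, size=100)
y = 2.0 + 0.7 * x + 0.3 * rng.normal(size=100)

beta_hat = np.sum((y - y.mean()) * (x - x.mean())) / np.sum((x - x.mean())**2)
alpha_hat = y.mean() - beta_hat * x.mean()

slope_ref, intercept_ref = np.polyfit(x, y, 1)   # reference fit
print(np.allclose([alpha_hat, beta_hat], [intercept_ref, slope_ref]))
```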