Constrained least squares explained

In constrained least squares one solves a linear least squares problem with an additional constraint on the solution.[1] [2] That is, the unconstrained equation

X\boldsymbol{\beta} = y

must be fit as closely as possible (in the least squares sense) while ensuring that some other property of \boldsymbol{\beta} is maintained.

There are often special-purpose algorithms for solving such problems efficiently. Some examples of constraints are given below; short numerical sketches for a few of them follow the list:

- Equality constrained least squares: \boldsymbol{\beta} must exactly satisfy L\boldsymbol{\beta} = d (see Ordinary least squares).
- Stochastic (linearly) constrained least squares: \boldsymbol{\beta} must satisfy L\boldsymbol{\beta} = d + \nu, where \nu is a vector of random variables such that \operatorname{E}(\nu) = 0 and \operatorname{E}(\nu\nu^{\rm T}) = \tau^2 I. This effectively imposes a prior distribution for \boldsymbol{\beta} and is therefore equivalent to Bayesian linear regression.[3]
- Regularized least squares: \boldsymbol{\beta} must satisfy \|L\boldsymbol{\beta} - y\| \le \alpha (choosing \alpha in proportion to the noise standard deviation of y prevents over-fitting).
- Non-negative least squares (NNLS): \boldsymbol{\beta} must satisfy the vector inequality \boldsymbol{\beta} \geq \boldsymbol{0}, defined componentwise; that is, each component must be either positive or zero.
- Box-constrained least squares: \boldsymbol{\beta} must satisfy the vector inequalities \boldsymbol{b}_\ell \leq \boldsymbol{\beta} \leq \boldsymbol{b}_u, each of which is defined componentwise.
- Integer-constrained least squares: all elements of \boldsymbol{\beta} must be integers (instead of real numbers).
- Phase-constrained least squares: all elements of \boldsymbol{\beta} must be real numbers, or multiplied by the same complex number of unit modulus.
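For the equality-constrained case, one standard approach is to solve the Lagrangian (KKT) stationarity system for \boldsymbol{\beta} and the multipliers jointly. A minimal NumPy sketch, with randomly generated X, y, L, d used purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((20, 3))    # design matrix
    y = rng.standard_normal(20)         # observations
    L = np.array([[1.0, 1.0, 1.0]])     # one equality constraint: coefficients sum to 1
    d = np.array([1.0])

    # KKT system for  min ||X b - y||^2  s.t.  L b = d:
    #   [2 X^T X   L^T] [b     ]   [2 X^T y]
    #   [L         0  ] [lambda] = [d      ]
    p, m = X.shape[1], L.shape[0]
    K = np.block([[2 * X.T @ X, L.T],
                  [L, np.zeros((m, m))]])
    rhs = np.concatenate([2 * X.T @ y, d])
    beta = np.linalg.solve(K, rhs)[:p]  # constrained estimate; L @ beta equals d up to rounding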
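The stochastic constraint admits a mixed-estimation (Theil-Goldberger) reading: the rows of L\boldsymbol{\beta} = d + \nu are appended to the data as extra observations, weighted by the ratio of the two noise scales. In the sketch below those scales, sigma and tau, are assumed known; they are illustrative parameters, not part of the notation above:

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((40, 3))
    y = rng.standard_normal(40)
    L = np.eye(3)                     # stochastic constraint L b = d + nu
    d = np.zeros(3)
    sigma, tau = 1.0, 0.5             # assumed noise std of y and of the constraint

    # Weight the constraint rows by sigma/tau and stack them under the data;
    # ordinary least squares on the stacked system is the mixed estimator,
    # i.e. the posterior mode under the implied Gaussian prior on beta.
    w = sigma / tau
    X_aug = np.vstack([X, w * L])
    y_aug = np.concatenate([y, w * d])
    beta, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)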
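Non-negative and box-constrained problems are normally handed to a dedicated solver rather than solved in closed form. A sketch assuming SciPy is available, whose optimize module provides a Lawson-Hanson style NNLS routine and a bounded linear least squares solver:

    import numpy as np
    from scipy.optimize import nnls, lsq_linear

    rng = np.random.default_rng(2)
    X = rng.standard_normal((25, 4))
    y = rng.standard_normal(25)

    beta_nn, residual = nnls(X, y)               # beta >= 0 componentwise
    res = lsq_linear(X, y, bounds=(-1.0, 1.0))   # box constraint -1 <= beta <= 1
    beta_box = res.x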

If the constraint only applies to some of the variables, the mixed problem may be solved using separable least squares[4] by letting

X = \begin{bmatrix} X_1 & X_2 \end{bmatrix}

and

\boldsymbol{\beta}^{\rm T} = \begin{bmatrix} \boldsymbol{\beta}_1^{\rm T} & \boldsymbol{\beta}_2^{\rm T} \end{bmatrix}

represent the unconstrained (1) and constrained (2) components. Then substituting the least-squares solution for \boldsymbol{\beta}_1, i.e.

\hat{\boldsymbol{\beta}}_1 = X_1^{+}(y - X_2\boldsymbol{\beta}_2)

(where {}^{+} indicates the Moore-Penrose pseudoinverse) back into the original expression gives (following some rearrangement) an equation that can be solved as a purely constrained problem in \boldsymbol{\beta}_2:

P X_2 \boldsymbol{\beta}_2 = P y,

where

P := I - X_1 X_1^{+}

is a projection matrix. Following the constrained estimation of \hat{\boldsymbol{\beta}}_2, the vector \hat{\boldsymbol{\beta}}_1 is obtained from the expression above.
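This elimination can be sketched directly in code. The sketch below assumes, for concreteness, that the constraint on \boldsymbol{\beta}_2 is non-negativity and uses SciPy's nnls as the inner constrained solver; both choices are illustrative, and any constrained solver fits the same pattern:

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(3)
    X1 = rng.standard_normal((30, 2))   # columns with unconstrained coefficients
    X2 = rng.standard_normal((30, 3))   # columns whose coefficients must be >= 0
    y = rng.standard_normal(30)

    X1_pinv = np.linalg.pinv(X1)        # Moore-Penrose pseudoinverse X1^+
    P = np.eye(len(y)) - X1 @ X1_pinv   # projection onto the complement of range(X1)

    beta2, _ = nnls(P @ X2, P @ y)      # purely constrained problem in beta2
    beta1 = X1_pinv @ (y - X2 @ beta2)  # back-substitute for beta1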

Notes and References

  1. Amemiya, Takeshi (1985). "Model 1 with Linear Constraints". Advanced Econometrics. Oxford: Basil Blackwell. pp. 20-26. ISBN 0-631-15583-X.
  2. Boyd, Stephen; Vandenberghe, Lieven (2018). Introduction to Applied Linear Algebra: Vectors, Matrices, and Least Squares. Cambridge University Press. ISBN 978-1-316-51896-0.
  3. Fomby, Thomas B.; Hill, R. Carter; Johnson, Stanley R. (1988). "Use of Prior Information". Advanced Econometric Methods (corrected softcover ed.). New York: Springer-Verlag. pp. 80-121. ISBN 0-387-96868-7.
  4. Björck, Åke (1996). "Separable and Constrained Problems". Numerical Methods for Least Squares Problems. Philadelphia: SIAM. p. 351. ISBN 0-89871-360-9.