Non-negative least squares explained
In mathematical optimization, the problem of non-negative least squares (NNLS) is a type of constrained least squares problem where the coefficients are not allowed to become negative. That is, given a matrix A and a (column) vector of response variables y, the goal is to find
\operatorname{argmin}\limits_{x} \|Ax - y\|_2

subject to x ≥ 0.
Here x ≥ 0 means that each component of the vector x should be non-negative, and ‖·‖₂ denotes the Euclidean norm.
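For concreteness, the following sketch solves a tiny NNLS instance with SciPy's scipy.optimize.nnls routine (the data are arbitrary illustration values). An unconstrained least-squares fit to decreasing responses produces a negative slope, while NNLS returns the closest fit whose coefficients are all non-negative.

    import numpy as np
    from scipy.optimize import nnls

    # Arbitrary example data: 4 observations, 2 coefficients (intercept, slope).
    A = np.array([[1.0, 0.0],
                  [1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 3.0]])
    y = np.array([3.0, 2.0, 1.5, 1.0])

    # Ordinary least squares is free to choose a negative slope...
    x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)

    # ...whereas NNLS restricts the solution to x >= 0.
    x_nnls, residual_norm = nnls(A, y)
    print(x_ls)    # the slope coefficient is negative here
    print(x_nnls)  # the slope coefficient is clamped to zero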
Non-negative least squares problems turn up as subproblems in matrix decomposition, e.g. in algorithms for PARAFAC[1] and non-negative matrix/tensor factorization.[2] The latter can be considered a generalization of NNLS.
Another generalization of NNLS is bounded-variable least squares (BVLS), with simultaneous upper and lower bounds α_i ≤ x_i ≤ β_i.[3]
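The BVLS generalization is exposed in SciPy as scipy.optimize.lsq_linear with method='bvls'; a minimal sketch with arbitrary illustrative bounds follows.

    import numpy as np
    from scipy.optimize import lsq_linear

    A = np.array([[1.0, 0.0],
                  [1.0, 1.0],
                  [1.0, 2.0]])
    y = np.array([1.0, 2.5, 4.0])

    # Simultaneous lower and upper bounds on each coefficient.
    res = lsq_linear(A, y, bounds=([0.0, 0.0], [2.0, 1.0]), method='bvls')
    print(res.x)  # every component lies within its bounds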
Quadratic programming version
The NNLS problem is equivalent to a quadratic programming problem
\operatorname{argmin}\limits_{x \ge 0} \left( \tfrac{1}{2} x^T Q x + c^T x \right),
where Q = A^T A and c = −A^T y. This problem is convex, as Q is positive semidefinite and the non-negativity constraints form a convex feasible set.[4]
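The equivalence is easy to check numerically: expanding ½‖Ax − y‖₂² gives ½x^TQx + c^Tx plus the constant ½y^Ty, which does not depend on x, so both objectives have the same minimizer over x ≥ 0. A short sketch with random example data:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))
    y = rng.standard_normal(5)
    x = rng.random(3)          # an arbitrary feasible point (x >= 0)

    Q = A.T @ A
    c = -A.T @ y

    lsq_obj = 0.5 * np.sum((A @ x - y) ** 2)   # (1/2) ||Ax - y||^2
    qp_obj = 0.5 * x @ Q @ x + c @ x           # (1/2) x^T Q x + c^T x

    # The two objectives differ only by the constant (1/2) y^T y.
    assert np.isclose(lsq_obj, qp_obj + 0.5 * y @ y)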
Algorithms
The first widely used algorithm for solving this problem is an active-set method published by Lawson and Hanson in their 1974 book Solving Least Squares Problems.[5] In pseudocode, this algorithm looks as follows:[6]
- Inputs:
  - a real-valued matrix A of dimension m × n,
  - a real-valued vector y of dimension m,
  - a real value ε, the tolerance for the stopping criterion.
- Initialize:
  - Set P = ∅.
  - Set R = {1, ..., n}.
  - Set x to an all-zero vector of dimension n.
  - Set w = A^T(y − Ax).
  - Let w^R denote the sub-vector of w with indexes from R.
- Main loop: while R ≠ ∅ and max(w^R) > ε:
  - Let j in R be the index of max(w^R) in w.
  - Add j to P.
  - Remove j from R.
  - Let A^P be A restricted to the variables included in P.
  - Let s be a vector of the same length as x. Let s^P denote the sub-vector with indexes from P, and let s^R denote the sub-vector with indexes from R.
  - Set s^P = ((A^P)^T A^P)^{-1} (A^P)^T y.
  - Set s^R to zero.
  - While min(s^P) ≤ 0:
    - Let α = min(x_i / (x_i − s_i)) over the indices i in P where s_i ≤ 0.
    - Set x to x + α(s − x).
    - Move to R all indices j in P such that x_j ≤ 0.
    - Set s^P = ((A^P)^T A^P)^{-1} (A^P)^T y.
    - Set s^R to zero.
  - Set x to s.
  - Set w to A^T(y − Ax).
- Output: x
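For readers who prefer runnable code, here is a minimal NumPy transcription of the pseudocode above. The function name is ours; the passive-set subproblem is solved with np.linalg.lstsq rather than by forming the pseudoinverse explicitly, and degenerate cycling is guarded only by an iteration cap, so this is a sketch rather than a production implementation.

    import numpy as np

    def lawson_hanson_nnls(A, y, eps=1e-10, max_iter=500):
        m, n = A.shape
        P, R = [], list(range(n))   # passive (free) and active (zeroed) sets
        x = np.zeros(n)
        w = A.T @ (y - A @ x)       # negative gradient of (1/2)||Ax - y||^2
        for _ in range(max_iter):
            if not R or np.max(w[R]) <= eps:
                break               # KKT conditions hold to tolerance
            j = max(R, key=lambda i: w[i])   # index of max(w^R)
            P.append(j)
            R.remove(j)
            while True:
                if not P:
                    break           # degenerate case: passive set emptied
                # Unconstrained least squares restricted to the passive set.
                s = np.zeros(n)
                s[P] = np.linalg.lstsq(A[:, P], y, rcond=None)[0]
                if np.min(s[P]) > 0:
                    break
                # Largest step from x toward s that keeps x feasible.
                alpha, i_star = min((x[i] / (x[i] - s[i]), i)
                                    for i in P if s[i] <= 0)
                x = x + alpha * (s - x)
                x[i_star] = 0.0     # force an exact zero despite round-off
                for i in [i for i in P if x[i] <= 0.0]:
                    P.remove(i)
                    R.append(i)
            x = s.copy()
            w = A.T @ (y - A @ x)
        return x

On small, well-conditioned problems this behaves like the textbook description; for anything serious, the library routines discussed below are preferable.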
This algorithm takes a finite number of steps to reach a solution and smoothly improves its candidate solution as it goes (so it can find good approximate solutions when cut off at a reasonable number of iterations), but is very slow in practice, owing largely to the computation of the pseudoinverse ((A^P)^T A^P)^{-1}. Variants of this algorithm are available in MATLAB as the routine lsqnonneg[7] [8] and in SciPy as scipy.optimize.nnls.[9]
Many improved algorithms have been suggested since 1974. Fast NNLS (FNNLS) is an optimized version of the Lawson–Hanson algorithm. Other algorithms include variants of Landweber's gradient descent method[10] and coordinate-wise optimization based on the quadratic programming problem above.
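As an illustration of the coordinate-wise idea, sequential coordinate descent on the QP form above updates one component at a time, clamping each exact one-dimensional minimizer to the non-negative half-line. This sketch uses an illustrative function name and a fixed sweep count in place of a convergence test, and assumes the columns of A are nonzero so the diagonal of Q is positive.

    import numpy as np

    def nnls_coordinate_descent(A, y, sweeps=200):
        Q = A.T @ A                # quadratic term of the QP form
        c = -A.T @ y               # linear term
        x = np.zeros(A.shape[1])
        for _ in range(sweeps):
            for i in range(len(x)):
                # Exact minimizer of the parabola in x_i, clamped at zero;
                # requires Q[i, i] > 0 (i.e., column i of A is nonzero).
                grad_i = Q[i] @ x + c[i]
                x[i] = max(0.0, x[i] - grad_i / Q[i, i])
        return x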
Notes and References
- Lin, Chih-Jen (2007). "Projected Gradient Methods for Nonnegative Matrix Factorization". Neural Computation. 19 (10): 2756–2779. doi:10.1162/neco.2007.19.10.2756. PMID 17716011. CiteSeerX 10.1.1.308.9135.
- Boutsidis, Christos; Drineas, Petros (2009). "Random projections for the nonnegative least-squares problem". Linear Algebra and Its Applications. 431 (5–7): 760–771. doi:10.1016/j.laa.2009.03.026. arXiv:0812.4547.
- Stark, Philip B.; Parker, Robert L. (1995). "Bounded-variable least-squares: an algorithm and applications". Computational Statistics. 10: 129.
- Franc, Vojtěch; Hlaváč, Václav; Navara, Mirko (2005). "Sequential Coordinate-Wise Algorithm for the Non-negative Least Squares Problem". Computer Analysis of Images and Patterns. Lecture Notes in Computer Science. 3691: 407–414. doi:10.1007/11556121_50. ISBN 978-3-540-28969-2.
- Lawson, Charles L.; Hanson, Richard J. (1995). "23. Linear Least Squares with Linear Inequality Constraints". Solving Least Squares Problems. SIAM. p. 161. doi:10.1137/1.9781611971217.ch23. ISBN 978-0-89871-356-5.
- Bro, Rasmus; De Jong, Sijmen (1997). "A fast non-negativity-constrained least squares algorithm". Journal of Chemometrics. 11 (5): 393. doi:10.1002/(SICI)1099-128X(199709/10)11:5<393::AID-CEM483>3.0.CO;2-L.
- "lsqnonneg". MATLAB Documentation. Retrieved October 28, 2022.
- Chen, Donghui; Plemmons, Robert J. (2009). "Nonnegativity constraints in numerical analysis". Symposium on the Birth of Numerical Analysis. CiteSeerX 10.1.1.157.9203.
- "scipy.optimize.nnls". SciPy v0.13.0 Reference Guide. Retrieved 25 January 2014.
- Johansson, B. R.; Elfving, T.; Kozlov, V.; Censor, Y.; Forssén, P. E.; Granlund, G. S. (2006). "The application of an oblique-projected Landweber method to a model of supervised learning". Mathematical and Computer Modelling. 43 (7–8): 892. doi:10.1016/j.mcm.2005.12.010.