Nonnegative rank (linear algebra) explained

In linear algebra, the nonnegative rank of a nonnegative matrix is a concept similar to the usual linear rank of a real matrix, but with the added requirement that certain coefficients and entries of vectors/matrices have to be nonnegative.

For example, the linear rank of a matrix is the smallest number of vectors such that every column of the matrix can be written as a linear combination of those vectors. For the nonnegative rank, it is required that the vectors have nonnegative entries and that the coefficients in the linear combinations are also nonnegative.

Formal definition

There are several equivalent definitions, all modifying the definition of the linear rank slightly. Apart from the definition given above, there is the following: the nonnegative rank of a nonnegative m×n-matrix A is equal to the smallest number q such that there exist a nonnegative m×q-matrix B and a nonnegative q×n-matrix C with A = BC (the usual matrix product). To obtain the linear rank, drop the condition that B and C must be nonnegative.
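
As an illustration of this definition, the following sketch (using NumPy; the matrices B and C are made-up examples, not taken from the references) checks a candidate nonnegative factorization. The columns of B play the role of the generating vectors described above, and the columns of C hold the nonnegative coefficients:

import numpy as np

# Made-up nonnegative factors with inner dimension q = 2.
B = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])            # nonnegative 3x2
C = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])  # nonnegative 2x4
A = B @ C                             # nonnegative 3x4 matrix by construction

# Exhibiting such a pair (B, C) certifies rank+(A) <= q = 2. Since the
# linear rank of A is also 2 and rank(A) <= rank+(A), here rank+(A) = 2.
assert np.all(B >= 0) and np.all(C >= 0)
assert np.allclose(A, B @ C)
print(np.linalg.matrix_rank(A))       # prints 2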

Further, the nonnegative rank is the smallest number q of nonnegative rank-one matrices into which the matrix can be decomposed additively:

A=R_1+R_2+\cdots+R_q,\qquad \operatorname{rank}(R_j)=1,\ R_j\ge 0,

where Rj ≥ 0 stands for "Rj is nonnegative".[1] (To obtain the usual linear rank, drop the condition that the Rj have to be nonnegative.)
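
The equivalence of the two definitions can be seen directly: writing the product BC as a sum of outer products of the columns of B with the rows of C expresses A as a sum of q nonnegative rank-one matrices, and conversely. A minimal NumPy check of this identity, reusing made-up factors of the kind shown above:

import numpy as np

B = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
C = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])

# Each np.outer(B[:, j], C[j, :]) is a nonnegative rank-one matrix R_j,
# and their sum over j reproduces the product B C.
rank_one_terms = [np.outer(B[:, j], C[j, :]) for j in range(B.shape[1])]
assert np.allclose(sum(rank_one_terms), B @ C)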

Given a nonnegative m×n-matrix A, the nonnegative rank rank+(A) of A satisfies

rank(A) ≤ rank+(A) ≤ min(m, n),

where the upper bound follows from the trivial factorizations A = Im A and A = A In (identity matrices are nonnegative).

A Fallacy

The rank of the matrix A is the largest number of columns which are linearly independent, i.e., none of the selected columns can be written as a linear combination of the other selected columns. It is not true that adding nonnegativity to this characterization gives the nonnegative rank: The nonnegative rank is in general less than or equal to the largest number of columns such that no selected column can be written as a nonnegative linear combination of the other selected columns.

Connection with the linear rank

It is always true that rank(A) ≤ rank+(A). In fact rank+(A) = rank(A) holds whenever rank(A) ≤ 2.

In the case rank(A) ≥ 3, however, rank(A) < rank+(A) is possible. For example, the matrix

A=\begin{bmatrix} 1&1&0&0\\ 1&0&1&0\\ 0&1&0&1\\ 0&0&1&1\end{bmatrix},

satisfies rank(A) = 3 < 4 = rank+(A).
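
The linear rank of this matrix is easy to check numerically (certifying rank+(A) = 4 requires the separate combinatorial argument given by Thomas in the reference below); a quick NumPy verification of the rank-3 claim:

import numpy as np

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 1]])

# The columns satisfy c1 - c2 - c3 + c4 = 0, so the rank is at most 3,
# while any three of the columns are linearly independent.
print(np.linalg.matrix_rank(A))  # prints 3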

These two results (including the 4×4 matrix example above) were first provided by Thomas in a response[2] to a question posed in 1973 by Berman and Plemmons.[3]

Computing the nonnegative rank

The nonnegative rank of a matrix can be determined algorithmically.[4]
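
The exact procedure of Cohen and Rothblum is not reproduced here. In practice, an upper bound on the nonnegative rank is often obtained heuristically by running nonnegative matrix factorization (NMF) with increasing inner dimension q and checking whether the reconstruction becomes exact. The sketch below does this with scikit-learn's NMF; the helper name nonnegative_rank_upper_bound is made up for this illustration, and because NMF is a local optimization method, failure to reach an exact fit at some q proves nothing about lower bounds:

import numpy as np
from sklearn.decomposition import NMF

def nonnegative_rank_upper_bound(A, tol=1e-9, max_q=None):
    """Smallest q for which a local NMF search finds an (almost) exact
    nonnegative factorization A ~ W H.  This certifies rank+(A) <= q only;
    it is a heuristic, not the exact algorithm of Cohen and Rothblum."""
    m, n = A.shape
    max_q = max_q or min(m, n)
    for q in range(1, max_q + 1):
        model = NMF(n_components=q, init="nndsvda", max_iter=5000, tol=1e-12)
        W = model.fit_transform(A)      # nonnegative m x q factor
        H = model.components_           # nonnegative q x n factor
        if np.linalg.norm(A - W @ H) <= tol * max(1.0, np.linalg.norm(A)):
            return q
    # rank+(A) never exceeds min(m, n) (e.g. A = Im A), so this is still a valid bound.
    return max_q

For the 4×4 example above this search returns 4: an exact fit is impossible for q ≤ 3 because rank+(A) = 4, and min(m, n) = 4 is returned as the fallback in any case.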

It has been proved that determining whether rank+(A) = rank(A) is NP-hard.[5]

Obvious questions concerning the complexity of nonnegative rank computation remain unanswered to date. For example, the complexity of determining the nonnegative rank of matrices of fixed rank k is unknown for k > 2.

Ancillary facts

Nonnegative rank has important applications in combinatorial optimization:[6] the minimum number of facets of an extension of a polyhedron P is equal to the nonnegative rank of its so-called slack matrix.[7]
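
As an illustration (a made-up example, not taken from the cited sources): the slack matrix of a polytope has one row per facet inequality a_i · x ≤ b_i and one column per vertex v_j, with entries b_i - a_i · v_j. For the unit square it can be built as follows:

import numpy as np

# Unit square [0,1]^2, described by four facet inequalities a_i . x <= b_i.
facet_normals = np.array([[-1, 0],    # -x <= 0  (i.e. x >= 0)
                          [ 1, 0],    #  x <= 1
                          [ 0, -1],   # -y <= 0  (i.e. y >= 0)
                          [ 0, 1]])   #  y <= 1
facet_rhs = np.array([0, 1, 0, 1])
vertices = np.array([[0, 0], [1, 0], [1, 1], [0, 1]])

# Slack matrix: S[i, j] = b_i - a_i . v_j, nonnegative because every vertex
# satisfies every facet inequality.  By the result cited above, rank+(S) is
# the minimum number of facets of an extension of the square.
S = facet_rhs[:, None] - facet_normals @ vertices.T
print(S)

Up to reordering its rows and columns, this S is the 4×4 example matrix from the section on the connection with the linear rank, so rank(S) = 3 while rank+(S) = 4; accordingly, no extension of the square has fewer than its own four facets.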

Notes and References

  1. Abraham Berman and Robert J. Plemmons. Nonnegative Matrices in the Mathematical Sciences, SIAM.
  2. L. B. Thomas. "Solution to Problem 73-14, Rank Factorization of Nonnegative Matrices", SIAM Review 16(3), 393–394, 1974.
  3. A. Berman and R. J. Plemmons. "Rank Factorization of Nonnegative Matrices", SIAM Review 15(3), 655, 1973.
  4. J. Cohen and U. Rothblum. "Nonnegative ranks, decompositions and factorizations of nonnegative matrices", Linear Algebra and its Applications, 190:149–168, 1993.
  5. Stephen Vavasis. "On the complexity of nonnegative matrix factorization", SIAM Journal on Optimization 20(3), 1364–1377, 2009.
  6. Mihalis Yannakakis. "Expressing combinatorial optimization problems by linear programs", J. Comput. Syst. Sci., 43(3):441–466, 1991.
  7. See this blog post