A CUR matrix approximation is a set of three matrices that, when multiplied together, closely approximate a given matrix.[1][2][3] A CUR approximation can be used in the same way as the low-rank approximation of the singular value decomposition (SVD). CUR approximations are less accurate than the SVD, but they offer two key advantages, both stemming from the fact that the rows and columns come from the original matrix (rather than left and right singular vectors): there are methods to compute a CUR with lower asymptotic time complexity than the SVD, and the factors are more interpretable, since the meanings of rows and columns in the decomposition are the same as their meanings in the original matrix.
Formally, a CUR matrix approximation of a matrix A is three matrices C, U, and R such that C is made from columns of A, R is made from rows of A, and that the product CUR closely approximates A. Usually the CUR is selected to be a rank-k approximation, which means that C contains k columns of A, R contains k rows of A, and U is a k-by-k matrix. There are many possible CUR matrix approximations, and many CUR matrix approximations for a given rank.
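The definition above can be sketched numerically. The following is a minimal illustration with numpy: the index sets, matrix sizes, and the particular choice U = C⁺AR⁺ (a common choice that minimizes the Frobenius error for fixed C and R) are assumptions for the example, not part of the definition itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# An illustrative 8x6 matrix to approximate.
A = rng.standard_normal((8, 6))

k = 3
cols = [0, 2, 5]   # k column indices of A (chosen arbitrarily here)
rows = [1, 3, 6]   # k row indices of A (chosen arbitrarily here)

C = A[:, cols]     # C: k columns taken directly from A
R = A[rows, :]     # R: k rows taken directly from A

# One common choice of the k-by-k middle matrix: U = C^+ A R^+,
# which minimizes ||A - C U R||_F over all U for this fixed C and R.
U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)

approx = C @ U @ R  # a rank-(at most k) approximation of A
err = np.linalg.norm(A - approx, 'fro')
```

Different choices of rows, columns, and U give different CUR approximations of the same rank, which is the non-uniqueness mentioned below.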
The CUR matrix approximation is often used in place of the low-rank approximation of the SVD in principal component analysis. The CUR is less accurate, but the columns of the matrix C are taken from A and the rows of R are taken from A. In PCA, each column of A contains a data sample; thus, the matrix C is made of a subset of data samples. This is much easier to interpret than the SVD's left singular vectors, which represent the data in a rotated space. Similarly, the matrix R is made of a subset of variables measured for each data sample. This is easier to comprehend than the SVD's right singular vectors, which are another rotations of the data in space.
Hamm and Huang[4] give the following theorem describing the basics of a CUR decomposition of a matrix L with rank r:

Theorem: Consider row and column index sets I, J \subseteq [n] with |I|, |J| \ge r. Denote the submatrices C = L_{:,J}, U = L_{I,J} and R = L_{I,:}. If rank(U) = rank(L), then L = C U^+ R, where (\cdot)^+ denotes the Moore–Penrose pseudoinverse.

In other words, if L has low rank, we can take a submatrix U = L_{I,J} of the same rank, together with some rows R and columns C of L, and use them to reconstruct L exactly.
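The exact-recovery statement of Hamm and Huang's theorem can be checked numerically. The sketch below, assuming numpy and illustrative sizes, builds a rank-r matrix, selects index sets with |I|, |J| ≥ r, and reconstructs the matrix from C, U, and R via the pseudoinverse.

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a rank-r matrix L (sizes are illustrative).
n, r = 6, 2
L = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Row/column index sets with |I|, |J| >= r.
I = [0, 2, 4]
J = [1, 3, 5]

C = L[:, J]           # selected columns
U = L[np.ix_(I, J)]   # intersection submatrix
R = L[I, :]           # selected rows

# The theorem requires rank(U) == rank(L); this holds for
# generic random matrices such as the one above.
assert np.linalg.matrix_rank(U) == np.linalg.matrix_rank(L)

# Exact recovery (up to floating point): L = C U^+ R.
L_rec = C @ np.linalg.pinv(U) @ R
```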
The CUR matrix approximation is not unique, and there are multiple algorithms for computing one. One is ALGORITHMCUR.
The "Linear Time CUR" algorithm[5] simply picks J by sampling columns randomly (with replacement) with probability proportional to the squared column norms \|A_{:,j}\|_2^2, and similarly picks I by sampling rows with probability proportional to the squared row norms \|A_{i,:}\|_2^2. The authors show that taking |J| \approx k/\varepsilon^4 and |I| \approx k/\varepsilon^2, where 0 \le \varepsilon, the algorithm achieves the Frobenius-norm error bound

\|A - CUR\|_F \le \|A - A_k\|_F + \varepsilon\|A\|_F,

where A_k is the best rank-k approximation of A.
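The column-sampling step of this scheme can be sketched as follows. This is a minimal illustration with numpy, not the full algorithm: the matrix size and the number of sampled columns are arbitrary, and the 1/sqrt(c·p_j) rescaling of sampled columns is the standard normalization used in this line of work so that the sample is unbiased.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 40))

c = 10  # number of sampled columns (the analysis takes roughly k/eps^4)

# Sampling probabilities proportional to squared column norms.
col_probs = np.sum(A**2, axis=0) / np.sum(A**2)

# Sample column indices with replacement according to these probabilities.
J = rng.choice(A.shape[1], size=c, replace=True, p=col_probs)

# Rescale each sampled column by 1/sqrt(c * p_j), the usual
# normalization that makes the sampled sketch unbiased.
C = A[:, J] / np.sqrt(c * col_probs[J])
```

Row sampling for I proceeds the same way on the squared row norms.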
Tensor-CURT decomposition[6] is a generalization of matrix-CUR decomposition. Formally, a CURT tensor approximation of a tensor A consists of three matrices C, R, T and a (core) tensor U such that C is made from columns of A, R is made from rows of A, T is made from tubes of A, and the product U(C,R,T), whose (i,j,l)-th entry is \sum_{i',j',l'} U_{i',j',l'} C_{i,i'} R_{j,j'} T_{l,l'}, closely approximates A.
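The triple-sum product U(C,R,T) is a mode-wise contraction, which can be computed in one call to einsum. The sketch below, with illustrative shapes and random factors, checks one entry of the contraction against the explicit sum from the definition.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative shapes: the approximated tensor would be n1 x n2 x n3;
# C, R, T act along the first, second, and third mode respectively.
n1, n2, n3 = 4, 5, 6
c1, c2, c3 = 2, 3, 2
C = rng.standard_normal((n1, c1))
R = rng.standard_normal((n2, c2))
T = rng.standard_normal((n3, c3))
U = rng.standard_normal((c1, c2, c3))  # core tensor

# The (i, j, l)-th entry of U(C, R, T) is
# sum_{i', j', l'} U[i', j', l'] * C[i, i'] * R[j, j'] * T[l, l'].
approx = np.einsum('abc,ia,jb,lc->ijl', U, C, R, T)

# Verify one entry against the explicit triple sum.
i, j, l = 1, 2, 3
entry = sum(U[a, b, c] * C[i, a] * R[j, b] * T[l, c]
            for a in range(c1) for b in range(c2) for c in range(c3))
```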