The Khatri–Rao product of two partitioned matrices A and B is defined as
\mathbf{A} \ast \mathbf{B} = \left( \mathbf{A}_{ij} \otimes \mathbf{B}_{ij} \right)_{ij},
in which the ij-th block is the m_i p_i \times n_j q_j sized Kronecker product of the corresponding blocks of A and B (where block \mathbf{A}_{ij} has size m_i \times n_j and block \mathbf{B}_{ij} has size p_i \times q_j), assuming the number of row and column partitions of both matrices is equal. The size of the product is then \left( \sum_i m_i p_i \right) \times \left( \sum_j n_j q_j \right).
For example, if A and B are both 2 × 2 partitioned matrices, e.g.:
\mathbf{A} = \left[ \begin{array}{c|c} \mathbf{A}_{11} & \mathbf{A}_{12} \\ \hline \mathbf{A}_{21} & \mathbf{A}_{22} \end{array} \right] = \left[ \begin{array}{cc|c} 1 & 2 & 3 \\ 4 & 5 & 6 \\ \hline 7 & 8 & 9 \end{array} \right], \quad \mathbf{B} = \left[ \begin{array}{c|c} \mathbf{B}_{11} & \mathbf{B}_{12} \\ \hline \mathbf{B}_{21} & \mathbf{B}_{22} \end{array} \right] = \left[ \begin{array}{c|cc} 1 & 4 & 7 \\ \hline 2 & 5 & 8 \\ 3 & 6 & 9 \end{array} \right],
we obtain:
\mathbf{A} \ast \mathbf{B} = \left[ \begin{array}{c|c} \mathbf{A}_{11} \otimes \mathbf{B}_{11} & \mathbf{A}_{12} \otimes \mathbf{B}_{12} \\ \hline \mathbf{A}_{21} \otimes \mathbf{B}_{21} & \mathbf{A}_{22} \otimes \mathbf{B}_{22} \end{array} \right] = \left[ \begin{array}{cc|cc} 1 & 2 & 12 & 21 \\ 4 & 5 & 24 & 42 \\ \hline 14 & 16 & 45 & 72 \\ 21 & 24 & 54 & 81 \end{array} \right].
This is a submatrix of the Tracy–Singh product[3] of the two matrices (each partition in this example is a partition in a corner of the Tracy–Singh product).
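The blockwise definition is straightforward to sketch in NumPy. The helper below (an illustrative function, not a library routine) forms the Kronecker product of corresponding blocks and assembles them into one matrix, reproducing the partitioned example above:

```python
import numpy as np

def khatri_rao_blocks(A_blocks, B_blocks):
    """Blockwise Khatri-Rao product: Kronecker product of corresponding
    blocks, assembled into one partitioned matrix."""
    rows = []
    for A_row, B_row in zip(A_blocks, B_blocks):
        rows.append([np.kron(Aij, Bij) for Aij, Bij in zip(A_row, B_row)])
    return np.block(rows)

# Partitions from the example above.
A11 = np.array([[1, 2], [4, 5]]); A12 = np.array([[3], [6]])
A21 = np.array([[7, 8]]);         A22 = np.array([[9]])
B11 = np.array([[1]]);            B12 = np.array([[4, 7]])
B21 = np.array([[2], [3]]);       B22 = np.array([[5, 8], [6, 9]])

P = khatri_rao_blocks([[A11, A12], [A21, A22]],
                      [[B11, B12], [B21, B22]])
print(P)  # matches the 4x4 product shown above
```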
The column-wise Kronecker product of two matrices is a special case of the Khatri–Rao product as defined above, and may also be called the Khatri–Rao product. This product assumes the partitions of the matrices are their columns. In this case m_1 = m, p_1 = p, n = q, and for each j: n_j = q_j = 1. The resulting product is an mp \times n matrix in which each column is the Kronecker product of the corresponding columns of A and B. Using the matrices from the previous examples with the columns partitioned:
\mathbf{C} = \left[ \begin{array}{c|c|c} \mathbf{C}_1 & \mathbf{C}_2 & \mathbf{C}_3 \end{array} \right] = \left[ \begin{array}{c|c|c} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{array} \right], \quad \mathbf{D} = \left[ \begin{array}{c|c|c} \mathbf{D}_1 & \mathbf{D}_2 & \mathbf{D}_3 \end{array} \right] = \left[ \begin{array}{c|c|c} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 \end{array} \right],
so that:
\mathbf{C} \ast \mathbf{D} = \left[ \begin{array}{c|c|c} \mathbf{C}_1 \otimes \mathbf{D}_1 & \mathbf{C}_2 \otimes \mathbf{D}_2 & \mathbf{C}_3 \otimes \mathbf{D}_3 \end{array} \right] = \left[ \begin{array}{c|c|c} 1 & 8 & 21 \\ 2 & 10 & 24 \\ 3 & 12 & 27 \\ 4 & 20 & 42 \\ 8 & 25 & 48 \\ 12 & 30 & 54 \\ 7 & 32 & 63 \\ 14 & 40 & 72 \\ 21 & 48 & 81 \end{array} \right].
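The column-wise product can be computed with a single broadcasting step in NumPy (SciPy also provides `scipy.linalg.khatri_rao` for this case); the function name below is illustrative:

```python
import numpy as np

def khatri_rao_columns(C, D):
    """Column-wise Khatri-Rao product: column j of the result is the
    Kronecker product of column j of C with column j of D."""
    assert C.shape[1] == D.shape[1], "both factors need the same number of columns"
    # Broadcasting computes all column-wise Kronecker products at once;
    # the result has shape (C.rows * D.rows, columns).
    return (C[:, None, :] * D[None, :, :]).reshape(-1, C.shape[1])

C = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
D = np.array([[1, 4, 7], [2, 5, 8], [3, 6, 9]])
print(khatri_rao_columns(C, D))  # matches the 9x3 product shown above
```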
This column-wise version of the Khatri–Rao product is useful in linear-algebra approaches to data analytical processing[4] and in optimizing the solution of inverse problems involving a diagonal matrix.[5][6]
In 1996 the column-wise Khatri–Rao product was proposed to estimate the angles of arrival (AOAs) and delays of multipath signals[7] and the four coordinates of signal sources[8] at a digital antenna array.
An alternative concept of the matrix product, which uses row-wise splitting of matrices with a given quantity of rows, was proposed by V. Slyusar[9] in 1996.[10][11][12][13] This matrix operation was named the "face-splitting product" of matrices, or the "transposed Khatri–Rao product". It is based on row-by-row Kronecker products of two matrices. Using the matrices from the previous examples with the rows partitioned:
\mathbf{C} = \begin{bmatrix} \mathbf{C}_1 \\ \hline \mathbf{C}_2 \\ \hline \mathbf{C}_3 \end{bmatrix} = \begin{bmatrix} 1 & 2 & 3 \\ \hline 4 & 5 & 6 \\ \hline 7 & 8 & 9 \end{bmatrix}, \quad \mathbf{D} = \begin{bmatrix} \mathbf{D}_1 \\ \hline \mathbf{D}_2 \\ \hline \mathbf{D}_3 \end{bmatrix} = \begin{bmatrix} 1 & 4 & 7 \\ \hline 2 & 5 & 8 \\ \hline 3 & 6 & 9 \end{bmatrix},
the result can be obtained:
\mathbf{C} \bullet \mathbf{D} = \begin{bmatrix} \mathbf{C}_1 \otimes \mathbf{D}_1 \\ \hline \mathbf{C}_2 \otimes \mathbf{D}_2 \\ \hline \mathbf{C}_3 \otimes \mathbf{D}_3 \end{bmatrix} = \begin{bmatrix} 1 & 4 & 7 & 2 & 8 & 14 & 3 & 12 & 21 \\ \hline 8 & 20 & 32 & 10 & 25 & 40 & 12 & 30 & 48 \\ \hline 21 & 42 & 63 & 24 & 48 & 72 & 27 & 54 & 81 \end{bmatrix}.
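The face-splitting product is the same broadcasting idea applied along rows; a minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def face_splitting(C, D):
    """Face-splitting (transposed Khatri-Rao) product: row i of the
    result is the Kronecker product of row i of C with row i of D."""
    assert C.shape[0] == D.shape[0], "both factors need the same number of rows"
    # Broadcasting computes all row-wise Kronecker products at once;
    # the result has shape (rows, C.cols * D.cols).
    return (C[:, :, None] * D[:, None, :]).reshape(C.shape[0], -1)

C = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
D = np.array([[1, 4, 7], [2, 5, 8], [3, 6, 9]])
print(face_splitting(C, D))  # matches the 3x9 product shown above
```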
A key property of the face-splitting product is the mixed-product relation \left( \mathbf{A} \bullet \mathbf{B} \right) \left( \mathbf{c} \otimes \mathbf{d} \right) = \left( \mathbf{A}\mathbf{c} \right) \circ \left( \mathbf{B}\mathbf{d} \right), where \circ denotes the Hadamard (element-wise) product. For example:
\begin{align} &\left(\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \end{bmatrix} \bullet \begin{bmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} \right) \left(\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \otimes \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \right) \left(\begin{bmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{bmatrix} \otimes \begin{bmatrix} \rho_1 & 0 \\ 0 & \rho_2 \end{bmatrix} \right) \left(\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \ast \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \right) \\[5pt] {}={}&\left(\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \end{bmatrix} \bullet \begin{bmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} \right) \left(\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \otimes \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} \rho_1 & 0 \\ 0 & \rho_2 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \right) \\[5pt] {}={}& \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \circ \begin{bmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} \rho_1 & 0 \\ 0 & \rho_2 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}. \end{align}
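The first step of this chain rests on the identity (A • B)(c ⊗ d) = (Ac) ∘ (Bd), which can be checked numerically with random matrices (the `face_splitting` helper is an illustrative NumPy implementation):

```python
import numpy as np

def face_splitting(A, B):
    """Row-wise Kronecker (face-splitting) product via broadcasting."""
    return (A[:, :, None] * B[:, None, :]).reshape(A.shape[0], -1)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
B = rng.standard_normal((3, 2))
x = rng.standard_normal(2)
y = rng.standard_normal(2)

# (A . B)(x (x) y) should equal the Hadamard product (Ax) o (By).
lhs = face_splitting(A, B) @ np.kron(x, y)
rhs = (A @ x) * (B @ y)
assert np.allclose(lhs, rhs)
```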
If
M = T^{(1)} \bullet \dots \bullet T^{(c)},
where T^{(1)}, \dots, T^{(c)} are independent matrices whose rows T_1, \dots, T_m \in \mathbb{R}^d are independent and identically distributed, satisfying
E\left[ \left( T_1 x \right)^2 \right] = \left\| x \right\|_2^2
and
E\left[ \left( T_1 x \right)^p \right]^{1/p} \le \sqrt{ap}\, \left\| x \right\|_2,
then for any vector x,
\left| \left\| Mx \right\|_2 - \left\| x \right\|_2 \right| < \varepsilon \left\| x \right\|_2
with probability 1 - \delta if the quantity of rows
m = (4a)^{2c} \varepsilon^{-2} \log(1/\delta) + (2ae) \varepsilon^{-1} \left( \log(1/\delta) \right)^c.
In particular, if the entries of T are \pm 1, then
m = O\left( \varepsilon^{-2} \log(1/\delta) + \varepsilon^{-1} \left( \tfrac{1}{c} \log(1/\delta) \right)^c \right),
which matches the Johnson–Lindenstrauss lemma bound of
m = O\left( \varepsilon^{-2} \log(1/\delta) \right)
when \varepsilon is small.
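The norm-preservation claim can be illustrated empirically: for c = 2 independent ±1 matrices, the face-splitting product normalized by 1/√m approximately preserves the norm of any vector in R^{d^c}. The sketch below uses illustrative sizes and a NumPy implementation of the row-wise product:

```python
import numpy as np

def face_splitting(A, B):
    """Row-wise Kronecker (face-splitting) product via broadcasting."""
    return (A[:, :, None] * B[:, None, :]).reshape(A.shape[0], -1)

rng = np.random.default_rng(42)
d, m = 10, 4000  # illustrative dimensions; larger m tightens the estimate

# Two independent m x d matrices with +-1 entries; dividing by sqrt(m)
# makes E[||M z||^2] = ||z||^2 for every fixed z.
T1 = rng.choice([-1.0, 1.0], size=(m, d))
T2 = rng.choice([-1.0, 1.0], size=(m, d))
M = face_splitting(T1, T2) / np.sqrt(m)

z = rng.standard_normal(d * d)  # arbitrary vector in R^{d^c}, here c = 2
rel_err = abs(np.linalg.norm(M @ z) - np.linalg.norm(z)) / np.linalg.norm(z)
print(rel_err)  # small with high probability
```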
According to the definition of V. Slyusar, the block face-splitting product of two partitioned matrices with a given quantity of rows in blocks,
\mathbf{A} = \left[ \begin{array}{c|c} \mathbf{A}_{11} & \mathbf{A}_{12} \\ \hline \mathbf{A}_{21} & \mathbf{A}_{22} \end{array} \right], \quad \mathbf{B} = \left[ \begin{array}{c|c} \mathbf{B}_{11} & \mathbf{B}_{12} \\ \hline \mathbf{B}_{21} & \mathbf{B}_{22} \end{array} \right],
can be written as:
\mathbf{A} [\bullet] \mathbf{B} = \left[ \begin{array}{c|c} \mathbf{A}_{11} \bullet \mathbf{B}_{11} & \mathbf{A}_{12} \bullet \mathbf{B}_{12} \\ \hline \mathbf{A}_{21} \bullet \mathbf{B}_{21} & \mathbf{A}_{22} \bullet \mathbf{B}_{22} \end{array} \right].
The transposed block face-splitting product (or the block column-wise version of the Khatri–Rao product) of two partitioned matrices with a given quantity of columns in blocks takes the form:
\mathbf{A} [\ast] \mathbf{B} = \left[ \begin{array}{c|c} \mathbf{A}_{11} \ast \mathbf{B}_{11} & \mathbf{A}_{12} \ast \mathbf{B}_{12} \\ \hline \mathbf{A}_{21} \ast \mathbf{B}_{21} & \mathbf{A}_{22} \ast \mathbf{B}_{22} \end{array} \right].
The two block products are related by transposition:
\left( \mathbf{A} [\ast] \mathbf{B} \right)^{\mathsf{T}} = \mathbf{A}^{\mathsf{T}} [\bullet] \mathbf{B}^{\mathsf{T}}.
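This transpose relation already holds for the unpartitioned products: the transpose of the column-wise Khatri–Rao product equals the face-splitting product of the transposes. A short NumPy check (both helpers are illustrative broadcasting implementations, not library routines):

```python
import numpy as np

def khatri_rao_columns(A, B):
    """Column-wise Khatri-Rao product via broadcasting."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def face_splitting(A, B):
    """Row-wise Kronecker (face-splitting) product via broadcasting."""
    return (A[:, :, None] * B[:, None, :]).reshape(A.shape[0], -1)

rng = np.random.default_rng(1)
A = rng.integers(0, 5, (3, 4))
B = rng.integers(0, 5, (2, 4))

# (A * B)^T = A^T [row-wise product] B^T, checked entrywise.
assert np.array_equal(khatri_rao_columns(A, B).T,
                      face_splitting(A.T, B.T))
```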
The face-splitting product and the block face-splitting product are used in the tensor-matrix theory of digital antenna arrays. These operations are also used in: