Commutation matrix explained

In mathematics, especially in linear algebra and matrix theory, the commutation matrix is used for transforming the vectorized form of a matrix into the vectorized form of its transpose. Specifically, the commutation matrix K(m,n) is the nm × mn matrix which, for any m × n matrix A, transforms vec(A) into vec(A^T):

K(m,n) vec(A) = vec(A^T). Here vec(A) is the mn × 1 column vector obtained by stacking the columns of A on top of one another:

\operatorname{vec}(A) = [A_{1,1}, \ldots, A_{m,1}, A_{1,2}, \ldots, A_{m,2}, \ldots, A_{1,n}, \ldots, A_{m,n}]^{\mathrm{T}}

where A = [A_{i,j}]. In other words, vec(A) is the vector obtained by vectorizing A in column-major order. Similarly, vec(A^T) is the vector obtained by vectorizing A in row-major order.
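As an illustration (a short NumPy sketch; the array below is only for this example), column-major vectorization corresponds to flattening in Fortran order, while vec(A^T) equals the row-major (C-order) flattening of A:

import numpy as np

A = np.array([[1, 4],
              [2, 5],
              [3, 6]])           # a 3 x 2 matrix

vec_A  = A.flatten(order="F")    # column-major: vec(A)   -> [1 2 3 4 5 6]
vec_At = A.flatten(order="C")    # row-major:    vec(A^T) -> [1 4 2 5 3 6]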

In the context of quantum information theory, the commutation matrix is sometimes referred to as the swap matrix or swap operator.[1]

Properties

The commutation matrix is a special type of permutation matrix, and is therefore orthogonal. In particular, K(m,n) is equal to the permutation matrix P_\pi, where \pi is the permutation over \{1, \ldots, mn\} for which

\pi(i + m(j-1)) = j + n(i-1), \qquad i = 1, \ldots, m, \; j = 1, \ldots, n.
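A quick numerical check of this description (a NumPy sketch; comm_mat is the same construction given in the Code section below):

import numpy as np

def comm_mat(m, n):
    # commutation matrix built by permuting the rows of the identity (see the Code section below)
    perm = np.arange(m * n).reshape((m, n), order="F").T.ravel(order="F")
    return np.eye(m * n)[perm, :]

m, n = 3, 4
K = comm_mat(m, n)
x = np.random.rand(m * n)
y = K @ x

# exactly one 1 in every row and every column, so K is a permutation matrix
assert (K.sum(axis=0) == 1).all() and (K.sum(axis=1) == 1).all()

# entry i + m(j-1) of x is moved to position j + n(i-1) of K x (1-based indices)
for i in range(1, m + 1):
    for j in range(1, n + 1):
        assert y[j + n * (i - 1) - 1] == x[i + m * (j - 1) - 1]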

The main use of the commutation matrix, and the source of its name, is to commute the Kronecker product: for every m × n matrix A and every r × q matrix B,

K(r,m)(A ⊗ B) K(n,q) = B ⊗ A.

This property is often used in developing the higher order statistics of Wishart covariance matrices.[2]

K(r,m)(v ⊗ w) = w ⊗ v for any column vectors v and w of dimensions m and r, respectively.

This property is the reason that this matrix is referred to as the "swap operator" in the context of quantum information theory.
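Both the general identity and its vector special case can be checked numerically; the following NumPy sketch assumes the comm_mat construction from the Code section below and random test inputs:

import numpy as np

def comm_mat(m, n):
    # commutation matrix built by permuting the rows of the identity
    perm = np.arange(m * n).reshape((m, n), order="F").T.ravel(order="F")
    return np.eye(m * n)[perm, :]

m, n, r, q = 2, 3, 4, 5
A = np.random.rand(m, n)
B = np.random.rand(r, q)

# K(r,m) (A kron B) K(n,q) == B kron A
assert np.allclose(comm_mat(r, m) @ np.kron(A, B) @ comm_mat(n, q), np.kron(B, A))

# special case n = q = 1: K(r,m) (v kron w) == w kron v
v = np.random.rand(m)    # dimension m
w = np.random.rand(r)    # dimension r
assert np.allclose(comm_mat(r, m) @ np.kron(v, w), np.kron(w, v))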

An explicit form of the commutation matrix is as follows: if e_{r,i} denotes the i-th canonical basis vector of dimension r (i.e. the vector with 1 in coordinate i and zeros elsewhere), then

K(r,m) = \sum_{i=1}^{r} \sum_{j=1}^{m} \left(\mathbf{e}_{r,i} \mathbf{e}_{m,j}^{\mathrm{T}}\right) \otimes \left(\mathbf{e}_{m,j} \mathbf{e}_{r,i}^{\mathrm{T}}\right) = \sum_{i=1}^{r} \sum_{j=1}^{m} \left(\mathbf{e}_{r,i} \otimes \mathbf{e}_{m,j}\right) \left(\mathbf{e}_{m,j} \otimes \mathbf{e}_{r,i}\right)^{\mathrm{T}}.
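The sum formula can likewise be verified numerically; in this NumPy sketch the basis vectors e_{r,i} are taken as the columns of an identity matrix (names chosen for illustration only):

import numpy as np

def comm_mat(m, n):
    # commutation matrix built by permuting the rows of the identity
    perm = np.arange(m * n).reshape((m, n), order="F").T.ravel(order="F")
    return np.eye(m * n)[perm, :]

r, m = 3, 2
e_r = np.eye(r)    # columns are the canonical basis vectors of dimension r
e_m = np.eye(m)    # columns are the canonical basis vectors of dimension m

# K(r,m) = sum_{i,j} (e_{r,i} e_{m,j}^T) kron (e_{m,j} e_{r,i}^T)
K = sum(np.kron(np.outer(e_r[:, i], e_m[:, j]), np.outer(e_m[:, j], e_r[:, i]))
        for i in range(r) for j in range(m))

assert np.allclose(K, comm_mat(r, m))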

The commutation matrix may also be expressed as the following block matrix:

K(m,n)=\begin{bmatrix} K_{1,1} & \cdots & K_{1,n}\\ \vdots & \ddots & \vdots\\ K_{m,1} & \cdots & K_{m,n} \end{bmatrix},

where the (p, q) entry of the n × m block matrix K_{i,j} is given by

K_{i,j}(p,q)=\begin{cases} 1 & i=q \text{ and } j=p,\\ 0 & \text{otherwise}. \end{cases}

For example,

K(3,4)=\left[\begin{array}{ccc|ccc|ccc|ccc} 1&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&1&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&1&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&1&0&0\\ \hline 0&1&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&1&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&1&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&1&0\\ \hline 0&0&1&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&1&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&1&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&1\end{array}\right].

Code

For both square and rectangular matrices of m rows and n columns, the commutation matrix can be generated by the code below.

Python

import numpy as np

def comm_mat(m, n):
    # determine permutation applied by K
    w = np.arange(m * n).reshape((m, n), order="F").T.ravel(order="F")

    # apply this permutation to the rows (i.e. to each column) of the identity matrix and return the result
    return np.eye(m * n)[w, :]
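For instance (a usage sketch continuing the snippet above, with an arbitrary random input):

m, n = 3, 2
A = np.random.rand(m, n)
K = comm_mat(m, n)

# K sends the column-major vectorization of A to that of its transpose
print(np.allclose(K @ A.flatten(order="F"), A.T.flatten(order="F")))  # True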

Alternatively, a version without imports:

# Kronecker delta
def delta(i, j):
    return int(i == j)

def comm_mat(m, n):
    # determine permutation applied by K
    v = [m * j + i for i in range(m) for j in range(n)]

    # apply this permutation to the rows (i.e. to each column) of the identity matrix
    I = [[delta(i, j) for j in range(m * n)] for i in range(m * n)]
    return [I[i] for i in v]

MATLAB

function P = com_mat(m, n)

% determine permutation applied by K
A = reshape(1:m*n, m, n);
v = reshape(A', 1, []);

% apply this permutation to the rows (i.e. to each column) of the identity matrix
P = eye(m*n);
P = P(v,:);
end

R

# Sparse matrix version (one possible implementation, using the Matrix package)
comm_mat = function(m, n) {
  # determine the permutation applied by K: row k of K has its single 1 in column p[k]
  p <- as.vector(t(matrix(1:(m * n), m, n)))

  # build K directly as a sparse permutation matrix
  Matrix::sparseMatrix(i = 1:(m * n), j = p, x = 1)
}

Example

Let A denote the following 3 × 2 matrix:

A=\begin{bmatrix}1&4\\ 2&5\\ 3&6\end{bmatrix}.

A has the following column-major and row-major vectorizations (respectively):

\mathbf{v}_{\text{col}}=\operatorname{vec}(A)=\begin{bmatrix}1\\ 2\\ 3\\ 4\\ 5\\ 6\end{bmatrix}, \qquad \mathbf{v}_{\text{row}}=\operatorname{vec}(A^{\mathrm{T}})=\begin{bmatrix}1\\ 4\\ 2\\ 5\\ 3\\ 6\end{bmatrix}.

The associated commutation matrix is

K=K(3,2)=\begin{bmatrix} 1&&&&&\\ &&&1&&\\ &1&&&&\\ &&&&1&\\ &&1&&&\\ &&&&&1\\ \end{bmatrix},

(where each blank entry denotes a zero). As expected, the following holds:

K^{\mathrm{T}}K = KK^{\mathrm{T}} = I_6,

K\mathbf{v}_{\text{col}} = \mathbf{v}_{\text{row}}.
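These identities can be verified numerically; the following NumPy sketch rebuilds K(3,2) with the same permutation construction as the Code section above:

import numpy as np

def comm_mat(m, n):
    # commutation matrix built by permuting the rows of the identity
    perm = np.arange(m * n).reshape((m, n), order="F").T.ravel(order="F")
    return np.eye(m * n)[perm, :]

A = np.array([[1, 4],
              [2, 5],
              [3, 6]])
K = comm_mat(3, 2)

v_col = A.flatten(order="F")      # [1 2 3 4 5 6]
v_row = A.T.flatten(order="F")    # [1 4 2 5 3 6]

print(np.array_equal(K.T @ K, np.eye(6)))   # True: K is orthogonal
print(np.array_equal(K @ v_col, v_row))     # True: K vec(A) = vec(A^T)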

References

Notes and References

  1. Watrous, John (2018). The Theory of Quantum Information. Cambridge University Press. p. 94.
  2. von Rosen, Dietrich (1988). "Moments for the Inverted Wishart Distribution". Scand. J. Stat. 15: 97–109.