In mathematics, a logarithm of a matrix is another matrix such that the matrix exponential of the latter matrix equals the original matrix. It is thus a generalization of the scalar logarithm and in some sense an inverse function of the matrix exponential. Not all matrices have a logarithm, and those matrices that do have a logarithm may have more than one. The study of logarithms of matrices leads to Lie theory, since when a matrix has a logarithm it is an element of a Lie group and the logarithm is the corresponding element of the vector space of the Lie algebra.
The exponential of a matrix A is defined by
e^{A}\equiv\sum_{n=0}^{\infty}\frac{A^{n}}{n!}~.
Because the exponential function is not bijective for complex numbers (e.g. e^{\pi i}=e^{3\pi i}=-1), complex numbers can have multiple logarithms, and similarly some matrices may have more than one logarithm. A matrix B has a logarithm, written \log B, if

e^{\log B}=B.
If B is sufficiently close to the identity matrix, then a logarithm of B may be computed by means of the power series
\log(B)=\log(I-(I-B))= -\sum_{k=1}^{\infty}\frac{(I-B)^{k}}{k}~.

Specifically, if \left\|I-B\right\|<1, then this series converges and e^{\log(B)}=B.
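To make this concrete, here is a minimal Python sketch of the series (the helper name logm_series and the truncation length are our own choices, not a library API); it is only valid under the stated condition \|I-B\|<1:

```python
import numpy as np
from scipy.linalg import expm

def logm_series(B, n_terms=50):
    """Approximate log(B) by -sum_{k>=1} (I-B)^k / k.

    Valid only when ||I - B|| < 1; a truncated-series sketch,
    not a production routine.
    """
    n = B.shape[0]
    X = np.eye(n) - B              # X = I - B, must satisfy ||X|| < 1
    term = np.eye(n)               # running power X^k (starts at X^0)
    result = np.zeros_like(B, dtype=float)
    for k in range(1, n_terms + 1):
        term = term @ X            # now X^k
        result -= term / k
    return result

# A matrix close to the identity, so the series converges.
B = np.array([[1.10, 0.05],
              [0.02, 0.90]])
L = logm_series(B)
print(np.allclose(expm(L), B))     # True: e^{log(B)} = B up to truncation error
```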
The rotations in the plane give a simple example. A rotation of angle α around the origin is represented by the 2×2-matrix
A= \begin{pmatrix} \cos(\alpha)&-\sin(\alpha)\\ \sin(\alpha)&\cos(\alpha)\\ \end{pmatrix}.
For any integer n, the matrix
B_n=(\alpha+2\pi n) \begin{pmatrix} 0&-1\\ 1&0\\ \end{pmatrix}

is such that e^{B_n}=A; that is, \log(A)=B_n. Indeed,

e^{B_n}=\sum_{k=0}^{\infty}\frac{1}{k!}B_n^{k}~,

and the powers of B_n repeat with period four:

(B_n)^{0}=I_2,
(B_n)^{1}=(\alpha+2\pi n)\begin{pmatrix} 0&-1\\ +1&0\\ \end{pmatrix},
(B_n)^{2}=(\alpha+2\pi n)^{2}\begin{pmatrix} -1&0\\ 0&-1\\ \end{pmatrix},
(B_n)^{3}=(\alpha+2\pi n)^{3}\begin{pmatrix} 0&+1\\ -1&0\\ \end{pmatrix},
(B_n)^{4}=(\alpha+2\pi n)^{4}~I_2,
...

Grouping even and odd powers, the even terms sum to \cos(\alpha+2\pi n)\,I_2 and the odd terms to \sin(\alpha+2\pi n) times the matrix \begin{pmatrix}0&-1\\ 1&0\\ \end{pmatrix}; since sine and cosine have period 2\pi,

e^{B_n}=\sum_{k=0}^{\infty}\frac{1}{k!}B_n^{k} =\begin{pmatrix} \cos(\alpha)&-\sin(\alpha)\\ \sin(\alpha)&\cos(\alpha)\\ \end{pmatrix}=A~.
q.e.d.
Thus, the matrix A has infinitely many logarithms. This corresponds to the fact that the rotation angle is only determined up to multiples of 2π.
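This is easy to confirm numerically; the following sketch (using scipy.linalg.expm, with an arbitrary angle) checks that several distinct matrices B_n all exponentiate to the same rotation:

```python
import numpy as np
from scipy.linalg import expm

alpha = 0.7                                   # arbitrary rotation angle
A = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])                   # generator of so(2)

# Every B_n = (alpha + 2*pi*n) J is a logarithm of the same rotation A.
for n in (-1, 0, 1, 2):
    B_n = (alpha + 2 * np.pi * n) * J
    print(n, np.allclose(expm(B_n), A))       # True for every n
```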
In the language of Lie theory, the rotation matrices A are elements of the Lie group SO(2). The corresponding logarithms B are elements of the Lie algebra so(2), which consists of all skew-symmetric matrices. The matrix
\begin{pmatrix} 0&1\\ -1&0\\ \end{pmatrix}
is a generator of the Lie algebra so(2).
The question of whether a matrix has a logarithm has the easiest answer when considered in the complex setting. A complex matrix has a logarithm if and only if it is invertible.[2] The logarithm is not unique, but if a matrix has no negative real eigenvalues, then there is a unique logarithm that has eigenvalues all lying in the strip
\{z\in\mathbb{C} \mid -\pi<\operatorname{Im} z<\pi\}.
The answer is more involved in the real setting. A real matrix has a real logarithm if and only if it is invertible and each Jordan block belonging to a negative eigenvalue occurs an even number of times. If an invertible real matrix does not satisfy the condition with the Jordan blocks, then it has only non-real logarithms. This can already be seen in the scalar case: no branch of the logarithm can be real at -1. The existence of real matrix logarithms of real 2×2 matrices is considered in a later section.
If A and B are both positive-definite matrices, then
\operatorname{tr}\log(AB)=\operatorname{tr}\log(A)+\operatorname{tr}\log(B).
Suppose that A and B commute, meaning that AB = BA. Then
\log(AB)=\log(A)+\log(B)

if and only if \operatorname{arg}(\mu_j)+\operatorname{arg}(\nu_j)\in(-\pi,\pi], where the \mu_j are the eigenvalues of A and the \nu_j are the corresponding eigenvalues of B. In particular, the identity holds whenever A and B commute and are both positive-definite; taking B=A^{-1} then gives

\log(A^{-1})=-\log(A).
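Both identities are easy to check numerically. The sketch below uses scipy.linalg.logm on arbitrary positive-definite test matrices (the construction M Mᵀ + 3I is just a convenient way to produce positive-definite examples):

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3)); A = M @ M.T + 3 * np.eye(3)
N = rng.standard_normal((3, 3)); B = N @ N.T + 3 * np.eye(3)

# tr log(AB) = tr log(A) + tr log(B), even though A and B need not commute.
lhs = np.trace(logm(A @ B))
rhs = np.trace(logm(A)) + np.trace(logm(B))
print(np.isclose(lhs, rhs))                                   # True

# For commuting positive-definite matrices (here C is a polynomial in A,
# so it commutes with A), the stronger identity holds, as does
# log(A^{-1}) = -log(A).
C = A @ A + np.eye(3)
print(np.allclose(logm(A @ C), logm(A) + logm(C)))            # True
print(np.allclose(logm(np.linalg.inv(A)), -logm(A)))          # True
```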
Similarly, for non-commuting A and B,

\log(A+tB)=\log(A)+ t\int_0^{\infty} dz~ \frac{I}{A+zI}~B~\frac{I}{A+zI} +O(t^2).

More generally, the expansion of \log(A+tB) in powers of t can be obtained using the integral representation

\log(X+\lambda I)-\log(X)= \int_0^{\lambda} dz~ \frac{I}{X+zI}~,

applied to both X=A and X=A+tB in the limit \lambda\to\infty.
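The integral representation itself is straightforward to verify numerically; a sketch using scipy's quad_vec (the test matrix X and the value of λ are arbitrary choices):

```python
import numpy as np
from scipy.linalg import logm
from scipy.integrate import quad_vec

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
X = M @ M.T + 3 * np.eye(3)     # positive-definite, so logm(X) is real
lam = 2.5
I = np.eye(3)

# Right-hand side: integrate the resolvent I/(X + zI) from 0 to lambda.
rhs, err = quad_vec(lambda z: np.linalg.inv(X + z * I), 0.0, lam)

# Left-hand side: log(X + lambda*I) - log(X).
lhs = logm(X + lam * I) - logm(X)
print(np.allclose(lhs, rhs))    # True
```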
A rotation R ∈ SO(3) in ℝ³ is given by a 3×3 orthogonal matrix.
The logarithm of such a rotation matrix R can be readily computed from the antisymmetric part of Rodrigues' rotation formula (made explicit in the axis–angle representation). It yields the logarithm of minimal Frobenius norm, but fails when R has eigenvalues equal to −1, where this is not unique.
Further note that, given rotation matrices A and B,
d_g(A,B):=\|\log(A^{\mathsf T}B)\|_F

is the geodesic distance on the 3D manifold of rotation matrices.
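A sketch of both computations in Python (the helper names so3_log and geodesic_distance are our own; the formula assumes the rotation angle is strictly less than π, i.e. no eigenvalue equals −1):

```python
import numpy as np

def so3_log(R):
    """Logarithm of R in SO(3) from the antisymmetric part of
    Rodrigues' formula; assumes the rotation angle is < pi."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros((3, 3))               # identity rotation
    return theta / (2.0 * np.sin(theta)) * (R - R.T)

def geodesic_distance(A, B):
    """d_g(A, B) = ||log(A^T B)||_F."""
    return np.linalg.norm(so3_log(A.T @ B), ord='fro')

# Example: rotation by 0.9 rad about the z-axis.
c, s = np.cos(0.9), np.sin(0.9)
R = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])
print(so3_log(R))                             # 0.9 times the z-axis generator
print(geodesic_distance(np.eye(3), R))        # 0.9 * sqrt(2)
```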
A method for finding log A for a diagonalizable matrix A is the following:
Find the matrix V of eigenvectors of A (each column of V is an eigenvector of A).
Find the inverse V^{-1} of V.
Let

A'=V^{-1}AV.

Then A' will be a diagonal matrix whose diagonal elements are eigenvalues of A.
Replace each diagonal element of A' by its (natural) logarithm in order to obtain \log A'.
Then

\log A=V(\log A')V^{-1}.
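A minimal Python sketch of this procedure (the helper name logm_diagonalizable is ours; numpy.linalg.eig performs the diagonalization, and the result is complex in general):

```python
import numpy as np

def logm_diagonalizable(A):
    """log(A) for a diagonalizable A via A = V A' V^{-1}."""
    w, V = np.linalg.eig(A)                          # eigenvalues, eigenvectors
    log_Aprime = np.diag(np.log(w.astype(complex)))  # log of each eigenvalue
    return V @ log_Aprime @ np.linalg.inv(V)

# Example: a rotation by 1 radian. Its eigenvalues are complex,
# but the logarithm comes out (numerically) real.
alpha = 1.0
A = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])
L = logm_diagonalizable(A)
print(np.round(L.real, 6))                    # approximately [[0, -1], [1, 0]]
```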
That the logarithm of A might be a complex matrix even if A is real then follows from the fact that a matrix with real and positive entries might nevertheless have negative or even complex eigenvalues (this is true for example for rotation matrices). The non-uniqueness of the logarithm of a matrix follows from the non-uniqueness of the logarithm of a complex number.
The algorithm illustrated above does not work for non-diagonalizable matrices, such as
\begin{bmatrix}1&1\\ 0&1\end{bmatrix}.
For such matrices one needs to find its Jordan decomposition and, rather than computing the logarithm of diagonal entries as above, one would calculate the logarithm of the Jordan blocks.
The latter is accomplished by noticing that one can write a Jordan block as
B=\begin{pmatrix} \lambda&1&0&0&\cdots&0\\ 0&\lambda&1&0&\cdots&0\\ 0&0&\lambda&1&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\ddots&\vdots\\ 0&0&0&0&\lambda&1\\ 0&0&0&0&0&\lambda\\ \end{pmatrix} = \lambda\begin{pmatrix} 1&\lambda^{-1}&0&0&\cdots&0\\ 0&1&\lambda^{-1}&0&\cdots&0\\ 0&0&1&\lambda^{-1}&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\ddots&\vdots\\ 0&0&0&0&1&\lambda^{-1}\\ 0&0&0&0&0&1\\ \end{pmatrix}=\lambda(I+K),

where K is nilpotent, with entries \lambda^{-1} on the superdiagonal and zeros elsewhere. (Note that \lambda\neq 0, since the matrix whose logarithm is taken is assumed invertible.)
Then, by the Mercator series
\log(1+x)=x-\frac{x^{2}}{2}+\frac{x^{3}}{3}-\frac{x^{4}}{4}+\cdots
\log B=\log\bigl(\lambda(I+K)\bigr)=\log(\lambda I)+\log(I+K) =(\log\lambda)I+K-\frac{K^{2}}{2}+\frac{K^{3}}{3}-\frac{K^{4}}{4}+\cdots
This series has a finite number of terms (Km is zero if m is equal to or greater than the dimension of K), and so its sum is well-defined.
Example. Using this approach, one finds
\log\begin{bmatrix}1&1\\ 0&1\end{bmatrix} =\begin{bmatrix}0&1\\ 0&0\end{bmatrix}.
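A short Python sketch of the Jordan-block computation (the helper name logm_jordan_block is ours; the series terminates because K is nilpotent):

```python
import numpy as np

def logm_jordan_block(lam, m):
    """log of the m x m Jordan block with eigenvalue lam != 0,
    via (log lam) I + K - K^2/2 + K^3/3 - ..., where K has
    entries 1/lam on the superdiagonal; K^m = 0."""
    K = np.eye(m, k=1) / lam
    result = np.log(complex(lam)) * np.eye(m, dtype=complex)
    term = np.eye(m, dtype=complex)
    for k in range(1, m):                 # at most m-1 nonzero terms
        term = term @ K
        result += ((-1) ** (k + 1) / k) * term
    return result

print(logm_jordan_block(1.0, 2).real)     # [[0, 1], [0, 0]], as in the example
```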
A square matrix represents a linear operator on the Euclidean space Rn where n is the dimension of the matrix. Since such a space is finite-dimensional, this operator is actually bounded.
Using the tools of holomorphic functional calculus, given a holomorphic function f defined on an open set in the complex plane and a bounded linear operator T, one can calculate f(T) as long as f is defined on the spectrum of T.
The function f(z) = log z can be defined on any simply connected open set in the complex plane not containing the origin, and it is holomorphic on such a domain. This implies that one can define ln T as long as the spectrum of T does not contain the origin and there is a path going from the origin to infinity not crossing the spectrum of T (e.g., if the spectrum of T is a circle with the origin inside of it, it is impossible to define ln T).
The spectrum of a linear operator on Rn is the set of eigenvalues of its matrix, and so is a finite set. As long as the origin is not in the spectrum (the matrix is invertible), the path condition from the previous paragraph is satisfied, and ln T is well-defined. The non-uniqueness of the matrix logarithm follows from the fact that one can choose more than one branch of the logarithm which is defined on the set of eigenvalues of a matrix.
In the theory of Lie groups, there is an exponential map from a Lie algebra \mathfrak{g} to the corresponding Lie group G,

\exp: \mathfrak{g} \to G.

For matrix Lie groups, the elements of \mathfrak{g} and G are square matrices and the exponential map is given by the matrix exponential. The inverse map \log=\exp^{-1} is multivalued and coincides with the matrix logarithm discussed here. It maps from a neighborhood V of the identity matrix \underline{1}\in G to a neighborhood U of the zero matrix \underline{0}\in\mathfrak{g},

\log: G\supset V \to U\subset\mathfrak{g}.
An important corollary of Jacobi's formula then is
\log(\det(A))=\operatorname{tr}(\log A)~.
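A one-line numerical check of this corollary (the test matrix is arbitrary, chosen with positive eigenvalues so that logm returns a real result):

```python
import numpy as np
from scipy.linalg import logm

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])
# Jacobi's formula corollary: log(det(A)) = tr(log(A)).
print(np.isclose(np.log(np.linalg.det(A)), np.trace(logm(A))))  # True
```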
If a 2 × 2 real matrix has a negative determinant, it has no real logarithm. Note first that any 2 × 2 real matrix can be considered one of the three types of the complex number z = x + yε, where ε² ∈ {−1, 0, +1}. This z is a point on a complex subplane of the ring of matrices.
The case where the determinant is negative only arises in a plane with ε² = +1, that is, a split-complex number plane. Only one quarter of this plane is the image of the exponential map, so the logarithm is only defined on that quarter (quadrant). The other three quadrants are images of this one under the Klein four-group generated by ε and −1.
For example, let a = \log 2; then \cosh a = 5/4 and \sinh a = 3/4. For matrices, this means that

A=\exp\begin{pmatrix}0&a\\ a&0\end{pmatrix}=\begin{pmatrix}\cosh a&\sinh a\\ \sinh a&\cosh a\end{pmatrix}= \begin{pmatrix}1.25&0.75\\ 0.75&1.25\end{pmatrix},

so that

\log A=\begin{pmatrix}0&\log 2\\ \log 2&0\end{pmatrix}.
These matrices, however, do not have a logarithm:
\begin{pmatrix}3/4&5/4\\ 5/4&3/4\end{pmatrix},\quad \begin{pmatrix}-3/4&-5/4\\ -5/4&-3/4\end{pmatrix},\quad \begin{pmatrix}-5/4&-3/4\\ -3/4&-5/4\end{pmatrix}.
A non-singular 2 × 2 matrix does not necessarily have a logarithm, but it is conjugate by the four-group to a matrix that does have a logarithm.
It also follows that, for example, a square root of this matrix A is obtainable directly by exponentiating (\log A)/2:

\sqrt{A}=\begin{pmatrix}\cosh((\log 2)/2)&\sinh((\log 2)/2)\\ \sinh((\log 2)/2)&\cosh((\log 2)/2)\end{pmatrix}= \begin{pmatrix}1.06&0.35\\ 0.35&1.06\end{pmatrix}~.
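The numbers above are easy to reproduce with scipy (expm and logm); this also confirms that exponentiating half the logarithm yields a square root:

```python
import numpy as np
from scipy.linalg import expm, logm

a = np.log(2)
A = expm(np.array([[0.0, a],
                   [a, 0.0]]))
print(A)                               # [[1.25, 0.75], [0.75, 1.25]]

sqrtA = expm(logm(A) / 2)              # square root via (log A)/2
print(np.allclose(sqrtA @ sqrtA, A))   # True
```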
For a richer example, start with a Pythagorean triple (p, q, r) and let a = \log\bigl(\tfrac{p+r}{q}\bigr). Then

e^{a}=\frac{p+r}{q}=\cosh a+\sinh a.
Now

\exp\begin{pmatrix}0&a\\ a&0\end{pmatrix}= \begin{pmatrix}r/q&p/q\\ p/q&r/q\end{pmatrix},

since e^{-a}=\tfrac{q}{p+r}=\tfrac{r-p}{q} gives \cosh a=r/q and \sinh a=p/q. Thus the matrix

\tfrac{1}{q}\begin{pmatrix}r&p\\ p&r\end{pmatrix}

has the logarithm

\begin{pmatrix}0&a\\ a&0\end{pmatrix}.
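Checking the claim with the triple (3, 4, 5), for which a = log 2 (a quick scipy verification):

```python
import numpy as np
from scipy.linalg import expm

p, q, r = 3.0, 4.0, 5.0            # Pythagorean triple: p^2 + q^2 = r^2
a = np.log((p + r) / q)            # here a = log 2
print(expm(np.array([[0.0, a],
                     [a, 0.0]])))  # [[r/q, p/q], [p/q, r/q]] = [[1.25, 0.75], ...]
```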