An infinitesimal rotation matrix or differential rotation matrix is a matrix representing an infinitely small rotation.
While a rotation matrix is an orthogonal matrix R^\mathsf{T} = R^{-1} representing an element of SO(n), the special orthogonal group, the differential of a rotation is a skew-symmetric matrix A^\mathsf{T} = -A in the tangent space \mathfrak{so}(n), the special orthogonal Lie algebra, which is not itself a rotation matrix.
An infinitesimal rotation matrix has the form

I + d\theta\,A,

where I is the identity matrix, d\theta is vanishingly small, and A \in \mathfrak{so}(n). For example, if A = L_x, a basis element of \mathfrak{so}(3) representing an infinitesimal rotation about the x-axis, then

dL_x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & -d\theta \\ 0 & d\theta & 1 \end{bmatrix}.
The computation rules for infinitesimal rotation matrices are the usual ones except that infinitesimals of second order are routinely dropped. With these rules, these matrices do not satisfy all the same properties as ordinary finite rotation matrices. In particular, it turns out that the order in which infinitesimal rotations are applied is irrelevant.
An infinitesimal rotation matrix differs from the identity matrix by a skew-symmetric matrix. The shape of the matrix is as follows:

A = \begin{bmatrix} 1 & -d\phi_z(t) & d\phi_y(t) \\ d\phi_z(t) & 1 & -d\phi_x(t) \\ -d\phi_y(t) & d\phi_x(t) & 1 \end{bmatrix}

Associated to an infinitesimal rotation matrix A is the infinitesimal rotation tensor

d\Phi(t) = A - I = \begin{bmatrix} 0 & -d\phi_z(t) & d\phi_y(t) \\ d\phi_z(t) & 0 & -d\phi_x(t) \\ -d\phi_y(t) & d\phi_x(t) & 0 \end{bmatrix}.

Dividing it by the time difference yields the angular velocity tensor:

\Omega = \frac{d\Phi(t)}{dt} = \begin{pmatrix} 0 & -\omega_z(t) & \omega_y(t) \\ \omega_z(t) & 0 & -\omega_x(t) \\ -\omega_y(t) & \omega_x(t) & 0 \end{pmatrix}
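As a numerical sketch of this relation (assuming NumPy; the constant-rate rotation about the z-axis and the rate ω below are illustrative choices, not from the text), the angular velocity tensor can be recovered by finite-differencing a rotation matrix and multiplying by its transpose:

```python
import numpy as np

# For a rotation about the z-axis at constant rate omega, the matrix
# dR/dt @ R.T should reproduce the angular velocity tensor Omega
# with omega_z = omega and omega_x = omega_y = 0.

omega = 0.7          # rad/s, arbitrary test rate
t, h = 1.3, 1e-6     # evaluation time and finite-difference step

def Rz(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

Rdot = (Rz(omega * (t + h)) - Rz(omega * (t - h))) / (2 * h)  # central difference
Omega = Rdot @ Rz(omega * t).T

expected = np.array([[0.0, -omega, 0.0], [omega, 0.0, 0.0], [0.0, 0.0, 0.0]])
assert np.allclose(Omega, expected, atol=1e-6)
```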
These matrices do not satisfy all the same properties as ordinary finite rotation matrices under the usual treatment of infinitesimals. To understand what this means, consider
dA_x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & -d\theta \\ 0 & d\theta & 1 \end{bmatrix}.
First, test the orthogonality condition, Q^\mathsf{T} Q = I. The product is
dA_x^\mathsf{T}\,dA_x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1+d\theta^2 & 0 \\ 0 & 0 & 1+d\theta^2 \end{bmatrix},
differing from an identity matrix by second order infinitesimals, discarded here. So, to first order, an infinitesimal rotation matrix is an orthogonal matrix.
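This first-order orthogonality can be checked numerically; in the sketch below (NumPy assumed, matrices as above), the deviation of dA_x^\mathsf{T}\,dA_x from the identity shrinks like dθ², a second-order infinitesimal:

```python
import numpy as np

# The deviation of dAx.T @ dAx from the identity equals dtheta**2,
# i.e. a second-order infinitesimal, for each trial angle.
for dtheta in (1e-3, 1e-4, 1e-5):
    dAx = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, -dtheta],
                    [0.0, dtheta, 1.0]])
    err = np.abs(dAx.T @ dAx - np.eye(3)).max()
    assert np.isclose(err, dtheta**2, rtol=1e-4)
```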
Next, examine the square of the matrix,

dA_x^2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1-d\theta^2 & -2\,d\theta \\ 0 & 2\,d\theta & 1-d\theta^2 \end{bmatrix}.
Again discarding second order effects, note that the angle simply doubles. This hints at the most essential difference in behavior, which we can exhibit with the assistance of a second infinitesimal rotation,
dA_y = \begin{bmatrix} 1 & 0 & d\phi \\ 0 & 1 & 0 \\ -d\phi & 0 & 1 \end{bmatrix}.
Compare the products dA_x\,dA_y and dA_y\,dA_x:

\begin{align}
dA_x\,dA_y &= \begin{bmatrix} 1 & 0 & d\phi \\ d\theta\,d\phi & 1 & -d\theta \\ -d\phi & d\theta & 1 \end{bmatrix} \\
dA_y\,dA_x &= \begin{bmatrix} 1 & d\theta\,d\phi & d\phi \\ 0 & 1 & -d\theta \\ -d\phi & d\theta & 1 \end{bmatrix}.
\end{align}
Since d\theta\,d\phi is a second-order quantity, it is discarded; thus, to first order,

dA_x\,dA_y = dA_y\,dA_x.

In other words, the order in which infinitesimal rotations are applied is irrelevant.
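The commutation claim can be verified numerically (NumPy assumed; the angle values are arbitrary illustrative choices): the commutator of the two matrices above contains only entries of size dθ·dφ.

```python
import numpy as np

# The commutator of the two infinitesimal rotations contains only the
# second-order product dtheta*dphi, so to first order they commute.
dtheta, dphi = 1e-4, 2e-4
dAx = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, -dtheta], [0.0, dtheta, 1.0]])
dAy = np.array([[1.0, 0.0, dphi], [0.0, 1.0, 0.0], [-dphi, 0.0, 1.0]])
comm = dAx @ dAy - dAy @ dAx
assert np.isclose(np.abs(comm).max(), dtheta * dphi, rtol=1e-9)
```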
This useful fact makes, for example, derivation of rigid body rotation relatively simple. But one must always be careful to distinguish (the first-order treatment of) these infinitesimal rotation matrices from both finite rotation matrices and from Lie algebra elements. When contrasting the behavior of finite rotation matrices in the Baker–Campbell–Hausdorff formula above with that of infinitesimal rotation matrices, where all the commutator terms are second-order infinitesimals, one finds a bona fide vector space. Technically, this dismissal of any second-order terms amounts to group contraction.
See main article: Rotation matrix, Rotation group SO(3) and Infinitesimal transformation.
Suppose we specify an axis of rotation by a unit vector [x, y, z], and suppose we have an infinitely small rotation of angle Δθ about that vector. Expanding the rotation matrix as an infinite series and keeping only the first-order terms, the rotation matrix ΔR is represented as:
\Delta R = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} + \begin{bmatrix} 0 & z & -y \\ -z & 0 & x \\ y & -x & 0 \end{bmatrix}\Delta\theta = I + A\,\Delta\theta.
A finite rotation through angle θ about this axis may be seen as a succession of small rotations about the same axis. Approximating Δθ as θ/N, where N is a large number, a rotation of θ about the axis may be represented as:
R = \left(I + \frac{A\theta}{N}\right)^N \approx e^{A\theta}.
It can be seen that Euler's theorem essentially states that all rotations may be represented in this form. The product Aθ is the "generator" of the particular rotation, being the vector associated with the matrix A. This shows that the rotation matrix and the axis-angle format are related by the exponential function.
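This limiting construction can be checked numerically. The sketch below (NumPy assumed; the axis, angle, and the truncated-series exponential `expm_series` are illustrative choices, not from the text) compounds N small rotations and compares the result with the matrix exponential:

```python
import numpy as np

def expm_series(M, terms=40):
    # Truncated Taylor series for the matrix exponential; adequate for
    # the small matrices and moderate angles used in this sketch.
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Illustrative axis and angle (not from the text): rotation about the z-axis.
x, y, z = 0.0, 0.0, 1.0
theta = 0.5
A = np.array([[0.0, z, -y], [-z, 0.0, x], [y, -x, 0.0]])

N = 1_000_000
R = np.linalg.matrix_power(np.eye(3) + A * theta / N, N)
assert np.allclose(R, expm_series(A * theta), atol=1e-5)   # R ~ exp(A*theta)
assert np.allclose(R @ R.T, np.eye(3), atol=1e-5)          # R is orthogonal
```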
One can derive a simple expression for the generator G. One starts with an arbitrary plane[1] defined by a pair of perpendicular unit vectors a and b. In this plane one can choose an arbitrary vector x with perpendicular vector y. One then solves for y in terms of x, and substituting into an expression for a rotation in a plane yields the rotation matrix R, which includes the generator G = ba^\mathsf{T} - ab^\mathsf{T}.
\begin{align}
x &= a\cos\alpha + b\sin\alpha \\
y &= -a\sin\alpha + b\cos\alpha \\[4pt]
\cos\alpha &= a^\mathsf{T} x \\
\sin\alpha &= b^\mathsf{T} x \\[4pt]
y &= -a\,b^\mathsf{T} x + b\,a^\mathsf{T} x = \left(ba^\mathsf{T} - ab^\mathsf{T}\right) x \\[4pt]
x' &= x\cos\beta + y\sin\beta \\
&= \left[I\cos\beta + \left(ba^\mathsf{T} - ab^\mathsf{T}\right)\sin\beta\right] x \\[4pt]
R &= I\cos\beta + \left(ba^\mathsf{T} - ab^\mathsf{T}\right)\sin\beta \\
&= I\cos\beta + G\sin\beta \\[4pt]
G &= ba^\mathsf{T} - ab^\mathsf{T}
\end{align}
To include vectors outside the plane in the rotation one needs to modify the above expression for R by including two projection operators that partition the space. This modified rotation matrix can be rewritten as an exponential function.
\begin{align}
P_{ab} &= -G^2 \\
R &= I - P_{ab} + \left[I\cos\beta + G\sin\beta\right] P_{ab} = e^{G\beta}
\end{align}
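A small numerical check of this construction (NumPy assumed; the vectors a, b, the angle β, and the truncated-series exponential are arbitrary illustrative choices): the projector form of R agrees with the exponential of Gβ, and R rotates a within the a–b plane.

```python
import numpy as np

def expm_series(M, terms=40):
    # Truncated Taylor series for the matrix exponential (illustrative helper).
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)   # orthonormal pair spanning the plane
beta = 0.8

G = np.outer(b, a) - np.outer(a, b)          # generator G = b a^T - a b^T
P = -G @ G                                   # projector onto the a-b plane
R = np.eye(3) - P + (np.eye(3) * np.cos(beta) + G * np.sin(beta)) @ P

assert np.allclose(R, expm_series(G * beta))                 # R = exp(G*beta)
assert np.allclose(R @ a, np.cos(beta) * a + np.sin(beta) * b)  # rotates in-plane
```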
Analysis is often easier in terms of these generators, rather than the full rotation matrix. Analysis in terms of the generators is known as the Lie algebra of the rotation group.
See main article: Matrix exponential. Connecting the Lie algebra to the Lie group is the exponential map, which is defined using the standard matrix exponential series for e^A. For any skew-symmetric matrix A, \exp(A) is always a rotation matrix.
An important practical example is the 3×3 case. In rotation group SO(3), it is shown that one can identify every A \in \mathfrak{so}(3) with an Euler vector \omega = \theta\,\boldsymbol{u}, where \boldsymbol{u} = (x, y, z) is a unit-magnitude vector. By the properties of the identification, \boldsymbol{u} is in the null space of A. Thus, \boldsymbol{u} is left invariant by \exp(A) and is hence a rotation axis.
Using Rodrigues' rotation formula in matrix form with \theta = \tfrac{\theta}{2} + \tfrac{\theta}{2}, together with standard double-angle formulae, one obtains

\begin{align}
\exp(A) &= \exp(\theta(\boldsymbol{u}\cdot\boldsymbol{L})) = \exp\left(\begin{bmatrix} 0 & -z\theta & y\theta \\ z\theta & 0 & -x\theta \\ -y\theta & x\theta & 0 \end{bmatrix}\right) \\
&= \boldsymbol{I} + 2\cos\frac{\theta}{2}\sin\frac{\theta}{2}\,\boldsymbol{u}\cdot\boldsymbol{L} + 2\sin^2\frac{\theta}{2}\,(\boldsymbol{u}\cdot\boldsymbol{L})^2,
\end{align}
Notice that for infinitesimal angles second-order terms can be ignored, and what remains is \exp(A) = \boldsymbol{I} + \theta(\boldsymbol{u}\cdot\boldsymbol{L}).
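The double-angle form above can be verified against the matrix exponential directly (NumPy assumed; the axis, angle, and truncated-series exponential below are illustrative choices):

```python
import numpy as np

def expm_series(M, terms=40):
    # Truncated Taylor series for the matrix exponential (illustrative helper).
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

x, y, z = np.array([1.0, 2.0, 2.0]) / 3.0   # unit axis: (1/3)^2+(2/3)^2+(2/3)^2 = 1
theta = 1.1
uL = np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])   # u . L

# Rodrigues form with double-angle factors, as in the formula above.
rhs = (np.eye(3)
       + 2 * np.cos(theta / 2) * np.sin(theta / 2) * uL
       + 2 * np.sin(theta / 2) ** 2 * (uL @ uL))
assert np.allclose(rhs, expm_series(theta * uL))
```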
The space of skew-symmetric matrices forms the Lie algebra \mathfrak{o}(n) of the Lie group O(n). The Lie bracket on this space is given by the commutator:

[A, B] = AB - BA.
It is easy to check that the commutator of two skew-symmetric matrices is again skew-symmetric:
\begin{align}
{[}A, B{]}^\mathsf{T} &= B^\mathsf{T} A^\mathsf{T} - A^\mathsf{T} B^\mathsf{T} \\
&= (-B)(-A) - (-A)(-B) = BA - AB = -[A, B].
\end{align}
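A quick numerical illustration of this closure property (NumPy assumed; the random 4×4 matrices are arbitrary test data):

```python
import numpy as np

# Build two random skew-symmetric matrices and check that their
# commutator is again skew-symmetric.
rng = np.random.default_rng(0)
M1, M2 = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
A, B = M1 - M1.T, M2 - M2.T         # skew-symmetric: X.T == -X
C = A @ B - B @ A                   # the commutator [A, B]
assert np.allclose(C.T, -C)         # skew-symmetric again
```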
The matrix exponential of a skew-symmetric matrix A is then an orthogonal matrix R:

R = \exp(A) = \sum_{n=0}^{\infty} \frac{A^n}{n!}.
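Numerically (NumPy assumed; the random skew-symmetric matrix and truncated-series exponential are illustrative choices), exponentiating a skew-symmetric matrix indeed yields an orthogonal matrix with determinant +1:

```python
import numpy as np

def expm_series(M, terms=60):
    # Truncated Taylor series for the matrix exponential (illustrative helper).
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(1)
S = rng.standard_normal((3, 3))
A = S - S.T                          # random skew-symmetric matrix
R = expm_series(A)
assert np.allclose(R.T @ R, np.eye(3))      # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)    # determinant +1
```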
The image of the exponential map of a Lie algebra always lies in the connected component of the Lie group that contains the identity element. In the case of the Lie group O(n), this connected component is the special orthogonal group SO(n), consisting of all orthogonal matrices with determinant 1. So R = \exp(A) will have determinant +1. Moreover, since the exponential map of a connected compact Lie group is always surjective, every orthogonal matrix with unit determinant can be written as the exponential of some skew-symmetric matrix. In the particularly important case of dimension n = 2, the exponential representation for an orthogonal matrix reduces to the well-known polar form of a complex number of unit modulus. Indeed, if n = 2, a special orthogonal matrix has the form

\begin{bmatrix} a & -b \\ b & a \end{bmatrix},
with a^2 + b^2 = 1. Therefore, putting a = \cos\theta and b = \sin\theta, it can be written

\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} = \exp\left(\theta \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\right),
which corresponds exactly to the polar form \cos\theta + i\sin\theta = e^{i\theta} of a complex number of unit modulus.
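This 2×2 case is easy to check directly (NumPy assumed; the angle and the truncated-series exponential are illustrative choices):

```python
import numpy as np

def expm_series(M, terms=40):
    # Truncated Taylor series for the matrix exponential (illustrative helper).
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

theta = 0.9
J = np.array([[0.0, -1.0], [1.0, 0.0]])     # 2x2 skew-symmetric generator
R = expm_series(theta * J)
expected = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
assert np.allclose(R, expected)
# The same statement for complex numbers: Euler's formula.
assert np.isclose(complex(np.cos(theta), np.sin(theta)), np.exp(1j * theta))
```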
The exponential representation of an orthogonal matrix of order n can also be obtained starting from the fact that in dimension n any special orthogonal matrix R can be written as

R = Q S Q^\mathsf{T},

where Q is orthogonal and S is a block-diagonal matrix with blocks of order 2, plus one of order 1 if n is odd; since each single block of order 2 is also a rotation matrix, it admits an exponential form. Correspondingly, the matrix S is the exponential of a skew-symmetric block matrix \Sigma of the form above, S = \exp(\Sigma), so that

R = Q\exp(\Sigma)Q^\mathsf{T} = \exp(Q\Sigma Q^\mathsf{T}),

the exponential of the skew-symmetric matrix Q\Sigma Q^\mathsf{T}.
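The similarity identity used in the last step can be illustrated numerically (NumPy assumed; the orthogonal factor Q from a QR decomposition, the block matrix Σ, and the truncated-series exponential are illustrative choices):

```python
import numpy as np

def expm_series(M, terms=60):
    # Truncated Taylor series for the matrix exponential (illustrative helper).
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Sigma: skew-symmetric block-diagonal form for n = 3 (one 2x2 block, one 1x1).
theta = 0.6
Sigma = np.array([[0.0, -theta, 0.0], [theta, 0.0, 0.0], [0.0, 0.0, 0.0]])

M = np.random.default_rng(2).standard_normal((3, 3))
Q, _ = np.linalg.qr(M)                      # orthogonal factor from QR

R1 = Q @ expm_series(Sigma) @ Q.T           # Q exp(Sigma) Q^T
R2 = expm_series(Q @ Sigma @ Q.T)           # exp(Q Sigma Q^T)
assert np.allclose(R1, R2)                  # exp commutes with orthogonal similarity
```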