Faddeev–LeVerrier algorithm

In mathematics (linear algebra), the Faddeev–LeVerrier algorithm is a recursive method to calculate the coefficients of the characteristic polynomial

$$p_A(\lambda) = \det(\lambda I_n - A)$$

of a square matrix $A$, named after Dmitry Konstantinovich Faddeev and Urbain Le Verrier. Calculation of this polynomial yields the eigenvalues of $A$ as its roots; as a matrix polynomial in the matrix $A$ itself, it vanishes by the Cayley–Hamilton theorem. Computing the characteristic polynomial directly from the definition of the determinant is computationally cumbersome insofar as it introduces a new symbolic quantity $\lambda$; by contrast, the Faddeev–Le Verrier algorithm works directly with the coefficients of the matrix $A$.

The algorithm has been independently rediscovered several times in different forms. It was first published in 1840 by Urbain Le Verrier, subsequently redeveloped by P. Horst, Jean-Marie Souriau, in its present form here by Faddeev and Sominsky, and further by J. S. Frame, and others.[1] [2] [3] [4] [5] (For historical points, see Householder.[6] An elegant shortcut to the proof, bypassing Newton polynomials, was introduced by Hou.[7] The bulk of the presentation here follows Gantmacher, p. 88.[8])

The Algorithm

The objective is to calculate the coefficients $c_k$ of the characteristic polynomial of the $n \times n$ matrix $A$,

$$p_A(\lambda) \equiv \det(\lambda I_n - A) = \sum_{k=0}^{n} c_k \lambda^k~,$$

where, evidently, $c_n = 1$ and $c_0 = (-1)^n \det A$.

The coefficients $c_k$ are determined by induction on $k$, using an auxiliary sequence of matrices $M_k$:

$$\begin{aligned}
M_0 &\equiv 0 & c_n &= 1 & &(k = 0)\\
M_k &\equiv A M_{k-1} + c_{n-k+1} I & c_{n-k} &= -\frac{1}{k} \operatorname{tr}(A M_k) & &k = 1, \ldots, n~.
\end{aligned}$$

Thus,

$$M_1 = I~, \qquad c_{n-1} = -\operatorname{tr} A = -c_n \operatorname{tr} A;$$

$$M_2 = A - I \operatorname{tr} A, \qquad c_{n-2} = -\frac{1}{2}\left(\operatorname{tr} A^2 - (\operatorname{tr} A)^2\right) = -\frac{1}{2}\left(c_n \operatorname{tr} A^2 + c_{n-1} \operatorname{tr} A\right);$$

$$M_3 = A^2 - A \operatorname{tr} A - \frac{1}{2}\left(\operatorname{tr} A^2 - (\operatorname{tr} A)^2\right) I,$$

$$c_{n-3} = -\tfrac{1}{6}\left((\operatorname{tr} A)^3 - 3 \operatorname{tr}(A^2)(\operatorname{tr} A) + 2 \operatorname{tr}(A^3)\right) = -\frac{1}{3}\left(c_n \operatorname{tr} A^3 + c_{n-1} \operatorname{tr} A^2 + c_{n-2} \operatorname{tr} A\right);$$

etc.,[9] [10] ...;

$$M_m = \sum_{k=1}^{m} c_{n-m+k} A^{k-1}~,$$

$$c_{n-m} = -\frac{1}{m}\left(c_n \operatorname{tr} A^m + c_{n-1} \operatorname{tr} A^{m-1} + \ldots + c_{n-m+1} \operatorname{tr} A\right) = -\frac{1}{m} \sum_{k=1}^{m} c_{n-m+k} \operatorname{tr} A^k~; \; \ldots

Observe that $A M_n + c_0 I = 0$ terminates the recursion. This could be used to obtain the inverse or the determinant of $A$.
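The recursion translates directly into code. The following NumPy sketch (the function name `faddeev_leverrier` and its return values are illustrative choices, not from the source) returns the coefficients and also recovers $\det A$ and, when $c_0 \neq 0$, the inverse from the last auxiliary matrix:

```python
import numpy as np

def faddeev_leverrier(A):
    """Faddeev-LeVerrier recursion: M_k = A M_{k-1} + c_{n-k+1} I,
    c_{n-k} = -tr(A M_k) / k.  Returns (c, det_A, inv_A)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    c = np.zeros(n + 1)
    c[n] = 1.0                       # c_n = 1
    M = np.zeros_like(A)             # M_0 = 0
    I = np.eye(n)
    for k in range(1, n + 1):
        M = A @ M + c[n - k + 1] * I      # M_k
        c[n - k] = -np.trace(A @ M) / k   # c_{n-k}
    det_A = (-1) ** n * c[0]              # c_0 = (-1)^n det A
    inv_A = -M / c[0] if c[0] != 0 else None  # A^{-1} = -M_n / c_0 (M now holds M_n)
    return c, det_A, inv_A

# The 3x3 matrix of the Example section below:
A = [[3, 1, 5], [3, 3, 1], [4, 6, 4]]
c, det_A, inv_A = faddeev_leverrier(A)
print(c)       # [-40.   4. -10.   1.]  i.e. c_0, c_1, c_2, c_3
print(det_A)   # 40.0
print(inv_A)   # matches the inverse computed in the Example section
```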

Derivation

The proof relies on the modes of the adjugate matrix, $B_k \equiv M_{n-k}$, the auxiliary matrices encountered. This matrix is defined by

$$B \equiv (\lambda I - A)^{-1} I~p_A(\lambda)~.$$

It is evidently a matrix polynomial in $\lambda$ of degree $n-1$. Thus,

$$B \equiv \sum_{k=0}^{n-1} \lambda^k B_k = \sum_{k=0}^{n} \lambda^k M_{n-k},$$

where one may define the harmless $M_0 \equiv 0$.

Inserting the explicit polynomial forms into the defining equation for the adjugate, above,

$$\sum_{k=0}^{n} \left(\lambda^{k+1} M_{n-k} - \lambda^k (A M_{n-k} + c_k I)\right) = 0~.$$

Now, at the highest order, the first term vanishes by $M_0 = 0$; whereas at the bottom order (constant in $\lambda$, from the defining equation of the adjugate, above),

$$M_n A = B_0 A = -c_0 I~,$$

so that shifting the dummy indices of the first term yields

$$\sum_{k=1}^{n} \lambda^k \left(M_{n+1-k} - A M_{n-k} - c_k I\right) = 0~,$$

which thus dictates the recursion

$$\therefore \quad M_m = A M_{m-1} + c_{n-m+1} I~,$$

for $m = 1, \ldots, n$. Note that ascending index $m$ amounts to descending in powers of $\lambda$, but the polynomial coefficients $c$ are yet to be determined in terms of the $M_k$ and $A$.

This is most easily achieved through the following auxiliary equation (Hou, 1998),

$$\lambda \frac{\partial p_A(\lambda)}{\partial \lambda} - n\, p_A(\lambda) = \operatorname{tr}(A B)~.$$

This is but the trace of the defining equation for $B$, by dint of Jacobi's formula,

$$\frac{\partial p_A(\lambda)}{\partial \lambda} = p_A(\lambda) \sum_{m=0}^{\infty} \lambda^{-(m+1)} \operatorname{tr} A^m = p_A(\lambda)~\operatorname{tr} \frac{I}{\lambda I - A} \equiv \operatorname{tr} B~.$$
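As a quick sanity check, this auxiliary identity can be verified symbolically; a minimal SymPy sketch follows (the generic 2×2 matrix is an illustrative assumption, not part of the source):

```python
import sympy as sp

lam = sp.symbols('lambda')
a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])       # generic 2x2 matrix (illustrative)
n = A.shape[0]

p = (lam * sp.eye(n) - A).det()       # characteristic polynomial p_A(lambda)
B = (lam * sp.eye(n) - A).adjugate()  # B = adj(lambda I - A)

lhs = lam * sp.diff(p, lam) - n * p
rhs = (A * B).trace()
print(sp.simplify(lhs - rhs))         # 0, i.e. lambda p' - n p = tr(A B)
```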

Inserting the polynomial mode forms in this auxiliary equation yields

$$\sum_{k=1}^{n} \lambda^k \left(k c_k - n c_k - \operatorname{tr}(A M_{n-k})\right) = 0~,$$

so that

$$\sum_{m=1}^{n-1} \lambda^{n-m} \left(m c_{n-m} + \operatorname{tr}(A M_m)\right) = 0~,$$

and finally

$$\therefore \quad c_{n-m} = -\frac{1}{m} \operatorname{tr}(A M_m)~.$$

This completes the recursion of the previous section, unfolding in descending powers of $\lambda$.

Further note in the algorithm that, more directly, for $m \geq 2$,

$$M_m = A M_{m-1} - \frac{1}{m-1} \left(\operatorname{tr}(A M_{m-1})\right) I~,$$

and, in accordance with the Cayley–Hamilton theorem,

$$\operatorname{adj}(A) = (-1)^{n-1} M_n = (-1)^{n-1} \left(A^{n-1} + c_{n-1} A^{n-2} + \ldots + c_2 A + c_1 I\right) = (-1)^{n-1} \sum_{k=1}^{n} c_k A^{k-1}~.$$
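A brief numerical spot check of this adjugate relation (and of the Cayley–Hamilton statement) is sketched below with NumPy; the 3×3 matrix is the one used in the Example section and is an illustrative choice:

```python
import numpy as np

A = np.array([[3., 1., 5.], [3., 3., 1.], [4., 6., 4.]])
n = A.shape[0]

# Rerun the recursion of the Algorithm section to obtain c_k and M_n.
c = np.zeros(n + 1); c[n] = 1.0
M = np.zeros_like(A)
for k in range(1, n + 1):
    M = A @ M + c[n - k + 1] * np.eye(n)
    c[n - k] = -np.trace(A @ M) / k

adj_A = np.linalg.det(A) * np.linalg.inv(A)      # adjugate of an invertible A
print(np.allclose(adj_A, (-1) ** (n - 1) * M))   # True: adj(A) = (-1)^(n-1) M_n

# Cayley-Hamilton: p_A(A) = sum_k c_k A^k vanishes
pA_of_A = sum(c[k] * np.linalg.matrix_power(A, k) for k in range(n + 1))
print(np.allclose(pA_of_A, np.zeros((n, n))))    # True
```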

The final solution might be more conveniently expressed in terms of complete exponential Bell polynomials as

$$c_{n-k} = \frac{(-1)^k}{k!} \mathcal{B}_k\left(\operatorname{tr} A,\; -1!~\operatorname{tr} A^2,\; 2!~\operatorname{tr} A^3,\; \ldots,\; (-1)^{k-1}(k-1)!~\operatorname{tr} A^k\right).$$
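A short Python sketch of this Bell-polynomial form follows; the helper `complete_bell`, built on the standard recurrence for complete exponential Bell polynomials, is an illustrative implementation rather than anything prescribed by the source:

```python
import numpy as np
from math import comb, factorial

def complete_bell(x):
    """Complete exponential Bell polynomials B_0, ..., B_m at x = (x_1, ..., x_m),
    via the recurrence B_{j+1} = sum_k C(j, k) B_{j-k} x_{k+1}."""
    B = [1.0]
    for j in range(len(x)):
        B.append(sum(comb(j, k) * B[j - k] * x[k] for k in range(j + 1)))
    return B

def charpoly_coeffs_bell(A):
    """c_{n-k} = (-1)^k / k! * B_k(tr A, -1! tr A^2, 2! tr A^3, ...)."""
    n = A.shape[0]
    tr = [np.trace(np.linalg.matrix_power(A, j + 1)) for j in range(n)]  # tr A^{j+1}
    x = [(-1) ** j * factorial(j) * tr[j] for j in range(n)]  # (-1)^{k-1}(k-1)! tr A^k, k = j+1
    B = complete_bell(x)
    return [(-1) ** k / factorial(k) * B[k] for k in range(n + 1)]  # [c_n, c_{n-1}, ..., c_0]

A = np.array([[3., 1., 5.], [3., 3., 1.], [4., 6., 4.]])
print(charpoly_coeffs_bell(A))   # [1.0, -10.0, 4.0, -40.0]
```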

Example

$$A = \begin{bmatrix} 3 & 1 & 5\\ 3 & 3 & 1\\ 4 & 6 & 4 \end{bmatrix}$$

$$\begin{aligned}
M_0 &= \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix} & & & c_3 &= 1\\[4pt]
M_1 &= \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix} & A M_1 &= \begin{bmatrix} 3 & 1 & 5\\ 3 & 3 & 1\\ 4 & 6 & 4 \end{bmatrix} & c_2 &= -\tfrac{1}{1}\operatorname{tr}(A M_1) = -10\\[4pt]
M_2 &= \begin{bmatrix} -7 & 1 & 5\\ 3 & -7 & 1\\ 4 & 6 & -6 \end{bmatrix} & A M_2 &= \begin{bmatrix} 2 & 26 & -14\\ -8 & -12 & 12\\ 6 & -14 & 2 \end{bmatrix} & c_1 &= -\tfrac{1}{2}\operatorname{tr}(A M_2) = -\tfrac{1}{2}(-8) = 4\\[4pt]
M_3 &= \begin{bmatrix} 6 & 26 & -14\\ -8 & -8 & 12\\ 6 & -14 & 6 \end{bmatrix} & A M_3 &= \begin{bmatrix} 40 & 0 & 0\\ 0 & 40 & 0\\ 0 & 0 & 40 \end{bmatrix} & c_0 &= -\tfrac{1}{3}\operatorname{tr}(A M_3) = -\tfrac{1}{3}(120) = -40
\end{aligned}$$

Furthermore, $M_4 = A M_3 + c_0 I = 0$, which confirms the above calculations.

The characteristic polynomial of matrix $A$ is thus

$$p_A(\lambda) = \lambda^3 - 10\lambda^2 + 4\lambda - 40~;$$

the determinant of $A$ is $\det(A) = (-1)^3 c_0 = 40$; the trace is $\operatorname{tr} A = 10 = -c_2$; and the inverse of $A$ is

$$A^{-1} = -\frac{1}{c_0}\, M_3 = \frac{1}{40}\begin{bmatrix} 6 & 26 & -14\\ -8 & -8 & 12\\ 6 & -14 & 6 \end{bmatrix} = \begin{bmatrix} 0.15 & 0.65 & -0.35\\ -0.20 & -0.20 & 0.30\\ 0.15 & -0.35 & 0.15 \end{bmatrix}~.$$
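These figures can be cross-checked independently of the recursion; a minimal NumPy sketch (np.poly computes the characteristic-polynomial coefficients from the eigenvalues):

```python
import numpy as np

A = np.array([[3., 1., 5.], [3., 3., 1.], [4., 6., 4.]])

print(np.poly(A))         # [  1. -10.   4. -40.]  ->  p_A(l) = l^3 - 10 l^2 + 4 l - 40
print(np.linalg.det(A))   # 40.0 (up to rounding)
print(np.trace(A))        # 10.0
print(np.linalg.inv(A))   # [[ 0.15  0.65 -0.35] [-0.2  -0.2   0.3 ] [ 0.15 -0.35  0.15]]
```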

An equivalent but distinct expression

A compact determinant of an $m \times m$-matrix solution for the above Jacobi's formula may alternatively determine the coefficients,[11] [12]

$$c_{n-m} = \frac{(-1)^m}{m!} \begin{vmatrix}
\operatorname{tr} A & m-1 & 0 & \cdots & 0\\
\operatorname{tr} A^2 & \operatorname{tr} A & m-2 & \cdots & 0\\
\vdots & \vdots & & & \vdots\\
\operatorname{tr} A^{m-1} & \operatorname{tr} A^{m-2} & \cdots & \cdots & 1\\
\operatorname{tr} A^m & \operatorname{tr} A^{m-1} & \cdots & \cdots & \operatorname{tr} A
\end{vmatrix}~.$$
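A short NumPy sketch evaluating this trace-determinant form directly (the helper name is illustrative):

```python
import numpy as np
from math import factorial

def coeff_from_traces(A, m):
    """c_{n-m} via the m x m trace determinant: tr A^j fills the lower triangle
    (including the diagonal), and m-1, m-2, ..., 1 sits on the superdiagonal."""
    t = [np.trace(np.linalg.matrix_power(A, j)) for j in range(1, m + 1)]
    T = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1):
            T[i, j] = t[i - j]          # tr A^{i-j+1}
        if i + 1 < m:
            T[i, i + 1] = m - 1 - i     # superdiagonal entries m-1, ..., 1
    return (-1) ** m / factorial(m) * np.linalg.det(T)

A = np.array([[3., 1., 5.], [3., 3., 1.], [4., 6., 4.]])
print([coeff_from_traces(A, m) for m in range(1, 4)])  # c_2, c_1, c_0 = -10, 4, -40 (up to rounding)
```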



Notes and References

  1. [Urbain Le Verrier]
  2. Paul Horst: A method of determining the coefficients of a characteristic equation. Ann. Math. Stat. 6, 83–84 (1935).
  3. [Jean-Marie Souriau]
  4. D. K. Faddeev and I. S. Sominsky, Sbornik zadach po vysshei algebre (Problems in Higher Algebra, Mir Publishers, 1972), Moscow-Leningrad (1949). Problem 979.
  5. J. S. Frame: A simple recursion formula for inverting a matrix (abstract), Bull. Am. Math. Soc. 55, 1045 (1949).
  6. Householder, Alston S. (2006). The Theory of Matrices in Numerical Analysis. Dover Books on Mathematics. ISBN 0486449726.
  7. Hou, S. H. (1998). "Classroom Note: A Simple Proof of the Leverrier–Faddeev Characteristic Polynomial Algorithm". SIAM Review 40(3), 706–709.
  8. Gantmacher, F. R. (1960). The Theory of Matrices. Chelsea Publishing, NY. ISBN 0-8218-1376-5.
  9. Zadeh, Lotfi A. and Desoer, Charles A. (1963, 2008). Linear System Theory: The State Space Approach (McGraw-Hill; Dover Civil and Mechanical Engineering), pp. 303–305.
  10. Abdeljaoued, Jounaidi and Lombardi, Henri (2004). Méthodes matricielles – Introduction à la complexité algébrique (Mathématiques et Applications, 42). Springer.
  11. Brown, Lowell S. (1994). Quantum Field Theory. Cambridge University Press, p. 54; also see Curtright, T. L., Fairlie, D. B., and Alshal, H. (2012). "A Galileon Primer", arXiv:1212.6972, section 3.
  12. Reed, M. and Simon, B. (1978). Methods of Modern Mathematical Physics, Vol. 4: Analysis of Operators. Academic Press, USA, pp. 323–333, 340, 343. ISBN 0-12-585004-2.