Generalized eigenvector explained

In linear algebra, a generalized eigenvector of an n × n matrix A is a vector that satisfies certain criteria which are more relaxed than those for an (ordinary) eigenvector.

Let V be an n-dimensional vector space and let A be the matrix representation of a linear map from V to V with respect to some ordered basis.

There may not always exist a full set of n linearly independent eigenvectors of A that form a complete basis for V. That is, the matrix A may not be diagonalizable. This happens when the algebraic multiplicity of at least one eigenvalue λ_i is greater than its geometric multiplicity (the nullity of the matrix (A - λ_i I), or the dimension of its nullspace). In this case, λ_i is called a defective eigenvalue and A is called a defective matrix.

A generalized eigenvector x_i corresponding to λ_i, together with the matrix (A - λ_i I), generates a Jordan chain of linearly independent generalized eigenvectors which form a basis for an invariant subspace of V.

Using generalized eigenvectors, a set of linearly independent eigenvectors of A can be extended, if necessary, to a complete basis for V. This basis can be used to determine an "almost diagonal matrix" J in Jordan normal form, similar to A, which is useful in computing certain matrix functions of A. The matrix J is also useful in solving the system of linear differential equations x' = Ax, where A need not be diagonalizable.

The dimension of the generalized eigenspace corresponding to a given eigenvalue λ is the algebraic multiplicity of λ.

Overview and definition

There are several equivalent ways to define an ordinary eigenvector. For our purposes, an eigenvector u associated with an eigenvalue λ of an n × n matrix A is a nonzero vector for which (A - λI)u = 0, where I is the n × n identity matrix and 0 is the zero vector of length n. That is, u is in the kernel of the transformation (A - λI). If A has n linearly independent eigenvectors, then A is similar to a diagonal matrix D. That is, there exists an invertible matrix M such that A is diagonalizable through the similarity transformation D = M^{-1}AM. The matrix D is called a spectral matrix for A, and the matrix M is called a modal matrix for A. Diagonalizable matrices are of particular interest since matrix functions of them can be computed easily.

On the other hand, if A does not have n linearly independent eigenvectors associated with it, then A is not diagonalizable.
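A quick way to see this numerically is to count independent eigenvectors. The sketch below is an illustration, not part of the original text; it uses NumPy, and rank decisions in floating point depend on a tolerance.

```python
import numpy as np

# Sketch: a matrix is diagonalizable exactly when it has n linearly
# independent eigenvectors; numerically, check the rank of the matrix
# whose columns are the computed eigenvectors.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # defective: only one independent eigenvector

eigenvalues, eigenvectors = np.linalg.eig(A)
n = A.shape[0]
print(np.linalg.matrix_rank(eigenvectors) == n)   # False, so A is not diagonalizable
```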

Definition: A vector x_m is a generalized eigenvector of rank m of the matrix A, corresponding to the eigenvalue λ, if

(A - λI)^m x_m = 0

but

(A - λI)^{m-1} x_m ≠ 0.

Clearly, a generalized eigenvector of rank 1 is an ordinary eigenvector. Every n × n matrix A has n linearly independent generalized eigenvectors associated with it and can be shown to be similar to an "almost diagonal" matrix J in Jordan normal form. That is, there exists an invertible matrix M such that J = M^{-1}AM. The matrix M in this case is called a generalized modal matrix for A. If λ is an eigenvalue of algebraic multiplicity μ, then A will have μ linearly independent generalized eigenvectors corresponding to λ. These results, in turn, provide a straightforward method for computing certain matrix functions of A.

Note: For an n × n matrix A over a field F to be expressed in Jordan normal form, all eigenvalues of A must be in F. That is, the characteristic polynomial f(x) must factor completely into linear factors over F. For example, if A has real-valued elements, then it may be necessary for the eigenvalues and the components of the eigenvectors to have complex values.

The set spanned by all generalized eigenvectors for a given λ forms the generalized eigenspace for λ.
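As an illustration of the rank-m definition, one can apply successive powers of (A - λI) to a vector until it is annihilated. The helper name below is hypothetical, not from the text.

```python
import numpy as np

# Sketch: x is a generalized eigenvector of rank m for eigenvalue lam when
# (A - lam*I)^m x = 0 while (A - lam*I)^(m-1) x != 0.
def generalized_rank(A, lam, x):
    n = A.shape[0]
    N = A - lam * np.eye(n)
    v = np.array(x, dtype=float)
    for m in range(1, n + 1):          # the rank can never exceed n
        v = N @ v
        if np.allclose(v, 0.0):
            return m
    return None                        # x is not a generalized eigenvector for lam

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])             # this matrix reappears as Example 1 below
print(generalized_rank(A, 1.0, [1.0, 0.0]))   # 1: an ordinary eigenvector
print(generalized_rank(A, 1.0, [0.0, 1.0]))   # 2: a generalized eigenvector of rank 2
```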

Examples

Here are some examples to illustrate the concept of generalized eigenvectors. Some of the details will be described later.

Example 1

This example is simple but clearly illustrates the point. This type of matrix is used frequently in textbooks. Suppose

A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.

Then there is only one eigenvalue, λ = 1, and its algebraic multiplicity is m = 2.

Notice that this matrix is in Jordan normal form but is not diagonal. Hence, this matrix is not diagonalizable. Since there is one superdiagonal entry, there will be one generalized eigenvector of rank greater than 1 (or one could note that the vector space V is of dimension 2, so there can be at most one generalized eigenvector of rank greater than 1). Alternatively, one could compute the dimension of the nullspace of A - λI to be p = 1, and thus there are m - p = 1 generalized eigenvectors of rank greater than 1.

The ordinary eigenvector v_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} is computed as usual (see the eigenvector page for examples). Using this eigenvector, we compute the generalized eigenvector v_2 by solving

(A - 1I)v_2 = v_1.

Writing out the values:

\left( \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} - 1 \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right) \begin{pmatrix} v_{21} \\ v_{22} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} v_{21} \\ v_{22} \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.

This simplifies to

v_{22} = 1.

The element v_{21} has no restrictions. The generalized eigenvector of rank 2 is then v_2 = \begin{pmatrix} a \\ 1 \end{pmatrix}, where a can have any scalar value. The choice of a = 0 is usually the simplest.

Note that

(A - 1I)v_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} a \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} = v_1,

so that v_2 is a generalized eigenvector, because

(A - 1I)^2 v_2 = (A - 1I)\left[(A - 1I)v_2\right] = (A - 1I)v_1 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} = 0,

so that v_1 is an ordinary eigenvector, and that v_1 and v_2 are linearly independent and hence constitute a basis for the vector space V.
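The same computation can be reproduced numerically. A brief sketch assuming NumPy; the least-squares call returns the minimum-norm solution, which corresponds to the choice a = 0:

```python
import numpy as np

# Sketch: solve (A - 1I) v2 = v1 for the generalized eigenvector of Example 1.
A  = np.array([[1.0, 1.0],
               [0.0, 1.0]])
v1 = np.array([1.0, 0.0])                              # ordinary eigenvector
v2, *_ = np.linalg.lstsq(A - np.eye(2), v1, rcond=None)
print(v2)                                              # [0. 1.]  (the choice a = 0)
print((A - np.eye(2)) @ v2)                            # recovers v1
print(np.linalg.matrix_power(A - np.eye(2), 2) @ v2)   # the zero vector
```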

Example 2

This example is more complex than Example 1. Unfortunately, it is a little difficult to construct an interesting example of low order. The matrix

A = \begin{pmatrix} 1&0&0&0&0 \\ 3&1&0&0&0 \\ 6&3&2&0&0 \\ 10&6&3&2&0 \\ 15&10&6&3&2 \end{pmatrix}

has eigenvalues λ_1 = 1 and λ_2 = 2 with algebraic multiplicities μ_1 = 2 and μ_2 = 3, but geometric multiplicities γ_1 = 1 and γ_2 = 1.

The generalized eigenspaces of A are calculated below. x_1 is the ordinary eigenvector associated with λ_1, and x_2 is a generalized eigenvector associated with λ_1. y_1 is the ordinary eigenvector associated with λ_2, and y_2 and y_3 are generalized eigenvectors associated with λ_2.

(A - 1I)x_1 = \begin{pmatrix} 0&0&0&0&0 \\ 3&0&0&0&0 \\ 6&3&1&0&0 \\ 10&6&3&1&0 \\ 15&10&6&3&1 \end{pmatrix} \begin{pmatrix} 0 \\ 3 \\ -9 \\ 9 \\ -3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} = 0,

(A - 1I)x_2 = \begin{pmatrix} 0&0&0&0&0 \\ 3&0&0&0&0 \\ 6&3&1&0&0 \\ 10&6&3&1&0 \\ 15&10&6&3&1 \end{pmatrix} \begin{pmatrix} 1 \\ -15 \\ 30 \\ -1 \\ -45 \end{pmatrix} = \begin{pmatrix} 0 \\ 3 \\ -9 \\ 9 \\ -3 \end{pmatrix} = x_1,

(A - 2I)y_1 = \begin{pmatrix} -1&0&0&0&0 \\ 3&-1&0&0&0 \\ 6&3&0&0&0 \\ 10&6&3&0&0 \\ 15&10&6&3&0 \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 9 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} = 0,

(A - 2I)y_2 = \begin{pmatrix} -1&0&0&0&0 \\ 3&-1&0&0&0 \\ 6&3&0&0&0 \\ 10&6&3&0&0 \\ 15&10&6&3&0 \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\ 0 \\ 3 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 9 \end{pmatrix} = y_1,

(A - 2I)y_3 = \begin{pmatrix} -1&0&0&0&0 \\ 3&-1&0&0&0 \\ 6&3&0&0&0 \\ 10&6&3&0&0 \\ 15&10&6&3&0 \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\ 1 \\ -2 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 3 \\ 0 \end{pmatrix} = y_2.

This results in a basis for each of the generalized eigenspaces of A. Together the two chains of generalized eigenvectors span the space of all 5-dimensional column vectors:

\{x_1, x_2\} = \left\{ \begin{pmatrix} 0 \\ 3 \\ -9 \\ 9 \\ -3 \end{pmatrix}, \begin{pmatrix} 1 \\ -15 \\ 30 \\ -1 \\ -45 \end{pmatrix} \right\}, \qquad \{y_1, y_2, y_3\} = \left\{ \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 9 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 0 \\ 3 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 1 \\ -2 \\ 0 \end{pmatrix} \right\}.

An "almost diagonal" matrix

J

in Jordan normal form, similar to

A

is obtained as follows:

M= \begin{pmatrix}x1&x2&y1&y2&y3\end{pmatrix}= \begin{pmatrix} 0&1&0&0&0\\ 3&-15&0&0&0\\ -9&30&0&0&1\\ 9&-1&0&3&-2\\ -3&-45&9&0&0 \end{pmatrix},

J=\begin{pmatrix} 1&1&0&0&0\\ 0&1&0&0&0\\ 0&0&2&1&0\\ 0&0&0&2&1\\ 0&0&0&0&2 \end{pmatrix},

where

M

is a generalized modal matrix for

A

, the columns of

M

are a canonical basis for

A

, and

AM=MJ

.
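The relation AM = MJ can be checked directly; a minimal NumPy sketch (an illustration, not part of the original text):

```python
import numpy as np

# Sketch: verify AM = MJ for Example 2 using the basis vectors listed above.
A = np.array([[ 1,  0, 0, 0, 0],
              [ 3,  1, 0, 0, 0],
              [ 6,  3, 2, 0, 0],
              [10,  6, 3, 2, 0],
              [15, 10, 6, 3, 2]], dtype=float)
M = np.array([[ 0,   1, 0, 0,  0],
              [ 3, -15, 0, 0,  0],
              [-9,  30, 0, 0,  1],
              [ 9,  -1, 0, 3, -2],
              [-3, -45, 9, 0,  0]], dtype=float)
J = np.array([[1, 1, 0, 0, 0],
              [0, 1, 0, 0, 0],
              [0, 0, 2, 1, 0],
              [0, 0, 0, 2, 1],
              [0, 0, 0, 0, 2]], dtype=float)
print(np.allclose(A @ M, M @ J))   # True
```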

Jordan chains

Definition: Let x_m be a generalized eigenvector of rank m corresponding to the matrix A and the eigenvalue λ. The chain generated by x_m is the set of vectors \{x_m, x_{m-1}, \ldots, x_1\} given by

x_{m-1} = (A - λI)x_m,
x_{m-2} = (A - λI)^2 x_m,
\vdots
x_1 = (A - λI)^{m-1} x_m,

where x_1 is always an ordinary eigenvector with a given eigenvalue λ. Thus, in general,

x_j = (A - λI)^{m-j} x_m    (j = 1, 2, \ldots, m - 1).

The vector x_j, given by this relation, is a generalized eigenvector of rank j corresponding to the eigenvalue λ. A chain is a linearly independent set of vectors.

Canonical basis

Definition: A set of n linearly independent generalized eigenvectors is a canonical basis if it is composed entirely of Jordan chains.

Thus, once we have determined that a generalized eigenvector of rank m is in a canonical basis, it follows that the m − 1 vectors x_{m-1}, x_{m-2}, \ldots, x_1 that are in the Jordan chain generated by x_m are also in the canonical basis.

Let λ_i be an eigenvalue of A of algebraic multiplicity μ_i. First, find the ranks (matrix ranks) of the matrices (A - λ_i I), (A - λ_i I)^2, \ldots, (A - λ_i I)^{m_i}. The integer m_i is determined to be the first integer for which (A - λ_i I)^{m_i} has rank n - μ_i (n being the number of rows or columns of A, that is, A is n × n).

Now define

ρ_k = \operatorname{rank}(A - λ_i I)^{k-1} - \operatorname{rank}(A - λ_i I)^k    (k = 1, 2, \ldots, m_i).

The variable ρ_k designates the number of linearly independent generalized eigenvectors of rank k corresponding to the eigenvalue λ_i that will appear in a canonical basis for A. Note that \operatorname{rank}(A - λ_i I)^0 = \operatorname{rank}(I) = n.
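As a small computational sketch of this rank formula (the helper name is hypothetical, and floating-point rank checks need a sensible tolerance):

```python
import numpy as np

# Sketch: rho_k = rank (A - lam*I)^(k-1) - rank (A - lam*I)^k  for k = 1, ..., m_i.
def rho(A, lam, m_i):
    n = A.shape[0]
    N = A - lam * np.eye(n)
    ranks = [np.linalg.matrix_rank(np.linalg.matrix_power(N, k)) for k in range(m_i + 1)]
    return [int(ranks[k - 1] - ranks[k]) for k in range(1, m_i + 1)]

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])    # the matrix of Example 1
print(rho(A, 1.0, 2))         # [1, 1]: one generalized eigenvector each of ranks 1 and 2
```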

Computation of generalized eigenvectors

In the preceding sections we have seen techniques for obtaining the n linearly independent generalized eigenvectors of a canonical basis for the vector space V associated with an n × n matrix A. These techniques can be combined into a procedure (a computational sketch follows the list):

1. Solve the characteristic equation of A for the eigenvalues λ_i and their algebraic multiplicities μ_i.
2. For each λ_i:
   - determine n - μ_i;
   - determine m_i;
   - determine ρ_k for k = 1, \ldots, m_i;
   - determine each Jordan chain for λ_i.
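In practice, a computer algebra system automates the whole procedure. A minimal sketch using SymPy's exact Jordan decomposition, applied to the matrix of Example 3 below; here P plays the role of the generalized modal matrix M, up to the column conventions discussed later:

```python
from sympy import Matrix

# Sketch: SymPy computes the Jordan form exactly, with A = P * J * P**-1.
A = Matrix([[5, 1, -2, 4],
            [0, 5,  2, 2],
            [0, 0,  5, 3],
            [0, 0,  0, 4]])
P, J = A.jordan_form()
print(J)   # one 1x1 block for the eigenvalue 4 and one 3x3 block for the eigenvalue 5
```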

Example 3

The matrix

A = \begin{pmatrix} 5&1&-2&4 \\ 0&5&2&2 \\ 0&0&5&3 \\ 0&0&0&4 \end{pmatrix}

has an eigenvalue λ_1 = 5 of algebraic multiplicity μ_1 = 3 and an eigenvalue λ_2 = 4 of algebraic multiplicity μ_2 = 1. We also have n = 4. For λ_1 we have n - μ_1 = 4 - 3 = 1.

(A - 5I) = \begin{pmatrix} 0&1&-2&4 \\ 0&0&2&2 \\ 0&0&0&3 \\ 0&0&0&-1 \end{pmatrix},    \operatorname{rank}(A - 5I) = 3.

(A - 5I)^2 = \begin{pmatrix} 0&0&2&-8 \\ 0&0&0&4 \\ 0&0&0&-3 \\ 0&0&0&1 \end{pmatrix},    \operatorname{rank}(A - 5I)^2 = 2.

(A - 5I)^3 = \begin{pmatrix} 0&0&0&14 \\ 0&0&0&-4 \\ 0&0&0&3 \\ 0&0&0&-1 \end{pmatrix},    \operatorname{rank}(A - 5I)^3 = 1.

The first integer m_1 for which (A - 5I)^{m_1} has rank n - μ_1 = 1 is m_1 = 3.

We now define

ρ_3 = \operatorname{rank}(A - 5I)^2 - \operatorname{rank}(A - 5I)^3 = 2 - 1 = 1,
ρ_2 = \operatorname{rank}(A - 5I)^1 - \operatorname{rank}(A - 5I)^2 = 3 - 2 = 1,
ρ_1 = \operatorname{rank}(A - 5I)^0 - \operatorname{rank}(A - 5I)^1 = 4 - 3 = 1.

Consequently, there will be three linearly independent generalized eigenvectors, one each of ranks 3, 2 and 1. Since λ_1 corresponds to a single chain of three linearly independent generalized eigenvectors, we know that there is a generalized eigenvector x_3 of rank 3 corresponding to λ_1 such that

(A - 5I)^3 x_3 = 0

but

(A - 5I)^2 x_3 ≠ 0.

These two conditions represent linear systems that can be solved for x_3. Let

x_3 = \begin{pmatrix} x_{31} \\ x_{32} \\ x_{33} \\ x_{34} \end{pmatrix}.

Then

(A - 5I)^3 x_3 = \begin{pmatrix} 0&0&0&14 \\ 0&0&0&-4 \\ 0&0&0&3 \\ 0&0&0&-1 \end{pmatrix} \begin{pmatrix} x_{31} \\ x_{32} \\ x_{33} \\ x_{34} \end{pmatrix} = \begin{pmatrix} 14x_{34} \\ -4x_{34} \\ 3x_{34} \\ -x_{34} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}

and

(A - 5I)^2 x_3 = \begin{pmatrix} 0&0&2&-8 \\ 0&0&0&4 \\ 0&0&0&-3 \\ 0&0&0&1 \end{pmatrix} \begin{pmatrix} x_{31} \\ x_{32} \\ x_{33} \\ x_{34} \end{pmatrix} = \begin{pmatrix} 2x_{33} - 8x_{34} \\ 4x_{34} \\ -3x_{34} \\ x_{34} \end{pmatrix} ≠ \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}.

Thus, in order to satisfy these conditions, we must have x_{34} = 0 and x_{33} ≠ 0. No restrictions are placed on x_{31} and x_{32}. By choosing x_{31} = x_{32} = x_{34} = 0 and x_{33} = 1, we obtain

x_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}

as a generalized eigenvector of rank 3 corresponding to λ_1 = 5. Note that it is possible to obtain infinitely many other generalized eigenvectors of rank 3 by choosing different values of x_{31}, x_{32} and x_{33}, with x_{33} ≠ 0. Our first choice, however, is the simplest.

Now, applying (A - 5I) successively along the chain, we obtain x_2 and x_1 as generalized eigenvectors of rank 2 and 1, respectively, where

x_2 = (A - 5I)x_3 = \begin{pmatrix} -2 \\ 2 \\ 0 \\ 0 \end{pmatrix}

and

x_1 = (A - 5I)x_2 = \begin{pmatrix} 2 \\ 0 \\ 0 \\ 0 \end{pmatrix}.

The eigenvalue λ_2 = 4 can be dealt with using standard techniques and has an ordinary eigenvector

y_1 = \begin{pmatrix} -14 \\ 4 \\ -3 \\ 1 \end{pmatrix}.

A canonical basis for A is

\{x_3, x_2, x_1, y_1\} = \left\{ \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} -2 \\ 2 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 2 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} -14 \\ 4 \\ -3 \\ 1 \end{pmatrix} \right\}.

x_1, x_2 and x_3 are generalized eigenvectors associated with λ_1, while y_1 is the ordinary eigenvector associated with λ_2.

This is a fairly simple example. In general, the numbers ρ_k of linearly independent generalized eigenvectors of rank k will not always be equal. That is, there may be several chains of different lengths corresponding to a particular eigenvalue.
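The chain relations of this example are easy to verify numerically; a short NumPy sketch (an illustration, not part of the original text):

```python
import numpy as np

# Sketch: applying (A - 5I) walks down the Jordan chain x3 -> x2 -> x1 -> 0.
A  = np.array([[5, 1, -2, 4],
               [0, 5,  2, 2],
               [0, 0,  5, 3],
               [0, 0,  0, 4]], dtype=float)
N  = A - 5 * np.eye(4)
x3 = np.array([0.0, 0.0, 1.0, 0.0])
x2 = N @ x3                   # [-2.  2.  0.  0.]
x1 = N @ x2                   # [ 2.  0.  0.  0.]
print(x2, x1, N @ x1)         # N @ x1 is the zero vector, so x1 is an ordinary eigenvector
```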

Generalized modal matrix

Let A be an n × n matrix. A generalized modal matrix M for A is an n × n matrix whose columns, considered as vectors, form a canonical basis for A and appear in M according to the following rules:

1. All Jordan chains consisting of one vector (that is, one vector in length) appear in the first columns of M.
2. All vectors of one chain appear together in adjacent columns of M.
3. Each chain appears in M in order of increasing rank (that is, the generalized eigenvector of rank 1 appears before the generalized eigenvector of rank 2 of the same chain, which appears before the generalized eigenvector of rank 3 of the same chain, etc.).

Jordan normal form

See main article: Jordan normal form. Let V be an n-dimensional vector space; let φ be a linear map in L(V), the set of all linear maps from V into itself; and let A be the matrix representation of φ with respect to some ordered basis. It can be shown that if the characteristic polynomial f(λ) of A factors into linear factors, so that f(λ) has the form

f(λ) = ±(λ - λ_1)^{μ_1}(λ - λ_2)^{μ_2} \cdots (λ - λ_r)^{μ_r},

where λ_1, λ_2, \ldots, λ_r are the distinct eigenvalues of A, then each μ_i is the algebraic multiplicity of its corresponding eigenvalue λ_i and A is similar to a matrix J in Jordan normal form, where each λ_i appears μ_i consecutive times on the diagonal, and the entry directly above each λ_i (that is, on the superdiagonal) is either 0 or 1: in each block the entry above the first occurrence of each λ_i is always 0 (except in the first block); all other entries on the superdiagonal are 1. All other entries (that is, off the diagonal and superdiagonal) are 0. (No ordering is imposed among the eigenvalues, or among the blocks for a given eigenvalue.) The matrix J is as close as one can come to a diagonalization of A. If A is diagonalizable, then all entries above the diagonal are zero. Note that some textbooks place the ones on the subdiagonal, that is, immediately below the main diagonal instead of on the superdiagonal. The eigenvalues are still on the main diagonal.

Every n × n matrix A is similar to a matrix J in Jordan normal form, obtained through the similarity transformation J = M^{-1}AM, where M is a generalized modal matrix for A. (See Note above.)

Example 4

Find a matrix in Jordan normal form that is similar to

A = \begin{pmatrix} 0&4&2 \\ -3&8&3 \\ 4&-8&-2 \end{pmatrix}.

Solution: The characteristic equation of A is (λ - 2)^3 = 0; hence, λ = 2 is an eigenvalue of algebraic multiplicity three. Following the procedures of the previous sections, we find that

\operatorname{rank}(A - 2I) = 1

and

\operatorname{rank}(A - 2I)^2 = 0 = n - μ.

Thus, ρ_2 = 1 and ρ_1 = 2, which implies that a canonical basis for A will contain one linearly independent generalized eigenvector of rank 2 and two linearly independent generalized eigenvectors of rank 1, or equivalently, one chain of two vectors \{x_2, x_1\} and one chain of one vector \{y_1\}. Designating M = \begin{pmatrix} y_1 & x_1 & x_2 \end{pmatrix}, we find that

M = \begin{pmatrix} 2&2&0 \\ 1&3&0 \\ 0&-4&1 \end{pmatrix}

and

J = \begin{pmatrix} 2&0&0 \\ 0&2&1 \\ 0&0&2 \end{pmatrix},

where M is a generalized modal matrix for A, the columns of M are a canonical basis for A, and AM = MJ. Note that since generalized eigenvectors themselves are not unique, and since some of the columns of both M and J may be interchanged, it follows that both M and J are not unique.
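A quick numerical check of this decomposition (an illustration, assuming NumPy):

```python
import numpy as np

# Sketch: confirm J = M^{-1} A M for Example 4.
A = np.array([[ 0,  4,  2],
              [-3,  8,  3],
              [ 4, -8, -2]], dtype=float)
M = np.array([[2,  2, 0],
              [1,  3, 0],
              [0, -4, 1]], dtype=float)
print(np.linalg.inv(M) @ A @ M)   # reproduces J up to round-off
```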

Example 5

In Example 3, we found a canonical basis of linearly independent generalized eigenvectors for a matrix A. A generalized modal matrix for A is

M = \begin{pmatrix} y_1 & x_1 & x_2 & x_3 \end{pmatrix} = \begin{pmatrix} -14&2&-2&0 \\ 4&0&2&0 \\ -3&0&0&1 \\ 1&0&0&0 \end{pmatrix}.

A matrix in Jordan normal form, similar to A, is

J = \begin{pmatrix} 4&0&0&0 \\ 0&5&1&0 \\ 0&0&5&1 \\ 0&0&0&5 \end{pmatrix},

so that AM = MJ.

Applications

Matrix functions

See main article: Matrix function. Three of the most fundamental operations which can be performed on square matrices are matrix addition, multiplication by a scalar, and matrix multiplication. These are exactly those operations necessary for defining a polynomial function of an n × n matrix A. If we recall from basic calculus that many functions can be written as a Maclaurin series, then we can define more general functions of matrices quite easily. If A is diagonalizable, that is

D = M^{-1}AM,

with

D = \begin{pmatrix} λ_1 & 0 & \cdots & 0 \\ 0 & λ_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & λ_n \end{pmatrix},

then

D^k = \begin{pmatrix} λ_1^k & 0 & \cdots & 0 \\ 0 & λ_2^k & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & λ_n^k \end{pmatrix}

and the evaluation of the Maclaurin series for functions of A is greatly simplified. For example, to obtain any power k of A, we need only compute D^k, premultiply D^k by M, and postmultiply the result by M^{-1}.
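A minimal sketch of this recipe for A^k, assuming A is diagonalizable (names are illustrative):

```python
import numpy as np

# Sketch: A^k = M D^k M^{-1}, where D^k just raises each diagonal entry to the k-th power.
def power_via_diagonalization(A, k):
    eigenvalues, M = np.linalg.eig(A)   # assumes A is diagonalizable
    Dk = np.diag(eigenvalues ** k)
    return M @ Dk @ np.linalg.inv(M)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # symmetric, hence diagonalizable
print(power_via_diagonalization(A, 3))
print(np.linalg.matrix_power(A, 3))     # agrees with the direct computation
```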

Using generalized eigenvectors, we can obtain the Jordan normal form for A, and these results can be generalized to a straightforward method for computing functions of nondiagonalizable matrices. (See Matrix function#Jordan decomposition.)

Differential equations

See main article: Ordinary differential equation. Consider the problem of solving the system of linear ordinary differential equations

x' = Ax,

where

x = \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{pmatrix}, \qquad x' = \begin{pmatrix} x_1'(t) \\ x_2'(t) \\ \vdots \\ x_n'(t) \end{pmatrix}, \qquad A = (a_{ij}).

If the matrix A is a diagonal matrix so that a_{ij} = 0 for i ≠ j, then the system reduces to a system of n uncoupled equations, which take the form

x_i' = a_{ii} x_i.

In this case, the general solution is given by

x_1 = k_1 e^{a_{11} t},
x_2 = k_2 e^{a_{22} t},
\vdots
x_n = k_n e^{a_{nn} t}.

In the general case, we try to diagonalize A and reduce the system to the diagonal form above as follows. If A is diagonalizable, we have D = M^{-1}AM, where M is a modal matrix for A. Substituting A = MDM^{-1}, the system x' = Ax takes the form M^{-1}x' = D(M^{-1}x), or

y' = Dy,

where

y = M^{-1}x.

The solution of y' = Dy is

y_1 = k_1 e^{λ_1 t},
y_2 = k_2 e^{λ_2 t},
\vdots
y_n = k_n e^{λ_n t}.

The solution x of the original system is then obtained using the relation x = My.

On the other hand, if A is not diagonalizable, we choose M to be a generalized modal matrix for A, such that J = M^{-1}AM is the Jordan normal form of A. The system y' = Jy has the form

y_1' = λ_1 y_1 + ε_1 y_2,
\vdots
y_{n-1}' = λ_{n-1} y_{n-1} + ε_{n-1} y_n,
y_n' = λ_n y_n,

where the λ_i are the eigenvalues from the main diagonal of J and the ε_i are the ones and zeros from the superdiagonal of J. This system is often more easily solved than the original one. We may solve the last equation for y_n, obtaining y_n = k_n e^{λ_n t}. We then substitute this solution for y_n into the next to last equation and solve for y_{n-1}. Continuing this procedure, we work through the system from the last equation to the first, solving it entirely for y. The solution x is then obtained using the relation x = My.
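For a numeric cross-check (this uses the matrix exponential from SciPy rather than the back-substitution described above): the solution of x' = Ax with x(0) = x_0 is x(t) = e^{At} x_0, whether or not A is diagonalizable.

```python
import numpy as np
from scipy.linalg import expm

# Sketch: solve x' = A x, x(0) = x0, via the matrix exponential.
A  = np.array([[1.0, 1.0],
               [0.0, 1.0]])   # the defective matrix of Example 1
x0 = np.array([1.0, 1.0])
t  = 2.0
print(expm(A * t) @ x0)       # the t*e^t terms reflect the 2x2 Jordan block
```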

Lemma:

Given the following chain of generalized eigenvectors v_1, v_2, \ldots, v_r of length r (so that v_1 is an ordinary eigenvector), the functions

X_1 = v_1 e^{λt},
X_2 = (t v_1 + v_2) e^{λt},
X_3 = \left( \frac{t^2}{2} v_1 + t v_2 + v_3 \right) e^{λt},
\vdots
X_r = \left( \frac{t^{r-1}}{(r-1)!} v_1 + \cdots + \frac{t^2}{2} v_{r-2} + t v_{r-1} + v_r \right) e^{λt}

solve the system of equations

X' = AX.

Proof:

Define

X_j(t) = e^{λt} \sum_{i=1}^{j} \frac{t^{j-i}}{(j-i)!} v_i.

Then

X_j'(t) = e^{λt} \sum_{i=1}^{j-1} \frac{t^{j-i-1}}{(j-i-1)!} v_i + λ e^{λt} \sum_{i=1}^{j} \frac{t^{j-i}}{(j-i)!} v_i.

On the other hand, since A v_i = v_{i-1} + λ v_i along the chain (with v_0 = 0), we have

A X_j(t) = e^{λt} \sum_{i=1}^{j} \frac{t^{j-i}}{(j-i)!} A v_i
= e^{λt} \sum_{i=1}^{j} \frac{t^{j-i}}{(j-i)!} (v_{i-1} + λ v_i)
= e^{λt} \sum_{i=1}^{j} \frac{t^{j-i}}{(j-i)!} v_{i-1} + λ e^{λt} \sum_{i=1}^{j} \frac{t^{j-i}}{(j-i)!} v_i
= e^{λt} \sum_{i=1}^{j-1} \frac{t^{j-i-1}}{(j-i-1)!} v_i + λ e^{λt} \sum_{i=1}^{j} \frac{t^{j-i}}{(j-i)!} v_i
= X_j'(t),

as required.
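A numerical spot-check of the lemma for the chain {v_1, v_2} of Example 1 (an illustration; the derivative is approximated by a central difference):

```python
import numpy as np

# Sketch: X2(t) = (t*v1 + v2) * exp(lam*t) should satisfy X2'(t) = A @ X2(t).
A   = np.array([[1.0, 1.0],
                [0.0, 1.0]])
v1  = np.array([1.0, 0.0])
v2  = np.array([0.0, 1.0])
lam = 1.0

def X2(t):
    return (t * v1 + v2) * np.exp(lam * t)

t, h = 0.7, 1e-6
numeric_derivative = (X2(t + h) - X2(t - h)) / (2 * h)
print(numeric_derivative, A @ X2(t))   # the two vectors agree to high precision
```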

References