Alternant matrix explained

In linear algebra, an alternant matrix is a matrix formed by applying a finite list of functions pointwise to a fixed column of inputs. An alternant determinant is the determinant of a square alternant matrix.

Generally, if f_1, f_2, \ldots, f_n are functions from a set X to a field F, and \alpha_1, \alpha_2, \ldots, \alpha_m \in X, then the alternant matrix has size m \times n and is defined by

M = \begin{bmatrix}
f_1(\alpha_1) & f_2(\alpha_1) & \cdots & f_n(\alpha_1) \\
f_1(\alpha_2) & f_2(\alpha_2) & \cdots & f_n(\alpha_2) \\
f_1(\alpha_3) & f_2(\alpha_3) & \cdots & f_n(\alpha_3) \\
\vdots & \vdots & \ddots & \vdots \\
f_1(\alpha_m) & f_2(\alpha_m) & \cdots & f_n(\alpha_m)
\end{bmatrix}

or, more compactly, M_{ij} = f_j(\alpha_i). (Some authors use the transpose of the above matrix.) Examples of alternant matrices include Vandermonde matrices, for which f_j(\alpha) = \alpha^{j-1}, and Moore matrices, for which f_j(\alpha) = \alpha^{q^{j-1}}.
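
As a quick numerical illustration of the definition M_{ij} = f_j(\alpha_i), here is a minimal sketch in Python (the helper name alternant and the sample data are chosen here for illustration, not taken from any library); with f_j(\alpha) = \alpha^{j-1} it reproduces a Vandermonde matrix.

    import numpy as np

    def alternant(funcs, points):
        # M[i, j] = funcs[j](points[i]), an m x n alternant matrix
        return np.array([[f(a) for f in funcs] for a in points], dtype=float)

    # Vandermonde case: f_j(alpha) = alpha**(j - 1) for j = 1, 2, 3
    funcs = [lambda a, j=j: a ** (j - 1) for j in range(1, 4)]
    points = [2.0, 3.0, 5.0]
    M = alternant(funcs, points)
    print(M)                 # rows are [1, alpha, alpha**2]
    print(np.linalg.det(M))  # (3-2)*(5-2)*(5-3) = 6, the Vandermonde determinant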

Properties

The alternant matrix can be used to check the linear independence of the functions f_1, f_2, \ldots, f_n in function space. For example, let f_1(x) = \sin(x), f_2(x) = \cos(x) and choose \alpha_1 = 0, \alpha_2 = \pi/2. Then the alternant is the matrix \left[\begin{smallmatrix}0 & 1 \\ 1 & 0\end{smallmatrix}\right] and the alternant determinant is -1 \neq 0. Therefore M is invertible and the vectors \{\sin(x), \cos(x)\} form a basis for their spanning set: in particular, \sin(x) and \cos(x) are linearly independent.
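
A minimal NumPy check of this example (the variable names are chosen here for illustration): evaluating \sin and \cos at 0 and \pi/2 gives a nonsingular alternant.

    import numpy as np

    funcs = [np.sin, np.cos]
    points = [0.0, np.pi / 2]

    M = np.array([[f(a) for f in funcs] for a in points])  # M[i, j] = funcs[j](points[i])
    print(np.round(M, 12))   # [[0. 1.] [1. 0.]]
    print(np.linalg.det(M))  # approximately -1, nonzero, so sin and cos are independent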

However, a zero alternant determinant does not imply that the functions are linearly dependent in function space. For example, again let f_1(x) = \sin(x), f_2(x) = \cos(x), but now choose \alpha_1 = 0, \alpha_2 = \pi. Then the alternant is \left[\begin{smallmatrix}0 & 1 \\ 0 & -1\end{smallmatrix}\right] and the alternant determinant is 0, but we have already seen that \sin(x) and \cos(x) are linearly independent.
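
Repeating the same check with the sample points 0 and \pi (again only an illustrative sketch) gives a singular alternant even though the functions remain independent:

    import numpy as np

    points = [0.0, np.pi]
    M = np.array([[np.sin(a), np.cos(a)] for a in points])
    print(np.linalg.det(M))  # approximately 0, yet sin and cos are linearly independent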

On the other hand, the alternant can be used to find a linear dependence that does exist. For example, choose f_1(x) = \frac{1}{x+1}, f_2(x) = \frac{1}{x+2}, f_3(x) = \frac{1}{(x+1)(x+2)} and the sample points \alpha_1 = 1, \alpha_2 = 2, \alpha_3 = 3; we obtain the alternant

\begin{bmatrix} 1/2 & 1/3 & 1/6 \\ 1/3 & 1/4 & 1/12 \\ 1/4 & 1/5 & 1/20 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}.

Therefore, (1, -1, -1) is in the nullspace of the matrix: that is, f_1 - f_2 - f_3 = 0. Moving f_3 to the other side of the equation gives the partial fraction decomposition

\frac{1}{(x+1)(x+2)} = \frac{1}{x+1} - \frac{1}{x+2}.
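
The same dependence can be recovered mechanically with exact arithmetic. A sketch using SymPy (the variable names are chosen here for illustration): build the alternant at the points 1, 2, 3 and read the relation off its nullspace.

    from sympy import Matrix, symbols, simplify

    x = symbols('x')
    funcs = [1/(x + 1), 1/(x + 2), 1/((x + 1)*(x + 2))]
    points = [1, 2, 3]

    # M[i, j] = f_j evaluated at alpha_i, kept as exact rationals
    M = Matrix([[f.subs(x, a) for f in funcs] for a in points])
    print(M.nullspace())  # spans (-1, 1, 1), i.e. the relation f1 - f2 - f3 = 0
    print(simplify(funcs[0] - funcs[1] - funcs[2]))  # 0, confirming the identity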

If n = m and \alpha_i = \alpha_j for some i \neq j, then the alternant determinant is zero (as a row is repeated).

If n = m and the functions f_j(x) are all polynomials, then (\alpha_j - \alpha_i) divides the alternant determinant for all 1 \leq i < j \leq m. In particular, if V is a Vandermonde matrix, then \prod_{1 \leq i < j \leq m} (\alpha_j - \alpha_i) = \det V divides such polynomial alternant determinants. The ratio \frac{\det M}{\det V} is therefore a polynomial in \alpha_1, \ldots, \alpha_m called the bialternant. The Schur polynomial s_{(\lambda_1, \ldots, \lambda_n)} is classically defined as the bialternant of the polynomials f_j(x) = x^{\lambda_j}.
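
The divisibility claim is easy to check symbolically. The following sketch (SymPy; the three polynomials f_j are an arbitrary choice made here for illustration) forms the alternant determinant, divides by the Vandermonde determinant \prod_{i<j}(\alpha_j - \alpha_i), and simplifies the ratio to a polynomial:

    from sympy import Matrix, symbols, cancel, expand

    x = symbols('x')
    alphas = symbols('a1 a2 a3')

    # Arbitrary polynomial choices for f_1, f_2, f_3 (illustration only)
    funcs = [x**2 + 1, x**3, x**4 - x]

    M = Matrix([[f.subs(x, a) for f in funcs] for a in alphas])

    vandermonde_det = 1
    for i, ai in enumerate(alphas):
        for aj in alphas[i + 1:]:
            vandermonde_det *= (aj - ai)

    bialternant = cancel(M.det() / vandermonde_det)
    print(expand(bialternant))  # a polynomial in a1, a2, a3: det V divides det M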
