Gram matrix explained

In linear algebra, the Gram matrix (or Gramian matrix, Gramian) of a set of vectors $v_1, \dots, v_n$ in an inner product space is the Hermitian matrix of inner products, whose entries are given by the inner product $G_{ij} = \langle v_i, v_j \rangle$.[1] If the vectors $v_1, \dots, v_n$ are the columns of a matrix $X$, then the Gram matrix is $X^\dagger X$ in the general case that the vector coordinates are complex numbers, which simplifies to $X^\top X$ for the case that the vector coordinates are real numbers.

An important application is to compute linear independence: a set of vectors is linearly independent if and only if the Gram determinant (the determinant of the Gram matrix) is non-zero.

It is named after Jørgen Pedersen Gram.
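
As a minimal numerical sketch (not part of the original article), the following Python/NumPy snippet forms the Gram matrix of the columns of a small matrix $X$ and uses the Gram determinant to test linear independence; the example vectors are chosen arbitrarily.

```python
import numpy as np

# Columns of X are the vectors v1, v2, v3 (here in R^4).
X = np.array([[1.0, 0.0, 1.0],
              [2.0, 1.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

# Gram matrix: X^T X in the real case (X^dagger X in general).
G = X.conj().T @ X

# The Gram determinant is zero exactly when the columns are linearly dependent.
# Here v3 = v1 + v2, so the determinant is (numerically) zero.
print(G)
print("Gram determinant:", np.linalg.det(G))
```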

Examples

For finite-dimensional real vectors in $\mathbb{R}^n$ with the usual Euclidean dot product, the Gram matrix is $G = V^\top V$, where $V$ is a matrix whose columns are the vectors $v_k$ and $V^\top$ is its transpose whose rows are the vectors $v_k^\top$. For complex vectors in $\mathbb{C}^n$, $G = V^\dagger V$, where $V^\dagger$ is the conjugate transpose of $V$.

Given square-integrable functions $\{\ell_i(\cdot),\ i = 1, \dots, n\}$ on the interval $[t_0, t_f]$, the Gram matrix $G = [G_{ij}]$ is:

$$G_{ij} = \int_{t_0}^{t_f} \ell_i^*(\tau)\, \ell_j(\tau)\, d\tau,$$

where $\ell_i^*(\tau)$ is the complex conjugate of $\ell_i(\tau)$.
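
The same construction can be approximated numerically. The sketch below (an illustration, assuming the functions $1$, $t$, $t^2$ on $[0, 1]$) estimates the entries $G_{ij}$ with the trapezoidal rule; the exact result is the $3 \times 3$ Hilbert matrix.

```python
import numpy as np

# Example functions on [t0, tf] = [0, 1]; chosen for illustration only.
functions = [lambda t: np.ones_like(t),
             lambda t: t,
             lambda t: t**2]

t = np.linspace(0.0, 1.0, 2001)
n = len(functions)
G = np.empty((n, n))

for i in range(n):
    for j in range(n):
        # G_ij = integral of conj(l_i(tau)) * l_j(tau) d tau, via the trapezoidal rule.
        G[i, j] = np.trapz(np.conj(functions[i](t)) * functions[j](t), t)

print(G)  # approximates [[1, 1/2, 1/3], [1/2, 1/3, 1/4], [1/3, 1/4, 1/5]]
```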

For any bilinear form $B$ on a finite-dimensional vector space over any field we can define a Gram matrix $G$ attached to a set of vectors $v_1, \dots, v_n$ by $G_{ij} = B(v_i, v_j)$. The matrix will be symmetric if the bilinear form $B$ is symmetric.
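
A small sketch of this more general construction (my own illustration; the form $B(x, y) = x^\top A y$ with an arbitrarily chosen symmetric $A$ is an assumption of the example):

```python
import numpy as np

# An arbitrary symmetric matrix defining the bilinear form B(x, y) = x^T A y.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

def B(x, y):
    return x @ A @ y

# A set of vectors in R^2.
vs = [np.array([1.0, 0.0]), np.array([1.0, 1.0]), np.array([0.0, 2.0])]

# Gram matrix attached to the vectors: G_ij = B(v_i, v_j).
G = np.array([[B(vi, vj) for vj in vs] for vi in vs])
print(G)                       # symmetric because A (and hence B) is symmetric
print(np.allclose(G, G.T))     # True
```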

Applications

Given an embedded $k$-dimensional Riemannian manifold $M \subset \mathbb{R}^n$ and a parametrization $\phi: U \to M$ for $(x_1, \dots, x_k) \in U \subset \mathbb{R}^k$, the volume form $\omega$ on $M$ induced by the embedding may be computed using the Gramian of the coordinate tangent vectors:

$$\omega = \sqrt{\det G}\ dx_1 \cdots dx_k, \quad G = \left[\left\langle \frac{\partial\phi}{\partial x_i}, \frac{\partial\phi}{\partial x_j} \right\rangle\right].$$

This generalizes the classical surface integral of a parametrized surface $\phi: U \to S \subset \mathbb{R}^3$ for $(x, y) \in U \subset \mathbb{R}^2$:

$$\int_S f\ dA = \iint_U f(\phi(x, y))\, \left|\frac{\partial\phi}{\partial x} \times \frac{\partial\phi}{\partial y}\right|\, dx\, dy.$$
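
For a concrete numerical check (an illustration, not from the article; the parametrization and function names are my own), the sketch below integrates $\sqrt{\det G}$ over the standard spherical parametrization of the unit sphere; the result should be close to the surface area $4\pi$.

```python
import numpy as np

# Unit-sphere parametrization phi(u, v) = (sin u cos v, sin u sin v, cos u),
# u in (0, pi), v in (0, 2*pi); coordinate tangent vectors written out analytically.
def tangents(u, v):
    d_u = np.array([np.cos(u) * np.cos(v), np.cos(u) * np.sin(v), -np.sin(u)])
    d_v = np.array([-np.sin(u) * np.sin(v), np.sin(u) * np.cos(v), 0.0])
    return d_u, d_v

nu, nv = 200, 200
du, dv = np.pi / nu, 2 * np.pi / nv

area = 0.0
for i in range(nu):
    for j in range(nv):
        d_u, d_v = tangents((i + 0.5) * du, (j + 0.5) * dv)  # midpoint rule
        G = np.array([[d_u @ d_u, d_u @ d_v],
                      [d_v @ d_u, d_v @ d_v]])               # Gramian of tangent vectors
        area += np.sqrt(np.linalg.det(G)) * du * dv

print(area, 4 * np.pi)  # the two values should agree to a few decimal places
```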

Properties

Positive-semidefiniteness

The Gram matrix is symmetric when the inner product is real-valued; it is Hermitian in the general, complex case, by the definition of an inner product.

The Gram matrix is positive semidefinite, and every positive semidefinite matrix is the Gramian matrix for some set of vectors. The fact that the Gramian matrix is positive-semidefinite can be seen from the following simple derivation:

$$x^\dagger G x = \sum_{i,j} x_i^*\, x_j \left\langle v_i, v_j \right\rangle = \sum_{i,j} \left\langle x_i v_i, x_j v_j \right\rangle = \left\langle \sum_i x_i v_i, \sum_j x_j v_j \right\rangle = \left\| \sum_i x_i v_i \right\|^2 \geq 0.$$

The first equality follows from the definition of matrix multiplication, the second and third from the bilinearity of the inner product, and the last from the positive definiteness of the inner product. Note that this also shows that the Gramian matrix is positive definite if and only if the vectors $v_i$ are linearly independent (that is, $\sum_i x_i v_i \neq 0$ for all nonzero $x$).[1]
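
This identity is easy to verify numerically. The sketch below (illustrative only, with randomly generated complex vectors) compares $x^\dagger G x$ with $\bigl\|\sum_i x_i v_i\bigr\|^2$ and confirms that the eigenvalues of $G$ are non-negative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random complex vectors as the columns of V, and a random coefficient vector x.
V = rng.normal(size=(5, 3)) + 1j * rng.normal(size=(5, 3))
x = rng.normal(size=3) + 1j * rng.normal(size=3)

G = V.conj().T @ V                      # Gram matrix G_ij = <v_i, v_j>
quad = (x.conj() @ G @ x).real          # x^dagger G x
norm_sq = np.linalg.norm(V @ x) ** 2    # || sum_i x_i v_i ||^2

print(quad, norm_sq)                    # equal up to rounding, and nonnegative
print(np.min(np.linalg.eigvalsh(G)))    # eigenvalues of G are >= 0
```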

Finding a vector realization

Given any positive semidefinite matrix $M$, one can decompose it as:

$$M = B^\dagger B,$$

where $B^\dagger$ is the conjugate transpose of $B$ (or $M = B^{\mathsf T} B$ in the real case). Here $B$ is a $k \times n$ matrix, where $k$ is the rank of $M$. Various ways to obtain such a decomposition include computing the Cholesky decomposition or taking the non-negative square root of $M$.

The columns $b^{(1)}, \dots, b^{(n)}$ of $B$ can be seen as $n$ vectors in $\mathbb{C}^k$ (or $k$-dimensional Euclidean space $\mathbb{R}^k$, in the real case). Then

$$M_{ij} = b^{(i)} \cdot b^{(j)},$$

where the dot product $a \cdot b = \sum_{\ell=1}^k a_\ell^*\, b_\ell$ is the usual inner product on $\mathbb{C}^k$.

Thus $M$ is positive semidefinite if and only if it is the Gram matrix of some vectors $b^{(1)}, \dots, b^{(n)}$. Such vectors are called a vector realization of $M$. The infinite-dimensional analog of this statement is Mercer's theorem.
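
One possible sketch of computing a vector realization (an illustration; it uses the non-negative square root via an eigendecomposition rather than a Cholesky factorization, so that rank-deficient $M$ is handled as well):

```python
import numpy as np

def vector_realization(M, tol=1e-12):
    """Return a k x n matrix B with B^dagger B = M, where k = rank(M)."""
    w, Q = np.linalg.eigh(M)          # M = Q diag(w) Q^dagger, w real
    keep = w > tol                    # discard (numerically) zero eigenvalues
    # Each row of B is sqrt(eigenvalue) times the conjugated eigenvector.
    return np.sqrt(w[keep])[:, None] * Q[:, keep].conj().T

# A positive semidefinite example of rank 2.
M = np.array([[2.0, 1.0, 1.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])

B = vector_realization(M)
print(B.shape)                        # (2, 3): columns b(1), b(2), b(3) live in a 2-dimensional space
print(np.allclose(B.conj().T @ B, M)) # True: M is recovered as a Gram matrix
```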

Uniqueness of vector realizations

If $M$ is the Gram matrix of vectors $v_1, \dots, v_n$ in $\mathbb{R}^k$, then applying any rotation or reflection of $\mathbb{R}^k$ (any orthogonal transformation, that is, any Euclidean isometry preserving 0) to the sequence of vectors results in the same Gram matrix. That is, for any $k \times k$ orthogonal matrix $Q$, the Gram matrix of $Qv_1, \dots, Qv_n$ is also $M$.

This is the only way in which two real vector realizations of $M$ can differ: the vectors $v_1, \dots, v_n$ are unique up to orthogonal transformations. In other words, the dot products $v_i \cdot v_j$ and $w_i \cdot w_j$ are equal if and only if some rigid transformation of $\mathbb{R}^k$ transforms the vectors $v_1, \dots, v_n$ to $w_1, \dots, w_n$ and 0 to 0.
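
A quick numerical sketch of this invariance (illustrative only): applying a random orthogonal matrix $Q$ to all vectors leaves the Gram matrix unchanged, since $(QV)^\top (QV) = V^\top Q^\top Q V = V^\top V$.

```python
import numpy as np

rng = np.random.default_rng(1)

V = rng.normal(size=(4, 3))                   # columns are vectors v1, v2, v3 in R^4
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # a random 4 x 4 orthogonal matrix

G_before = V.T @ V
G_after = (Q @ V).T @ (Q @ V)                 # Gram matrix of Qv1, Qv2, Qv3

print(np.allclose(G_before, G_after))         # True
```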

The same holds in the complex case, with unitary transformations in place of orthogonal ones. That is, if the Gram matrix of vectors $v_1, \dots, v_n$ is equal to the Gram matrix of vectors $w_1, \dots, w_n$ in $\mathbb{C}^k$, then there is a unitary $k \times k$ matrix $U$ (meaning $U^\dagger U = I$) such that $v_i = U w_i$ for $i = 1, \dots, n$.[3]

Other properties

Because $G = G^\dagger$, it is necessarily the case that $G$ and $G^\dagger$ commute. That is, a real or complex Gram matrix $G$ is also a normal matrix.

The rank of the Gram matrix of vectors in $\mathbb{R}^k$ or $\mathbb{C}^k$ equals the dimension of the space spanned by these vectors.[1]

Gram determinant

The Gram determinant or Gramian is the determinant of the Gram matrix:

$$\bigl|G(v_1, \dots, v_n)\bigr| = \begin{vmatrix} \langle v_1,v_1\rangle & \langle v_1,v_2\rangle & \dots & \langle v_1,v_n\rangle \\ \langle v_2,v_1\rangle & \langle v_2,v_2\rangle & \dots & \langle v_2,v_n\rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle v_n,v_1\rangle & \langle v_n,v_2\rangle & \dots & \langle v_n,v_n\rangle \end{vmatrix}.$$

If $v_1, \dots, v_n$ are vectors in $\mathbb{R}^m$, then the Gram determinant is the square of the $n$-dimensional volume of the parallelotope formed by the vectors. In particular, the vectors are linearly independent if and only if the parallelotope has nonzero $n$-dimensional volume, if and only if the Gram determinant is nonzero, if and only if the Gram matrix is nonsingular. When $n > m$, the determinant and volume are zero. When $n = m$, this reduces to the standard theorem that the absolute value of the determinant of $n$ $n$-dimensional vectors is the $n$-dimensional volume. The Gram determinant is also useful for computing the volume of the simplex formed by the vectors; its volume is $\sqrt{\bigl|G(v_1, \dots, v_n)\bigr|}\,/\,n!$.
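
As a numerical sketch (my own example): for two vectors in $\mathbb{R}^3$, the square root of the Gram determinant gives the area of the parallelogram they span, which can be cross-checked against the magnitude of the cross product.

```python
import numpy as np

v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([0.0, 1.0, 3.0])
V = np.column_stack([v1, v2])

gram_det = np.linalg.det(V.T @ V)
area_from_gram = np.sqrt(gram_det)          # n-dimensional volume (here n = 2: an area)
area_from_cross = np.linalg.norm(np.cross(v1, v2))

print(area_from_gram, area_from_cross)      # the two values agree
```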

The Gram determinant can also be expressed in terms of the exterior product of vectors by

$$\bigl|G(v_1, \dots, v_n)\bigr| = \| v_1 \wedge \cdots \wedge v_n \|^2.$$

When the vectors $v_1, \ldots, v_n \in \mathbb{R}^m$ are defined from the positions of points $p_1, \ldots, p_n$ relative to some reference point $p_{n+1}$,

$$(v_1, v_2, \ldots, v_n) = (p_1 - p_{n+1},\ p_2 - p_{n+1},\ \ldots,\ p_n - p_{n+1})\,,$$

then the Gram determinant can be written as the difference of two Gram determinants,

$$\bigl|G(v_1, \dots, v_n)\bigr| = \bigl|G((p_1, 1), \dots, (p_{n+1}, 1))\bigr| - \bigl|G(p_1, \dots, p_{n+1})\bigr|\,,$$

where each $(p_j, 1)$ is the corresponding point $p_j$ supplemented with the coordinate value of $1$ for an $(m+1)$-st dimension. Note that in the common case that $n = m$, the second term on the right-hand side will be zero.
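
This identity can be checked numerically; the sketch below (illustrative only, with randomly generated points in $\mathbb{R}^3$) compares both sides.

```python
import numpy as np

rng = np.random.default_rng(2)

m, n = 3, 2
points = rng.normal(size=(n + 1, m))             # p_1, ..., p_{n+1} as rows
v = points[:n] - points[n]                       # v_i = p_i - p_{n+1}

def gram_det(rows):
    # Gram determinant of the row vectors.
    return np.linalg.det(rows @ rows.T)

lifted = np.hstack([points, np.ones((n + 1, 1))])  # each p_j supplemented with a 1

lhs = gram_det(v)
rhs = gram_det(lifted) - gram_det(points)
print(lhs, rhs)                                  # equal up to rounding
```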

Constructing an orthonormal basis

Given a set of linearly independent vectors $\{v_i\}$ with Gram matrix $G$ defined by $G_{ij} := \langle v_i, v_j \rangle$, one can construct an orthonormal basis

$$u_i := \sum_j \bigl(G^{-1/2}\bigr)_{ji}\, v_j.$$

In matrix notation, $U = V G^{-1/2}$, where $U$ has orthonormal basis vectors $\{u_i\}$ and the matrix $V$ is composed of the given column vectors $\{v_i\}$.

The matrix $G^{-1/2}$ is guaranteed to exist. Indeed, $G$ is Hermitian, and so can be decomposed as $G = U D U^\dagger$ with $U$ a unitary matrix and $D$ a real diagonal matrix. Additionally, the $v_i$ are linearly independent if and only if $G$ is positive definite, which implies that the diagonal entries of $D$ are positive. $G^{-1/2}$ is therefore uniquely defined by $G^{-1/2} := U D^{-1/2} U^\dagger$. One can check that these new vectors are orthonormal:

$$\begin{align} \langle u_i, u_j \rangle &= \sum_{i'} \sum_{j'} \left\langle \bigl(G^{-1/2}\bigr)_{i'i} v_{i'},\ \bigl(G^{-1/2}\bigr)_{j'j} v_{j'} \right\rangle \\ &= \sum_{i'} \sum_{j'} \bigl(G^{-1/2}\bigr)_{ii'}\, G_{i'j'}\, \bigl(G^{-1/2}\bigr)_{j'j} \\ &= \bigl(G^{-1/2}\, G\, G^{-1/2}\bigr)_{ij} = \delta_{ij}, \end{align}$$

where we used $\bigl(G^{-1/2}\bigr)^\dagger = G^{-1/2}$.
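
This construction is sometimes implemented as symmetric (Löwdin) orthogonalization. The sketch below (my own illustration, with arbitrary random vectors) builds $G^{-1/2}$ from an eigendecomposition of $G$ and checks that the resulting vectors are orthonormal.

```python
import numpy as np

rng = np.random.default_rng(3)

# Linearly independent column vectors v_i (complex, to exercise the general case).
V = rng.normal(size=(5, 3)) + 1j * rng.normal(size=(5, 3))

G = V.conj().T @ V                       # G_ij = <v_i, v_j>, Hermitian positive definite
w, Q = np.linalg.eigh(G)                 # G = Q diag(w) Q^dagger with w > 0
G_inv_sqrt = Q @ np.diag(w ** -0.5) @ Q.conj().T

U = V @ G_inv_sqrt                       # u_i = sum_j (G^{-1/2})_{ji} v_j
print(np.allclose(U.conj().T @ U, np.eye(3)))   # True: the u_i are orthonormal
```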


Notes and References

  1. , p.441, Theorem 7.2.10
  2. Lanckriet, G. R. G.; Cristianini, N.; Bartlett, P.; El Ghaoui, L.; Jordan, M. I. (2004). "Learning the kernel matrix with semidefinite programming". Journal of Machine Learning Research. 5: 27–72 [p. 29].
  3. , p. 452, Theorem 7.3.11