Coordinate vector explained

In linear algebra, a coordinate vector is a representation of a vector as an ordered list of numbers (a tuple) that describes the vector in terms of a particular ordered basis.[1] A simple example is a position such as (5, 2, 1) in a 3-dimensional Cartesian coordinate system, with the basis given by the axes of this system. Coordinates are always specified relative to an ordered basis. Bases and their associated coordinate representations let one realize vector spaces and linear transformations concretely as column vectors, row vectors, and matrices; hence, they are useful in calculations.

The idea of a coordinate vector can also be used for infinite-dimensional vector spaces, as addressed below.

Definition

Let V be a vector space of dimension n over a field F and let

B = \{ b_1, b_2, \ldots, b_n \}

be an ordered basis for V. Then for every

v\inV

there is a unique linear combination of the basis vectors that equals

v

:

v=\alpha1b1+\alpha2b2++\alphanbn.

The coordinate vector of v relative to B is the sequence of coordinates

[v]_B = (\alpha_1, \alpha_2, \ldots, \alpha_n).

This is also called the representation of v with respect to B, or the B representation of v. The \alpha_1, \alpha_2, \ldots, \alpha_n are called the coordinates of v. The order of the basis becomes important here, since it determines the order in which the coefficients are listed in the coordinate vector.

Coordinate vectors of finite-dimensional vector spaces can be represented by matrices as column or row vectors. In the above notation, one can write

[v]_B = \begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{bmatrix}

and

[v]_B^T = \begin{bmatrix} \alpha_1 & \alpha_2 & \cdots & \alpha_n \end{bmatrix}

where [v]_B^T is the transpose of the matrix [v]_B.
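
As a concrete illustration, when V = F^n the coordinate vector can be computed by solving a linear system whose columns are the basis vectors. The following sketch uses NumPy; the basis and vector are chosen here purely for illustration and are not taken from the text above.

```python
import numpy as np

# An illustrative ordered basis B = {b1, b2, b3} of R^3, stored as columns.
B = np.column_stack([[1, 0, 0],
                     [1, 1, 0],
                     [1, 1, 1]])

v = np.array([5, 2, 1])

# Solving B @ alpha = v yields the coordinate vector [v]_B, i.e. the unique
# coefficients with v = alpha_1*b1 + alpha_2*b2 + alpha_3*b3.
alpha = np.linalg.solve(B, v)
print(alpha)          # [3. 1. 1.]
print(B @ alpha)      # [5. 2. 1.], recovering v
```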

The standard representation

We can mechanize the above transformation by defining a function \phi_B, called the standard representation of V with respect to B, that takes every vector to its coordinate representation: \phi_B(v) = [v]_B. Then \phi_B is a linear transformation from V to F^n. In fact, it is an isomorphism, and its inverse \phi_B^{-1} : F^n \to V is simply

\phi_B^{-1}(\alpha_1, \ldots, \alpha_n) = \alpha_1 b_1 + \cdots + \alpha_n b_n.

Alternatively, we could have defined \phi_B^{-1} to be the above function from the beginning, realized that \phi_B^{-1} is an isomorphism, and defined \phi_B to be its inverse.
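
The two maps can be written down directly when V = R^n. The sketch below is a minimal NumPy illustration; the names phi_B and phi_B_inv, and the particular basis, are introduced here only for the example.

```python
import numpy as np

# Columns of B_mat are the basis vectors b1, ..., bn (illustrative basis of R^3).
B_mat = np.array([[1., 1., 1.],
                  [0., 1., 1.],
                  [0., 0., 1.]])

def phi_B(v):
    """Standard representation: send v to its coordinate vector [v]_B."""
    return np.linalg.solve(B_mat, v)

def phi_B_inv(alpha):
    """Inverse map: send (alpha_1, ..., alpha_n) to alpha_1*b1 + ... + alpha_n*bn."""
    return B_mat @ alpha

v = np.array([5., 2., 1.])
assert np.allclose(phi_B_inv(phi_B(v)), v)   # the two maps are mutually inverse
```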

Examples

Example 1

Let P_3 be the space of all the algebraic polynomials of degree at most 3 (i.e. the highest exponent of x can be 3). This space is linear and spanned by the following polynomials:

B_P = \left\{ 1, x, x^2, x^3 \right\}

matching

1 := \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}; \quad x := \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}; \quad x^2 := \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}; \quad x^3 := \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}

then the coordinate vector corresponding to the polynomial

p\left(x\right) = a_0 + a_1 x + a_2 x^2 + a_3 x^3

is

\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix}.

In this representation, the differentiation operator d/dx, which we shall denote D, is represented by the following matrix:

D p(x) = p'(x); \quad [D] = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{bmatrix}

Using this representation it is easy to explore the properties of the operator, such as invertibility, whether it is Hermitian, anti-Hermitian, or neither, and its spectrum and eigenvalues.
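
The following NumPy sketch works through this example; the particular polynomial is chosen here only to illustrate the matrix acting on coordinate vectors.

```python
import numpy as np

# Matrix of the differentiation operator D = d/dx on P_3
# with respect to the ordered basis B_P = {1, x, x^2, x^3}.
D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]])

# p(x) = 2 + 5x - x^2 + 4x^3 as the coordinate vector (a0, a1, a2, a3).
p = np.array([2, 5, -1, 4])

print(D @ p)                   # [ 5 -2 12  0], i.e. p'(x) = 5 - 2x + 12x^2
print(np.linalg.eigvals(D))    # all eigenvalues are 0: D is nilpotent, hence not invertible
```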

Example 2

The Pauli matrices represent the spin operator when the spin eigenstates are transformed into vector coordinates.
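
As a brief sketch (assuming the usual physics convention in which coordinates are taken with respect to the eigenbasis of the z component of spin), the Pauli matrices can be written down directly:

```python
import numpy as np

# Pauli matrices: the matrices of the spin-1/2 operators (up to a factor of hbar/2)
# in the ordered basis of sigma_z eigenstates {|up>, |down>}.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# sigma_z is diagonal because the chosen basis consists of its own eigenvectors.
print(np.linalg.eigvals(sigma_z))   # [ 1.+0.j -1.+0.j]
```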

Basis transformation matrix

Let B and C be two different bases of a vector space V, and let us mark with \lbrack M \rbrack_C^B the matrix which has columns consisting of the C representations of the basis vectors b_1, b_2, \ldots, b_n:

\lbrack M \rbrack_C^B = \begin{bmatrix} \lbrack b_1 \rbrack_C & \cdots & \lbrack b_n \rbrack_C \end{bmatrix}

This matrix is referred to as the basis transformation matrix from B to C. It can be regarded as an automorphism of F^n. Any vector v represented in B can be transformed to a representation in C as follows:

\lbrack v \rbrack_C = \lbrack M \rbrack_C^B \lbrack v \rbrack_B.

Under the transformation of basis, notice that the superscript on the transformation matrix, M, and the subscript on the coordinate vector, v, are the same, and seemingly cancel, leaving the remaining subscript. While this may serve as a memory aid, it is important to note that no such cancellation, or similar mathematical operation, is taking place.
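
A minimal NumPy sketch of this change of coordinates, using two bases of R^2 chosen here purely for illustration:

```python
import numpy as np

# Illustrative bases of R^2, stored column-wise.
B = np.array([[1., 1.],
              [0., 1.]])           # columns b1, b2
C = np.array([[1., 1.],
              [1., -1.]])          # columns c1, c2

# Column j of M is [b_j]_C, obtained by solving C @ x = b_j for each b_j.
M = np.linalg.solve(C, B)          # basis transformation matrix from B to C

v_B = np.array([3., 2.])           # a vector given by its B coordinates
v_C = M @ v_B                      # the same vector in C coordinates

# Both coordinate vectors describe the same element of R^2.
assert np.allclose(B @ v_B, C @ v_C)
```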

Corollary

The matrix M is invertible, and M^{-1} is the basis transformation matrix from C to B. In other words,

\begin{align}
\operatorname{Id} &= \lbrack M \rbrack_C^B \lbrack M \rbrack_B^C = \lbrack M \rbrack_C^C \\[3pt]
&= \lbrack M \rbrack_B^C \lbrack M \rbrack_C^B = \lbrack M \rbrack_B^B
\end{align}
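
Continuing the sketch above, the two transformation matrices can be checked to be inverse to each other (again with illustrative bases of R^2):

```python
import numpy as np

# Same illustrative bases of R^2 as before, stored column-wise.
B = np.array([[1., 1.], [0., 1.]])
C = np.array([[1., 1.], [1., -1.]])

M_BtoC = np.linalg.solve(C, B)     # basis transformation matrix from B to C
M_CtoB = np.linalg.solve(B, C)     # basis transformation matrix from C to B

# Their products are the identity, so each matrix is the inverse of the other.
assert np.allclose(M_BtoC @ M_CtoB, np.eye(2))
assert np.allclose(M_CtoB @ M_BtoC, np.eye(2))
```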

Infinite-dimensional vector spaces

Suppose V is an infinite-dimensional vector space over a field F. If the dimension is κ, then there is some basis of κ elements for V. After an order is chosen, the basis can be considered an ordered basis. The elements of V are finite linear combinations of elements in the basis, which give rise to unique coordinate representations exactly as described before. The only change is that the indexing set for the coordinates is not finite. Since a given vector v is a finite linear combination of basis elements, the only nonzero entries of the coordinate vector for v will be the nonzero coefficients of the linear combination representing v. Thus the coordinate vector for v is zero except in finitely many entries.
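
One way to picture such a finitely supported coordinate vector concretely is as a map from the (possibly infinite) index set of the basis to the field that stores only the nonzero coefficients. The sketch below uses a plain Python dictionary and is purely illustrative.

```python
# Coordinate vector of a polynomial in the infinite-dimensional space F[x],
# taken with respect to the ordered basis (1, x, x^2, x^3, ...).
# Only the finitely many nonzero coefficients are stored.
# p(x) = 7 + 3*x^2 + 0.5*x^10
coords = {0: 7.0, 2: 3.0, 10: 0.5}

def coefficient(coords, k):
    """Return the k-th coordinate; all but finitely many are zero."""
    return coords.get(k, 0.0)

print(coefficient(coords, 2))    # 3.0
print(coefficient(coords, 5))    # 0.0 (one of the infinitely many zero entries)
```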

The linear transformations between (possibly) infinite-dimensional vector spaces can be modeled, analogously to the finite-dimensional case, with infinite matrices. The special case of the transformations from V into V is described in the full linear ring article.

Notes and References

  1. Howard Anton and Chris Rorres, Elementary Linear Algebra: Applications Version, John Wiley & Sons, 12 April 2010, ISBN 978-0-470-43205-1.