In mathematics, a tensor is an algebraic object that describes a multilinear relationship between sets of algebraic objects related to a vector space. Tensors may map between different objects such as vectors, scalars, and even other tensors. There are many types of tensors, including scalars and vectors (which are the simplest tensors), dual vectors, multilinear maps between vector spaces, and even some operations such as the dot product. Tensors are defined independent of any basis, although they are often referred to by their components in a basis related to a particular coordinate system; those components form an array, which can be thought of as a high-dimensional matrix.
Tensors have become important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as mechanics (stress, elasticity, quantum mechanics, fluid mechanics, moment of inertia, ...), electrodynamics (electromagnetic tensor, Maxwell tensor, permittivity, magnetic susceptibility, ...), and general relativity (stress–energy tensor, curvature tensor, ...). In applications, it is common to study situations in which a different tensor can occur at each point of an object; for example the stress within an object may vary from one location to another. This leads to the concept of a tensor field. In some areas, tensor fields are so ubiquitous that they are often simply called "tensors".
Tullio Levi-Civita and Gregorio Ricci-Curbastro popularised tensors in 1900 – continuing the earlier work of Bernhard Riemann, Elwin Bruno Christoffel, and others – as part of the absolute differential calculus. The concept enabled an alternative formulation of the intrinsic differential geometry of a manifold in the form of the Riemann curvature tensor.[1]
Although seemingly different, the various approaches to defining tensors describe the same geometric concept using different language and at different levels of abstraction.
A tensor may be represented as a (potentially multidimensional) array. Just as a vector in an $n$-dimensional space is represented by a one-dimensional array with $n$ components with respect to a given basis, any tensor with respect to a basis is represented by a multidimensional array. For example, a linear operator is represented in a basis as a two-dimensional square $n \times n$ array. The numbers in the multidimensional array are known as the components of the tensor. They are denoted by indices giving their position in the array, as subscripts and superscripts, following the symbolic name of the tensor. For example, the components of an order-2 tensor $T$ could be denoted $T_{ij}$, where $i$ and $j$ are indices running from $1$ to $n$, or also by $T^i{}_j$. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below. Thus while $T_{ij}$ and $T^i{}_j$ can both be expressed as $n \times n$ matrices, and are numerically related via index juggling, the difference in their transformation laws indicates it would be improper to add them together.
The total number of indices ($m$) required to identify each component uniquely is equal to the dimension of the array, which is why a tensor is sometimes referred to as an $m$-dimensional array or an $m$-way array. The total number of indices is also called the order, degree or rank of a tensor,[2] [3] [4] although the term "rank" generally has another meaning in the context of matrices and tensors.
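To make the array picture concrete, here is a minimal NumPy sketch (the values are invented for illustration) storing the components of tensors of order 0 through 3 as arrays of the corresponding dimension:

```python
import numpy as np

# Components of tensors of increasing order, in a fixed basis of R^3.
scalar = np.float64(2.0)              # order 0: a single number, no indices
vector = np.array([1.0, 0.0, 2.0])    # order 1: one index, shape (3,)
matrix = np.eye(3)                    # order 2: two indices, shape (3, 3)
order3 = np.zeros((3, 3, 3))          # order 3: three indices, shape (3, 3, 3)

# The order equals the number of indices needed to address one component.
for t in (scalar, vector, matrix, order3):
    print(np.ndim(t), np.shape(t))
```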
Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each type of tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis. The components of a vector can respond in two distinct ways to a change of basis, where the new basis vectors $\hat{\mathbf{e}}_i$ are expressed in terms of the old basis vectors $\mathbf{e}_j$ as
$$\hat{\mathbf{e}}_i = \sum_{j=1}^{n} \mathbf{e}_j R^j_i = \mathbf{e}_j R^j_i.$$
Here $R^j_i$ are the entries of the change of basis matrix, and in the rightmost expression the summation sign was suppressed: this is the Einstein summation convention, which will be used throughout this article.[5] The components $v^i$ of a column vector $\mathbf{v}$ transform with the inverse of the matrix $R$,
$$\hat{v}^i = \left(R^{-1}\right)^i_j v^j,$$
where the hat denotes the components in the new basis. This is called a contravariant transformation law, because the vector components transform by the inverse of the change of basis. In contrast, the components, $w_i$, of a covector (or row vector), $\mathbf{w}$, transform with the matrix $R$ itself,
$$\hat{w}_i = w_j R^j_i.$$
This is called a covariant transformation law, because the covector components transform by the same matrix as the change of basis matrix. The components of a more general tensor are transformed by some combination of covariant and contravariant transformations, with one transformation law for each index. If the transformation matrix of an index is the inverse matrix of the basis transformation, then the index is called contravariant and is conventionally denoted with an upper index (superscript). If the transformation matrix of an index is the basis transformation itself, then the index is called covariant and is denoted with a lower index (subscript).
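As a quick numerical illustration of the two laws (a sketch; the particular matrix $R$ and the component values are invented), the invariance of the pairing $w_i v^i$ confirms that the two transformations compensate each other:

```python
import numpy as np

R = np.array([[2.0, 1.0],
              [1.0, 1.0]])            # an invertible change-of-basis matrix
R_inv = np.linalg.inv(R)

v = np.array([3.0, -1.0])             # contravariant components of a vector
w = np.array([0.5, 2.0])              # covariant components of a covector

v_hat = R_inv @ v                     # contravariant law: v_hat = R^{-1} v
w_hat = w @ R                         # covariant law:     w_hat = w R

# The pairing w_i v^i is invariant under the change of basis.
print(np.isclose(w @ v, w_hat @ v_hat))   # True
```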
As a simple example, the matrix of a linear operator with respect to a basis is a square array $T$ that transforms under a change of basis matrix $R = \left(R^j_i\right)$ by $\hat{T} = R^{-1} T R$. For the individual matrix entries, this transformation law has the form
$$\hat{T}^{i'}_{j'} = \left(R^{-1}\right)^{i'}_{i} \, T^{i}_{j} \, R^{j}_{j'},$$
so that the tensor corresponding to the matrix of a linear operator has one covariant and one contravariant index: it is of type $(1, 1)$.
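The matrix form and the index form of this law can be checked against each other numerically (a sketch with invented values; `np.einsum` spells out the index contractions):

```python
import numpy as np

R = np.array([[2.0, 1.0],
              [1.0, 1.0]])
R_inv = np.linalg.inv(R)
T = np.array([[1.0, 2.0],
              [0.0, 3.0]])            # matrix of a linear operator

T_hat = R_inv @ T @ R                 # matrix form: T_hat = R^{-1} T R

# Index form: T_hat^{i'}_{j'} = (R^{-1})^{i'}_i T^i_j R^j_{j'}
T_hat_einsum = np.einsum('ai,ij,jb->ab', R_inv, T, R)
print(np.allclose(T_hat, T_hat_einsum))   # True
```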
Combinations of covariant and contravariant components with the same index allow us to express geometric invariants. For example, the fact that a vector is the same object in different coordinate systems can be captured by the following equations, using the formulas defined above:
$$\mathbf{v} = \hat{v}^i \, \hat{\mathbf{e}}_i = \left(\left(R^{-1}\right)^i_j v^j\right) \left(\mathbf{e}_k R^k_i\right) = \left(\left(R^{-1}\right)^i_j R^k_i\right) v^j \mathbf{e}_k = \delta^k_j v^j \mathbf{e}_k = v^k \, \mathbf{e}_k = v^i \, \mathbf{e}_i,$$
where $\delta^k_j$ is the Kronecker delta, which functions similarly to the identity matrix, and has the effect of renaming indices ($j$ into $k$ in this example).
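A short numerical check of this invariance (a sketch; the basis vectors are stored as the columns of a matrix, and the values are illustrative):

```python
import numpy as np

E = np.eye(2)                         # old basis vectors e_j as columns
R = np.array([[2.0, 1.0],
              [1.0, 1.0]])
E_hat = E @ R                         # new basis: e_hat_i = e_j R^j_i
v = np.array([3.0, -1.0])             # components in the old basis
v_hat = np.linalg.inv(R) @ v          # components in the new basis

# v_hat^i e_hat_i and v^i e_i describe the same geometric vector.
print(np.allclose(E_hat @ v_hat, E @ v))   # True
```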
Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector. The transformation law for how the matrix of components of a linear operator changes with the basis is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations. That is, the components
$(Tv)^i$ of $T\mathbf{v}$ are given by
$$(Tv)^i = T^i_j v^j.$$
These components transform contravariantly, since
$$\left(\widehat{Tv}\right)^{i'} = \hat{T}^{i'}_{j'} \hat{v}^{j'} = \left[\left(R^{-1}\right)^{i'}_{i} T^i_j R^j_{j'}\right] \left[\left(R^{-1}\right)^{j'}_{k} v^k\right] = \left(R^{-1}\right)^{i'}_{i} (Tv)^i.$$
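This contravariance can also be verified numerically (a sketch in the same style as the ones above, with invented values):

```python
import numpy as np

R = np.array([[2.0, 1.0],
              [1.0, 1.0]])
R_inv = np.linalg.inv(R)
T = np.array([[1.0, 2.0],
              [0.0, 3.0]])
v = np.array([3.0, -1.0])

Tv = T @ v                            # components (Tv)^i in the old basis
T_hat = R_inv @ T @ R                 # transformed operator
v_hat = R_inv @ v                     # transformed vector

# T_hat v_hat equals R^{-1} (Tv): the product transforms contravariantly.
print(np.allclose(T_hat @ v_hat, R_inv @ Tv))   # True
```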
The transformation law for an order $p + q$ tensor with $p$ contravariant indices and $q$ covariant indices is thus given as,
$$\hat{T}^{i'_1, \ldots, i'_p}_{j'_1, \ldots, j'_q} = \left(R^{-1}\right)^{i'_1}_{i_1} \cdots \left(R^{-1}\right)^{i'_p}_{i_p} \, T^{i_1, \ldots, i_p}_{j_1, \ldots, j_q} \, R^{j_1}_{j'_1} \cdots R^{j_q}_{j'_q}.$$
Here the primed indices denote components in the new coordinates, and the unprimed indices denote the components in the old coordinates. Such a tensor is said to be of order or type $(p, q)$. The terms "order", "type", "rank", "valence", and "degree" are all sometimes used for the same concept. Here, the term "order" or "total order" will be used for the total dimension of the array (or its generalization in other definitions), $p + q$ in the preceding example, and the term "type" for the pair giving the number of contravariant and covariant indices. A tensor of type $(p, q)$ is also called a $(p, q)$-tensor for short.
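For a concrete instance of the general law (a sketch for a type $(1, 2)$ tensor with random components; `np.einsum` contracts each index with the appropriate matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
R = rng.normal(size=(n, n)) + n * np.eye(n)   # generically invertible
R_inv = np.linalg.inv(R)
T = rng.normal(size=(n, n, n))                # T^i_{jk}: type (1, 2)

# One factor of R^{-1} for the contravariant index, one R per covariant index:
# T_hat^{i'}_{j'k'} = (R^{-1})^{i'}_i T^i_{jk} R^j_{j'} R^k_{k'}
T_hat = np.einsum('ai,ijk,jb,kc->abc', R_inv, T, R, R)
print(T_hat.shape)   # (3, 3, 3)
```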
This discussion motivates the following formal definition:[6] a tensor of type $(p, q)$ is an assignment of a multidimensional array $T^{i_1 \ldots i_p}_{j_1 \ldots j_q}[\mathbf{f}]$ to each basis $\mathbf{f} = (\mathbf{e}_1, \ldots, \mathbf{e}_n)$ of an $n$-dimensional vector space such that, if we apply the change of basis $\mathbf{f} \mapsto \mathbf{f} \cdot R$, then the multidimensional array obeys the transformation law above.
The definition of a tensor as a multidimensional array satisfying a transformation law traces back to the work of Ricci.
An equivalent definition of a tensor uses the representations of the general linear group. There is an action of the general linear group on the set of all ordered bases of an $n$-dimensional vector space. If $\mathbf{f} = (\mathbf{f}_1, \ldots, \mathbf{f}_n)$ is an ordered basis, and $R = \left(R^i_j\right)$ is an invertible $n \times n$ matrix, then the action is given by
$$\mathbf{f}R = \left(\mathbf{f}_i R^i_1, \ldots, \mathbf{f}_i R^i_n\right).$$
Let $F$ be the set of all ordered bases. Then $F$ is a principal homogeneous space for $\mathrm{GL}(n)$. Let $W$ be a vector space and let $\rho$ be a representation of $\mathrm{GL}(n)$ on $W$ (that is, a group homomorphism $\rho : \mathrm{GL}(n) \to \mathrm{GL}(W)$). Then a tensor of type $\rho$ is an equivariant map $T : F \to W$. Equivariance here means that
$$T(FR) = \rho\left(R^{-1}\right) T(F).$$
When $\rho$ is a tensor representation of the general linear group, this gives the usual definition of tensors as multidimensional arrays.
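As a concrete instance (a sketch: $\rho$ is taken to be the type-$(1,1)$ tensor representation acting by conjugation, an ordered basis is encoded as the matrix whose columns are the basis vectors, and the operator $A$ is invented):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])            # a fixed linear operator

def T(F):
    # The map T: F -> W for the type-(1,1) case: components of A in the
    # ordered basis F (columns = basis vectors); W = the n x n matrices.
    return np.linalg.inv(F) @ A @ F

def rho(S):
    # Type-(1,1) tensor representation of GL(n) on W: action by conjugation.
    return lambda W: S @ W @ np.linalg.inv(S)

F = np.eye(2)
R = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# Equivariance: T(F R) = rho(R^{-1}) T(F)
print(np.allclose(T(F @ R), rho(np.linalg.inv(R))(T(F))))   # True
```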
See main article: Multilinear map. A downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach that is common in differential geometry is to define tensors relative to a fixed (finite-dimensional) vector space $V$, which is usually taken to be a particular vector space of some geometrical significance like the tangent space to a manifold. In this approach, a type $(p, q)$ tensor $T$ is defined as a multilinear map,
$$T : \underbrace{V^* \times \cdots \times V^*}_{p \text{ copies}} \times \underbrace{V \times \cdots \times V}_{q \text{ copies}} \to \mathbb{R},$$
where $V^*$ is the corresponding dual space of covectors, which is linear in each of its arguments. The above assumes $V$ is a vector space over the real numbers, $\mathbb{R}$. More generally, $V$ can be taken over any field $F$ (e.g. the complex numbers), with $F$ replacing $\mathbb{R}$ as the codomain of the multilinear maps.
By applying a multilinear map $T$ of type $(p, q)$ to a basis $\{\mathbf{e}_j\}$ for $V$ and a canonical cobasis $\{\boldsymbol{\varepsilon}^i\}$ for $V^*$,
$$T^{i_1 \ldots i_p}_{j_1 \ldots j_q} \equiv T\left(\boldsymbol{\varepsilon}^{i_1}, \ldots, \boldsymbol{\varepsilon}^{i_p}, \mathbf{e}_{j_1}, \ldots, \mathbf{e}_{j_q}\right),$$
a $(p + q)$-dimensional array of components can be obtained. A different choice of basis will yield different components. But, because $T$ is linear in all of its arguments, the components satisfy the tensor transformation law used in the multilinear array definition. The multidimensional array of components of $T$ thus forms a tensor according to that definition. Moreover, such an array can be realized as the components of some multilinear map $T$. This motivates viewing multilinear maps as the intrinsic objects underlying tensors.
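As a sketch (with an invented bilinear map on $\mathbb{R}^2$), the components of a type $(0, 2)$ tensor are obtained by evaluating the map on pairs of basis vectors:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

def T(u, v):
    # A bilinear map T: V x V -> R, i.e. a type (0, 2) tensor on V = R^2.
    return u @ A @ v

basis = np.eye(2)                     # standard basis e_1, e_2 as rows
# Components T_{jk} = T(e_j, e_k) recover the array A.
components = np.array([[T(basis[j], basis[k]) for k in range(2)]
                       for j in range(2)])
print(np.allclose(components, A))     # True
```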
In viewing a tensor as a multilinear map, it is conventional to identify the double dual V∗∗ of the vector space V, i.e., the space of linear functionals on the dual vector space V∗, with the vector space V. There is always a natural linear map from V to its double dual, given by evaluating a linear form in V∗ against a vector in V. This linear mapping is an isomorphism in finite dimensions, and it is often then expedient to identify V with its double dual.
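In symbols, the natural map in question is the evaluation map:
$$\iota : V \to V^{**}, \qquad \iota(\mathbf{v})(\varphi) = \varphi(\mathbf{v}) \quad \text{for all } \varphi \in V^*, \ \mathbf{v} \in V.$$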
See main article: Tensor (intrinsic definition). For some mathematical applications, a more abstract approach is sometimes useful. This can be achieved by defining tensors in terms of elements of tensor products of vector spaces, which in turn are defined through a universal property.
A type $(p, q)$ tensor is defined in this context as an element of the tensor product of vector spaces,[7]
$$T \in \underbrace{V \otimes \cdots \otimes V}_{p \text{ copies}} \otimes \underbrace{V^* \otimes \cdots \otimes V^*}_{q \text{ copies}}.$$
A basis $\{\mathbf{e}_i\}$ of $V$ and a basis $\{\boldsymbol{\varepsilon}^j\}$ of $V^*$ naturally induce a basis of the tensor product. The components of a tensor $T$ are the coefficients of the tensor with respect to this induced basis:
$$T = T^{i_1 \ldots i_p}_{j_1 \ldots j_q} \; \mathbf{e}_{i_1} \otimes \cdots \otimes \mathbf{e}_{i_p} \otimes \boldsymbol{\varepsilon}^{j_1} \otimes \cdots \otimes \boldsymbol{\varepsilon}^{j_q},$$
where, by the Einstein summation convention, each repeated index is summed over, as in $B_i C^i = B_1 C^1 + B_2 C^2 + \cdots + B_n C^n$.
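As a sketch in NumPy (standard basis, invented coefficients), a type $(1, 1)$ tensor is recovered as a sum of outer products of basis and dual-basis vectors weighted by its components:

```python
import numpy as np

n = 2
e = np.eye(n)                          # basis vectors e_i (as rows)
eps = np.eye(n)                        # dual basis eps^j; the standard basis is self-dual here
coeffs = np.array([[1.0, 2.0],
                   [0.0, 3.0]])        # components T^i_j

# T = T^i_j  e_i (x) eps^j : sum of coefficient-weighted outer products
T = sum(coeffs[i, j] * np.outer(e[i], eps[j])
        for i in range(n) for j in range(n))
print(np.allclose(T, coeffs))          # True: the components are the coefficients
```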