In mathematics, a symmetric tensor is a tensor that is invariant under a permutation of its vector arguments:

T(v_1, v_2, \ldots, v_r) = T(v_{\sigma(1)}, v_{\sigma(2)}, \ldots, v_{\sigma(r)})

for every permutation σ of the symbols {1, 2, ..., r}. Alternatively, a symmetric tensor of order r represented in coordinates as a quantity with r indices satisfies

T_{i_1 i_2 \cdots i_r} = T_{i_{\sigma(1)} i_{\sigma(2)} \cdots i_{\sigma(r)}}.
The space of symmetric tensors of order r on a finite-dimensional vector space V is naturally isomorphic to the dual of the space of homogeneous polynomials of degree r on V. Over fields of characteristic zero, the graded vector space of all symmetric tensors can be naturally identified with the symmetric algebra on V. A related concept is that of the antisymmetric tensor or alternating form. Symmetric tensors occur widely in engineering, physics and mathematics.
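The correspondence with homogeneous polynomials can be made concrete in the order-2 case: in coordinates, a symmetric tensor T with components T_{ij} determines the homogeneous quadratic polynomial p(x) = Σ T_{ij} x_i x_j. A minimal numeric sketch of this identification (the particular matrix and the helper name `p` are illustrative):

```python
import numpy as np

# A symmetric order-2 tensor on R^2, given by its components T_ij:
T = np.array([[1.0, 2.0],
              [2.0, 3.0]])

def p(x):
    """Homogeneous degree-2 polynomial p(x) = sum_ij T_ij x_i x_j."""
    return x @ T @ x

x = np.array([1.0, 1.0])
# Homogeneity of degree 2: p(t*x) = t^2 * p(x)
assert np.isclose(p(2.0 * x), 4.0 * p(x))
```

Because T is symmetric, p recovers T completely; an antisymmetric part of T would contribute nothing to p.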
Let V be a vector space and T ∈ V^{⊗ k} a tensor of order k. Then T is a symmetric tensor if

\tau_\sigma T = T

for the braiding maps \tau_\sigma associated to every permutation σ of the symbols {1, 2, ..., k}.
Given a basis of V, any symmetric tensor T of rank k can be written as

T = \sum_{i_1, \ldots, i_k = 1}^{N} T_{i_1 i_2 \cdots i_k} \, e^{i_1} \otimes e^{i_2} \otimes \cdots \otimes e^{i_k}

for some unique list of coefficients T_{i_1 i_2 \cdots i_k} (the components of the tensor in the basis) that are symmetric in the indices; that is,

T_{i_{\sigma(1)} i_{\sigma(2)} \cdots i_{\sigma(k)}} = T_{i_1 i_2 \cdots i_k}

for every permutation σ.
The space of all symmetric tensors of order k defined on V is often denoted by S^k(V) or Sym^k(V). It is itself a vector space, and if V has dimension N then the dimension of Sym^k(V) is the binomial coefficient

\dim \operatorname{Sym}^k(V) = \binom{N + k - 1}{k}.
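As a quick check of this count, the dimension equals the number of multisets of size k drawn from N basis indices; a minimal sketch (the function name is illustrative):

```python
from math import comb

def sym_dim(N: int, k: int) -> int:
    """Dimension of Sym^k(V) for dim V = N: the number of
    size-k multisets of N basis indices."""
    return comb(N + k - 1, k)

# For N = 3, k = 2 this counts the independent entries of a
# symmetric 3x3 matrix: 6 (three diagonal + three off-diagonal).
print(sym_dim(3, 2))
```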
We then construct Sym(V) as the direct sum of Symk(V) for k = 0,1,2,...
\operatorname{Sym}(V) = \bigoplus_{k=0}^{\infty} \operatorname{Sym}^k(V).
There are many examples of symmetric tensors. Some include the metric tensor g_{\mu\nu}, the Einstein tensor G_{\mu\nu}, and the Ricci tensor R_{\mu\nu}.
Many material properties and fields used in physics and engineering can be represented as symmetric tensor fields; for example: stress, strain, and anisotropic conductivity. Also, in diffusion MRI one often uses symmetric tensors to describe diffusion in the brain or other parts of the body.
Ellipsoids are examples of algebraic varieties; more generally, symmetric tensors, in the guise of homogeneous polynomials, are used to define projective varieties, and are often studied as such.
Given a Riemannian manifold (M, g) equipped with its Levi-Civita connection ∇, the covariant Riemann curvature tensor R_{ijk\ell} \in (T^*M)^{\otimes 4} is symmetric under exchange of its first and second pairs of indices,

R_{ijk\ell} = R_{k\ell ij},

while it is antisymmetric within each pair:

R_{jik\ell} = -R_{ijk\ell} = R_{ij\ell k}.
Suppose V is a vector space over a field of characteristic 0. If T ∈ V^{⊗ k} is a tensor of order k, then the symmetric part of T is the symmetric tensor defined by

\operatorname{Sym} T = \frac{1}{k!} \sum_{\sigma \in \mathfrak{S}_k} \tau_\sigma T,

the summation extending over the symmetric group on k symbols.
In terms of a basis, if

T = T_{i_1 i_2 \cdots i_k} \, e^{i_1} \otimes e^{i_2} \otimes \cdots \otimes e^{i_k},

then

\operatorname{Sym} T = \frac{1}{k!} \sum_{\sigma \in \mathfrak{S}_k} T_{i_{\sigma(1)} i_{\sigma(2)} \cdots i_{\sigma(k)}} \, e^{i_1} \otimes e^{i_2} \otimes \cdots \otimes e^{i_k}.
The components of the tensor appearing on the right are often denoted by
T_{(i_1 i_2 \cdots i_k)} = \frac{1}{k!} \sum_{\sigma \in \mathfrak{S}_k} T_{i_{\sigma(1)} i_{\sigma(2)} \cdots i_{\sigma(k)}},
with parentheses around the indices being symmetrized. Square brackets [] are used to indicate anti-symmetrization.
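The symmetrization operator translates directly into code: averaging a tensor's components over all permutations of its axes. A minimal sketch using NumPy (the helper name and test tensor are illustrative):

```python
import itertools

import numpy as np

def symmetrize(T: np.ndarray) -> np.ndarray:
    """Sym T: average T over all k! permutations of its k axes."""
    k = T.ndim
    perms = list(itertools.permutations(range(k)))
    return sum(np.transpose(T, axes=p) for p in perms) / len(perms)

# An arbitrary order-3 tensor on a 2-dimensional space:
T = np.arange(8.0).reshape(2, 2, 2)
S = symmetrize(T)

# The symmetrized components are invariant under axis permutations:
assert np.allclose(S, np.transpose(S, (1, 0, 2)))
assert np.allclose(S, np.transpose(S, (2, 1, 0)))
```

Note that symmetrization is a projection: applying it twice gives the same result as applying it once.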
If T is a simple tensor, given as a pure tensor product

T = v_1 \otimes v_2 \otimes \cdots \otimes v_r,

then the symmetric part of T is the symmetric product of the factors:

v_1 \odot v_2 \odot \cdots \odot v_r := \frac{1}{r!} \sum_{\sigma \in \mathfrak{S}_r} v_{\sigma(1)} \otimes v_{\sigma(2)} \otimes \cdots \otimes v_{\sigma(r)}.
In general we can turn Sym(V) into an algebra by defining the commutative and associative product ⊙.[2] Given two tensors T_1 ∈ Sym^{k_1}(V) and T_2 ∈ Sym^{k_2}(V), we use the symmetrization operator to define:

T_1 \odot T_2 = \operatorname{Sym}(T_1 \otimes T_2) \quad \left( \in \operatorname{Sym}^{k_1 + k_2}(V) \right).
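The product ⊙ can be sketched numerically as symmetrization of an outer product; here is a minimal illustration for two vectors (the helper names are illustrative, and `symmetrize` averages over axis permutations as defined above):

```python
import itertools

import numpy as np

def symmetrize(T: np.ndarray) -> np.ndarray:
    """Average T over all permutations of its axes."""
    perms = list(itertools.permutations(range(T.ndim)))
    return sum(np.transpose(T, axes=p) for p in perms) / len(perms)

def sym_product(T1: np.ndarray, T2: np.ndarray) -> np.ndarray:
    """Symmetric product T1 ⊙ T2 = Sym(T1 ⊗ T2)."""
    return symmetrize(np.tensordot(T1, T2, axes=0))

# Two order-1 tensors (vectors) on R^2:
v = np.array([1.0, 0.0])
w = np.array([0.0, 1.0])
P = sym_product(v, w)  # equals (v ⊗ w + w ⊗ v) / 2
```

By construction the product is commutative: `sym_product(v, w)` and `sym_product(w, v)` give the same tensor.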
In some cases an exponential notation is used:

v^{\odot k} = \underbrace{v \odot v \odot \cdots \odot v}_{k\text{ times}} = \underbrace{v \otimes v \otimes \cdots \otimes v}_{k\text{ times}} = v^{\otimes k}.

The symbol ⊙ is sometimes omitted:

v^k = \underbrace{v v \cdots v}_{k\text{ times}} = \underbrace{v \odot v \odot \cdots \odot v}_{k\text{ times}}.
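The equality v^{⊙ k} = v^{⊗ k} holds because a tensor power of a single vector is already symmetric, so symmetrizing it changes nothing. A quick numeric check (helper names are illustrative):

```python
import itertools

import numpy as np

def symmetrize(T: np.ndarray) -> np.ndarray:
    """Average T over all permutations of its axes."""
    perms = list(itertools.permutations(range(T.ndim)))
    return sum(np.transpose(T, axes=p) for p in perms) / len(perms)

v = np.array([1.0, 2.0])
# v ⊗ v ⊗ v, built by a triple outer product:
v3 = np.einsum('i,j,k->ijk', v, v, v)
# v^{⊙3} coincides with v^{⊗3}:
assert np.allclose(symmetrize(v3), v3)
```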
In analogy with the theory of symmetric matrices, a (real) symmetric tensor of order 2 can be "diagonalized". More precisely, for any tensor T ∈ Sym2(V), there is an integer r, non-zero unit vectors v1,...,vr ∈ V and weights λ1,...,λr such that
T = \sum_{i=1}^{r} \lambda_i \, v_i \otimes v_i.
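For order 2 this decomposition is exactly the spectral theorem for symmetric matrices; a minimal numeric sketch using NumPy's symmetric eigensolver (the matrix is illustrative):

```python
import numpy as np

# A real symmetric order-2 tensor (a symmetric matrix):
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh returns real weights and orthonormal unit eigenvectors:
lam, v = np.linalg.eigh(T)

# Reconstruct T = sum_i lambda_i * (v_i ⊗ v_i):
T_rec = sum(lam[i] * np.outer(v[:, i], v[:, i]) for i in range(len(lam)))
assert np.allclose(T, T_rec)
```

Here the weights λ_i are the eigenvalues and the unit vectors v_i are the corresponding eigenvectors.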
For symmetric tensors of arbitrary order k, decompositions

T = \sum_{i=1}^{r} \lambda_i \, v_i^{\otimes k}

are also possible. The minimum number r for which such a decomposition is possible is the (symmetric) rank of T.