Symmetry occurs not only in geometry, but also in other branches of mathematics. Symmetry is a type of invariance: the property that a mathematical object remains unchanged under a set of operations or transformations.[1]
Given a structured object X of any sort, a symmetry is a mapping of the object onto itself which preserves the structure. This can occur in many ways; for example, if X is a set with no additional structure, a symmetry is a bijective map from the set to itself, giving rise to permutation groups. If the object X is a set of points in the plane with its metric structure or any other metric space, a symmetry is a bijection of the set to itself which preserves the distance between each pair of points (i.e., an isometry).
In general, every kind of structure in mathematics will have its own kind of symmetry; many of the most important kinds are described in the sections below.
See main article: Symmetry (geometry). The types of symmetry considered in basic geometry include reflectional symmetry, rotational symmetry, translational symmetry and glide reflection symmetry, which are described more fully in the main article.
See main article: Even and odd functions.
Let f(x) be a real-valued function of a real variable. Then f is even if the following equation holds for all x such that x and -x are in the domain of f:
f(x)=f(-x)
Geometrically speaking, the graph of an even function is symmetric with respect to the y-axis, meaning that its graph remains unchanged after reflection about the y-axis. Examples of even functions include x^2, x^4, cos(x), and cosh(x).
Again, let f be a real-valued function of a real variable. Then f is odd if the following equation holds for all x such that x and -x are in the domain of f:
-f(x)=f(-x)
That is,
f(x)+f(-x)=0.
Geometrically, the graph of an odd function has rotational symmetry with respect to the origin, meaning that its graph remains unchanged after rotation of 180 degrees about the origin. Examples of odd functions are x, x^3, sin(x), sinh(x), and erf(x).
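To make the two definitions concrete, here is a minimal Python sketch (the helper names is_even and is_odd are illustrative, not standard library functions) that tests the defining equations numerically at sample points:

import math

def is_even(f, samples):
    """Check f(x) == f(-x) at each sample point (numerically)."""
    return all(math.isclose(f(x), f(-x), abs_tol=1e-12) for x in samples)

def is_odd(f, samples):
    """Check f(x) + f(-x) == 0 at each sample point (numerically)."""
    return all(math.isclose(f(x) + f(-x), 0.0, abs_tol=1e-12) for x in samples)

samples = [0.1 * k for k in range(1, 50)]
print(is_even(math.cos, samples))        # True: cos is even
print(is_odd(math.sin, samples))         # True: sin is odd
print(is_even(lambda x: x**3, samples))  # False: x^3 is odd, not even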
The integral of an odd function from -A to +A is zero, provided that A is finite and that the function is integrable (e.g., has no vertical asymptotes between -A and A).[2]
The integral of an even function from -A to +A is twice the integral from 0 to +A, provided that A is finite and the function is integrable (e.g., has no vertical asymptotes between -A and A).[2] This also holds true when A is infinite, but only if the integral converges.
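Both integral identities can be checked numerically; the following sketch assumes SciPy's quad integrator is available:

from scipy.integrate import quad
import math

A = 2.0

# Odd function: the integral over [-A, A] should vanish.
odd_val, _ = quad(lambda x: x**3, -A, A)
print(abs(odd_val) < 1e-12)          # True

# Even function: the integral over [-A, A] is twice that over [0, A].
full, _ = quad(math.cos, -A, A)
half, _ = quad(math.cos, 0.0, A)
print(math.isclose(full, 2 * half))  # True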
In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose (i.e., it is invariant under matrix transposition). Formally, matrix A is symmetric if
A = A^T.
By the definition of matrix equality, which requires that the entries in all corresponding positions be equal, equal matrices must have the same dimensions (as matrices of different sizes or shapes cannot be equal). Consequently, only square matrices can be symmetric.
The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if the entries are written as A = (a_{ij}), then a_{ij} = a_{ji} for all indices i and j.
For example, the following 3×3 matrix is symmetric:
\begin{bmatrix} 1&7&3\\ 7&4&-5\\ 3&-5&6\end{bmatrix}
Every square diagonal matrix is symmetric, since all off-diagonal entries are zero. Similarly, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative.
In linear algebra, a real symmetric matrix represents a self-adjoint operator over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric matrix refers to one which has real-valued entries. Symmetric matrices appear naturally in a variety of applications, and typical numerical linear algebra software makes special accommodations for them.
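As a small illustration, the following NumPy sketch verifies the example matrix above against the definition and uses an eigensolver that exploits symmetry (numpy.linalg.eigh assumes a symmetric or Hermitian input and returns real eigenvalues):

import numpy as np

# The symmetric 3x3 matrix from the example above.
A = np.array([[1, 7, 3],
              [7, 4, -5],
              [3, -5, 6]], dtype=float)

print(np.array_equal(A, A.T))  # True: A equals its transpose

# Solvers specialized to symmetric matrices are faster and numerically
# safer than general-purpose ones.
eigenvalues, eigenvectors = np.linalg.eigh(A)
print(eigenvalues)             # three real eigenvalues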
See main article: Symmetric group. The symmetric group Sn (on a finite set of n symbols) is the group whose elements are all the permutations of the n symbols, and whose group operation is the composition of such permutations, which are treated as bijective functions from the set of symbols to itself.[3] Since there are n! (n factorial) possible permutations of a set of n symbols, it follows that the order (i.e., the number of elements) of the symmetric group Sn is n!.
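A brief Python sketch confirms the order of Sn and closure under composition, representing a permutation as a tuple sigma with sigma[i] the image of i (an illustrative convention, not a fixed standard):

from itertools import permutations
from math import factorial

n = 4
elements = list(permutations(range(n)))
print(len(elements) == factorial(n))  # True: |S_n| = n!

def compose(sigma, tau):
    """Composition sigma after tau: apply tau first, then sigma."""
    return tuple(sigma[tau[i]] for i in range(len(tau)))

# Composing any two permutations yields another permutation (closure).
sigma, tau = elements[1], elements[5]
print(compose(sigma, tau) in set(elements))  # True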
See main article: Symmetric polynomial. A symmetric polynomial is a polynomial P(X1, X2, ..., Xn) in n variables, such that if any of the variables are interchanged, one obtains the same polynomial. Formally, P is a symmetric polynomial if for any permutation σ of the subscripts 1, 2, ..., n, one has P(Xσ(1), Xσ(2), ..., Xσ(n)) = P(X1, X2, ..., Xn).
Symmetric polynomials arise naturally in the study of the relation between the roots of a polynomial in one variable and its coefficients, since the coefficients can be given by polynomial expressions in the roots, and all roots play a similar role in this setting. From this point of view, the elementary symmetric polynomials are the most fundamental symmetric polynomials. A theorem states that any symmetric polynomial can be expressed in terms of elementary symmetric polynomials, which implies that every symmetric polynomial expression in the roots of a monic polynomial can alternatively be given as a polynomial expression in the coefficients of the polynomial.
In two variables X1 and X2, one has symmetric polynomials such as

X_1^3 + X_2^3 - 7

4X_1^2 X_2^2 + X_1^3 X_2 + X_1 X_2^3 + (X_1 + X_2)^4

and in three variables X1, X2 and X3, one has, for example,

X_1 X_2 X_3 - 2X_1 X_2 - 2X_1 X_3 - 2X_2 X_3
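One way to check symmetry mechanically is to substitute every permutation of the variables and compare the result with the original; a sketch using SymPy (the helper is_symmetric is illustrative):

from itertools import permutations
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
P = 4*x1**2*x2**2 + x1**3*x2 + x1*x2**3 + (x1 + x2)**4

def is_symmetric(poly, variables):
    """Check invariance under every permutation of the variables."""
    return all(
        sp.simplify(poly - poly.subs(list(zip(variables, perm)),
                                     simultaneous=True)) == 0
        for perm in permutations(variables)
    )

print(is_symmetric(P, (x1, x2)))           # True
print(is_symmetric(x1**2 + x2, (x1, x2)))  # False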
See main article: Symmetric tensor. In mathematics, a symmetric tensor is a tensor that is invariant under a permutation of its vector arguments:
T(v_1, v_2, \ldots, v_r) = T(v_{\sigma(1)}, v_{\sigma(2)}, \ldots, v_{\sigma(r)})

for every permutation \sigma of the symbols 1, 2, ..., r. Alternatively, an rth-order symmetric tensor represented in coordinates as a quantity with r indices satisfies

T_{i_1 i_2 \ldots i_r} = T_{i_{\sigma(1)} i_{\sigma(2)} \ldots i_{\sigma(r)}}.
The space of symmetric tensors of rank r on a finite-dimensional vector space V is naturally isomorphic to the dual of the space of homogeneous polynomials of degree r on V. Over fields of characteristic zero, the graded vector space of all symmetric tensors can be naturally identified with the symmetric algebra on V. A related concept is that of the antisymmetric tensor or alternating form. Symmetric tensors occur widely in engineering, physics and mathematics.
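Over characteristic zero, any tensor can be projected onto its symmetric part by averaging over all permutations of its indices; a NumPy sketch (the helper symmetrize is illustrative):

from itertools import permutations
import numpy as np

def symmetrize(T):
    """Average a tensor over all permutations of its indices,
    producing its symmetric part."""
    perms = list(permutations(range(T.ndim)))
    return sum(np.transpose(T, p) for p in perms) / len(perms)

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3, 3))   # an arbitrary rank-3 tensor
S = symmetrize(T)

# S is invariant under any permutation of its indices.
print(np.allclose(S, np.transpose(S, (1, 0, 2))))  # True
print(np.allclose(S, np.transpose(S, (2, 1, 0))))  # True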
See main article: Galois theory. Given a polynomial, it may be that some of the roots are connected by various algebraic equations. For example, it may be that for two of the roots, say A and B, that A^2 + 5B^3 = 7. The central idea of Galois theory is to consider those permutations (or rearrangements) of the roots having the property that any algebraic equation satisfied by the roots is still satisfied after the roots have been permuted. An important proviso is that we restrict ourselves to algebraic equations whose coefficients are rational numbers. Thus, Galois theory studies the symmetries inherent in algebraic equations.
See main article: Automorphism. In abstract algebra, an automorphism is an isomorphism from a mathematical object to itself. It is, in some sense, a symmetry of the object, and a way of mapping the object to itself while preserving all of its structure. The set of all automorphisms of an object forms a group, called the automorphism group. It is, loosely speaking, the symmetry group of the object.
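As a toy example, the automorphisms of the cyclic group Z_n under addition can be found by brute force: they turn out to be exactly the maps x -> u*x with u coprime to n, so the automorphism group has order phi(n). A Python sketch (the helper name is illustrative):

from itertools import permutations
from math import gcd

def automorphisms_of_Zn(n):
    """Brute-force the bijections f of {0, ..., n-1} satisfying
    f(a + b) = f(a) + f(b) (mod n)."""
    elems = range(n)
    autos = []
    for perm in permutations(elems):
        f = dict(zip(elems, perm))
        if all(f[(a + b) % n] == (f[a] + f[b]) % n
               for a in elems for b in elems):
            autos.append(perm)
    return autos

n = 6
autos = automorphisms_of_Zn(n)
units = [u for u in range(1, n) if gcd(u, n) == 1]
print(len(autos) == len(units))  # True: |Aut(Z_6)| = phi(6) = 2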
In quantum mechanics, bosons have representatives that are symmetric under permutation operators, and fermions have antisymmetric representatives.
This implies the Pauli exclusion principle for fermions. In fact, the Pauli exclusion principle with a single-valued many-particle wavefunction is equivalent to requiring the wavefunction to be antisymmetric. An antisymmetric two-particle state is represented as a sum of states in which one particle is in state |x\rangle and the other is in state |y\rangle:

|\psi\rangle = \sum_{x,y} A(x,y) |x,y\rangle
and antisymmetry under exchange means that A(x,y) = -A(y,x). This implies that A(x,x) = 0, which is Pauli exclusion. It is true in any basis, since unitary changes of basis keep antisymmetric matrices antisymmetric, although strictly speaking, the quantity A(x,y) is not a matrix but an antisymmetric rank-two tensor.
Conversely, if the diagonal quantities A(x,x) are zero in every basis, then the wavefunction component

A(x,y) = \langle\psi|x,y\rangle = \langle\psi|(|x\rangle \otimes |y\rangle)

is necessarily antisymmetric. To prove it, consider the matrix element

\langle\psi|((|x\rangle + |y\rangle) \otimes (|x\rangle + |y\rangle))
This is zero, because the two particles have zero probability to both be in the superposition state |x\rangle + |y\rangle. But this is equal to

\langle\psi|x,x\rangle + \langle\psi|x,y\rangle + \langle\psi|y,x\rangle + \langle\psi|y,y\rangle
The first and last terms on the right hand side are diagonal elements and are zero, and the whole sum is equal to zero. So the wavefunction matrix elements obey:
\langle\psi|x,y\rangle+\langle\psi|y,x\rangle=0
or
A(x,y)=-A(y,x)
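These facts are easy to check numerically; the following NumPy sketch uses a real orthogonal matrix as a stand-in for a general unitary change of basis:

import numpy as np

rng = np.random.default_rng(1)

# A random antisymmetric amplitude array A(x, y) = -A(y, x).
M = rng.standard_normal((4, 4))
A = M - M.T
print(np.allclose(np.diag(A), 0))      # True: A(x, x) = 0 (Pauli exclusion)

# A change of basis acts on the rank-two tensor as A -> U A U^T; build
# an orthogonal U via QR decomposition.
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A_new = U @ A @ U.T
print(np.allclose(A_new, -A_new.T))    # True: still antisymmetric
print(np.allclose(np.diag(A_new), 0))  # True: diagonal still vanishes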
See main article: Symmetric relation.
We call a relation symmetric if, every time the relation holds from A to B, it also holds from B to A. Note that symmetry is not the exact opposite of antisymmetry; for example, the equality relation is both symmetric and antisymmetric.
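Encoding a finite relation as a set of ordered pairs makes the condition easy to check; a small Python sketch (the helper name is illustrative):

def is_symmetric_relation(R):
    """R is a set of ordered pairs; symmetry means that (a, b) in R
    implies (b, a) in R."""
    return all((b, a) in R for (a, b) in R)

print(is_symmetric_relation({(1, 2), (2, 1), (3, 3)}))  # True
print(is_symmetric_relation({(1, 2)}))                  # False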
See main article: Isometry. An isometry is a distance-preserving map between metric spaces. Given a metric space, or a set with a scheme for assigning distances between its elements, an isometry is a transformation which maps elements to another metric space such that the distance between the elements in the new metric space is equal to the distance between the elements in the original metric space. In a two-dimensional or three-dimensional space, two geometric figures are congruent if they are related by an isometry: either a rigid motion, or a composition of a rigid motion and a reflection. They are equal up to a rigid motion if they are related by a direct isometry.
Isometries have been used to unify the working definition of symmetry in geometry and for functions, probability distributions, matrices, strings, graphs, etc.[6]
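For a quick numerical illustration, a rotation followed by a translation of the plane is a rigid motion, and the distance between any two points is unchanged:

import numpy as np

rng = np.random.default_rng(2)

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotation by theta
t = np.array([3.0, -1.0])                        # translation

def rigid_motion(p):
    """An isometry of the plane: rotate, then translate."""
    return R @ p + t

p, q = rng.standard_normal(2), rng.standard_normal(2)
d_before = np.linalg.norm(p - q)
d_after = np.linalg.norm(rigid_motion(p) - rigid_motion(q))
print(np.isclose(d_before, d_after))  # True: the distance is preserved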
A symmetry of a differential equation is a transformation that leaves the differential equation invariant. Knowledge of such symmetries may help solve the differential equation.
A Lie symmetry of a system of differential equations is a continuous symmetry of the system of differential equations. Knowledge of a Lie symmetry can be used to simplify an ordinary differential equation through reduction of order.[7]
For ordinary differential equations, knowledge of an appropriate set of Lie symmetries allows one to explicitly calculate a set of first integrals, yielding a complete solution without integration.
Symmetries may be found by solving a related set of ordinary differential equations. Solving these equations is often much simpler than solving the original differential equations.
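As a minimal illustration of the definition (not of the reduction machinery itself), the equation y' = y/x is invariant under the scaling (x, y) -> (lam*x, lam*y); a SymPy sketch verifies that the scaling maps the general solution family to itself:

import sympy as sp

x, lam = sp.symbols('x lambda', positive=True)
y = sp.Function('y')

# Solve y'(x) = y(x)/x; the general solution is y(x) = C1*x.
sol = sp.dsolve(sp.Eq(y(x).diff(x), y(x) / x), y(x))
print(sol)  # Eq(y(x), C1*x)

# Applying the scaling to a solution y gives the new solution
# lam * y(x/lam).  For y = C1*x this is lam*C1*(x/lam) = C1*x:
# the same family, so the scaling maps solutions to solutions.
transformed = sp.simplify(lam * sol.rhs.subs(x, x / lam))
print(sp.simplify(transformed - sol.rhs) == 0)  # True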
In the case of a finite number of possible outcomes, symmetry with respect to permutations (relabelings) implies a discrete uniform distribution.
In the case of a real interval of possible outcomes, symmetry with respect to interchanging sub-intervals of equal length corresponds to a continuous uniform distribution.
In other cases, such as "taking a random integer" or "taking a random real number", there are no probability distributions at all that are symmetric with respect to relabelings or to exchange of equally long subintervals. Other reasonable symmetries do not single out one particular distribution; in other words, there is no unique probability distribution providing maximum symmetry.
In one dimension, there is one type of isometry that may leave a probability distribution unchanged: reflection in a point, for example zero.
A possible symmetry for randomness with positive outcomes is that reflection symmetry applies to the logarithm, i.e., the outcome and its reciprocal have the same distribution. However, this symmetry does not single out any particular distribution uniquely.
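A quick Monte Carlo illustration, taking the lognormal distribution as one example whose logarithm is symmetric about zero:

import numpy as np

rng = np.random.default_rng(3)

# If log(X) is symmetric about 0 (here: standard normal), then X and
# 1/X have the same distribution.
X = np.exp(rng.standard_normal(100_000))

# Quantiles of X and of 1/X should approximately agree.
qs = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.quantile(X, qs))
print(np.quantile(1 / X, qs))  # close to the line above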
For a "random point" in a plane or in space, one can choose an origin, and consider a probability distribution with circular or spherical symmetry, respectively.