In mathematics, factorization (or factorisation, see English spelling differences) or factoring consists of writing a number or another mathematical object as a product of several factors, usually smaller or simpler objects of the same kind. For example, $3\times 5$ is an integer factorization of $15$, and $(x-2)(x+2)$ is a polynomial factorization of $x^2-4$.
Factorization is not usually considered meaningful within number systems possessing division, such as the real or complex numbers, since any $x$ can be trivially written as $(xy)\times(1/y)$ whenever $y$ is not zero.
Factorization was first considered by ancient Greek mathematicians in the case of integers. They proved the fundamental theorem of arithmetic, which asserts that every positive integer may be factored into a product of prime numbers, which cannot be further factored into integers greater than 1. Moreover, this factorization is unique up to the order of the factors. Although integer factorization is a sort of inverse to multiplication, it is much more difficult algorithmically, a fact which is exploited in the RSA cryptosystem to implement public-key cryptography.
Polynomial factorization has also been studied for centuries. In elementary algebra, factoring a polynomial reduces the problem of finding its roots to finding the roots of the factors. Polynomials with coefficients in the integers or in a field possess the unique factorization property, a version of the fundamental theorem of arithmetic with prime numbers replaced by irreducible polynomials. In particular, a univariate polynomial with complex coefficients admits a unique (up to ordering) factorization into linear polynomials: this is a version of the fundamental theorem of algebra. In this case, the factorization can be done with root-finding algorithms. The case of polynomials with integer coefficients is fundamental for computer algebra. There are efficient computer algorithms for computing (complete) factorizations within the ring of polynomials with rational number coefficients (see factorization of polynomials).
A commutative ring possessing the unique factorization property is called a unique factorization domain. There are number systems, such as certain rings of algebraic integers, which are not unique factorization domains. However, rings of algebraic integers satisfy the weaker property of Dedekind domains: ideals factor uniquely into prime ideals.
Factorization may also refer to more general decompositions of a mathematical object into the product of smaller or simpler objects. For example, every function may be factored into the composition of a surjective function with an injective function. Matrices possess many kinds of matrix factorizations. For example, every matrix has a unique LUP factorization as a product of a lower triangular matrix with all diagonal entries equal to one, an upper triangular matrix, and a permutation matrix; this is a matrix formulation of Gaussian elimination.
See main article: Integer factorization.
By the fundamental theorem of arithmetic, every integer greater than 1 has a unique (up to the order of the factors) factorization into prime numbers, which are those integers which cannot be further factorized into the product of integers greater than one.
For computing the factorization of an integer $n$, one needs an algorithm for finding a divisor $q$ of $n$ or deciding that $n$ is prime. When such a divisor is found, the repeated application of this algorithm to the factors $q$ and $n/q$ gives eventually the complete factorization of $n$.[1]
For finding a divisor $q$ of $n$, if any, it suffices to test all values of $q$ such that $1<q$ and $q^2\le n$. In fact, if $r$ is a divisor of $n$ such that $r^2>n$, then $q=n/r$ is a divisor of $n$ such that $q^2\le n$.
If one tests the values of $q$ in increasing order, the first divisor that is found is necessarily a prime number, and the cofactor $r=n/q$ cannot have any divisor smaller than $q$. For getting the complete factorization, it suffices thus to continue the algorithm by searching for a divisor of $r$ that is not smaller than $q$ and not greater than $\sqrt r$.
There is no need to test all values of $q$ for applying the method. In principle, it suffices to test only prime divisors. For this, one needs a table of prime numbers, which may be generated for example with the sieve of Eratosthenes. As the method of factorization does essentially the same work as the sieve of Eratosthenes, it is generally more efficient to test for a divisor only those numbers for which it is not immediately clear whether they are prime or not. Typically, one may proceed by testing 2, 3, 5, and the numbers greater than 5 whose last digit is 1, 3, 7, or 9 and whose digit sum is not a multiple of 3.
For example, the method becomes impractical by hand already for a 10-digit number such as $1+2^{2^5}=1+2^{32}=4294967297$.
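The trial-division procedure described above can be sketched in a few lines of Python; this is only an illustrative implementation (the function name and structure are not from the source), testing 2 and then odd candidates up to the square root of the remaining cofactor:

```python
from math import isqrt

def trial_division(n):
    """Return the prime factorization of n > 1 as a list of (prime, exponent) pairs."""
    factors = []
    q = 2
    while q <= isqrt(n):              # only divisors q with q*q <= n need to be tested
        if n % q == 0:
            e = 0
            while n % q == 0:         # divide out q as often as possible
                n //= q
                e += 1
            factors.append((q, e))
        q += 1 if q == 2 else 2       # after 2, only odd candidates
    if n > 1:                         # the remaining cofactor is prime
        factors.append((n, 1))
    return factors

print(trial_division(1386))           # [(2, 1), (3, 2), (7, 1), (11, 1)]
```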
There are more efficient factoring algorithms. However, they remain relatively inefficient, as, with the present state of the art, one cannot factorize, even with the most powerful computers, a number of 500 decimal digits that is the product of two randomly chosen prime numbers. This ensures the security of the RSA cryptosystem, which is widely used for secure internet communication.
For example, factoring 1386 into primes proceeds as follows: 1386 is even, so $1386=2\cdot693$; the digit sum of 693 is a multiple of 3, so $693=3\cdot231$, and likewise $231=3\cdot77$; 77 is divisible by neither 3 nor 5, but $77=7\cdot11$; finally, $7^2>11$, so 11 is prime. This gives the complete factorization $1386=2\cdot3^2\cdot7\cdot11$.
Manipulating expressions is the basis of algebra. Factorization is one of the most important methods for expression manipulation for several reasons. If one can put an equation in a factored form $E\cdot F=0$, then the problem of solving the equation splits into two independent (and generally easier) problems $E=0$ and $F=0$. When an expression can be factored, the factors are often much simpler, and may thus offer some insight on the problem. For example,
$$x^3-ax^2-bx^2-cx^2+abx+acx+bcx-abc$$
may be written as the product
$$(x-a)(x-b)(x-c),$$
which shows at once that its roots are $a$, $b$, and $c$.
On the other hand, factorization is not always possible, and when it is possible, the factors are not always simpler. For example,
$$x^{10}-1=(x-1)\left(x^9+x^8+\cdots+x^2+x+1\right).$$
Various methods have been developed for finding factorizations; some are described below.
Solving algebraic equations may be viewed as a problem of polynomial factorization. In fact, the fundamental theorem of algebra can be stated as follows: every polynomial in $x$ of degree $n$ with complex coefficients may be factorized into $n$ linear factors $x-a_i$, for $i=1,\ldots,n$, where the $a_i$ are the roots of the polynomial.
The systematic use of algebraic manipulations for simplifying expressions (more specifically equations) may be dated to the 9th century, with al-Khwarizmi's book The Compendious Book on Calculation by Completion and Balancing, which is titled with two such types of manipulation.
However, even for solving quadratic equations, the factoring method was not used before Harriot's work, published in 1631, ten years after his death.[2] In his book Artis Analyticae Praxis ad Aequationes Algebraicas Resolvendas, Harriot drew tables for addition, subtraction, multiplication and division of monomials, binomials, and trinomials. Then, in a second section, he set up an equation and showed that it matches a form of multiplication he had previously provided, which gives its factorization.[3]
The following methods apply to any expression that is a sum, or that may be transformed into a sum. Therefore, they are most often applied to polynomials, whose terms are monomials (products of constants and variables), though they may also be applied when the terms of the sum are not monomials.
It may occur that all terms of a sum are products and that some factors are common to all terms. In this case, the distributive law allows factoring out this common factor. If there are several such common factors, it is preferable to divide out the greatest such common factor. Also, if there are integer coefficients, one may factor out the greatest common divisor of these coefficients.
For example,
$$6x^3y^2+8x^4y^3-10x^5y^3=2x^3y^2\left(3+4xy-5x^2y\right),$$
since 2 is the greatest common divisor of 6, 8, and 10, and $x^3y^2$ divides all the terms.
Grouping terms may allow using other methods for getting a factorization.
For example, to factor $4x^2+20x+3xy+15y$, one may remark that the first two terms have the common factor $4x$, and the last two terms have the common factor $3y$. Thus
$$4x^2+20x+3xy+15y=4x(x+5)+3y(x+5).$$
Then a simple inspection shows the common factor $x+5$, leading to the factorization
$$4x^2+20x+3xy+15y=(4x+3y)(x+5).$$
In general, this works for sums of four terms that have been obtained as the product of two binomials. Though less frequently, this may also work for more complicated examples.
Sometimes, some term grouping reveals part of a recognizable pattern. It is then useful to add and subtract terms to complete the pattern.
A typical use of this is the completing the square method for getting the quadratic formula.
Another example is the factorization of $x^4+1$. Adding and subtracting $2x^2$ allows grouping three terms as the square of a binomial, and then recognizing a difference of squares:
$$x^4+1=\left(x^4+2x^2+1\right)-2x^2=\left(x^2+1\right)^2-\left(x\sqrt2\right)^2=\left(x^2+x\sqrt2+1\right)\left(x^2-x\sqrt2+1\right).$$
Although $x^4+1$ is irreducible over the integers, similar manipulations show that it is reducible modulo every prime number. For example,
$$x^4+1\equiv(x^2+x-1)(x^2-x-1)\pmod3,\quad\text{since }1^2\equiv-2\pmod3;$$
$$x^4+1\equiv(x^2+2)(x^2-2)\pmod5,\quad\text{since }2^2\equiv-1\pmod5;$$
$$x^4+1\equiv(x^2+3x+1)(x^2-3x+1)\pmod7,\quad\text{since }3^2\equiv2\pmod7.$$
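These modular factorizations are easy to check with a computer algebra system. As a small sketch (assuming the SymPy library is available; the choice of primes simply mirrors the examples above), SymPy's factor accepts a modulus keyword:

```python
from sympy import symbols, factor

x = symbols('x')
print(factor(x**4 + 1))                      # stays x**4 + 1: irreducible over the rationals
for p in (3, 5, 7):
    print(p, factor(x**4 + 1, modulus=p))    # splits into two quadratic factors mod p
```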
Many identities provide an equality between a sum and a product. The above methods may be used for letting the sum side of some identity appear in an expression, which may therefore be replaced by a product.
Below are identities whose left-hand sides are commonly used as patterns (this means that the variables $E$ and $F$ that appear in these identities may represent any subexpression of the expression that has to be factorized).
$$E^2-F^2=(E+F)(E-F)$$
For example,
$$\begin{aligned} a^2+2ab+b^2-x^2+2xy-y^2&=\left(a^2+2ab+b^2\right)-\left(x^2-2xy+y^2\right)\\ &=(a+b)^2-(x-y)^2\\ &=(a+b+x-y)(a+b-x+y). \end{aligned}$$
$$E^3+F^3=(E+F)\left(E^2-EF+F^2\right)$$
$$E^3-F^3=(E-F)\left(E^2+EF+F^2\right)$$
$$E^4-F^4=\left(E^2+F^2\right)\left(E^2-F^2\right)=\left(E^2+F^2\right)(E+F)(E-F)$$
$$E^{2n}-F^{2n}=\left(E^n+F^n\right)\left(E^n-F^n\right)$$
$$E^n-F^n=(E-F)\left(E^{n-1}+E^{n-2}F+E^{n-3}F^2+\cdots+EF^{n-2}+F^{n-1}\right)$$
This is an example showing that the factors may be much larger than the sum that is factorized.
$$E^n+F^n=(E+F)\left(E^{n-1}-E^{n-2}F+E^{n-3}F^2-\cdots-EF^{n-2}+F^{n-1}\right)\qquad\text{(valid only for odd }n\text{)}$$
(obtained by changing $F$ into $-F$ in the preceding formula).
If the exponent $n$ is even but has an odd factor, say $n=pq$ with $p$ odd, one may write $E^n+F^n=\left(E^q\right)^p+\left(F^q\right)^p$ and apply the preceding formula to this sum of $p$th powers.
$$\begin{aligned} &x^2+y^2+z^2+2(xy+yz+xz)=(x+y+z)^2\\ &x^3+y^3+z^3-3xyz=(x+y+z)\left(x^2+y^2+z^2-xy-xz-yz\right)\\ &x^3+y^3+z^3+3x^2(y+z)+3y^2(x+z)+3z^2(x+y)+6xyz=(x+y+z)^3\\ &x^4+x^2y^2+y^4=\left(x^2+xy+y^2\right)\left(x^2-xy+y^2\right). \end{aligned}$$
$$a^2+2ab+b^2=(a+b)^2$$
$$a^2-2ab+b^2=(a-b)^2$$
$$a^3+3a^2b+3ab^2+b^3=(a+b)^3$$
$$a^3-3a^2b+3ab^2-b^3=(a-b)^3$$
More generally, the coefficients of the expanded forms of $(a+b)^n$ and $(a-b)^n$ are the binomial coefficients, which appear in the $n$th row of Pascal's triangle.
The $n$th roots of unity are the complex numbers, each of which is a root of the polynomial $x^n-1$; they are the numbers
$$e^{2ik\pi/n}\qquad\text{for }k=0,\ldots,n-1.$$
It follows that, for any two expressions $E$ and $F$, one has:
$$E^n-F^n=\prod_{k=0}^{n-1}\left(E-Fe^{2ik\pi/n}\right),\qquad E^n+F^n=\prod_{k=0}^{n-1}\left(E-Fe^{(2k+1)i\pi/n}\right).$$
If $E$ and $F$ are real expressions, and one wants real factors, one has to replace every pair of complex conjugate factors by its product. As the complex conjugate of $e^{i\alpha}$ is $e^{-i\alpha}$, and
$$\left(a-be^{i\alpha}\right)\left(a-be^{-i\alpha}\right)=a^2-ab\left(e^{i\alpha}+e^{-i\alpha}\right)+b^2=a^2-2ab\cos\alpha+b^2,$$
pairing the conjugate factors in the above products yields real quadratic factors of the form $E^2-2EF\cos\tfrac{k\pi}n+F^2$.
The cosines that appear in these factorizations are algebraic numbers, and may be expressed in terms of radicals (this is possible because their Galois group is cyclic); however, these radical expressions are too complicated to be used, except for low values of $n$. For example,
$$a^4+b^4=\left(a^2-ab\sqrt2+b^2\right)\left(a^2+ab\sqrt2+b^2\right).$$
Often one wants a factorization with rational coefficients. Such a factorization involves cyclotomic polynomials. To express rational factorizations of sums and differences of powers, we need a notation for the homogenization of a polynomial: if
$$P(x)=a_0x^n+a_1x^{n-1}+\cdots+a_n,$$
its homogenization is the bivariate polynomial
$$\overline P(x,y)=a_0x^n+a_1x^{n-1}y+\cdots+a_ny^n.$$
Then one has
$$E^n-F^n=\prod_{k\mid n}\overline Q_k(E,F),\qquad E^n+F^n=\prod_{k\mid 2n,\ k\nmid n}\overline Q_k(E,F),$$
where $Q_k(x)$ denotes the $k$th cyclotomic polynomial, the first product runs over the divisors of $n$, and the second over the divisors of $2n$ that do not divide $n$.
For example,
$$a^6-b^6=(a-b)(a+b)\left(a^2+ab+b^2\right)\left(a^2-ab+b^2\right),\qquad a^6+b^6=\left(a^2+b^2\right)\left(a^4-a^2b^2+b^4\right),$$
since the divisors of 6 are 1, 2, 3, 6, and the divisors of 12 that do not divide 6 are 4 and 12.
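As an illustration, the factorization of $a^6-b^6$ above can be reproduced by homogenizing the cyclotomic polynomials indexed by the divisors of 6; the following sketch assumes SymPy is available and the helper name is mine:

```python
from sympy import symbols, cyclotomic_poly, expand, simplify

a, b, x = symbols('a b x')

def homogenized(k, u, v):
    """Homogenization of the k-th cyclotomic polynomial, evaluated at (u, v)."""
    q = cyclotomic_poly(k, x)
    n = q.as_poly(x).degree()
    return expand(v**n * q.subs(x, u / v))

product = 1
for k in (1, 2, 3, 6):                      # the divisors of 6
    product *= homogenized(k, a, b)

print(expand(product))                      # a**6 - b**6
print(simplify(a**6 - b**6 - product))      # 0
```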
See main article: Factorization of polynomials. For polynomials, factorization is strongly related to the problem of solving algebraic equations. An algebraic equation has the form
$$P(x)\ \stackrel{\text{def}}{=}\ a_0x^n+a_1x^{n-1}+\cdots+a_n=0,$$
where $P(x)$ is a polynomial in $x$ with $a_0\ne0$. A solution of this equation (also called a root of the polynomial) is a value $r$ of $x$ such that $P(r)=0$.
If $P(x)=Q(x)R(x)$ is a factorization of $P(x)$, then the roots of $P(x)$ are exactly the roots of $Q(x)$ together with the roots of $R(x)$; thus solving $P(x)=0$ reduces to the simpler equations $Q(x)=0$ and $R(x)=0$.
Conversely, the factor theorem asserts that, if $r$ is a root of $P(x)$, then $P(x)$ may be factored as
$$P(x)=(x-r)Q(x),$$
where $Q(x)$ is the quotient of the Euclidean division of $P(x)$ by the degree-one factor $x-r$.
If the coefficients of $P(x)$ are real or complex numbers, the fundamental theorem of algebra asserts that $P(x)$ has a real or complex root. Using the factor theorem recursively, it results that
$$P(x)=a_0(x-r_1)\cdots(x-r_n),$$
where $r_1,\ldots,r_n$ are the roots of $P(x)$, possibly repeated. This complete factorization is unique up to the order of the factors.
If the coefficients of $P(x)$ are real, one generally wants a factorization where the factors have real coefficients. In this case, the complete factorization may have some quadratic (degree two) factors. This factorization may easily be deduced from the above complete factorization: if $r=a+ib$ is a non-real root of $P(x)$, then its complex conjugate $s=a-ib$ is also a root of $P(x)$, and the product
$$(x-r)(x-s)=x^2-(r+s)x+rs=x^2-2ax+a^2+b^2$$
is a factor of $P(x)$ with real coefficients.
For computing these real or complex factorizations, one needs the roots of the polynomial, which, in general, cannot be computed exactly and can only be approximated using root-finding algorithms.
In practice, most algebraic equations of interest have integer or rational coefficients, and one may want a factorization with factors of the same kind. The fundamental theorem of arithmetic may be generalized to this case, stating that polynomials with integer or rational coefficients have the unique factorization property. More precisely, every polynomial with rational coefficients may be factorized into a product
$$P(x)=q\,P_1(x)\cdots P_k(x),$$
where $q$ is a rational number and $P_1(x),\ldots,P_k(x)$ are primitive polynomials with integer coefficients that are irreducible, meaning that no $P_i(x)$ can be written as the product of two polynomials with integer coefficients that are neither $1$ nor $-1$. This factorization is unique up to the order and the signs of the factors.
There are efficient algorithms for computing this factorization, which are implemented in most computer algebra systems. See Factorization of polynomials. Unfortunately, these algorithms are too complicated to use for paper-and-pencil computations. Besides the heuristics above, only a few methods are suitable for hand computations; they generally work only for polynomials of low degree, with few nonzero coefficients. The main such methods are described in the next subsections.
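For instance, with the SymPy library in Python (one freely available computer algebra system; this is just an illustrative session), the complete factorization over the rationals is obtained directly:

```python
from sympy import symbols, factor

x = symbols('x')
print(factor(2*x**3 - 7*x**2 + 10*x - 6))   # (2*x - 3)*(x**2 - 2*x + 2), up to ordering
print(factor(x**10 - 1))                    # product of the irreducible (cyclotomic) factors
```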
Every polynomial with rational coefficients may be factorized, in a unique way, as the product of a rational number and a polynomial with integer coefficients which is primitive (that is, the greatest common divisor of its coefficients is 1) and has a positive leading coefficient (coefficient of the term of the highest degree). For example:
$$-10x^2+5x+5=(-5)\cdot\left(2x^2-x-1\right)$$
and
$$\frac13x^5+\frac72x^2+2x+1=\frac16\left(2x^5+21x^2+12x+6\right).$$
In this factorization, the rational number is called the content, and the primitive polynomial is the primitive part. The computation of this factorization may be done as follows: firstly, reduce all coefficients to a common denominator, for getting the quotient by an integer $q$ of a polynomial with integer coefficients. Then one divides out the greatest common divisor $p$ of the coefficients of this polynomial for getting the primitive part, the content being $p/q$. Finally, if needed, one changes the signs of $p$ and of all coefficients of the primitive part.
This factorization may produce a result that is larger than the original polynomial (typically when there are many coprime denominators), but, even when this is the case, the primitive part is generally easier to manipulate for further factorization.
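The procedure just described is straightforward to code; the following Python sketch (helper name mine) computes the content and primitive part from a list of rational coefficients, and reproduces the second example above:

```python
from fractions import Fraction
from math import gcd, lcm

def content_and_primitive(coeffs):
    """Split a polynomial with rational coefficients (highest degree first)
    into its content (a Fraction) and its primitive part (integer coefficients)."""
    coeffs = [Fraction(c) for c in coeffs]
    q = lcm(*(c.denominator for c in coeffs))      # common denominator
    ints = [int(c * q) for c in coeffs]            # integer coefficients
    p = gcd(*ints)                                 # greatest common divisor
    primitive = [c // p for c in ints]
    if primitive[0] < 0:                           # make the leading coefficient positive
        p, primitive = -p, [-c for c in primitive]
    return Fraction(p, q), primitive

# (1/3)x^5 + (7/2)x^2 + 2x + 1  ->  content 1/6, primitive part 2x^5 + 21x^2 + 12x + 6
print(content_and_primitive([Fraction(1, 3), 0, 0, Fraction(7, 2), 2, 1]))
```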
See main article: Factor theorem.
The factor theorem states that, if $r$ is a root of a polynomial
$$P(x)=a_0x^n+a_1x^{n-1}+\cdots+a_{n-1}x+a_n,$$
meaning $P(r)=0$, then there is a factorization
$$P(x)=(x-r)Q(x),$$
where
$$Q(x)=b_0x^{n-1}+\cdots+b_{n-2}x+b_{n-1},$$
with $a_0=b_0$ and
$$b_i=a_0r^i+a_1r^{i-1}+\cdots+a_{i-1}r+a_i\qquad\text{for }i=1,\ldots,n-1.$$
For example, for $P(x)=x^3-3x+2$, the sum of the coefficients is $0$, so $r=1$ is a root. Here $b_0=1$, $b_1=r+0=1$, and $b_2=r^2+0\cdot r-3=-2$, so
$$x^3-3x+2=(x-1)\left(x^2+x-2\right).$$
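The recurrence for the $b_i$ is just synthetic division, and is easy to carry out by hand or in code; a minimal Python sketch (function name mine):

```python
def divide_by_root(coeffs, r):
    """Given the coefficients of P (highest degree first) and a root r of P,
    return the coefficients of Q with P(x) = (x - r) * Q(x), using b_i = a_i + r*b_{i-1}."""
    b = [coeffs[0]]
    for a in coeffs[1:-1]:
        b.append(a + r * b[-1])
    assert coeffs[-1] + r * b[-1] == 0, "r is not a root of P"
    return b

# x^3 - 3x + 2 with root r = 1  ->  x^2 + x - 2
print(divide_by_root([1, 0, -3, 2], 1))   # [1, 1, -2]
```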
For polynomials with rational number coefficients, one may search for roots which are rational numbers. Primitive part-content factorization (see above) reduces the problem of searching for rational roots to the case of polynomials with integer coefficients having no non-trivial common divisor.
If $x=\tfrac pq$ is a rational root of such a polynomial
$$P(x)=a_0x^n+a_1x^{n-1}+\cdots+a_{n-1}x+a_n$$
(with $p$ and $q$ coprime integers), the factor theorem shows that one has a factorization
$$P(x)=(qx-p)Q(x),$$
where both factors have integer coefficients (the fact that $Q(x)$ has integer coefficients results from the above formula for the quotient of $P(x)$ by $x-p/q$).
Comparing the coefficients of degree $n$ and the constant coefficients in the above equality shows that, if $\tfrac pq$ is a rational root in lowest terms, then $q$ is a divisor of $a_0$ and $p$ is a divisor of $a_n$. Therefore, there is a finite number of possibilities for $p$ and $q$, which can be systematically examined.
For example, if the polynomial
$$P(x)=2x^3-7x^2+10x-6$$
has a rational root $\tfrac pq$ with $q>0$, then $p$ must divide 6, that is
$$p\in\{\pm1,\pm2,\pm3,\pm6\},$$
and $q$ must divide 2, that is
$$q\in\{1,2\}.$$
Moreover, if $x<0$, all terms of the polynomial are negative, so a root cannot be negative; hence one must have
$$\tfrac pq\in\{1,2,3,6,\tfrac12,\tfrac32\}.$$
A direct computation shows that only $\tfrac32$ is a root, and the factor theorem then gives the factorization
$$2x^3-7x^2+10x-6=(2x-3)\left(x^2-2x+2\right).$$
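This finite search is easy to automate. A minimal Python sketch (names mine) that enumerates the candidates $\pm p/q$ and keeps those that are actually roots:

```python
from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    """Rational roots of a polynomial with integer coefficients (highest degree first,
    nonzero leading and constant coefficients)."""
    def value(x):                      # Horner evaluation of the polynomial at x
        v = Fraction(0)
        for a in coeffs:
            v = v * x + a
        return v
    a0, an = coeffs[0], coeffs[-1]
    candidates = {Fraction(s * p, q) for p in divisors(an) for q in divisors(a0) for s in (1, -1)}
    return sorted(x for x in candidates if value(x) == 0)

print(rational_roots([2, -7, 10, -6]))   # [Fraction(3, 2)]
```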
The above method may be adapted for quadratic polynomials, leading to the ac method of factorization.[4]
Consider the quadratic polynomial
$$P(x)=ax^2+bx+c$$
with integer coefficients. If it has a rational root, its denominator must divide $a$, so the root may be written as a possibly reducible fraction $r_1=\tfrac ra$. By Vieta's formulas, the other root $r_2$ is
$$r_2=-\frac ba-r_1=-\frac ba-\frac ra=-\frac{b+r}a=\frac sa,\qquad\text{with }s=-(b+r).$$
Thus the second root is also rational, and Vieta's second formula $r_1r_2=\tfrac ca$ gives
$$\frac sa\cdot\frac ra=\frac ca,$$
that is,
$$rs=ac\qquad\text{and}\qquad r+s=-b.$$
Inspection of all pairs of integers whose product is $ac$ gives the rational roots, if any.
In summary, if $ax^2+bx+c$ has rational roots, there are integers $r$ and $s$ such that $rs=ac$ and $r+s=-b$ (a finite number of cases to test), and the roots are $\tfrac ra$ and $\tfrac sa$. In other words, one has the factorization
$$a\left(ax^2+bx+c\right)=(ax-r)(ax-s).$$
For example, consider the quadratic polynomial
$$6x^2+13x+6.$$
Here $ac=36$ and $-b=-13$; inspecting the pairs of integers whose product is 36 gives $(-4)\cdot(-9)=36$ with $(-4)+(-9)=-13$, so $r=-4$ and $s=-9$. The roots are
$$r_1=\frac{-4}6=-\frac23\qquad\text{and}\qquad r_2=\frac{-9}6=-\frac32,$$
and one gets the factorization
$$6x^2+13x+6=6\left(x+\tfrac23\right)\left(x+\tfrac32\right)=(3x+2)(2x+3).$$
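The ac method amounts to a finite search over the divisor pairs of $ac$; the sketch below (names mine, and it assumes the quadratic is primitive, i.e. the gcd of $a$, $b$, $c$ is 1) returns the integer factors when rational roots exist:

```python
from math import gcd

def ac_factor(a, b, c):
    """Factor a primitive quadratic a*x^2 + b*x + c with rational roots,
    returning ((a1, b1), (a2, b2)) meaning (a1*x + b1)*(a2*x + b2), or None."""
    target = a * c
    for r in range(-abs(target), abs(target) + 1):
        if r != 0 and target % r == 0 and r + target // r == -b:
            s = target // r
            # a*(a*x^2 + b*x + c) = (a*x - r)*(a*x - s); divide out the contents
            g1, g2 = gcd(a, r), gcd(a, s)
            return (a // g1, -r // g1), (a // g2, -s // g2)
    return None

print(ac_factor(6, 13, 6))   # ((2, 3), (3, 2)), i.e. (2x + 3)(3x + 2)
```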
The quadratic formula allows factoring every quadratic polynomial $ax^2+bx+c$ over the real or complex numbers:
$$ax^2+bx+c=a(x-\alpha)(x-\beta)=a\left(x-\frac{-b+\sqrt{b^2-4ac}}{2a}\right)\left(x-\frac{-b-\sqrt{b^2-4ac}}{2a}\right),$$
where $\alpha$ and $\beta$ are the two roots of the polynomial. If the discriminant $b^2-4ac$ is negative, the roots are non-real complex numbers, and the polynomial is irreducible over the real numbers.
The quadratic formula is valid when the coefficients belong to any field of characteristic different from two, and, in particular, for coefficients in a finite field with an odd number of elements.[5]
There are also formulas for roots of cubic and quartic polynomials, which are, in general, too complicated for practical use. The Abel–Ruffini theorem shows that there are no general root formulas in terms of radicals for polynomials of degree five or higher.
It may occur that one knows some relationship between the roots of a polynomial and its coefficients. Using this knowledge may help factoring the polynomial and finding its roots. Galois theory is based on a systematic study of the relations between roots and coefficients, that include Vieta's formulas.
Here, we consider the simpler case where two roots $x_1$ and $x_2$ of a polynomial $P(x)$ satisfy the relation
$$x_2=Q(x_1),$$
where $Q$ is a polynomial.
This implies that $x_1$ is a common root of $P(Q(x))$ and $P(x)$. It is therefore a root of the greatest common divisor of these two polynomials. It follows that this greatest common divisor is a non-constant factor of $P(x)$, which may be computed with the Euclidean algorithm for polynomials.
For example, if one knows or guesses that
$$P(x)=x^3-5x^2-16x+80$$
has two roots that sum to zero, one may apply the Euclidean algorithm to $P(x)$ and $P(-x)$. The first division step consists in adding $P(x)$ to $P(-x)$, giving the remainder
$$-10\left(x^2-16\right).$$
Then, dividing $P(x)$ by $x^2-16$ gives the quotient $x-5$ and the remainder $0$, leading to the complete factorization
$$x^3-5x^2-16x+80=(x-5)(x-4)(x+4).$$
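This computation is easy to reproduce with a computer algebra system; a short sketch using SymPy (assuming it is installed):

```python
from sympy import symbols, gcd, div, factor

x = symbols('x')
P = x**3 - 5*x**2 - 16*x + 80

g = gcd(P, P.subs(x, -x))     # common factor coming from the two roots that sum to zero
print(g)                      # x**2 - 16
print(div(P, g, x))           # quotient x - 5, remainder 0
print(factor(P))              # (x - 5)*(x - 4)*(x + 4)
```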
The integers and the polynomials over a field share the property of unique factorization, that is, every nonzero element may be factored into a product of an invertible element (a unit, ±1 in the case of integers) and a product of irreducible elements (prime numbers, in the case of integers), and this factorization is unique up to rearranging the factors and shifting units among the factors. Integral domains which share this property are called unique factorization domains (UFD).
Greatest common divisors exist in UFDs, but not every integral domain in which greatest common divisors exist (known as a GCD domain) is a UFD. Every principal ideal domain is a UFD.
A Euclidean domain is an integral domain on which is defined a Euclidean division similar to that of integers. Every Euclidean domain is a principal ideal domain, and thus a UFD.
In a Euclidean domain, Euclidean division allows defining a Euclidean algorithm for computing greatest common divisors. However, this does not imply the existence of a factorization algorithm. There is an explicit example of a field $F$ such that there cannot exist any factorization algorithm in the Euclidean domain $F[x]$ of the univariate polynomials over $F$.
See main article: Dedekind domain. In algebraic number theory, the study of Diophantine equations led mathematicians, during the 19th century, to introduce generalizations of the integers called algebraic integers. The first rings of algebraic integers that were considered were the Gaussian integers and the Eisenstein integers, which share with the usual integers the property of being principal ideal domains, and thus have the unique factorization property.
Unfortunately, it soon appeared that most rings of algebraic integers are not principal and do not have unique factorization. The simplest example is $\mathbf Z[\sqrt{-5}]$, in which
$$9=3\cdot3=\left(2+\sqrt{-5}\right)\left(2-\sqrt{-5}\right),$$
and all these factors are irreducible, so 9 has two essentially different factorizations.
This lack of unique factorization is a major difficulty for solving Diophantine equations. For example, many wrong proofs of Fermat's Last Theorem (probably including Fermat's "truly marvelous proof of this, which this margin is too narrow to contain") were based on the implicit supposition of unique factorization.
This difficulty was resolved by Dedekind, who proved that the rings of algebraic integers have unique factorization of ideals: in these rings, every ideal is a product of prime ideals, and this factorization is unique up to the order of the factors. The integral domains that have this unique factorization property are now called Dedekind domains. They have many nice properties that make them fundamental in algebraic number theory.
Matrix rings are non-commutative and have no unique factorization: there are, in general, many ways of writing a matrix as a product of matrices. Thus, the factorization problem consists of finding factors of specified types. For example, the LU decomposition gives a matrix as the product of a lower triangular matrix by an upper triangular matrix. As this is not always possible, one generally considers the "LUP decomposition" having a permutation matrix as its third factor.
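Numerically, such a factorization can be computed with standard libraries; for example, SciPy's lu function (a sketch assuming NumPy and SciPy are available) returns a permutation matrix P, a unit lower triangular matrix L, and an upper triangular matrix U with A = P L U:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])

P, L, U = lu(A)                        # permutation, unit lower triangular, upper triangular
print(np.allclose(A, P @ L @ U))       # True
```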
See Matrix decomposition for the most common types of matrix factorizations.
A logical matrix represents a binary relation, and matrix multiplication corresponds to composition of relations. Decomposition of a relation through factorization serves to profile the nature of the relation, such as a difunctional relation.