A multiplication algorithm is an algorithm (or method) to multiply two numbers. Depending on the size of the numbers, different algorithms are more efficient than others. Numerous algorithms are known and there has been much research into the topic.
The oldest and simplest method, known since antiquity as long multiplication or grade-school multiplication, consists of multiplying every digit in the first number by every digit in the second and adding the results. This has a time complexity of
O(n^2).
In 1960, Anatoly Karatsuba discovered Karatsuba multiplication, unleashing a flood of research into fast multiplication algorithms. This method uses three multiplications rather than four to multiply two two-digit numbers. (A variant of this can also be used to multiply complex numbers quickly.) Done recursively, this has a time complexity of
O(n^{\log_2 3}).
In 1968, the Schönhage-Strassen algorithm, which makes use of a Fourier transform over a modulus, was discovered. It has a time complexity of
O(n \log n \log\log n). In 2007, Martin Fürer proposed an algorithm with complexity O(n \log n \, 2^{\Theta(\log^* n)}). In 2014, Harvey, van der Hoeven, and Lecerf gave an algorithm with complexity O(n \log n \, 2^{3\log^* n}), thus making the implicit constant explicit; this was improved to O(n \log n \, 2^{2\log^* n}) in 2018. Lastly, in 2019, Harvey and van der Hoeven came up with a galactic algorithm with complexity O(n \log n), matching a conjecture of Schönhage and Strassen that this is the best possible bound, although this remains unproven.
Integer multiplication algorithms can also be used to multiply polynomials by means of the method of Kronecker substitution.
If a positional numeral system is used, a natural way of multiplying numbers is taught in schools as long multiplication, sometimes called grade-school multiplication, sometimes called the Standard Algorithm: multiply the multiplicand by each digit of the multiplier and then add up all the properly shifted results. It requires memorization of the multiplication table for single digits.
This is the usual algorithm for multiplying larger numbers by hand in base 10. A person doing long multiplication on paper will write down all the products and then add them together; an abacus-user will sum the products as soon as each one is computed.
This example uses long multiplication to multiply 23,958,233 (multiplicand) by 5,830 (multiplier) and arrives at 139,676,498,390 for the result (product).

        23958233
×           5830
————————————————
        00000000   ( = 23,958,233 ×     0)
       71874699    ( = 23,958,233 ×    30)
     191665864     ( = 23,958,233 ×   800)
+   119791165      ( = 23,958,233 × 5,000)
————————————————
    139676498390   ( = 139,676,498,390)
In some countries such as Germany, the above multiplication is depicted similarly but with the original product kept horizontal and computation starting with the first digit of the multiplier:[1]
  23958233 · 5830
  ———————————————
  119791165
   191665864
     71874699
      00000000
  ———————————————
  139676498390
In pseudocode, the process above can be carried out keeping only one row, which maintains the running sum and finally becomes the result. The '+=' operator is used to denote adding to an existing value and storing the result (akin to languages such as Java and C), for compactness.
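A Python sketch of this single-row scheme, with numbers stored as digit lists (least-significant digit first) in a small base; the function names and representation are illustrative:

```python
def long_multiply(a, b, base=10):
    """Multiply two numbers given as digit lists (least-significant digit
    first) using grade-school long multiplication in O(p*q) digit steps."""
    product = [0] * (len(a) + len(b))   # one row holds the running sum
    for bi, bd in enumerate(b):
        carry = 0
        for ai, ad in enumerate(a):
            # '+=' accumulates into the existing partial result
            product[ai + bi] += carry + ad * bd
            carry = product[ai + bi] // base
            product[ai + bi] %= base
        product[bi + len(a)] += carry   # final carry of this row
    return product

def to_int(digits, base=10):
    """Helper: interpret a least-significant-first digit list as an integer."""
    return sum(d * base**i for i, d in enumerate(digits))

# 23,958,233 x 5,830 from the worked example above
a = [int(c) for c in reversed("23958233")]
b = [int(c) for c in reversed("5830")]
print(to_int(long_multiply(a, b)))  # 139676498390
```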
When implemented in software, long multiplication algorithms must deal with overflow during additions, which can be expensive. A typical solution is to represent the number in a small base, b, such that, for example, 8b is a representable machine integer. Several additions can then be performed before an overflow occurs. When the number becomes too large, we add part of it to the result, or we carry and map the remaining part back to a number that is less than b. This process is called normalization. Richard Brent used this approach in his Fortran package, MP.[2]
Computers initially used a very similar algorithm to long multiplication in base 2, but modern processors have optimized circuitry for fast multiplications using more efficient algorithms, at the price of a more complex hardware realization. In base two, long multiplication is sometimes called "shift and add", because the algorithm simplifies and just consists of shifting left (multiplying by powers of two) and adding. Most currently available microprocessors implement this or other similar algorithms (such as Booth encoding) for various integer and floating-point sizes in hardware multipliers or in microcode.
On currently available processors, a bit-wise shift instruction is usually (but not always) faster than a multiply instruction and can be used to multiply (shift left) and divide (shift right) by powers of two, 2^n. Multiplication by a constant and division by a constant can be implemented using a sequence of shifts and adds or subtracts; numbers of the form 2^n \pm 1, for instance, need only a shift combined with one addition or subtraction. For example, there are several ways to multiply by 10 using only bit-shift and addition, such as ((x << 2) + x) << 1 or (x << 3) + (x << 1).
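Two such shift-and-add decompositions of multiplication by 10 (10x = 8x + 2x, and 10x = 2(4x + x)) can be checked in a few lines of Python:

```python
def times_ten_a(x):
    # 10*x = 8*x + 2*x
    return (x << 3) + (x << 1)

def times_ten_b(x):
    # 10*x = (4*x + x) * 2
    return ((x << 2) + x) << 1

# both agree with ordinary multiplication
assert all(times_ten_a(x) == times_ten_b(x) == 10 * x for x in range(1000))
```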
In addition to the standard long multiplication, there are several other methods used to perform multiplication by hand. Such algorithms may be devised for speed, ease of calculation, or educational value, particularly when computers or multiplication tables are unavailable.
See main article: Grid method multiplication. The grid method (or box method) is an introductory method for multiple-digit multiplication that is often taught to pupils at primary school or elementary school. It has been a standard part of the national primary school mathematics curriculum in England and Wales since the late 1990s.[3]
Both factors are broken up ("partitioned") into their hundreds, tens and units parts, and the products of the parts are then calculated explicitly in a relatively simple multiplication-only stage, before these contributions are then totalled to give the final answer in a separate addition stage.
The calculation 34 × 13, for example, could be computed using the grid:
 ×  |  30 |  4
————|—————|————
 10 | 300 | 40
  3 |  90 | 12

  300
   40
   90
+  12
—————
  442
followed by addition to obtain 442, either in a single sum (see above), or through forming the row-by-row totals (300 + 40) + (90 + 12) = 340 + 102 = 442.
This calculation approach (though not necessarily with the explicit grid arrangement) is also known as the partial products algorithm. Its essence is the calculation of the simple multiplications separately, with all addition being left to the final gathering-up stage.
The grid method can in principle be applied to factors of any size, although the number of sub-products becomes cumbersome as the number of digits increases. Nevertheless, it is seen as a usefully explicit method to introduce the idea of multiple-digit multiplications; and, in an age when most multiplication calculations are done using a calculator or a spreadsheet, it may in practice be the only multiplication algorithm that some students will ever need.
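The partial-products idea can be sketched in Python; the helper that splits a number into place-value parts is illustrative:

```python
def grid_multiply(x, y):
    """Grid ("box") method: split each factor into place-value parts,
    form every pairwise sub-product, then total them at the end."""
    def parts(n):
        # 345 -> [300, 40, 5]
        out, place = [], 1
        while n:
            n, d = divmod(n, 10)
            out.append(d * place)
            place *= 10
        return out[::-1]

    # the multiplication-only stage: one sub-product per grid cell
    subproducts = [px * py for px in parts(x) for py in parts(y)]
    # the separate addition stage
    return subproducts, sum(subproducts)

cells, total = grid_multiply(34, 13)
print(cells)  # [300, 90, 40, 12] -- the four grid cells
print(total)  # 442
```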
See main article: Lattice multiplication.
Lattice, or sieve, multiplication is algorithmically equivalent to long multiplication. It requires the preparation of a lattice (a grid drawn on paper) which guides the calculation and separates all the multiplications from the additions. It was introduced to Europe in 1202 in Fibonacci's Liber Abaci. Fibonacci described the operation as mental, using his right and left hands to carry the intermediate calculations. Matrakçı Nasuh presented six different variants of this method in his 16th-century book, Umdet-ul Hisab. It was widely used in Enderun schools across the Ottoman Empire.[4] Napier's bones, or Napier's rods, also used this method, as published by Napier in 1617, the year of his death.
As shown in the example, the multiplicand and multiplier are written above and to the right of a lattice, or a sieve. The method is also found in Muhammad ibn Musa al-Khwarizmi's "Arithmetic", one of Leonardo's sources, as mentioned by Sigler, author of "Fibonacci's Liber Abaci" (2002).
The pictures on the right show how to calculate 345 × 12 using lattice multiplication. As a more complicated example, consider the picture below displaying the computation of 23,958,233 multiplied by 5,830 (multiplier); the result is 139,676,498,390. Notice 23,958,233 is along the top of the lattice and 5,830 is along the right side. The products fill the lattice, and the sums of those products (along the diagonals) are written on the left and bottom sides. Then those sums are totaled as shown.
       2   3   9   5   8   2   3   3
     +---+---+---+---+---+---+---+---+
     |1 /|1 /|4 /|2 /|4 /|1 /|1 /|1 /|
  01 | / | / | / | / | / | / | / | / | 5
     |/ 0|/ 5|/ 5|/ 5|/ 0|/ 0|/ 5|/ 5|
     +---+---+---+---+---+---+---+---+
     |1 /|2 /|7 /|4 /|6 /|1 /|2 /|2 /|
  02 | / | / | / | / | / | / | / | / | 8
     |/ 6|/ 4|/ 2|/ 0|/ 4|/ 6|/ 4|/ 4|
     +---+---+---+---+---+---+---+---+
     |0 /|0 /|2 /|1 /|2 /|0 /|0 /|0 /|
  17 | / | / | / | / | / | / | / | / | 3
     |/ 6|/ 9|/ 7|/ 5|/ 4|/ 6|/ 9|/ 9|
     +---+---+---+---+---+---+---+---+
     |0 /|0 /|0 /|0 /|0 /|0 /|0 /|0 /|
  24 | / | / | / | / | / | / | / | / | 0
     |/ 0|/ 0|/ 0|/ 0|/ 0|/ 0|/ 0|/ 0|
     +---+---+---+---+---+---+---+---+
       26  15  13  18  17  13  09  00

The diagonal sums 01, 02, 17, 24 (left side, top to bottom) and 26, 15, 13, 18, 17, 13, 09, 00 (bottom, left to right) are then totaled with the appropriate place shifts:

01
002
0017
00024
000026
0000015
00000013
000000018
0000000017
00000000013
000000000009
0000000000000
—————————————
139676498390

= 139,676,498,390
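Since the lattice is a reordering of long multiplication, it can be sketched in Python by filling the cells and then summing the anti-diagonals with carries (names are illustrative):

```python
def lattice_multiply(a: str, b: str) -> int:
    """Lattice multiplication: fill a grid of single-digit products,
    then sum along anti-diagonals with carries, as in the figure above."""
    rows, cols = len(b), len(a)
    diag = [0] * (rows + cols)   # diagonal sums, least significant first
    for i in range(rows):
        for j in range(cols):
            tens, units = divmod(int(b[i]) * int(a[j]), 10)
            # diagonal index = total place value of this cell's product
            k = (rows - 1 - i) + (cols - 1 - j)
            diag[k] += units      # units stay on their diagonal
            diag[k + 1] += tens   # tens slide one diagonal up
    # carry propagation turns diagonal sums into final digits
    total, place, carry = 0, 1, 0
    for s in diag:
        carry, digit = divmod(s + carry, 10)
        total += digit * place
        place *= 10
    return total + carry * place

print(lattice_multiply("23958233", "5830"))  # 139676498390
```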
See main article: Peasant multiplication. The binary method is also known as peasant multiplication, because it has been widely used by people who are classified as peasants and thus have not memorized the multiplication tables required for long multiplication.[5] The algorithm was in use in ancient Egypt.[6] Its main advantages are that it can be taught quickly, requires no memorization, and can be performed using tokens, such as poker chips, if paper and pencil aren't available. The disadvantage is that it takes more steps than long multiplication, so it can be unwieldy for large numbers.
On paper, write down in one column the numbers you get when you repeatedly halve the multiplier, ignoring the remainder; in a column beside it repeatedly double the multiplicand. Cross out each row in which the last digit of the first number is even, and add the remaining numbers in the second column to obtain the product.
This example uses peasant multiplication to multiply 11 by 3 to arrive at a result of 33.
Decimal:        Binary:
11    3         1011       11
 5    6         101       110
 2   12         10       1100
 1   24         1       11000
     ——                ——————
     33                100001
Describing the steps explicitly: 11 and 3 are written down in two columns; 11 is halved repeatedly, giving 5, 2 and 1, while 3 is doubled correspondingly, giving 6, 12 and 24. The row containing 2 and 12 is crossed out, because 2 is even. The remaining numbers in the doubling column, 3, 6 and 24, are added to give the product, 33.
The method works because multiplication is distributive, so:
\begin{align}
3 \times 11 &= 3 \times (1 \times 2^0 + 1 \times 2^1 + 0 \times 2^2 + 1 \times 2^3)\\
&= 3 \times (1 + 2 + 8)\\
&= 3 + 6 + 24\\
&= 33.
\end{align}
A more complicated example, using the figures from the earlier examples (23,958,233 and 5,830):
Decimal:              Binary:
5830  23958233        1011011000110   1011011011001001011011001
2915  47916466        101101100011    10110110110010010110110010
1457  95832932        10110110001     101101101100100101101100100
 728  191665864       1011011000      1011011011001001011011001000
 364  383331728       101101100       10110110110010010110110010000
 182  766663456       10110110        101101101100100101101100100000
  91  1533326912      1011011         1011011011001001011011001000000
  45  3066653824      101101          10110110110010010110110010000000
  22  6133307648      10110           101101101100100101101100100000000
  11  12266615296     1011            1011011011001001011011001000000000
   5  24533230592     101             10110110110010010110110010000000000
   2  49066461184     10              101101101100100101101100100000000000
   1  98132922368     1               1011011011001001011011001000000000000
      ————————————
      139676498390    1022143253354344244353353243222210110 (before carry)
                      10000010000101010111100011100111010110
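The halve-and-double procedure translates directly into Python using bit operations (the function name is illustrative):

```python
def peasant_multiply(multiplier, multiplicand):
    """Russian peasant multiplication: halve one column, double the other,
    and add the doubled values on rows where the halved number is odd."""
    total = 0
    while multiplier:
        if multiplier & 1:        # odd row: keep it ("not crossed out")
            total += multiplicand
        multiplier >>= 1          # halve, discarding the remainder
        multiplicand <<= 1        # double
    return total

print(peasant_multiply(11, 3))            # 33
print(peasant_multiply(5830, 23958233))   # 139676498390
```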
This formula can in some cases be used to make multiplication tasks easier to complete:
\frac{\left(x+y\right)^2}{4} - \frac{\left(x-y\right)^2}{4} = \frac{1}{4}\left(\left(x^2+2xy+y^2\right)-\left(x^2-2xy+y^2\right)\right) = \frac{1}{4}\left(4xy\right) = xy.
In the case where x and y are integers, we have that

(x+y)^2 \equiv (x-y)^2 \pmod 4

because x+y and x-y are either both even or both odd. This means that

\begin{align}
xy &= \tfrac{1}{4}(x+y)^2 - \tfrac{1}{4}(x-y)^2\\
&= \left((x+y)^2 \,\mathrm{div}\, 4\right) - \left((x-y)^2 \,\mathrm{div}\, 4\right),
\end{align}

where div denotes integer division.
Below is a lookup table of quarter squares with the remainder discarded for the digits 0 through 18; this allows for the multiplication of numbers up to 9 × 9.
n       | 0 | 1 | 2 | 3 | 4 | 5 | 6 |  7 |  8 |  9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18
⌊n²/4⌋  | 0 | 0 | 1 | 2 | 4 | 6 | 9 | 12 | 16 | 20 | 25 | 30 | 36 | 42 | 49 | 56 | 64 | 72 | 81
If, for example, you wanted to multiply 9 by 3, you observe that the sum and difference are 12 and 6 respectively. Looking both those values up on the table yields 36 and 9, the difference of which is 27, which is the product of 9 and 3.
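The table-driven procedure can be sketched in Python; the table is exactly the one above, built with integer division:

```python
# Quarter-square table for n = 0..18: floor(n^2 / 4), as in the table above
QUARTER_SQUARES = [n * n // 4 for n in range(19)]

def quarter_square_multiply(x, y):
    """Multiply single-digit x and y (0-9) using only table lookups,
    one addition, and two subtractions."""
    diff = x - y if x >= y else y - x
    return QUARTER_SQUARES[x + y] - QUARTER_SQUARES[diff]

print(quarter_square_multiply(9, 3))  # 27: table[12] - table[6] = 36 - 9
```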
Historically, quarter-square multiplication made use of the floor function; some sources[7] attribute the method to Babylonian mathematics (2000–1600 BC).
Antoine Voisin published a table of quarter squares from 1 to 1000 in 1817 as an aid in multiplication. A larger table of quarter squares from 1 to 100000 was published by Samuel Laundy in 1856, and a table from 1 to 200000 by Joseph Blater in 1888.
Quarter square multipliers were used in analog computers to form an analog signal that was the product of two analog input signals. In this application, the sum and difference of two input voltages are formed using operational amplifiers. The square of each of these is approximated using piecewise linear circuits. Finally the difference of the two squares is formed and scaled by a factor of one fourth using yet another operational amplifier.
In 1980, Everett L. Johnson proposed using the quarter square method in a digital multiplier. To form the product of two 8-bit integers, for example, the digital device forms the sum and difference, looks both quantities up in a table of squares, takes the difference of the results, and divides by four by shifting two bits to the right. For 8-bit integers the table of quarter squares will have 2^9 − 1 = 511 entries (one entry for the full range 0..510 of possible sums, the differences using only the first 256 entries in range 0..255), or 2^9 − 1 = 511 entries (using for negative differences the technique of two's complement and 9-bit masking, which avoids testing the sign of differences), each entry being 16 bits wide (the entry values range from (0²/4) = 0 to (510²/4) = 65025).
The quarter square multiplier technique has benefited 8-bit systems that do not have any support for a hardware multiplier. Charles Putney implemented this for the 6502.[8]
A line of research in theoretical computer science is about the number of single-bit arithmetic operations necessary to multiply two n-bit integers. This is known as the computational complexity of multiplication. Usual algorithms done by hand have asymptotic complexity of O(n^2), but in 1960 Anatoly Karatsuba discovered that better complexity was possible (with the Karatsuba algorithm). Currently, the algorithm with the best computational complexity is a 2019 algorithm of David Harvey and Joris van der Hoeven, which uses the strategies of number-theoretic transforms introduced with the Schönhage–Strassen algorithm to multiply integers using only O(n \log n) operations. This is conjectured to be the best possible algorithm, but lower bounds of \Omega(n \log n) are not known.
See main article: Karatsuba algorithm.
Karatsuba multiplication is an O(n^{\log_2 3}) ≈ O(n^{1.585}) divide-and-conquer algorithm that uses recursion to merge together subcalculations. By rewriting the product formula, it is possible to express the multiplication in terms of smaller subproducts; computing these recursively is what makes the method fast.
Let x and y be represented as n-digit strings in some base B. For any positive integer m less than n, one can write the two given numbers as

x = x_1 B^m + x_0,
y = y_1 B^m + y_0,

where x_0 and y_0 are less than B^m. The product is then
\begin{align}
xy &= (x_1 B^m + x_0)(y_1 B^m + y_0)\\
&= x_1 y_1 B^{2m} + (x_1 y_0 + x_0 y_1) B^m + x_0 y_0\\
&= z_2 B^{2m} + z_1 B^m + z_0,
\end{align}
where

z_2 = x_1 y_1,
z_1 = x_1 y_0 + x_0 y_1,
z_0 = x_0 y_0.
These formulae require four multiplications and were known to Charles Babbage.[10] Karatsuba observed that xy can be computed in only three multiplications, at the cost of a few extra additions. With z_0 and z_2 as before, one can observe that

\begin{align}
z_1 &= x_1 y_0 + x_0 y_1\\
&= x_1 y_0 + x_0 y_1 + x_1 y_1 - x_1 y_1 + x_0 y_0 - x_0 y_0\\
&= x_1 y_0 + x_0 y_0 + x_0 y_1 + x_1 y_1 - x_1 y_1 - x_0 y_0\\
&= (x_1 + x_0) y_0 + (x_0 + x_1) y_1 - x_1 y_1 - x_0 y_0\\
&= (x_1 + x_0)(y_0 + y_1) - x_1 y_1 - x_0 y_0\\
&= (x_1 + x_0)(y_1 + y_0) - z_2 - z_0.
\end{align}
Because of the overhead of recursion, Karatsuba's multiplication is slower than long multiplication for small values of n; typical implementations therefore switch to long multiplication for small values of n.
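A compact Python sketch of this recursion, splitting in base B = 2 via bit operations; the cutoff value is an illustrative choice, and the base case falls back to the built-in product:

```python
def karatsuba(x, y, cutoff=16):
    """Karatsuba multiplication of nonnegative integers: three recursive
    multiplications (z2, z0, and one combined product) instead of four."""
    if x < cutoff or y < cutoff:   # small inputs: fall back to direct multiply
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2   # split point, base B = 2
    mask = (1 << m) - 1
    x1, x0 = x >> m, x & mask      # x = x1*B^m + x0
    y1, y0 = y >> m, y & mask      # y = y1*B^m + y0
    z2 = karatsuba(x1, y1)
    z0 = karatsuba(x0, y0)
    # Karatsuba's trick: z1 from one multiplication plus two subtractions
    z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0
    return (z2 << (2 * m)) + (z1 << m) + z0

print(karatsuba(23958233, 5830))  # 139676498390
```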
By exploring patterns after expansion, one sees the following:
(x_1 B^m + x_0)(y_1 B^m + y_0)(z_1 B^m + z_0)(a_1 B^m + a_0) =
a_1 x_1 y_1 z_1 B^{4m} + a_1 x_1 y_1 z_0 B^{3m} + a_1 x_1 y_0 z_1 B^{3m} + a_1 x_0 y_1 z_1 B^{3m}
+ a_0 x_1 y_1 z_1 B^{3m} + a_1 x_1 y_0 z_0 B^{2m} + a_1 x_0 y_1 z_0 B^{2m} + a_0 x_1 y_1 z_0 B^{2m}
+ a_1 x_0 y_0 z_1 B^{2m} + a_0 x_1 y_0 z_1 B^{2m} + a_0 x_0 y_1 z_1 B^{2m} + a_1 x_0 y_0 z_0 B^{m}
+ a_0 x_1 y_0 z_0 B^{m} + a_0 x_0 y_1 z_0 B^{m} + a_0 x_0 y_0 z_1 B^{m} + a_0 x_0 y_0 z_0
Each summand is associated to a unique binary number from 0 to 2^{N+1}-1; for example, a_1 x_1 y_1 z_1 \longleftrightarrow 1111 and a_1 x_0 y_1 z_0 \longleftrightarrow 1010.
If we express this in fewer terms, we get:

\prod_{j=1}^{N} (x_{j,1} B^m + x_{j,0}) = \sum_{i=1}^{2^{N+1}-1} \left( \prod_{j=1}^{N} x_{j,c(i,j)} \right) B^{m \sum_{j=1}^{N} c(i,j)} = \sum_{j=0}^{N} z_j B^{jm},

where c(i,j) \in \{0,1\}.
z_0 = \prod_{j=1}^{N} x_{j,0}, \qquad z_N = \prod_{j=1}^{N} x_{j,1}, \qquad z_{N-1} = \prod_{j=1}^{N} (x_{j,0} + x_{j,1}) - \sum_{i \ne N-1} z_i.
Karatsuba's algorithm was the first known algorithm for multiplication that is asymptotically faster than long multiplication,[11] and can thus be viewed as the starting point for the theory of fast multiplications.
See main article: Toom–Cook multiplication. Another method of multiplication is called Toom–Cook or Toom-3. The Toom–Cook method splits each number to be multiplied into multiple parts. The Toom–Cook method is one of the generalizations of the Karatsuba method. A three-way Toom–Cook can do a size-3N multiplication for the cost of five size-N multiplications. This accelerates the operation by a factor of 9/5, while the Karatsuba method accelerates it by 4/3.
Although using more and more parts can reduce the time spent on recursive multiplications further, the overhead from additions and digit management also grows. For this reason, the method of Fourier transforms is typically faster for numbers with several thousand digits, and asymptotically faster for even larger numbers.
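A Python sketch of Toom-3, using the evaluation points 0, 1, −1, −2 and ∞ and Bodrato's interpolation sequence (all interior divisions are exact); the base-10 splitting and the cutoff are illustrative choices:

```python
def toom3(x, y):
    """Toom-3: split each factor into three parts, evaluate at five points,
    do five pointwise multiplications, then interpolate the product."""
    if x < 10**6 or y < 10**6:
        return x * y                         # base case: direct multiply
    # split into three limbs in base B = 10^k
    k = (max(len(str(x)), len(str(y))) + 2) // 3
    B = 10 ** k
    x0, x1, x2 = x % B, (x // B) % B, x // B**2
    y0, y1, y2 = y % B, (y // B) % B, y // B**2
    # evaluate both degree-2 polynomials at 0, 1, -1, -2, infinity
    px = [x0, x0 + x1 + x2, x0 - x1 + x2, x0 - 2*x1 + 4*x2, x2]
    py = [y0, y0 + y1 + y2, y0 - y1 + y2, y0 - 2*y1 + 4*y2, y2]
    # five pointwise products (recursing on absolute values, restoring signs)
    r0, r1, r2, r3, r4 = (
        toom3(abs(a), abs(b)) * (1 if (a < 0) == (b < 0) else -1)
        for a, b in zip(px, py)
    )
    # Bodrato's interpolation sequence (exact integer divisions throughout)
    r3 = (r3 - r1) // 3
    r1 = (r1 - r2) // 2
    r2 = r2 - r0
    r3 = (r2 - r3) // 2 + 2 * r4
    r2 = r2 + r1 - r4
    r1 = r1 - r3
    # recombine the product polynomial at B
    return r0 + r1 * B + r2 * B**2 + r3 * B**3 + r4 * B**4

print(toom3(1234567, 7654321))  # 9449772114007
```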
See main article: Schönhage–Strassen algorithm.
Every number in base B can be written as a polynomial:

X = \sum_{i=0}^{N} x_i B^i.
Furthermore, multiplication of two numbers could be thought of as a product of two polynomials:
XY = \left(\sum_{i=0}^{N} x_i B^i\right)\left(\sum_{j=0}^{N} y_j B^j\right).
Because, for B^k, the coefficient of the product is

c_k = \sum_{(i,j)\,:\,i+j=k} a_i b_j = \sum_{i=0}^{k} a_i b_{k-i}.
By using the fft (fast Fourier transform) with the convolution rule, we can get

\hat{f}(a * b) = \hat{f}\left(\sum_{i=0}^{k} a_i b_{k-i}\right) = \hat{f}(a) \bullet \hat{f}(b).

That is, C_k = a_k \bullet b_k, where C_k is the corresponding coefficient in Fourier space.
We have the same coefficient due to linearity under the Fourier transform, and because these polynomials only consist of one unique term per coefficient:

\hat{f}(x^n) = \left(\frac{i}{2\pi}\right)^n \delta^{(n)}

and

\hat{f}(a X(\xi) + b Y(\xi)) = a \hat{X}(\xi) + b \hat{Y}(\xi).

Convolution rule:

\hat{f}(X * Y) = \hat{f}(X) \bullet \hat{f}(Y).
We have thus reduced our convolution problem to a product problem, through the fft. By finding the ifft (polynomial interpolation) for each c_k, one gets the desired coefficients.
It has a time complexity of O(n log(n) log(log(n))).
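As an illustration, the whole pipeline (evaluate with an FFT, multiply pointwise, interpolate with the inverse FFT, then carry) can be sketched in Python over the complex numbers. Note this floating-point variant is a didactic sketch, not the modular transform that Schönhage–Strassen actually uses, and rounding limits it to moderate sizes:

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; invert=True gives the inverse (unscaled)."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = -1 if invert else 1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def fft_multiply(x, y):
    """Multiply integers by convolving their digit sequences with an FFT."""
    a = [int(c) for c in reversed(str(x))]   # base-10 polynomial coefficients
    b = [int(c) for c in reversed(str(y))]
    n = 1
    while n < len(a) + len(b):               # pad to a power of two
        n *= 2
    fa = fft([complex(v) for v in a] + [0j] * (n - len(a)))
    fb = fft([complex(v) for v in b] + [0j] * (n - len(b)))
    # pointwise product in Fourier space = convolution of coefficients
    fc = [u * v for u, v in zip(fa, fb)]
    c = [round(v.real / n) for v in fft(fc, invert=True)]
    # carry propagation turns coefficients back into digits
    result, carry, place = 0, 0, 1
    for coeff in c:
        carry, digit = divmod(coeff + carry, 10)
        result += digit * place
        place *= 10
    return result + carry * place

print(fft_multiply(23958233, 5830))  # 139676498390
```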
The algorithm was invented by Strassen (1968). It was made practical and theoretical guarantees were provided in 1971 by Schönhage and Strassen resulting in the Schönhage–Strassen algorithm.[12]
In 2007 the asymptotic complexity of integer multiplication was improved by the Swiss mathematician Martin Fürer of Pennsylvania State University to n \log(n) \, 2^{\Theta(\log^*(n))} using Fourier transforms over complex numbers,[13] where log* denotes the iterated logarithm. Anindya De, Chandan Saha, Piyush Kurur and Ramprasad Saptharishi gave a similar algorithm using modular arithmetic in 2008 achieving the same running time.[14] In context of the above material, what these latter authors have achieved is to find N much less than 2^{3k} + 1, so that Z/NZ has a (2m)th root of unity. This speeds up computation and reduces the time complexity. However, these latter algorithms are only faster than Schönhage–Strassen for impractically large inputs.
In 2014, Harvey, Joris van der Hoeven and Lecerf[15] gave a new algorithm that achieves a running time of O(n \log n \cdot 2^{3\log^* n}), making explicit the implied constant in the O(\log^* n) exponent. They also proposed a variant of their algorithm which achieves O(n \log n \cdot 2^{2\log^* n}), but whose validity relies on standard conjectures about the distribution of Mersenne primes. In 2016, Covanov and Thomé proposed an integer multiplication algorithm that, based on a conjecture about the distribution of Fermat primes, would also achieve O(n \log n \cdot 2^{2\log^* n}).[16] In 2018, Harvey and van der Hoeven used an approach based on the existence of short lattice vectors guaranteed by Minkowski's theorem to prove an unconditional complexity bound of O(n \log n \cdot 2^{2\log^* n}).[17]
In March 2019, David Harvey and Joris van der Hoeven announced their discovery of an O(n \log n) multiplication algorithm.[18] It was published in the Annals of Mathematics in 2021.[19] Because Schönhage and Strassen predicted that n log(n) is the "best possible" result, Harvey said: "...our work is expected to be the end of the road for this problem, although we don't know yet how to prove this rigorously."[20]
There is a trivial lower bound of Ω(n) for multiplying two n-bit numbers on a single processor; no matching algorithm (on conventional machines, that is on Turing equivalent machines) nor any sharper lower bound is known. Multiplication lies outside of AC^0[p] for any prime p, meaning there is no family of constant-depth, polynomial (or even subexponential) size circuits using AND, OR, NOT, and MOD_p gates that can compute a product. This follows from a constant-depth reduction of MOD_q to multiplication.[21] Lower bounds for multiplication are also known for some classes of branching programs.[22]
Complex multiplication normally involves four multiplications and two additions.
(a+bi)(c+di)=(ac-bd)+(bc+ad)i.
Or, in tabular form:

\begin{array}{c|c|c}
\times & a & bi\\
\hline
c & ac & bci\\
\hline
di & adi & -bd
\end{array}
As observed by Peter Ungar in 1963, one can reduce the number of multiplications to three, using essentially the same computation as Karatsuba's algorithm. The product (a + bi) · (c + di) can be calculated in the following way.
k1 = c · (a + b)
k2 = a · (d − c)
k3 = b · (c + d)
Real part = k1 − k3
Imaginary part = k1 + k2.
This algorithm uses only three multiplications, rather than four, and five additions or subtractions rather than two. If a multiply is more expensive than three adds or subtracts, as when calculating by hand, then there is a gain in speed. On modern computers a multiply and an add can take about the same time so there may be no speed gain. There is a trade-off in that there may be some loss of precision when using floating point.
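A direct transcription of the three-multiplication scheme in Python (integer inputs for exactness; the names k1, k2, k3 follow the formulas above):

```python
def complex_multiply_3(a, b, c, d):
    """(a+bi)(c+di) with three multiplications (Ungar / Karatsuba-style)."""
    k1 = c * (a + b)
    k2 = a * (d - c)
    k3 = b * (c + d)
    return k1 - k3, k1 + k2   # (real part, imaginary part)

re, im = complex_multiply_3(3, 4, 5, 6)
print(re, im)  # -9 38, matching (3+4j)*(5+6j)
```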
For fast Fourier transforms (FFTs) (or any linear transformation) the complex multiplies are by constant coefficients c + di (called twiddle factors in FFTs), in which case two of the additions (d−c and c+d) can be precomputed. Hence, only three multiplies and three adds are required.[23] However, trading off a multiplication for an addition in this way may no longer be beneficial with modern floating-point units.[24]
All the above multiplication algorithms can also be expanded to multiply polynomials. Alternatively the Kronecker substitution technique may be used to convert the problem of multiplying polynomials into a single binary multiplication.[25]
Long multiplication methods can be generalised to allow the multiplication of algebraic formulae:
14ac − 3ab + 2 multiplied by ac − ab + 1:

              14ac      -3ab       2
                ac       -ab       1
  ——————————————————————————————————
  14a^2c^2  -3a^2bc      2ac
  -14a^2bc   3a^2b^2    -2ab
     14ac      -3ab        2
  ———————————————————————————————————————
  14a^2c^2  -17a^2bc    16ac   3a^2b^2   -5ab   +2
  =======================================[26]
As a further example of column based multiplication, consider multiplying 23 long tons (t), 12 hundredweight (cwt) and 2 quarters (qtr) by 47. This example uses avoirdupois measures: 1 t = 20 cwt, 1 cwt = 4 qtr.
     t   cwt   qtr
    23    12     2
                47  ×
  ————————————————————
   141    94    94
   940   470
    29    23
  ————————————————————
  1110   587    94
  ————————————————————
  1110     7     2
  ====================
  Answer: 1110 t 7 cwt 2 qtr
First multiply the quarters by 47, the result 94 is written into the first workspace. Next, multiply cwt 12*47 = (2 + 10)*47 but don't add up the partial results (94, 470) yet. Likewise multiply 23 by 47 yielding (141, 940). The quarters column is totaled and the result placed in the second workspace (a trivial move in this case). 94 quarters is 23 cwt and 2 qtr, so place the 2 in the answer and put the 23 in the next column left. Now add up the three entries in the cwt column giving 587. This is 29 t 7 cwt, so write the 7 into the answer and the 29 in the column to the left. Now add up the tons column. There is no adjustment to make, so the result is just copied down.
The same layout and methods can be used for any traditional measurements and non-decimal currencies such as the old British £sd system.
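The column procedure generalizes to any list of unit sizes; a Python sketch (the function and parameter names are illustrative):

```python
def mixed_radix_multiply(amounts, scalar, unit_sizes):
    """Multiply a mixed-radix quantity by a scalar, then normalize.
    amounts: most-significant unit first, e.g. [t, cwt, qtr];
    unit_sizes: how many of each unit make one of the next, e.g. [20, 4]
    (1 t = 20 cwt, 1 cwt = 4 qtr)."""
    partial = [a * scalar for a in amounts]   # the per-column products
    # carry from the least significant column upward
    for i in range(len(partial) - 1, 0, -1):
        carry, partial[i] = divmod(partial[i], unit_sizes[i - 1])
        partial[i - 1] += carry
    return partial

# 23 t 12 cwt 2 qtr times 47, with 1 t = 20 cwt and 1 cwt = 4 qtr
print(mixed_radix_multiply([23, 12, 2], 47, [20, 4]))  # [1110, 7, 2]
```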