The Miller–Rabin primality test or Rabin–Miller primality test is a probabilistic primality test: an algorithm which determines whether a given number is likely to be prime, similar to the Fermat primality test and the Solovay–Strassen primality test.
It is of historical significance in the search for a polynomial-time deterministic primality test. Its probabilistic variant remains widely used in practice, as one of the simplest and fastest tests known.
Gary L. Miller discovered the test in 1976. Miller's version of the test is deterministic, but its correctness relies on the unproven extended Riemann hypothesis. Michael O. Rabin modified it to obtain an unconditional probabilistic algorithm in 1980.
Similarly to the Fermat and Solovay–Strassen tests, the Miller–Rabin primality test checks whether a specific property, which is known to hold for prime values, holds for the number under testing.
The property is the following. For a given odd integer n > 2, let's write n − 1 as 2^s·d, where s is a positive integer and d is an odd positive integer. Let's consider an integer a, called a base, which is coprime to n. Then, n is said to be a strong probable prime to base a if one of these congruence relations holds:
a^d \equiv 1 \pmod{n}

or

a^{2^r d} \equiv -1 \pmod{n} \quad \text{for some } 0 \le r < s.

(Equivalently, in terms of the remainder operation: a^d \bmod n = 1, or a^{2^r d} \bmod n = n - 1 for some 0 \le r < s.)
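As an illustration of this definition, here is a minimal Python sketch (is_strong_probable_prime is an illustrative name, not part of the original formulation); it assumes n is an odd integer greater than 2 and a is a base in the usual range:

def is_strong_probable_prime(n, a):
    # Decompose n - 1 as 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)              # a^d mod n
    if x == 1 or x == n - 1:      # covers a^d ≡ 1 and the r = 0 case of a^(2^r d) ≡ -1
        return True
    for _ in range(s - 1):        # r = 1, ..., s - 1
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False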
The idea beneath this test is that when n is an odd prime, it passes the test because of two facts:

by Fermat's little theorem, a^{n-1} \equiv 1 \pmod{n};

the only square roots of 1 modulo n are 1 and −1.
Hence, by contraposition, if n is not a strong probable prime to base a, then n is definitely composite, and a is called a witness for the compositeness of n.
However, this property is not an exact characterization of prime numbers. If n is composite, it may nonetheless be a strong probable prime to base a, in which case it is called a strong pseudoprime, and a is a strong liar.
No composite number is a strong pseudoprime to all bases at the same time (contrary to the Fermat primality test for which Fermat pseudoprimes to all bases exist: the Carmichael numbers). However no simple way of finding a witness is known. A naïve solution is to try all possible bases, which yields an inefficient deterministic algorithm. The Miller test is a more efficient variant of this (see section Miller test below).
Another solution is to pick a base at random. This yields a fast probabilistic test. When n is composite, most bases are witnesses, so the test will detect n as composite with a reasonably high probability (see section Accuracy below). We can quickly reduce the probability of a false positive to an arbitrarily small rate, by combining the outcomes of as many independently chosen bases as necessary to achieve the said rate. This is the Miller–Rabin test. There are, however, diminishing returns in trying many bases, because if n is a pseudoprime to some base, then it seems more likely to be a pseudoprime to another base.
Note that a^d ≡ 1 (mod n) holds trivially for a ≡ 1 (mod n), because the congruence relation is compatible with exponentiation. And a^{2^0 d} = a^d ≡ −1 (mod n) holds trivially for a ≡ −1 (mod n) since d is odd, for the same reason. That is why random bases a are usually chosen in the interval 1 < a < n − 1.
For testing arbitrarily large n, choosing bases at random is essential, as we don't know the distribution of witnesses and strong liars among the numbers 2, 3, ..., n − 2.
However, a pre-selected set of a few small bases guarantees the identification of all composites up to a pre-computed maximum. This maximum is generally quite large compared to the bases. This gives very fast deterministic tests for small enough n (see section Testing against small sets of bases below).
Here is a proof that, if n is a prime, then the only square roots of 1 modulo n are 1 and −1: if x² ≡ 1 (mod n), then n divides x² − 1 = (x − 1)(x + 1); since n is prime, it must divide one of the two factors, hence x ≡ 1 or x ≡ −1 (mod n).
Here is a proof that, if n is an odd prime, then it is a strong probable prime to base a: by Fermat's little theorem, a^{2^s d} = a^{n−1} ≡ 1 (mod n). Consider the sequence a^{2^s d}, a^{2^{s−1} d}, ..., a^{2d}, a^d; each term is a square root of the preceding one. If every term is 1, then in particular a^d ≡ 1 (mod n); otherwise, the first term that differs from 1 is a square root of 1 other than 1, hence equals −1, so a^{2^r d} ≡ −1 (mod n) for some 0 ≤ r < s.
Suppose we wish to determine whether n = 221 is prime. We write n − 1 = 220 as 2²·55, so that s = 2 and d = 55. We randomly select a base a in the range 2 ≤ a ≤ n − 2; say a = 174. We compute:

a^{2^0 d} mod n = 174^{55} mod 221 = 47. Since 47 ≠ 1 and 47 ≠ n − 1, we continue.

a^{2^1 d} mod n = 174^{110} mod 221 = 220 = n − 1.

Since 220 ≡ −1 (mod n), either 221 is prime, or 174 is a strong liar for 221. We try another random a, this time choosing a = 137:

a^{2^0 d} mod n = 137^{55} mod 221 = 188. Since 188 ≠ 1 and 188 ≠ n − 1, we continue.

a^{2^1 d} mod n = 137^{110} mod 221 = 205 ≠ n − 1.
Hence 137 is a witness for the compositeness of 221, and 174 was in fact a strong liar. Note that this tells us nothing about the factors of 221 (which are 13 and 17). However, the example with 341 in a later section shows how these calculations can sometimes produce a factor of n.
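The computations above can be reproduced with Python's built-in modular exponentiation, pow (a small sketch of this example, not part of the original presentation):

n = 221                  # n - 1 = 220 = 2**2 * 55, so s = 2 and d = 55
print(pow(174, 55, n))   # 47: neither 1 nor n - 1, so continue squaring
print(pow(174, 110, n))  # 220 = n - 1, so 221 passes the test to base 174
print(pow(137, 55, n))   # 188: neither 1 nor n - 1, so continue squaring
print(pow(137, 110, n))  # 205 ≠ n - 1, so 137 is a witness and 221 is composite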
For a practical guide to choosing the value of a see Testing against small sets of bases.
The algorithm can be written in pseudocode as follows. The parameter k determines the accuracy of the test. The greater the number of rounds, the more accurate the result.
Input #1: n > 2, an odd integer to be tested for primality
Input #2: k, the number of rounds of testing to perform
Output: “composite” if n is found to be composite, “probably prime” otherwise

let s > 0 and d odd > 0 such that n − 1 = 2^s·d  # by factoring out powers of 2 from n − 1
repeat k times:
    a ← random(2, n − 2)  # n is always a probable prime to base 1 and n − 1
    x ← a^d mod n
    repeat s times:
        y ← x^2 mod n
        if y = 1 and x ≠ 1 and x ≠ n − 1 then  # nontrivial square root of 1 modulo n
            return “composite”
        x ← y
    if y ≠ 1 then
        return “composite”
return “probably prime”
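A direct Python transcription of this pseudocode might look as follows (a sketch; miller_rabin is an illustrative name, and n is assumed to be an odd integer greater than 2):

import random

def miller_rabin(n, k):
    # Factor n - 1 as 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)   # random base in [2, n - 2]
        x = pow(a, d, n)
        for _ in range(s):
            y = pow(x, 2, n)
            if y == 1 and x != 1 and x != n - 1:
                return "composite"       # nontrivial square root of 1 modulo n
            x = y
        if y != 1:
            return "composite"
    return "probably prime"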
Using repeated squaring, the running time of this algorithm is O(k log³ n), where n is the number tested for primality, and k is the number of rounds performed; thus this is an efficient, polynomial-time algorithm. FFT-based multiplication (Harvey–Hoeven algorithm) can decrease the running time to O(k log² n log log n) = Õ(k log² n).
The error made by the primality test is measured by the probability that a composite number is declared probably prime. The more bases a are tried, the better the accuracy of the test. It can be shown that if n is composite, then at most 1/4 of the bases a are strong liars for n. As a consequence, if n is composite then running k iterations of the Miller–Rabin test will declare n probably prime with a probability at most 4^−k.
This is an improvement over the Solovay–Strassen test, whose worst-case error bound is 2^−k. Moreover, the Miller–Rabin test is strictly stronger than the Solovay–Strassen test in the sense that for every composite n, the set of strong liars for n is a subset of the set of Euler liars for n, and for many n, the subset is proper.
In addition, for large values of n, the probability for a composite number to be declared probably prime is often significantly smaller than 4^−k. For instance, for most numbers n, this probability is bounded by 8^−k; the proportion of numbers n which invalidate this upper bound vanishes as we consider larger values of n. Hence the average case has a much better accuracy than 4^−k, a fact which can be exploited for generating probable primes (see below). However, such improved error bounds should not be relied upon to verify primes whose probability distribution is not controlled, since a cryptographic adversary might send a carefully chosen pseudoprime in order to defeat the primality test. In such contexts, only the worst-case error bound of 4^−k can be relied upon.
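As a small empirical illustration of the bound on strong liars stated above, the bases of a tiny composite can simply be enumerated (a brute-force Python sketch; count_strong_liars is a hypothetical helper using the same strong probable prime check as earlier):

def count_strong_liars(n):
    # Count bases a in [2, n - 2] to which the odd composite n is a strong probable prime.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    def is_liar(a):
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            return True
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                return True
        return False
    return sum(is_liar(a) for a in range(2, n - 1))

print(count_strong_liars(221))   # prints the number of strong liars, far below (221 - 1) / 4 = 55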
The above error measure is the probability for a composite number to be declared as a strong probable prime after k rounds of testing; in mathematical words, it is the conditional probability
\Pr(MR_k \mid \lnot P),

where P is the event that the number under test is prime, and MR_k is the event that it passes the Miller–Rabin test with k rounds. We are often interested instead in the inverse conditional probability \Pr(\lnot P \mid MR_k): the probability that a number which has been declared a strong probable prime is in fact composite. These two probabilities are related by Bayes' law:

\begin{align}
\Pr(\lnot P \mid MR_k) &= \frac{\Pr(\lnot P \land MR_k)}{\Pr(\lnot P \land MR_k) + \Pr(P \land MR_k)} \\
&= \frac{1}{1 + \frac{\Pr(MR_k \mid P)}{\Pr(MR_k \mid \lnot P)} \cdot \frac{\Pr(P)}{\Pr(\lnot P)}} \\
&= \frac{1}{1 + \frac{1}{\Pr(MR_k \mid \lnot P)} \cdot \frac{\Pr(P)}{1 - \Pr(P)}}
\end{align}
In the last equation, we simplified the expression using the fact that all prime numbers are correctly reported as strong probable primes (the test has no false negative). By dropping the left part of the denominator, we derive a simple upper bound:
\Pr(\lnot P \mid MR_k) < \Pr(MR_k \mid \lnot P) \left(\tfrac{1}{\Pr(P)} - 1\right)
Hence this conditional probability is related not only to the error measure discussed above — which is bounded by 4−k — but also to the probability distribution of the input number. In the general case, as said earlier, this distribution is controlled by a cryptographic adversary, thus unknown, so we cannot deduce much about
\Pr(\lnot P \mid MR_k).
Caldwell points out that strong probable prime tests to different bases sometimes provide an additional primality test. Just as the strong test checks for the existence of more than two square roots of 1 modulo n, two such tests can sometimes check for the existence of more than two square roots of −1.
Suppose, in the course of our probable prime tests, we come across two bases a and a′ for which

a^{2^r d} \equiv a'^{2^{r'} d} \equiv -1 \pmod{n}.

If n is prime, then −1 has only two square roots modulo n, so we must have

a^{2^{r-1} d} \equiv \pm\, a'^{2^{r'-1} d} \pmod{n};

if this congruence fails to hold, then n is composite, because we have exhibited more than two square roots of −1 modulo n. This extra check only applies when n ≡ 1 (mod 4) and we pass probable prime tests with two or more bases a such that a^d ≢ ±1 (mod n), but it is an inexpensive addition to the basic Miller–Rabin test.
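One way to implement this extra check is to record the last value seen before −1 appears in the squaring sequence. The following Python sketch (with illustrative names) returns that square root of −1, or None when none is exposed:

def square_root_of_minus_one(n, a):
    # If the strong test to base a succeeds via a^(2^r d) ≡ -1 with r ≥ 1,
    # return a^(2^(r-1) d) mod n, which is a square root of -1 modulo n.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return None                      # test passes, but no square root of -1 is exposed
    for _ in range(s - 1):
        x, previous = pow(x, 2, n), x
        if x == n - 1:
            return previous              # previous**2 ≡ -1 (mod n)
    return None                          # base a is a witness; n is composite anyway

# If two bases yield roots r1 and r2 with r1 ≢ ±r2 (mod n), then n is composite.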
The Miller–Rabin algorithm can be made deterministic by trying all possible values of a below a certain limit. Taking n as the limit would imply O(n) trials, hence the running time would be exponential with respect to the size log n of the input. To improve the running time, the challenge is then to lower the limit as much as possible while keeping the test reliable.
If the tested number n is composite, the strong liars a coprime to n are contained in a proper subgroup of the group (Z/nZ)*, which means that if we test all a from a set which generates (Z/nZ)*, one of them must lie outside the said subgroup, hence must be a witness for the compositeness of n. Assuming the truth of the extended Riemann hypothesis (ERH), it is known that the group is generated by its elements smaller than O((ln n)²), which was already noted by Miller. The constant involved in the big O notation was reduced to 2 by Eric Bach. This leads to the following primality testing algorithm, known as the Miller test, which is deterministic assuming the ERH:
Input: n > 2, an odd integer to be tested for primality
Output: “composite” if n is composite, “prime” otherwise

let s > 0 and d odd > 0 such that n − 1 = 2^s·d  # by factoring out powers of 2 from n − 1
for all a in the range [2, min(n − 2, ⌊2(ln n)²⌋)]:
    x ← a^d mod n
    repeat s times:
        y ← x^2 mod n
        if y = 1 and x ≠ 1 and x ≠ n − 1 then  # nontrivial square root of 1 modulo n
            return “composite”
        x ← y
    if y ≠ 1 then
        return “composite”
return “prime”
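Under the same assumption, a Python sketch of the Miller test could look like this (miller_test is an illustrative name; n is assumed to be an odd integer greater than 2):

import math

def miller_test(n):
    # Deterministic under the extended Riemann hypothesis.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in range(2, min(n - 2, math.floor(2 * math.log(n) ** 2)) + 1):
        x = pow(a, d, n)
        for _ in range(s):
            y = pow(x, 2, n)
            if y == 1 and x != 1 and x != n - 1:
                return "composite"       # nontrivial square root of 1 modulo n
            x = y
        if y != 1:
            return "composite"
    return "prime"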
The full power of the generalized Riemann hypothesis is not needed to ensure the correctness of the test: as we deal with subgroups of even index, it suffices to assume the validity of GRH for quadratic Dirichlet characters.
The running time of the algorithm is, in the soft-O notation, Õ((log n)⁴) (using FFT-based multiplication).
The Miller test is not used in practice. For most purposes, proper use of the probabilistic Miller–Rabin test or the Baillie–PSW primality test gives sufficient confidence while running much faster. It is also slower in practice than commonly used proof methods such as APR-CL and ECPP which give results that do not rely on unproven assumptions. For theoretical purposes requiring a deterministic polynomial time algorithm, it was superseded by the AKS primality test, which also does not rely on unproven assumptions.
When the number n to be tested is small, trying all bases a < 2(ln n)² is not necessary, as much smaller sets of potential witnesses are known to suffice. For example, Pomerance, Selfridge, Wagstaff[1] and Jaeschke have verified a number of such results; for instance, if n < 2,047, it is enough to test a = 2, and if n < 3,215,031,751, it is enough to test a = 2, 3, 5 and 7.
Using the 2010 work of Feitsma and Galway[2] enumerating all base-2 pseudoprimes up to 2^64, this was extended, with the first result later shown using different methods in Jiang and Deng:[3] if n < 3,825,123,056,546,413,051, it is enough to test a = 2, 3, 5, 7, 11, 13, 17, 19 and 23; and if n < 2^64, it is enough to test a = 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31 and 37.
Sorenson and Webster[4] verify the above and calculate precise results beyond these 64-bit bounds.
Other criteria of this sort, often more efficient (fewer bases required) than those shown above, exist.[5] [6] [7] They give very fast deterministic primality tests for numbers in the appropriate range, without any assumptions.
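For example, combining the 2^64 result above with trial division by the bases themselves gives a fully deterministic test for 64-bit integers; here is a Python sketch (is_prime_64 is an illustrative name):

def is_prime_64(n):
    # Deterministic for all n < 2**64, using the first 12 primes as bases.
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n in bases:
        return True
    if any(n % p == 0 for p in bases):
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a is a witness for the compositeness of n
    return True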
There is a small list of potential witnesses for every possible input size (at most b values for b-bit numbers). However, no finite set of bases is sufficient for all composite numbers. Alford, Granville, and Pomerance have shown that there exist infinitely many composite numbers n whose smallest compositeness witness is at least (ln n)^{1/(3 ln ln ln n)}. They also argue heuristically that the smallest number w such that every composite number below n has a compositeness witness less than w should be of order Θ(log n log log n).
By inserting greatest common divisor calculations into the above algorithm, we can sometimes obtain a factor of n instead of merely determining that n is composite. This occurs for example when n is a probable prime to base a but not a strong probable prime to base a.[8]
If x is a nontrivial square root of 1 modulo n, then x² ≡ 1 (mod n) means that n divides x² − 1 = (x − 1)(x + 1), while x ≢ ±1 (mod n) means that n divides neither factor. From this we deduce that A = gcd(x − 1, n) and B = gcd(x + 1, n) are nontrivial (not necessarily prime) factors of n (in fact, since n is odd, these factors are coprime and n = AB). Hence, if factoring is a goal, these gcd calculations can be inserted into the algorithm at little additional computational cost. This leads to the following pseudocode, where the added or changed code is marked with a comment:
Input #1: n > 2, an odd integer to be tested for primality
Input #2: k, the number of rounds of testing to perform
Output: (“multiple of”, m) if a nontrivial factor m of n is found, “composite” if n is otherwise found to be composite, “probably prime” otherwise

let s > 0 and d odd > 0 such that n − 1 = 2^s·d  # by factoring out powers of 2 from n − 1
repeat k times:
    a ← random(2, n − 2)  # n is always a probable prime to base 1 and n − 1
    x ← a^d mod n
    repeat s times:
        y ← x^2 mod n
        if y = 1 and x ≠ 1 and x ≠ n − 1 then  # nontrivial square root of 1 modulo n
            return (“multiple of”, gcd(x − 1, n))  # changed: output a factor instead of “composite”
        x ← y
    if y ≠ 1 then
        return “composite”
return “probably prime”
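The same variant can be sketched in Python (illustrative names; it reuses the standard math.gcd and returns a pair when a factor is found):

import random
from math import gcd

def miller_rabin_with_factor(n, k):
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        for _ in range(s):
            y = pow(x, 2, n)
            if y == 1 and x != 1 and x != n - 1:
                # x is a nontrivial square root of 1, so gcd(x - 1, n) is a proper factor.
                return ("multiple of", gcd(x - 1, n))
            x = y
        if y != 1:
            return "composite"
    return "probably prime"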
This is not a probabilistic factorization algorithm, because it is only able to find factors for numbers n which are pseudoprime to base a (in other words, for numbers n such that a^{n−1} ≡ 1 (mod n)). For other numbers, the algorithm only returns “composite” with no further information.
For example, consider n = 341 and a = 2. We have n − 1 = 340 = 2²·85. Then 2^85 mod 341 = 32 and 32² mod 341 = 1. This tells us that n is a pseudoprime to base 2, but not a strong pseudoprime to base 2. By computing a gcd at this stage, we find a factor of 341: gcd(32 − 1, 341) = 31. Indeed, 341 = 11·31.
The same technique can be applied to the square roots of any other value, in particular to the square roots of −1 mentioned above. If two (successful) strong probable prime tests find a^{2^{r−1} d} ≡ x (mod n) and a'^{2^{r'−1} d} ≡ y (mod n) with x ≢ ±y (mod n), then gcd(x − y, n) and gcd(x + y, n) are nontrivial factors of n.[5]
For example, n = 46856248255981 is a strong pseudoprime to bases 2 and 7, but in the course of performing the tests we find

2^{(n-1)/2} \equiv 7^{(n-1)/2} \equiv -1 \pmod{n},

2^{(n-1)/4} \equiv 34456063004337 \pmod{n}, and

7^{(n-1)/4} \equiv 21307242304265 \pmod{n}.

These two square roots of −1 are not congruent to ± each other, so gcd(34456063004337 − 21307242304265, n) = 4840261 is a nontrivial factor of n; indeed, n = 4840261 · 9680521.
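Taking the value of n and the two residues quoted above, the factor can be recovered with a single gcd computation (a small Python check of this example):

from math import gcd

n = 46856248255981
x, y = 34456063004337, 21307242304265   # the square roots of -1 found with bases 2 and 7
print(gcd(x - y, n))                    # 4840261, a nontrivial factor of n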
The Miller–Rabin test can be used to generate strong probable primes, simply by drawing integers at random until one passes the test. This algorithm terminates almost surely (since at each iteration there is a chance to draw a prime number). The pseudocode for generating b‐bit strong probable primes (with the most significant bit set) is as follows:
Input #1: b, the number of bits of the result
Input #2: k, the number of rounds of testing to perform
Output: a strong probable prime n

while True:
    pick a random odd integer n in the range [2^(b−1), 2^b − 1]
    if the Miller–Rabin test with inputs n and k returns “probably prime” then
        return n
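A Python sketch of this generator, reusing the miller_rabin function sketched earlier (illustrative names; b is assumed to be at least 2):

import random

def generate_probable_prime(b, k):
    while True:
        # Random odd integer with exactly b bits, i.e. in [2^(b-1), 2^b - 1].
        n = random.randrange(2 ** (b - 1) + 1, 2 ** b, 2)
        if miller_rabin(n, k) == "probably prime":
            return n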
Of course the worst-case running time is infinite, since the outer loop may never terminate, but that happens with probability zero. As per the geometric distribution, the expected number of draws is
\tfrac{1}{\Pr(MR_k)}.
As any prime number passes the test, the probability of being prime gives a coarse lower bound to the probability of passing the test. If we draw odd integers uniformly in the range [2^(b−1), 2^b − 1], then we get:
\Pr(MR_k) > \Pr(P) = \frac{\pi\left(2^b\right) - \pi\left(2^{b-1}\right)}{2^{b-2}}
where π is the prime-counting function. Using an asymptotic expansion of π (an extension of the prime number theorem), we can approximate this probability when b grows towards infinity. We find:
\Pr(P) = \tfrac{2}{\ln 2}\, b^{-1} + \mathcal{O}\left(b^{-3}\right)

and

\tfrac{1}{\Pr(P)} = \tfrac{\ln 2}{2}\, b + \mathcal{O}\left(b^{-1}\right)
Hence we can expect the generator to run no more Miller–Rabin tests than a number proportional to b. Taking into account the worst-case complexity of each Miller–Rabin test (see earlier), the expected running time of the generator with inputs b and k is then bounded by O(k b⁴) (or Õ(k b³) using FFT-based multiplication).
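For example, with b = 1024 the leading term of this estimate predicts at most roughly (ln 2 / 2) · 1024 ≈ 355 draws on average before a probable prime is found (a quick check of the arithmetic):

import math

b = 1024
print(math.log(2) / 2 * b)   # ≈ 354.9: leading-order bound on the expected number of draws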
The error measure of this generator is the probability that it outputs a composite number.
Using the relation between conditional probabilities (shown in an earlier section) and the asymptotic behavior of \Pr(P) (shown just above), this error measure can be given a coarse upper bound:

\Pr(\lnot P \mid MR_k) < \Pr(MR_k \mid \lnot P) \left(\tfrac{1}{\Pr(P)} - 1\right) \leq 4^{-k} \left(\tfrac{\ln 2}{2}\, b - 1 + \mathcal{O}\left(b^{-1}\right)\right).
Hence, for large enough b, this error measure is less than
\tfrac{\ln 2}{2}\, 4^{-k}\, b.
Using the fact that the Miller–Rabin test itself often has an error bound much smaller than 4^−k (see earlier), Damgård, Landrock and Pomerance derived several error bounds for the generator, with various classes of parameters b and k. These error bounds allow an implementor to choose a reasonable k for a desired accuracy.

One of these error bounds is 4^−k, which holds for all b ≥ 2 (the authors only showed it for b ≥ 51, while Ronald Burthe Jr. completed the proof with the remaining values 2 ≤ b ≤ 50). Again this simple bound can be improved for large values of b. For instance, another bound derived by the same authors is:
\left(\tfrac{1}{7}\, b^{15/4}\, 2^{-b/2}\right) 4^{-k}