The AKS primality test (also known as the Agrawal–Kayal–Saxena primality test and the cyclotomic AKS test) is a deterministic primality-proving algorithm created and published by Manindra Agrawal, Neeraj Kayal, and Nitin Saxena, computer scientists at the Indian Institute of Technology Kanpur, on August 6, 2002, in an article titled "PRIMES is in P".[1] The algorithm was the first that can determine in polynomial time whether a given number is prime or composite, without relying on mathematical conjectures such as the generalized Riemann hypothesis. The proof is also notable for not relying on the field of analysis.[2] In 2006 the authors received both the Gödel Prize and the Fulkerson Prize for their work.
AKS is the first primality-proving algorithm to be simultaneously general, polynomial-time, deterministic, and unconditionally correct. Previous algorithms had been developed for centuries and achieved three of these properties at most, but not all four.
While the algorithm is of immense theoretical importance, it is not used in practice, rendering it a galactic algorithm. For 64-bit inputs, the Baillie–PSW test is deterministic and runs many orders of magnitude faster. For larger inputs, the performance of the (also unconditionally correct) ECPP and APR tests is far superior to AKS. Additionally, ECPP can output a primality certificate that allows independent and rapid verification of the results, which is not possible with the AKS algorithm.
The AKS primality test is based upon the following theorem: Given an integer $n \ge 2$ and an integer $a$ coprime to $n$, $n$ is prime if and only if the polynomial congruence relation

$(X + a)^n \equiv X^n + a \pmod{n} \qquad (1)$

holds within the polynomial ring $(\mathbb{Z}/n\mathbb{Z})[X]$. Note that $X$ denotes a formal indeterminate.
This theorem is a generalization to polynomials of Fermat's little theorem. In one direction it can easily be proven using the binomial theorem together with the following property of the binomial coefficient:
$\binom{n}{k} \equiv 0 \pmod{n}$ for all $0 < k < n$ if and only if $n$ is prime.
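As a quick illustration, here is a minimal Python sketch (hypothetical, not part of the paper) that checks relation (1) directly by expanding $(X + a)^n$ coefficient by coefficient with the binomial theorem; the cost of this brute-force expansion motivates the rest of the algorithm:

```python
# Hypothetical illustration: brute-force check of (X + a)^n ≡ X^n + a (mod n)
# by expanding every coefficient of (X + a)^n with the binomial theorem.
from math import comb

def congruence_holds(n: int, a: int) -> bool:
    """Check (X + a)^n ≡ X^n + a (mod n) in (Z/nZ)[X], coefficient by coefficient."""
    # Coefficient of X^k in (X + a)^n is C(n, k) * a^(n-k); in X^n + a it is
    # 1 for k = n, a mod n for k = 0, and 0 otherwise.
    for k in range(n + 1):
        lhs = comb(n, k) * pow(a, n - k, n) % n
        rhs = 1 if k == n else (a % n if k == 0 else 0)
        if lhs != rhs:
            return False
    return True

# Primes satisfy the relation; composites fail it (for a coprime to n):
print([n for n in range(2, 30) if congruence_holds(n, 1)])
# -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```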
While the relation (1) constitutes a primality test in itself, verifying it takes exponential time: the brute force approach would require the expansion of the $(X + a)^n$ polynomial and a reduction $\pmod{n}$ of the resulting $n + 1$ coefficients.
The congruence is an equality in the polynomial ring $(\mathbb{Z}/n\mathbb{Z})[X]$. Evaluating in a quotient ring of $(\mathbb{Z}/n\mathbb{Z})[X]$ creates an upper bound for the degree of the polynomials involved. AKS evaluates the equality in $(\mathbb{Z}/n\mathbb{Z})[X]/(X^r - 1)$, making the computational complexity dependent on the size of $r$. For clarity, this is expressed as the congruence

$(X + a)^n \equiv X^n + a \pmod{X^r - 1, n} \qquad (2)$

which is the same as

$(X + a)^n - (X^n + a) = (X^r - 1)g + nf \qquad (3)$

for some polynomials $f$ and $g$.
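For illustration, a minimal Python sketch (the helper names are invented here) of how congruence (2) can be checked by square-and-multiply in the quotient ring, keeping every intermediate polynomial at degree below $r$:

```python
# Hypothetical sketch: verify (X + a)^n ≡ X^n + a (mod X^r - 1, n).
# Polynomials are lists of r coefficients; exponents wrap around mod r
# because X^r = 1 in the quotient ring.

def polymul(p, q, n, r):
    """Multiply two polynomials modulo (X^r - 1, n)."""
    out = [0] * r
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                out[(i + j) % r] = (out[(i + j) % r] + pi * qj) % n
    return out

def congruence2_holds(n, a, r):
    """Check congruence (2) by binary exponentiation of (X + a); assumes r >= 2."""
    result = [1] + [0] * (r - 1)        # constant polynomial 1
    base = [a % n, 1] + [0] * (r - 2)   # the polynomial X + a
    e = n
    while e:                            # square-and-multiply
        if e & 1:
            result = polymul(result, base, n, r)
        base = polymul(base, base, n, r)
        e >>= 1
    rhs = [0] * r                       # X^n + a reduced mod (X^r - 1, n)
    rhs[0] = a % n
    rhs[n % r] = (rhs[n % r] + 1) % n
    return result == rhs

print(congruence2_holds(13, 2, 5))   # True: primes satisfy (2) for every a and r
print(congruence2_holds(15, 2, 5))   # False: this choice of a and r exposes composite 15
```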
Note that all primes satisfy this relation (choosing $g = 0$ in (3) gives (1), which holds for $n$ prime). This congruence can be checked in polynomial time when $r$ is polynomial to the digits of $n$. The AKS algorithm evaluates this congruence for a large set of $a$ values, whose size is polynomial to the digits of $n$. The proof of validity of the AKS algorithm shows that one can find an $r$ and a set of $a$ values with the above properties such that if the congruences hold, then $n$ is a power of a prime.
In the first version of the above-cited paper, the authors proved the asymptotic time complexity of the algorithm to be $\tilde{O}(\log(n)^{12})$. However, this upper bound was rather loose; a widely-held conjecture about the distribution of the Sophie Germain primes would, if true, immediately cut the worst case down to $\tilde{O}(\log(n)^{6})$.
In the months following the discovery, new variants appeared (Lenstra 2002, Pomerance 2002, Berrizbeitia 2002, Cheng 2003, Bernstein 2003a/b, Lenstra and Pomerance 2003), which improved the speed of computation greatly. Owing to the existence of the many variants, Crandall and Papadopoulos refer to the "AKS-class" of algorithms in their scientific paper "On the implementation of AKS-class primality tests", published in March 2003.
In response to some of these variants, and to other feedback, the paper "PRIMES is in P" was updated with a new formulation of the AKS algorithm and of its proof of correctness. (This version was eventually published in Annals of Mathematics.) While the basic idea remained the same, r was chosen in a new manner, and the proof of correctness was more coherently organized. The new proof relied almost exclusively on the behavior of cyclotomic polynomials over finite fields. The new upper bound on time complexity was
$\tilde{O}(\log(n)^{10.5})$, later reduced using additional results from sieve theory to $\tilde{O}(\log(n)^{7.5})$.
In 2005, Pomerance and Lenstra demonstrated a variant of AKS that runs in $\tilde{O}(\log(n)^{6})$ operations, leading to another updated version of the paper. Agrawal, Kayal and Saxena proposed a variant which would run in $\tilde{O}(\log(n)^{3})$ if Agrawal's conjecture were true; however, a heuristic argument by Pomerance and Lenstra suggested that it is probably false.
The algorithm is as follows:
Input: integer n > 1.

1. Check if n is a perfect power: if n = a^b for integers a > 1 and b > 1, then output composite.
2. Find the smallest r such that ord_r(n) > (log2 n)^2.
3. If 1 < gcd(a, n) < n for some a ≤ r, then output composite.
4. If n ≤ r, then output prime.
5. For a = 1 to ⌊√φ(r) · log2(n)⌋ do:
   if (X + a)^n ≠ X^n + a (mod X^r − 1, n), then output composite.
6. Output prime.
Here ord_r(n) is the multiplicative order of n modulo r, log2 is the binary logarithm, and $\varphi(r)$ is Euler's totient function of r.
Step 3 is shown in the paper as checking 1 < gcd(a, n) < n for all a ≤ r. It can be seen this is equivalent to trial division up to r, which can be done very efficiently without using gcd. Similarly, the comparison in step 4 can be replaced by having the trial division return prime once it has checked all values up to and including $\left\lfloor \sqrt{n} \right\rfloor$.
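The following self-contained Python sketch (an illustration under the assumptions above, not a reference implementation; the helper names are invented here) follows the six steps directly, using the trial-division shortcut for step 3:

```python
import math

def is_perfect_power(n):
    """Step 1: is n = a^b for some integers a > 1, b > 1?"""
    for b in range(2, n.bit_length() + 1):
        lo, hi = 1, 1 << (n.bit_length() // b + 1)
        while lo < hi:                           # integer binary search for a
            mid = (lo + hi + 1) // 2
            if mid ** b <= n:
                lo = mid
            else:
                hi = mid - 1
        if lo > 1 and lo ** b == n:
            return True
    return False

def multiplicative_order(n, r):
    """ord_r(n): least k > 0 with n^k ≡ 1 (mod r), or 0 if gcd(n, r) > 1."""
    if math.gcd(n, r) != 1:
        return 0
    k, x = 1, n % r
    while x != 1:
        x = x * n % r
        k += 1
    return k

def totient(r):
    """Euler's totient function of r, by trial factorization (r is small)."""
    result, m, p = r, r, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def polymul(p, q, n, r):
    """Multiply two polynomials modulo (X^r - 1, n)."""
    out = [0] * r
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                out[(i + j) % r] = (out[(i + j) % r] + pi * qj) % n
    return out

def aks_is_prime(n):
    if n < 2:
        return False
    if is_perfect_power(n):                      # step 1
        return False
    threshold = math.log2(n) ** 2                # step 2 (float is fine for a sketch)
    r = 2
    while multiplicative_order(n, r) <= threshold:
        r += 1
    for a in range(2, min(r, n - 1) + 1):        # step 3, as trial division
        if n % a == 0:
            return False
    if n <= r:                                   # step 4
        return True
    bound = int(math.sqrt(totient(r)) * math.log2(n))
    for a in range(1, bound + 1):                # step 5
        result = [1] + [0] * (r - 1)             # (X + a)^n by square-and-multiply
        base = [a % n, 1] + [0] * (r - 2)
        e = n
        while e:
            if e & 1:
                result = polymul(result, base, n, r)
            base = polymul(base, base, n, r)
            e >>= 1
        rhs = [0] * r                            # X^n + a mod (X^r - 1, n)
        rhs[0] = a % n
        rhs[n % r] = (rhs[n % r] + 1) % n
        if result != rhs:
            return False
    return True                                  # step 6

print([n for n in range(2, 100) if aks_is_prime(n)])  # the primes below 100
```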
Once beyond very small inputs, step 5 dominates the time taken. The essential reduction in complexity (from exponential to polynomial) is achieved by performing all calculations in the finite ring

$R = (\mathbb{Z}/n\mathbb{Z})[X]/(X^r - 1),$

consisting of $n^r$ elements. This ring contains only the $r$ monomials $\{X^0, X^1, \ldots, X^{r-1}\}$, and the coefficients lie in $\mathbb{Z}/n\mathbb{Z}$, which has $n$ elements, all of them codable within $\log_2(n)$ bits.
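As a small illustration (with hypothetical example values), an element of $R$ is just a vector of $r$ residues modulo $n$, and multiplying by $X$ rotates that vector, since $X^r = 1$ in $R$:

```python
# Elements of R = (Z/nZ)[X]/(X^r - 1) stored as r coefficients modulo n.
n, r = 31, 5

def mul_by_X(coeffs):
    """Multiplying by X shifts every exponent up by one, wrapping X^r back to X^0."""
    return [coeffs[-1]] + coeffs[:-1]

p = [0, 0, 0, 0, 1]    # the monomial X^4
print(mul_by_X(p))     # [1, 0, 0, 0, 0]: X * X^4 = X^5 = 1 in R
```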
Most later improvements made to the algorithm have concentrated on reducing the size of r, which makes the core operation in step 5 faster, and on reducing the size of s, the number of loops performed in step 5.[5] Typically these changes do not change the computational complexity, but can lead to many orders of magnitude less time taken; for example, Bernstein's final version has a theoretical speedup by a factor of over 2 million.
For the algorithm to be correct, all steps that identify n as prime or composite must be correct. Steps 1, 3, and 4 are trivially correct, since they are based on direct tests of the divisibility of n. Step 5 is also correct: since (2) is true for any choice of a coprime to n and r if n is prime, an inequality means that n must be composite.
The difficult part of the proof is showing that step 6 is correct. Its proof of correctness is based on the upper and lower bounds of the size of a multiplicative group in $\mathbb{Z}_n[X]$ constructed from the binomials $(X + a)$ for $1 \le a \le \left\lfloor \sqrt{\varphi(r)} \log_2(n) \right\rfloor$: if a composite $n$ that is not a prime power passed all the checks in step 5, these bounds would contradict each other, so any $n$ reaching step 6 must be prime.