In computer science, an algorithm is said to be asymptotically optimal if, roughly speaking, for large inputs it performs at worst a constant factor (independent of the input size) worse than the best possible algorithm. It is a term commonly encountered in computer science research as a result of widespread use of big-O notation.
More formally, an algorithm is asymptotically optimal with respect to a particular resource if the problem has been proven to require Ω(f(n)) of that resource, and the algorithm has been proven to use only O(f(n)).
These proofs require an assumption of a particular model of computation, i.e., certain restrictions on operations allowable with the input data.
As a simple example, it's known that all comparison sorts require at least Ω(n log n) comparisons in the average and worst cases. Mergesort and heapsort are comparison sorts which perform O(n log n) comparisons, so they are asymptotically optimal in this sense.
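As an illustration, here is a minimal merge sort sketch in Python (the function name and structure are ours, chosen for exposition); each merge step costs at most one comparison per element emitted, giving O(n log n) comparisons overall:

```python
def merge_sort(items):
    """Comparison sort performing O(n log n) comparisons (merge sort)."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves; each appended element costs
    # at most one comparison.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```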
If the input data have some a priori properties which can be exploited in construction of algorithms, in addition to comparisons, then asymptotically faster algorithms may be possible. For example, if it is known that the n objects are integers from the range [1, n], then they may be sorted in O(n) time, e.g., by the bucket sort.
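A minimal sketch of this idea in Python, using one bucket per possible value (assuming, as above, integers drawn from the range [1, n]); note that it makes no comparisons between elements:

```python
def bucket_sort(items, n):
    """Sort integers from the range [1, n] in O(n + len(items)) time."""
    counts = [0] * (n + 1)  # one bucket (counter) per possible value
    for x in items:
        counts[x] += 1
    result = []
    for value in range(1, n + 1):  # emit each value in increasing order
        result.extend([value] * counts[value])
    return result
```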
A consequence of an algorithm being asymptotically optimal is that, for large enough inputs, no algorithm can outperform it by more than a constant factor. For this reason, asymptotically optimal algorithms are often seen as the "end of the line" in research, the attaining of a result that cannot be dramatically improved upon. Conversely, if an algorithm is not asymptotically optimal, this implies that as the input grows in size, the algorithm performs increasingly worse than the best possible algorithm.
In practice it's useful to find algorithms that perform better, even if they do not enjoy any asymptotic advantage. New algorithms may also present advantages such as better performance on specific inputs, decreased use of resources, or being simpler to describe and implement. Thus asymptotically optimal algorithms are not always the "end of the line".
Although asymptotically optimal algorithms are important theoretical results, an asymptotically optimal algorithm might not be used in a number of practical situations: it may only outperform more commonly used methods for inputs far beyond practical sizes; it may be too complex for the difficulty of implementing it correctly to be outweighed by its benefit in the range of input sizes under consideration; the inputs encountered in practice may fall into special cases that have more efficient algorithms; or hardware optimizations such as memory caching and parallel processing may favor an asymptotically slower method.
An example of an asymptotically optimal algorithm not used in practice is Bernard Chazelle's linear-time algorithm for triangulation of a simple polygon. Another is the resizable array data structure published in "Resizable Arrays in Optimal Time and Space", which can index in constant time but on many machines carries a heavy practical penalty compared to ordinary array indexing.
Formally, suppose that we have a lower-bound theorem showing that a problem requires Ω(f(n)) time to solve for an instance (input) of size n (see big O notation for the definition of Ω). Then, an algorithm which solves the problem in O(f(n)) time is said to be asymptotically optimal. This can also be expressed using limits: suppose that b(n) is a lower bound on the running time, and a given algorithm takes time t(n). Then the algorithm is asymptotically optimal if:

\lim_{n \to \infty} \frac{t(n)}{b(n)} < \infty.
This limit, if it exists, is always at least 1, as t(n) ≥ b(n).
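As a hypothetical illustration, take comparison sorting with the lower bound b(n) = n log n. An algorithm with running time t(n) = 2n log n satisfies

\lim_{n \to \infty} \frac{2n \log n}{n \log n} = 2 < \infty,

so it is asymptotically optimal, whereas an algorithm with t(n) = n log² n gives an infinite limit and is not.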
Although usually applied to time efficiency, an algorithm can be said to use asymptotically optimal space, random bits, number of processors, or any other resource commonly measured using big-O notation.
Sometimes vague or implicit assumptions can make it unclear whether an algorithm is asymptotically optimal. For example, a lower bound theorem might assume a particular abstract machine model, as in the case of comparison sorts, or a particular organization of memory. By violating these assumptions, a new algorithm could potentially asymptotically outperform the lower bound and the "asymptotically optimal" algorithms.
The nonexistence of an asymptotically optimal algorithm is called speedup. Blum's speedup theorem shows that there exist artificially constructed problems with speedup. However, it is an open problem whether many of the most well-known algorithms today are asymptotically optimal or not. For example, there is an O(n α(n)) algorithm for finding minimum spanning trees, where α(n) is the very slowly growing inverse of the Ackermann function, but the best known lower bound is the trivial Ω(n). Whether this algorithm is asymptotically optimal is unknown, and resolving the question either way would be regarded as a significant result.