The vast majority of positive results about computational problems are constructive proofs: a computational problem is proved to be solvable by exhibiting an algorithm that solves it; a computational problem is shown to be in the complexity class P by exhibiting an algorithm that solves it in time polynomial in the size of the input; and so on.
However, there are several non-constructive results, where an algorithm is proved to exist without showing the algorithm itself. Several techniques are used to provide such existence proofs.
A simple example of a non-constructive algorithm was published in 1982 by Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy, in their book Winning Ways for Your Mathematical Plays. It concerns the game of Sylver Coinage, in which players take turns specifying a positive integer that cannot be expressed as a sum of previously specified values, with a player losing when they are forced to specify the number 1. There exists an algorithm (given in the book as a flow chart) for determining whether a given first move is winning or losing: if it is a prime number greater than three, or one of a finite set of 3-smooth numbers, then it is a winning first move, and otherwise it is losing. However, the finite set is not known.
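This decision procedure can be sketched in Python, with the unknown finite set supplied as a parameter (the function names and signatures are illustrative, not from the book):

```python
def is_prime(n):
    """Trial-division primality test; adequate for small first moves."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_winning_first_move(n, exceptional_numbers):
    """Decide whether n is a winning first move in Sylver Coinage,
    following the characterization in Winning Ways: primes greater
    than 3 are winning, members of a certain finite set are winning,
    and every other first move is losing.  The finite set itself is
    not known, so it must be passed in as a parameter."""
    if n > 3 and is_prime(n):
        return True
    return n in exceptional_numbers
```

The non-constructiveness is visible in the signature: the procedure is fully specified except for one finite input that nobody knows how to compute.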
Non-constructive algorithm proofs for problems in graph theory were studied beginning in 1988 by Michael Fellows and Michael Langston.[1]
A common question in graph theory is whether a certain input graph has a certain property. For example:
Input: a graph G.
Question: Can G be embedded in a 3-dimensional space, such that no two disjoint cycles of G are topologically linked (as in links of a chain)?
There is an algorithm (with very high exponential running time) that decides whether two cycles embedded in 3-dimensional space are linked, and one could test all pairs of cycles in the graph, but it is not obvious how to account for all possible embeddings in 3-dimensional space. Thus it is not at all clear, a priori, whether the linkedness problem is even decidable.
However, there is a non-constructive proof that linkedness is decidable in polynomial time. The proof relies on the following facts:
The set of graphs that can be embedded with no two linked cycles is closed under taking minors: deleting and contracting edges cannot create linked cycles where none existed.
By the Robertson–Seymour theorem, every minor-closed family of graphs can be characterized by a finite set of minor-minimal excluded graphs.
By another result of Robertson and Seymour, for every fixed graph H there is a polynomial-time algorithm that decides whether H is a minor of a given graph G.
Given an input graph G, the following "algorithm" solves the above problem:
For every minor-minimal element H:
If H is a minor of G, then return "no".
return "yes".
The non-constructive part here is the Robertson–Seymour theorem. Although it guarantees that there are only finitely many minor-minimal elements, it does not tell us what these elements are. Therefore, we cannot actually execute the "algorithm" above. But we do know that an algorithm exists, and that its runtime is polynomial: it performs a fixed, finite number of polynomial-time minor tests.
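The shape of the argument can be sketched in Python, with the unknown obstruction set and the minor test passed in as parameters (both names are illustrative; `is_minor` stands in for the Robertson–Seymour polynomial-time minor-testing algorithm):

```python
def has_property(G, obstruction_set, is_minor):
    """Decide a minor-closed property of G, given the finite (but
    possibly unknown) set of minor-minimal excluded graphs and a
    minor-testing predicate.  If is_minor runs in polynomial time,
    the whole loop runs a constant number of polynomial-time checks."""
    for H in obstruction_set:
        if is_minor(H, G):
            return False  # G contains an excluded minor: answer "no"
    return True           # no excluded minor found: answer "yes"
```

For example, with graphs represented as edge sets and a toy stand-in for `is_minor` (plain subset testing, which is not a real minor test), the loop behaves as expected on small inputs.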
There are many more similar problems whose decidability can be proved in a similar way. In some cases, the knowledge that a problem is solvable in polynomial time has led researchers to search for, and find, an actual polynomial-time algorithm that solves the problem in an entirely different way. This shows that non-constructive proofs can have constructive outcomes.
The main idea is that a problem can be solved using an algorithm that uses, as a parameter, an unknown set. Although the set is unknown, we know that it must be finite, and thus a polynomial-time algorithm exists.
There are many other combinatorial problems that can be solved with a similar technique.[2]
Sometimes the number of potential algorithms for a given problem is finite. We can count the number of possible algorithms and prove that only a bounded number of them are "bad", so at least one algorithm must be "good".
As an example, consider the following problem.[3]
I select a vector v composed of n elements, each an integer between 0 and a certain constant d.
You have to determine v by asking sum queries: a sum query specifies a set of indices and asks "what is the sum of the elements at these indices?". A single query can involve any number of indices from 1 to n.
How many queries do you need? Obviously, n queries are always sufficient, because you can use n queries asking for the "sum" of a single element. But when d is sufficiently small, it is possible to do better. The general idea is as follows.
Every query can be represented as a 1-by-n vector whose elements are all in the set {0, 1}. The response to the query is just the dot product of the query vector with v. Every set of k queries can be represented by a k-by-n matrix over {0, 1}; the set of responses is the product of the matrix with v.
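As a toy illustration of this representation (the numbers are made up, not from the source):

```python
# Each query is a 0/1 vector; the response is its dot product with v.
v = [2, 0, 3, 1]                 # hidden vector, n = 4, entries in 0..d
queries = [
    [1, 1, 0, 0],                # "sum of elements 1 and 2"
    [0, 0, 1, 1],                # "sum of elements 3 and 4"
    [1, 0, 1, 0],                # "sum of elements 1 and 3"
]
# The k responses together form the matrix-vector product M v.
responses = [sum(q[i] * v[i] for i in range(4)) for q in queries]
print(responses)  # → [2, 4, 5]
```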
A matrix M is "good" if it enables us to uniquely identify v. This means that, for every vector v, the product M v is unique. A matrix M is "bad" if there are two different vectors, v and u, such that M v = M u.
Using some algebra, it is possible to bound the number of "bad" matrices; the bound is a function of d and k. Thus, when d is sufficiently small, the number of bad k-by-n matrices falls below the total number of k-by-n matrices for some k smaller than n, so a "good" matrix with a small k must exist; such a matrix corresponds to an efficient algorithm for the identification problem.
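For instances small enough to enumerate, the counting argument can be checked by brute force. The sketch below (an illustration, not the algebraic bound from the source) counts good and bad 0/1 query matrices directly:

```python
from itertools import product

def count_good_matrices(n, k, d):
    """Exhaustively count the k-by-n 0/1 query matrices that identify
    every vector in {0, ..., d}^n by its vector of responses."""
    vectors = list(product(range(d + 1), repeat=n))
    good = 0
    for flat in product((0, 1), repeat=k * n):
        M = [flat[r * n:(r + 1) * n] for r in range(k)]
        responses = {
            tuple(sum(M[r][i] * v[i] for i in range(n)) for r in range(k))
            for v in vectors
        }
        if len(responses) == len(vectors):   # injective => "good"
            good += 1
    return good, 2 ** (k * n) - good

good, bad = count_good_matrices(n=4, k=3, d=1)
print(good, bad)  # some good matrices exist: 3 queries beat the trivial 4
```

For n = 4 and d = 1 this finds good matrices with only k = 3 queries, fewer than the trivial n = 4; counting shows a good matrix exists without telling us in advance which one it is.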
This proof is non-constructive in two ways: it is not known how to find a good matrix; and even if a good matrix were supplied, it is not known how to efficiently reconstruct the vector from the query replies.
There are many more similar problems which can be proved to be solvable in a similar way.[3]
The references in this page were collected from Stack Exchange threads.