#P-complete

The #P-complete problems (pronounced "sharp P complete" or "number P complete") form a complexity class in computational complexity theory. The problems in this complexity class are defined by having the following two properties:

  - The problem is in #P, the class of problems that can be defined as counting the number of accepting paths of a polynomial-time non-deterministic Turing machine.
  - The problem is #P-hard, meaning that every other problem in #P has a polynomial-time Turing reduction (or, under stricter definitions, a polynomial-time counting reduction) to it.

#P-complete problems are at least as hard as NP-complete problems.[1] A polynomial-time algorithm for solving a #P-complete problem, if it existed, would solve the P versus NP problem by implying that P and NP are equal. No such algorithm is known, nor is a proof known that such an algorithm does not exist.

Examples

Examples of #P-complete problems include:

  - How many different variable assignments will satisfy a given general boolean formula? (#SAT)
  - How many different variable assignments will satisfy a given DNF formula?
  - How many different variable assignments will satisfy a given 2-satisfiability instance?
  - How many perfect matchings are there for a given bipartite graph?
  - What is the value of the permanent of a given matrix whose entries are 0 or 1?
  - How many different linear extensions are there for a given partial order, or, equivalently, how many different topological orderings are there for a given directed acyclic graph?[2]

These are all necessarily members of the class #P as well. As a non-example, consider the case of counting solutions to a 1-satisfiability problem: a series of variables that are each individually constrained, but have no relationships with each other. The solutions can be efficiently counted by multiplying the number of options for each variable in isolation. Thus, this problem is in FP, but cannot be #P-complete unless #P = FP. This would be surprising, as it would imply that P = NP = PH.
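The multiplication argument for 1-satisfiability can be sketched as follows; the clause representation (pairs of a variable index and a required value) is a hypothetical encoding chosen for illustration:

```python
from math import prod

def count_1sat(num_vars, unit_clauses):
    """Count satisfying assignments of a 1-SAT instance.

    unit_clauses: iterable of (variable_index, required_value) pairs,
    e.g. (0, True) means variable x0 must be True.
    """
    allowed = {}  # variable -> set of values still permitted
    for var, value in unit_clauses:
        allowed.setdefault(var, {True, False}).intersection_update({value})
    # A variable forced both ways admits 0 assignments; a variable
    # constrained one way admits 1; an unmentioned variable admits 2.
    options = [len(allowed.get(v, {True, False})) for v in range(num_vars)]
    return prod(options)

# x0 must be True, x1 must be False, x2 is free: 1 * 1 * 2 = 2
print(count_1sat(3, [(0, True), (1, False)]))  # 2
# Contradictory constraints on x0: no solutions
print(count_1sat(3, [(0, True), (0, False)]))  # 0
```

Because each variable is handled in isolation, the count takes time linear in the input size, placing the problem in FP.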

Easy problems with hard counting versions

Some #P-complete problems correspond to easy (polynomial-time) decision problems. Determining the satisfiability of a boolean formula in DNF is easy: such a formula is satisfiable if and only if it contains a satisfiable conjunction (one that does not contain both a variable and its negation), whereas counting the number of satisfying assignments is #P-complete. Likewise, deciding 2-satisfiability is easy, while counting the satisfying assignments is #P-complete. Topological sorting is easy, in contrast to counting the number of topological sortings. A single perfect matching can be found in polynomial time, but counting all perfect matchings is #P-complete. The perfect matching counting problem was the first counting problem corresponding to an easy P problem shown to be #P-complete, in a 1979 paper by Leslie Valiant which also defined the class #P and the #P-complete problems for the first time.[3]
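The easy decision side of the DNF example can be sketched directly from the characterization above; the encoding (each conjunct as a list of signed integer literals) is an assumption made for illustration:

```python
def dnf_satisfiable(conjuncts):
    """Decide satisfiability of a DNF formula in polynomial time.

    conjuncts: list of conjunctions, each a list of integer literals
    (positive k means variable k, negative k means its negation).
    The formula is satisfiable iff some conjunct avoids containing
    both a literal and its negation.
    """
    for conjunct in conjuncts:
        literals = set(conjunct)
        if not any(-lit in literals for lit in literals):
            return True  # this conjunct is internally consistent
    return False

# (x1 AND NOT x2) OR (x3 AND NOT x3): the first conjunct is consistent
print(dnf_satisfiable([[1, -2], [3, -3]]))  # True
# Every conjunct is contradictory
print(dnf_satisfiable([[1, -1], [2, -2]]))  # False
```

The check is a single linear scan, yet no comparably fast method is expected for counting the satisfying assignments of the same formula.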

Approximation

There are probabilistic algorithms that return good approximations to some #P-complete problems with high probability. This is one of the demonstrations of the power of probabilistic algorithms.

Many #P-complete problems have a fully polynomial-time randomized approximation scheme, or "FPRAS", which, informally, will produce with high probability an approximation to an arbitrary degree of accuracy, in time that is polynomial with respect to both the size of the problem and the degree of accuracy required. Jerrum, Valiant, and Vazirani showed that every #P-complete problem either has an FPRAS or is essentially impossible to approximate: if there is any polynomial-time algorithm which consistently produces an approximation of a #P-complete problem that is within a ratio of the exact answer polynomial in the size of the input, then that algorithm can be used to construct an FPRAS.[4]
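As a concrete flavor of such randomized approximation, the following sketch estimates the number of satisfying assignments of a DNF formula in the style of the Karp–Luby estimator, a standard FPRAS construction for #DNF (this particular algorithm is not described in the sources cited here; the encoding and parameter choices are assumptions for illustration):

```python
import random

def estimate_dnf_count(num_vars, conjuncts, samples=20000, seed=0):
    """Karp–Luby-style estimate of the number of satisfying
    assignments of a DNF formula.

    conjuncts: list of conjunctions over variables 1..num_vars, each a
    list of integer literals (positive k = variable k true, negative
    k = variable k false). Contradictory conjuncts must be removed
    beforehand.
    """
    rng = random.Random(seed)
    # |A_i| = 2^(num_vars - |C_i|): assignments satisfying conjunct i.
    sizes = [2 ** (num_vars - len({abs(l) for l in c})) for c in conjuncts]
    total = sum(sizes)  # multiset union size; overcounts overlaps
    hits = 0
    for _ in range(samples):
        # Pick a conjunct with probability proportional to |A_i| ...
        i = rng.choices(range(len(conjuncts)), weights=sizes)[0]
        # ... then a uniform assignment satisfying that conjunct.
        assignment = {abs(l): (l > 0) for l in conjuncts[i]}
        for v in range(1, num_vars + 1):
            if v not in assignment:
                assignment[v] = rng.random() < 0.5
        # Count the sample only when i is the FIRST conjunct it
        # satisfies, so each satisfying assignment is counted once.
        first = next(j for j, c in enumerate(conjuncts)
                     if all(assignment[abs(l)] == (l > 0) for l in c))
        hits += (first == i)
    return total * hits / samples

# (x1) OR (x2) over two variables has exactly 3 satisfying assignments;
# the estimate converges toward 3 as the sample count grows.
print(estimate_dnf_count(2, [[1], [2]], samples=20000, seed=1))
```

The estimator's accuracy improves polynomially with the number of samples, which is the informal behavior an FPRAS guarantees.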

References

  1. Valiant, Leslie G. (August 1979). "The Complexity of Enumeration and Reliability Problems". SIAM Journal on Computing. 8 (3): 410–421. doi:10.1137/0208032.
  2. Brightwell, Graham R.; Winkler, Peter (1991). "Counting linear extensions". Order. 8 (3): 225–242. doi:10.1007/BF00383444.
  3. Valiant, Leslie G. (1979). "The Complexity of Computing the Permanent". Theoretical Computer Science. 8 (2): 189–201. doi:10.1016/0304-3975(79)90044-6.
  4. Jerrum, Mark R.; Valiant, Leslie G.; Vazirani, Vijay V. (1986). "Random Generation of Combinatorial Structures from a Uniform Distribution". Theoretical Computer Science. 43: 169–188. doi:10.1016/0304-3975(86)90174-X.