In computational complexity theory, P, also known as PTIME or DTIME($n^{O(1)}$), is a fundamental complexity class. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time.
Cobham's thesis holds that P is the class of computational problems that are "efficiently solvable" or "tractable". This is inexact: in practice, some problems not known to be in P have practical solutions, and some that are in P do not, but this is a useful rule of thumb.
A language L is in P if and only if there exists a deterministic Turing machine M such that
- M runs for polynomial time on all inputs,
- for all x in L, M outputs 1, and
- for all x not in L, M outputs 0.
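As a concrete illustration, the following sketch shows a polynomial-time decider, in Python, for a language in P: directed-graph reachability. The adjacency-list input encoding is an assumption made for readability, not a canonical one.

```python
from collections import deque

def reachable(adj, s, t):
    """Decide whether vertex t is reachable from vertex s in a
    directed graph given as an adjacency list, e.g. {0: [1], 1: []}.
    Breadth-first search runs in O(V + E) time, so the language
    {(G, s, t) : t is reachable from s in G} is in P."""
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True       # accept: M outputs 1
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False              # reject: M outputs 0

print(reachable({0: [1], 1: [2], 2: [], 3: []}, 0, 2))  # True
print(reachable({0: [1], 1: [2], 2: [], 3: []}, 0, 3))  # False
```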
P can also be viewed as a uniform family of Boolean circuits. A language L is in P if and only if there exists a polynomial-time uniform family of Boolean circuits $\{C_n : n \in \mathbb{N}\}$ such that
- for all $n \in \mathbb{N}$, $C_n$ takes $n$ bits as input and outputs 1 bit,
- for all $x \in L$, $C_{|x|}(x) = 1$, and
- for all $x \notin L$, $C_{|x|}(x) = 0$.
The circuit definition can be weakened to use only a logspace uniform family without changing the complexity class.
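As a toy example of such a family, the sketch below generates and evaluates a circuit $C_n$ of XOR gates deciding the parity language (bitstrings containing an odd number of 1s); the list-of-gates representation is an assumption chosen for brevity.

```python
def build_parity_circuit(n):
    """Construct C_n for the parity language as a list of XOR gates.
    The entire family {C_n} is produced by this one simple procedure,
    which is the intuition behind uniformity."""
    gates = []                     # each gate XORs two earlier wires
    prev = 0                       # wire holding the running XOR
    for i in range(1, n):
        gates.append((prev, i))    # new gate: prev XOR input bit i
        prev = n + len(gates) - 1  # output wire of that gate
    return gates

def evaluate(gates, x):
    """Evaluate C_|x| on bitstring x; returns 1 iff the parity is odd."""
    wires = [int(b) for b in x]    # wires 0..n-1 are the input bits
    for a, b in gates:
        wires.append(wires[a] ^ wires[b])
    return wires[-1]

x = "10110"
print(evaluate(build_parity_circuit(len(x)), x))  # 1 (three 1s: odd)
```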
P is known to contain many natural problems, including the decision versions of linear programming and finding a maximum matching. In 2002, it was shown that the problem of determining if a number is prime is in P.[1] The related class of function problems is FP.
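For instance, a maximum matching in a bipartite graph can be computed with the classical augmenting-path method. The sketch below is a simplified $O(V \cdot E)$ variant (function and parameter names are illustrative), with the decision version shown at the end.

```python
def max_bipartite_matching(n_left, n_right, edges):
    """Maximum matching in a bipartite graph via augmenting paths.
    edges[u] lists the right-side neighbours of left vertex u.
    Runs in O(V * E) time, which is polynomial."""
    match_right = [-1] * n_right   # match_right[v] = left partner or -1

    def try_augment(u, visited):
        for v in edges[u]:
            if v not in visited:
                visited.add(v)
                # use v if it is free, or if its partner can move on
                if match_right[v] == -1 or try_augment(match_right[v], visited):
                    match_right[v] = u
                    return True
        return False

    size = 0
    for u in range(n_left):
        if try_augment(u, set()):
            size += 1
    return size

edges = [[0, 1], [0], [1, 2]]   # 3 left and 3 right vertices
# Decision version: does a matching of size at least k exist?
print(max_bipartite_matching(3, 3, edges) >= 3)  # True
```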
Several natural problems are complete for P, including st-connectivity (or reachability) on alternating graphs.[2] The article on P-complete problems lists further relevant problems in P.
A generalization of P is NP, which is the class of decision problems decidable by a non-deterministic Turing machine that runs in polynomial time. Equivalently, it is the class of decision problems where each "yes" instance has a polynomial-size certificate, and certificates can be checked by a polynomial-time deterministic Turing machine. The class of problems for which this is true for the "no" instances is called co-NP. P is trivially a subset of NP and of co-NP; most experts believe it is a proper subset,[3] although this belief (the $P \subsetneq NP$ hypothesis) remains unproven.
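To make the certificate formulation concrete, here is a sketch of a polynomial-time verifier for SUBSET-SUM, a problem in NP; the certificate format (a list of distinct indices) is an assumption of this illustration.

```python
def verify_subset_sum(numbers, target, certificate):
    """Check a claimed certificate for a SUBSET-SUM instance.
    Verification is one pass plus a sum, hence polynomial time, even
    though no polynomial-time algorithm is known for finding such a
    certificate."""
    if len(set(certificate)) != len(certificate):
        return False   # indices must be distinct
    if any(i < 0 or i >= len(numbers) for i in certificate):
        return False   # indices must be in range
    return sum(numbers[i] for i in certificate) == target

print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))  # True: 4 + 5 = 9
print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [0, 1]))  # False: 3 + 34 != 9
```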
P is also known to be at least as large as L, the class of problems decidable in a logarithmic amount of memory space. A decider using $O(\log n)$ space certainly cannot use more than $2^{O(\log n)} = n^{O(1)}$ time, because this is the total number of possible configurations.
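A rough version of the configuration count behind this bound (the constants are illustrative): a machine with state set $Q$, work alphabet $\Gamma$, and $O(\log n)$ work cells has at most
$$|Q| \cdot (n+1) \cdot O(\log n) \cdot |\Gamma|^{O(\log n)} = n^{O(1)}$$
configurations (state, input-head position, work-head position, and work-tape contents), and a halting decider can never repeat a configuration.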
In general,
$$L \subseteq AL = P \subseteq NP \subseteq PSPACE \subseteq EXPTIME.$$
Here, AL is the class of problems solvable in logarithmic space by an alternating Turing machine, and EXPTIME is the class of problems solvable in exponential time. Of all the classes shown above, only two strict containments are known:
- P is strictly contained in EXPTIME, by the time hierarchy theorem; consequently, at least one of the containments between P and EXPTIME must be strict, though it is not known which.
- L is strictly contained in PSPACE, by the space hierarchy theorem.
The most difficult problems in P are P-complete problems.
Another generalization of P is P/poly, or Nonuniform Polynomial-Time. If a problem is in P/poly, then it can be solved in deterministic polynomial time provided that an advice string is given that depends only on the length of the input. Unlike for NP, however, the polynomial-time machine doesn't need to detect fraudulent advice strings; it is not a verifier. P/poly is a large class containing nearly all practical problems, including all of BPP. If it contains NP, then the polynomial hierarchy collapses to the second level. On the other hand, it also contains some impractical problems, including some undecidable problems such as the unary version of any undecidable problem.
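As an informal sketch of how advice works (the advice table below is made up): because the advice may depend only on the input length, one bit per length suffices to decide any unary language, even an undecidable one.

```python
# Sketch of a P/poly-style decider for a unary language L over {1}.
# The advice table is hypothetical and, in general, need not be
# computable, which is why P/poly contains undecidable unary languages.
advice = {0: 0, 1: 1, 2: 0, 3: 1}   # advice[n] = 1 iff 1^n is in L

def decide(x):
    """Decide membership of x in L using only advice for length |x|."""
    assert set(x) <= {"1"}           # unary input
    return bool(advice[len(x)])      # constant time once advice is fixed

print(decide("111"))  # True under this made-up advice table
```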
In 1999, Jin-Yi Cai and D. Sivakumar, building on work by Mitsunori Ogihara, showed that if there exists a sparse language that is P-complete, then L = P.[5]
P is contained in BQP; it is unknown whether this containment is strict.
Polynomial-time algorithms are closed under composition. Intuitively, this says that if one writes a function that is polynomial-time assuming that function calls are constant-time, and if those called functions themselves require polynomial time, then the entire algorithm takes polynomial time. One consequence of this is that P is low for itself. This is also one of the main reasons that P is considered to be a machine-independent class; any machine "feature", such as random access, that can be simulated in polynomial time can simply be composed with the main polynomial-time algorithm to reduce it to a polynomial-time algorithm on a more basic machine.
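One way to make this bound explicit: if the outer routine takes at most $p(n)$ steps when each call is counted as one step, then each call receives an argument of length at most $p(n)$ (the routine cannot write anything longer), and if each call costs at most $q(m)$ on inputs of length $m$, the total running time is bounded by
$$p(n) + p(n) \cdot q(p(n)),$$
which is again a polynomial in $n$ whenever $p$ and $q$ are polynomials.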
Languages in P are also closed under reversal, intersection, union, concatenation, Kleene closure, inverse homomorphism, and complementation.[6]
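Closure under complementation, intersection, and union, for example, amounts to running the given deciders and combining their answers, as in this sketch (the decider signatures are assumptions of the illustration):

```python
# Given polynomial-time deciders for L1 and L2, each combined decider
# runs the originals at most once each, so it is still polynomial-time.
def complement(decide):
    return lambda x: not decide(x)

def intersection(decide1, decide2):
    return lambda x: decide1(x) and decide2(x)

def union(decide1, decide2):
    return lambda x: decide1(x) or decide2(x)

# Toy languages: strings of even length, and strings starting with "a".
even_length = lambda x: len(x) % 2 == 0
starts_a = lambda x: x.startswith("a")
print(intersection(even_length, starts_a)("ab"))  # True
print(union(even_length, starts_a)("bcd"))        # False
```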
Some problems are known to be solvable in polynomial time, but no concrete algorithm is known for solving them. For example, the Robertson–Seymour theorem guarantees that there is a finite list of forbidden minors that characterizes (for example) the set of graphs that can be embedded on a torus; moreover, Robertson and Seymour showed that there is an $O(n^3)$ algorithm for determining whether a graph has a given graph as a minor. This yields a nonconstructive proof that there is a polynomial-time algorithm for determining if a given graph can be embedded on a torus, despite the fact that no concrete algorithm is known for this problem.
In descriptive complexity, P can be described as the problems expressible in FO(LFP), the first-order logic with a least fixed point operator added to it, on ordered structures. In Immerman's 1999 textbook on descriptive complexity,[7] Immerman ascribes this result to Vardi[8] and to Immerman.[9]
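For example, reachability along an edge relation $E$, a property decidable in polynomial time, is expressible in FO(LFP) as the least fixed point of the usual one-step extension rule:
$$\mathrm{reach}(u,v) \;\equiv\; \big[\mathrm{LFP}_{R,x,y}\, \big(x = y \,\vee\, \exists z\,(E(x,z) \wedge R(z,y))\big)\big](u,v).$$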
In 2001, PTIME was shown to correspond to (positive) range concatenation grammars.[10]
P can also be defined as an algorithmic complexity class for problems that are not decision problems[11] (even though, for example, finding the solution to a 2-satisfiability instance in polynomial time automatically gives a polynomial algorithm for the corresponding decision problem). In that case P is not a subset of NP, but P∩DEC is, where DEC is the class of decision problems.
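As an illustration of the 2-satisfiability remark, here is a sketch of the classical implication-graph algorithm (polynomial time via strongly connected components); returning the assignment answers the search version, and testing whether anything was returned answers the decision version.

```python
def solve_2sat(n, clauses):
    """2-SAT via strongly connected components (Kosaraju's algorithm).
    Variables are 1..n; literal k > 0 means variable k, k < 0 its
    negation. Returns a satisfying assignment (list of booleans) or
    None if the formula is unsatisfiable."""
    def node(lit):                 # vertex index of a literal
        return 2 * (abs(lit) - 1) + (1 if lit < 0 else 0)

    N = 2 * n
    graph = [[] for _ in range(N)]
    rev = [[] for _ in range(N)]
    for a, b in clauses:           # (a or b): not-a -> b, not-b -> a
        for u, v in ((node(-a), node(b)), (node(-b), node(a))):
            graph[u].append(v)
            rev[v].append(u)

    order, seen = [], [False] * N  # first pass: order by finish time
    def dfs1(u):
        seen[u] = True
        for v in graph[u]:
            if not seen[v]:
                dfs1(v)
        order.append(u)
    for u in range(N):
        if not seen[u]:
            dfs1(u)

    comp = [-1] * N                # second pass: label components in
    def dfs2(u, c):                # topological order of the condensation
        comp[u] = c
        for v in rev[u]:
            if comp[v] == -1:
                dfs2(v, c)
    c = 0
    for u in reversed(order):
        if comp[u] == -1:
            dfs2(u, c)
            c += 1

    assignment = []
    for i in range(n):
        if comp[2 * i] == comp[2 * i + 1]:
            return None            # x equivalent to not-x: unsatisfiable
        # set x true iff x's component comes after not-x's component
        assignment.append(comp[2 * i] > comp[2 * i + 1])
    return assignment

print(solve_2sat(3, [(1, 2), (-1, 3), (-2, -3)]))  # e.g. [True, False, True]
print(solve_2sat(1, [(1, 1), (-1, -1)]))           # None: contradiction
```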
Kozen[12] states that Cobham and Edmonds are "generally credited with the invention of the notion of polynomial time," though Rabin also invented the notion independently and around the same time (Rabin's paper[13] was in a 1967 proceedings of a 1966 conference, while Cobham's[14] was in a 1965 proceedings of a 1964 conference and Edmonds's[15] was published in a journal in 1965, though Rabin makes no mention of either and was apparently unaware of them). Cobham invented the class as a robust way of characterizing efficient algorithms, leading to Cobham's thesis. However, H. C. Pocklington, in a 1910 paper,[16] [17] analyzed two algorithms for solving quadratic congruences, and observed that one took time "proportional to a power of the logarithm of the modulus" and contrasted this with one that took time proportional "to the modulus itself or its square root", thus explicitly drawing a distinction between an algorithm that ran in polynomial time versus one that ran in (moderately) exponential time.