In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity". The two most commonly analyzed resources are time and memory.
In general, a complexity class is defined in terms of a type of computational problem, a model of computation, and a bounded resource like time or memory. In particular, most complexity classes consist of decision problems that are solvable with a Turing machine, and are differentiated by their time or space (memory) requirements. For instance, the class P is the set of decision problems solvable by a deterministic Turing machine in polynomial time. There are, however, many complexity classes defined in terms of other types of problems (e.g. counting problems and function problems) and using other models of computation (e.g. probabilistic Turing machines, interactive proof systems, Boolean circuits, and quantum computers).
The study of the relationships between complexity classes is a major area of research in theoretical computer science. There are often general hierarchies of complexity classes; for example, it is known that a number of fundamental time and space complexity classes relate to each other in the following way: L⊆NL⊆P⊆NP⊆PSPACE⊆EXPTIME⊆NEXPTIME⊆EXPSPACE (where ⊆ denotes the subset relation). However, many relationships are not yet known; for example, one of the most famous open problems in computer science concerns whether P equals NP. The relationships between classes often answer questions about the fundamental nature of computation. The P versus NP problem, for instance, is directly related to questions of whether nondeterminism adds any computational power to computers and whether problems having solutions that can be quickly checked for correctness can also be quickly solved.
Complexity classes are sets of related computational problems. They are defined in terms of the computational difficulty of solving the problems contained within them with respect to particular computational resources like time or memory. More formally, the definition of a complexity class consists of three things: a type of computational problem, a model of computation, and a bounded computational resource. In particular, most complexity classes consist of decision problems that can be solved by a Turing machine with bounded time or space resources. For example, the complexity class P is defined as the set of decision problems that can be solved by a deterministic Turing machine in polynomial time.
The most commonly analyzed computational problems are decision problems—problems that can be posed as yes–no questions. The primality problem, for instance, asks whether a given natural number $n$ is prime; as a decision problem it is represented by the language $\mathtt{PRIME}=\{n\in\mathbb{N} \mid n \text{ is prime}\}$, the set of all prime numbers. An input $n$ is a "yes" instance of $\mathtt{PRIME}$ if and only if $n$ is prime, and an algorithm solves $\mathtt{PRIME}$ if it correctly answers "yes" or "no" on every input $n$.
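To make this concrete, the following minimal sketch (in Python, added here for illustration; it is not part of the standard presentation) treats $\mathtt{PRIME}$ as a decision problem: every input receives a single yes/no answer, independent of which algorithm produces it.

```python
def is_prime(n: int) -> bool:
    """Decide membership of n in the language PRIME = {n in N | n is prime}.

    Trial division: correct but slow (exponential in the bit-length of n);
    the decision problem itself is independent of which algorithm solves it.
    """
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False  # d witnesses that n is composite -> "no"
        d += 1
    return True  # no divisor found -> "yes", n is in PRIME

# Every input gets a yes/no answer:
print([n for n in range(20) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```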
While some problems cannot easily be expressed as decision problems, decision problems nonetheless encompass a broad range of computational problems. Other types of problems that certain complexity classes are defined in terms of include counting problems, function problems, and promise problems; these are discussed in later sections.
To make concrete the notion of a "computer", in theoretical computer science problems are analyzed in the context of a computational model. Computational models make exact the notions of computational resources like "time" and "memory". In computational complexity theory, complexity classes deal with the inherent resource requirements of problems and not the resource requirements that depend upon how a physical computer is constructed. For example, in the real world different computers may require different amounts of time and memory to solve the same problem because of the way that they have been engineered. By providing abstract mathematical representations of computers, computational models abstract away superfluous complexities of the real world (like differences in processor speed) that obstruct an understanding of fundamental principles.
The most commonly used computational model is the Turing machine. While other models exist and many complexity classes are defined in terms of them (see section "Other models of computation"), the Turing machine is used to define most basic complexity classes. With the Turing machine, instead of using standard units of time like the second (which make it impossible to disentangle running time from the speed of physical hardware) and standard units of memory like bytes, the notion of time is abstracted as the number of elementary steps that a Turing machine takes to solve a problem and the notion of memory is abstracted as the number of cells that are used on the machine's tape. These are explained in greater detail below.
It is also possible to use the Blum axioms to define complexity classes without referring to a concrete computational model, but this approach is less frequently used in complexity theory.
See main article: Turing machine. A Turing machine is a mathematical model of a general computing machine. It is the most commonly used model in complexity theory, owing in large part to the fact that it is believed to be as powerful as any other model of computation and is easy to analyze mathematically. Importantly, it is believed that if there exists an algorithm that solves a particular problem then there also exists a Turing machine that solves that same problem (this is known as the Church–Turing thesis); this means that it is believed that every algorithm can be represented as a Turing machine.
Mechanically, a Turing machine (TM) manipulates symbols (generally restricted to the bits 0 and 1 to provide an intuitive connection to real-life computers) contained on an infinitely long strip of tape. The TM can read and write, one symbol at a time, using a tape head. Operation is fully determined by a finite set of elementary instructions such as "in state 42, if the symbol seen is 0, write a 1; if the symbol seen is 1, change into state 17; in state 17, if the symbol seen is 0, write a 1 and change to state 6". The Turing machine starts with only the input string on its tape and blanks everywhere else. The TM accepts the input if it enters a designated accept state and rejects the input if it enters a reject state. The deterministic Turing machine (DTM) is the most basic type of Turing machine. It uses a fixed set of rules to determine its future actions (which is why it is called "deterministic").
A computational problem can then be defined in terms of a Turing machine as the set of input strings that a particular Turing machine accepts. For example, the primality problem $\mathtt{PRIME}$ described above is the set of strings (representing natural numbers) that a Turing machine solving primality accepts.
Turing machines enable intuitive notions of "time" and "space". The time complexity of a TM on a particular input is the number of elementary steps that the Turing machine takes to reach either an accept or reject state. The space complexity is the number of cells on its tape that it uses to reach either an accept or reject state.
See main article: Nondeterministic Turing machine.
The nondeterministic Turing machine (NTM) is a variant of the deterministic Turing machine (DTM). Intuitively, an NTM is just a regular Turing machine that has the added capability of being able to explore multiple possible future actions from a given state, and of "choosing" a branch that accepts (if any accept). That is, while a DTM must follow only one branch of computation, an NTM can be imagined as a computation tree, branching into many possible computational pathways at each step (see image). If at least one branch of the tree halts with an "accept" condition, then the NTM accepts the input. In this way, an NTM can be thought of as simultaneously exploring all computational possibilities in parallel and selecting an accepting branch. NTMs are not meant to be physically realizable models; they are simply theoretically interesting abstract machines that give rise to a number of interesting complexity classes (which often do have physically realizable equivalent definitions).
The time complexity of an NTM is the maximum number of steps that the NTM uses on any branch of its computation. Similarly, the space complexity of an NTM is the maximum number of cells that the NTM uses on any branch of its computation.
DTMs can be viewed as a special case of NTMs that do not make use of the power of nondeterminism. Hence, every computation that can be carried out by a DTM can also be carried out by an equivalent NTM. It is also possible to simulate any NTM using a DTM (the DTM will simply compute every possible computational branch one-by-one). Hence, the two are equivalent in terms of computability. However, simulating an NTM with a DTM often requires greater time and/or memory resources; as will be seen, how significant this slowdown is for certain classes of computational problems is an important question in computational complexity theory.
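The branch-by-branch simulation described above can be sketched as follows. This is an illustrative abstraction, not a full Turing machine simulator: the machine is modeled simply as a function from a configuration to its set of possible successor configurations, and all names are hypothetical.

```python
from collections import deque

def ntm_accepts(start, successors, is_accepting, max_steps):
    """Deterministically simulate a nondeterministic machine by exploring
    its computation tree breadth-first, one branch at a time.

    `successors(config)` returns the set of possible next configurations
    (the nondeterministic choices); `is_accepting(config)` flags accepting
    configurations.  Accept if *any* branch accepts within `max_steps`
    steps -- exactly the NTM acceptance condition, at the cost of exploring
    a tree that can be exponentially larger than any single branch.
    """
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        config, steps = frontier.popleft()
        if is_accepting(config):
            return True
        if steps < max_steps:
            for nxt in successors(config):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + 1))
    return False

# Toy use: nondeterministically pick items to hit a target sum.
items, target = (3, 5, 7), 12
start = (0, 0)  # (index, running sum)
succ = lambda c: {(c[0] + 1, c[1]), (c[0] + 1, c[1] + items[c[0]])} if c[0] < len(items) else set()
print(ntm_accepts(start, succ, lambda c: c[1] == target, max_steps=3))  # True (5 + 7)
```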
Complexity classes group computational problems by their resource requirements. To do this, computational problems are differentiated by upper bounds on the maximum amount of resources that the most efficient algorithm takes to solve them. More specifically, complexity classes are concerned with the rate of growth in the resources required to solve particular computational problems as the input size increases. For example, the amount of time it takes to solve problems in the complexity class P grows at a polynomial rate as the input size increases, which is comparatively slow compared to problems in the exponential complexity class EXPTIME (or more accurately, compared to problems in EXPTIME that are outside of P, since $\mathsf{P}\subseteq\mathsf{EXPTIME}$).
Note that the study of complexity classes is intended primarily to understand the inherent complexity required to solve computational problems. Complexity theorists are thus generally concerned with finding the smallest complexity class that a problem falls into, which means classifying problems according to the most efficient algorithm that solves them. There may be an algorithm, for instance, that solves a particular problem in exponential time, but if the most efficient algorithm for solving this problem runs in polynomial time then the inherent time complexity of that problem is better described as polynomial.
See main article: Time complexity. The time complexity of an algorithm with respect to the Turing machine model is the number of steps it takes for a Turing machine to run an algorithm on a given input size. Formally, the time complexity of an algorithm implemented with a Turing machine $M$ is defined as the function $t_M:\mathbb{N}\to\mathbb{N}$, where $t_M(n)$ is the maximum number of steps that $M$ takes on any input of length $n$.
In computational complexity theory, theoretical computer scientists are concerned less with particular runtime values and more with the general class of functions that the time complexity function falls into. For instance, is the time complexity function a polynomial? A logarithmic function? An exponential function? Or another kind of function?
See main article: Space complexity. The space complexity of an algorithm with respect to the Turing machine model is the number of cells on the Turing machine's tape that are required to run an algorithm on a given input size. Formally, the space complexity of an algorithm implemented with a Turing machine $M$ is defined as the function $s_M:\mathbb{N}\to\mathbb{N}$, where $s_M(n)$ is the maximum number of cells that $M$ uses on any input of length $n$.
See also: List of complexity classes.
Complexity classes are often defined using granular sets of complexity classes called DTIME and NTIME (for time complexity) and DSPACE and NSPACE (for space complexity). Using big O notation, they are defined as follows:
$\mathsf{DTIME}(t(n))$: the class of problems solvable by a deterministic Turing machine in time $O(t(n))$
$\mathsf{NTIME}(t(n))$: the class of problems solvable by a nondeterministic Turing machine in time $O(t(n))$
$\mathsf{DSPACE}(s(n))$: the class of problems solvable by a deterministic Turing machine using space $O(s(n))$
$\mathsf{NSPACE}(s(n))$: the class of problems solvable by a nondeterministic Turing machine using space $O(s(n))$
See main article: Time complexity.
See main article: P (complexity) and NP (complexity). P is the class of problems that are solvable by a deterministic Turing machine in polynomial time and NP is the class of problems that are solvable by a nondeterministic Turing machine in polynomial time. Or more formally,
$\mathsf{P}=\bigcup_{k\in\mathbb{N}}\mathsf{DTIME}(n^k)$
$\mathsf{NP}=\bigcup_{k\in\mathbb{N}}\mathsf{NTIME}(n^k)$
P is often said to be the class of problems that can be solved "quickly" or "efficiently" by a deterministic computer, since the time complexity of solving a problem in P increases relatively slowly with the input size.
An important characteristic of the class NP is that it can be equivalently defined as the class of problems whose solutions are verifiable by a deterministic Turing machine in polynomial time. That is, a language is in NP if there exists a deterministic polynomial time Turing machine, referred to as the verifier, that takes as input a string $w$ and a certificate string $c$, and accepts $w$ if the certificate proves that $w$ is in the language and rejects $w$ otherwise. Intuitively, the certificate acts as a proof that $w$ is in the language; the verifier can check such a proof efficiently even if finding one may be hard. Formally:
NP is the class of languages $L$ for which there exists a polynomial-time deterministic Turing machine $M$ (the verifier) and a polynomial $p$ such that for every string $w\in\{0,1\}^*$, $w$ is in $L$ if and only if there exists a certificate $c\in\{0,1\}^{p(|w|)}$ such that $M(w,c)=1$.
This equivalence between the nondeterministic definition and the verifier definition highlights a fundamental connection between nondeterminism and solution verifiability. Furthermore, it also provides a useful method for proving that a language is in NP—simply identify a suitable certificate and show that it can be verified in polynomial time.
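For illustration, here is a sketch (added for concreteness; the encoding of instances and certificates is a simplifying assumption) of such a verifier for the SUBSET-SUM problem: the certificate is just a claimed solution, and checking it takes polynomial time, so SUBSET-SUM is in NP.

```python
def verify_subset_sum(instance, certificate) -> bool:
    """Polynomial-time verifier V(w, c) for SUBSET-SUM.

    instance:    (numbers, target) -- the problem input w
    certificate: a list of indices into `numbers` -- the claimed solution c

    Runs in time polynomial in the input size; (numbers, target) is a
    "yes" instance if and only if SOME certificate makes V accept.
    """
    numbers, target = instance
    if len(set(certificate)) != len(certificate):  # indices must be distinct
        return False
    if any(i < 0 or i >= len(numbers) for i in certificate):
        return False
    return sum(numbers[i] for i in certificate) == target

w = ([3, 34, 4, 12, 5, 2], 9)
print(verify_subset_sum(w, [2, 4]))   # True:  4 + 5 == 9
print(verify_subset_sum(w, [0, 1]))   # False: 3 + 34 != 9
```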
While there might seem to be an obvious difference between the class of problems that are efficiently solvable and the class of problems whose solutions are merely efficiently checkable, P and NP are actually at the center of one of the most famous unsolved problems in computer science: the P versus NP problem. While it is known that $\mathsf{P}\subseteq\mathsf{NP}$, it is not known whether this containment is strict. Most computer scientists believe that P ≠ NP, which would mean that there are problems in NP whose solutions can be checked quickly but cannot be found quickly; proving or disproving that P ≠ NP remains open.
See main article: EXPTIME and NEXPTIME. EXPTIME (sometimes shortened to EXP) is the class of decision problems solvable by a deterministic Turing machine in exponential time and NEXPTIME (sometimes shortened to NEXP) is the class of decision problems solvable by a nondeterministic Turing machine in exponential time. Or more formally,
$\mathsf{EXPTIME}=\bigcup_{k\in\mathbb{N}}\mathsf{DTIME}\big(2^{n^k}\big)$
$\mathsf{NEXPTIME}=\bigcup_{k\in\mathbb{N}}\mathsf{NTIME}\big(2^{n^k}\big)$
EXPTIME is a strict superset of P and NEXPTIME is a strict superset of NP. It is further the case that $\mathsf{EXPTIME}\subseteq\mathsf{NEXPTIME}$.
See main article: Space complexity.
See main article: L (complexity) and NL (complexity). While it is possible to define logarithmic time complexity classes, these are extremely narrow classes as sublinear times do not even enable a Turing machine to read the entire input (because $\log n<n$). Logarithmic space classes, however, are meaningful: to define them, the Turing machine is given a read-only input tape together with a separate read/write work tape, and only the cells used on the work tape count toward the space complexity.
L (sometimes lengthened to LOGSPACE) is then defined as the class of problems solvable in logarithmic space on a deterministic Turing machine and NL (sometimes lengthened to NLOGSPACE) is the class of problems solvable in logarithmic space on a nondeterministic Turing machine. Or more formally,
$\mathsf{L}=\mathsf{DSPACE}(\log n)$
$\mathsf{NL}=\mathsf{NSPACE}(\log n)$
It is known that $\mathsf{L}\subseteq\mathsf{NL}\subseteq\mathsf{P}$; it is not known whether either of these containments is strict.
See main article: PSPACE (complexity). The complexity classes PSPACE and NPSPACE are the space analogues to P and NP. That is, PSPACE is the class of problems solvable in polynomial space by a deterministic Turing machine and NPSPACE is the class of problems solvable in polynomial space by a nondeterministic Turing machine. More formally,
$\mathsf{PSPACE}=\bigcup_{k\in\mathbb{N}}\mathsf{DSPACE}(n^k)$
$\mathsf{NPSPACE}=\bigcup_{k\in\mathbb{N}}\mathsf{NSPACE}(n^k)$
While it is not known whether P=NP, Savitch's theorem famously showed that PSPACE=NPSPACE. It is also known that $\mathsf{P}\subseteq\mathsf{PSPACE}$, which follows intuitively from the fact that, since writing to a tape cell takes one step, a Turing machine that runs in polynomial time can write to at most polynomially many cells.
See main article: EXPSPACE. The complexity classes EXPSPACE and NEXPSPACE are the space analogues to EXPTIME and NEXPTIME. That is, EXPSPACE is the class of problems solvable in exponential space by a deterministic Turing machine and NEXPSPACE is the class of problems solvable in exponential space by a nondeterministic Turing machine. Or more formally,
$\mathsf{EXPSPACE}=\bigcup_{k\in\mathbb{N}}\mathsf{DSPACE}\big(2^{n^k}\big)$
$\mathsf{NEXPSPACE}=\bigcup_{k\in\mathbb{N}}\mathsf{NSPACE}\big(2^{n^k}\big)$
Savitch's theorem showed that EXPSPACE=NEXPSPACE. This class is extremely broad: it is known to be a strict superset of PSPACE, NP, and P, and is believed to be a strict superset of EXPTIME.
Complexity classes have a variety of closure properties. For example, decision classes may be closed under negation, disjunction, conjunction, or even under all Boolean operations. Moreover, they might also be closed under a variety of quantification schemes. P, for instance, is closed under all Boolean operations, and under quantification over polynomially sized domains. Closure properties can be helpful in separating classes—one possible route to separating two complexity classes is to find some closure property possessed by one class but not by the other.
Each class X that is not closed under negation has a complement class co-X, which consists of the complements of the languages contained in X (i.e. $\mathsf{co\text{-}X}=\{L \mid \overline{L}\in\mathsf{X}\}$).
Closure properties are one of the key reasons many complexity classes are defined in the way that they are. Take, for example, a problem that can be solved in $O(n)$ time (that is, in linear time) and one that can be solved in, at best, $O(n^{1000})$ time. Both of these problems are in P, yet the runtime of the second grows considerably faster than the runtime of the first as the input size increases. One might ask whether it would be better to define the class of "efficiently solvable" problems using some smaller polynomial bound, like $O(n^3)$, rather than all polynomials, which allows for such large discrepancies. It turns out, however, that the polynomials are the smallest class of functions containing the linear functions that is also closed under addition, multiplication, and composition (for instance, $O(n^3)\circ O(n^2)=O(n^6)$, which is still a polynomial, even though $O(n^6)>O(n^3)$). Since we would like composing one efficient algorithm with another efficient algorithm to still be considered efficient, P is defined using the full set of polynomials. Similar closure considerations motivate the use of bounds of the form $2^{n^k}$, rather than $O(c^n)$, in the definitions of the exponential classes: the former family is closed under composition with polynomials, while the latter is not.
See also: Reduction (complexity). Many complexity classes are defined using the concept of a reduction. A reduction is a transformation of one problem into another problem, i.e. a reduction takes inputs from one problem and transforms them into inputs of another problem. For instance, you can reduce ordinary base-10 addition $x+y$ to base-2 addition by converting $x$ and $y$ to binary notation. Formally, a problem $X$ reduces to a problem $Y$ if there exists a function $f$ such that for every $x\in\Sigma^*$, $x\in X$ if and only if $f(x)\in Y$.
Generally, reductions are used to capture the notion of a problem being at least as difficult as another problem. Thus we are generally interested in using a polynomial-time reduction, since any problem $X$ that can be efficiently reduced to another problem $Y$ is no more difficult than $Y$: if solving $Y$ is easy, then solving $X$ is easy too, since any instance of $X$ can first be transformed into an instance of $Y$ in polynomial time. Formally, a problem $X$ is polynomial-time reducible to a problem $Y$ if there exists a polynomial-time computable function $p$ such that for every $x\in\Sigma^*$, $x\in X$ if and only if $p(x)\in Y$.
Note that reductions can be defined in many different ways. Common reductions are Cook reductions, Karp reductions and Levin reductions, and can vary based on resource bounds, such as polynomial-time reductions and log-space reductions.
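The base-10 example above can be recast as a many-one (Karp-style) reduction, sketched below under illustrative assumptions: a toy string encoding of addition instances, with hypothetical language names ADD10 and ADD2 for true decimal and true binary equations.

```python
def to_binary(decimal_str: str) -> str:
    return bin(int(decimal_str))[2:]

def reduce_add10_to_add2(instance: str) -> str:
    """The reduction f: map a decimal-addition instance (e.g. "5+7=12")
    to a binary-addition instance ("101+111=1100").

    The transformation runs in polynomial time and preserves membership:
    the decimal equation holds iff the binary one does.  (Assumes
    nonnegative integers in this toy encoding.)
    """
    lhs, result = instance.split("=")
    x, y = lhs.split("+")
    return f"{to_binary(x)}+{to_binary(y)}={to_binary(result)}"

def in_add2(instance: str) -> bool:
    """Decide the target problem: is this a true binary addition?"""
    lhs, result = instance.split("=")
    x, y = lhs.split("+")
    return int(x, 2) + int(y, 2) == int(result, 2)

# x is in ADD10  iff  f(x) is in ADD2:
print(reduce_add10_to_add2("5+7=12"))            # 101+111=1100
print(in_add2(reduce_add10_to_add2("5+7=12")))   # True
print(in_add2(reduce_add10_to_add2("5+7=13")))   # False
```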
Reductions motivate the concept of a problem being hard for a complexity class. A problem $X$ is hard for a class of problems C if every problem in C can be polynomial-time reduced to $X$. Thus no problem in C is harder than $X$: an algorithm that solves $X$ can be combined with the reductions to solve every problem in C with at most polynomial overhead.
If a problem $X$ is hard for C and is also in C, then $X$ is said to be complete for C. This means that $X$ is the hardest problem in C (since many problems could be equally hard, it is more precise to say that $X$ is among the hardest problems in C).
Of particular importance is the class of NP-complete problems—the most difficult problems in NP. Because all problems in NP can be polynomial-time reduced to NP-complete problems, finding an NP-complete problem that can be solved in polynomial time would mean that P = NP.
See main article: Savitch's theorem. Savitch's theorem establishes the relationship between deterministic and nondeterministic space resources. It shows that if a nondeterministic Turing machine can solve a problem using $f(n)$ space, then a deterministic Turing machine can solve the same problem using $f(n)^2$ space, i.e. only quadratically more space. Formally, Savitch's theorem states that for every function $f(n)>n$,
$\mathsf{NSPACE}\big(f(n)\big)\subseteq\mathsf{DSPACE}\big(f(n)^2\big).$
Important corollaries of Savitch's theorem are that PSPACE = NPSPACE (since the square of a polynomial is still a polynomial) and EXPSPACE = NEXPSPACE (since the square of an exponential is still an exponential).
These relationships answer fundamental questions about the power of nondeterminism compared to determinism. Specifically, Savitch's theorem shows that any problem that a nondeterministic Turing machine can solve in polynomial space, a deterministic Turing machine can also solve in polynomial space. Similarly, any problem that a nondeterministic Turing machine can solve in exponential space, a deterministic Turing machine can also solve in exponential space.
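For concreteness, the PSPACE corollary can be spelled out as a short calculation (added here for clarity):
$\mathsf{NPSPACE}=\bigcup_{k\in\mathbb{N}}\mathsf{NSPACE}(n^k)\subseteq\bigcup_{k\in\mathbb{N}}\mathsf{DSPACE}\big((n^k)^2\big)=\bigcup_{k\in\mathbb{N}}\mathsf{DSPACE}(n^{2k})\subseteq\mathsf{PSPACE},$
and the reverse containment $\mathsf{PSPACE}\subseteq\mathsf{NPSPACE}$ is immediate, since every deterministic Turing machine is a special case of a nondeterministic one.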
See main article: Time hierarchy theorem and Space hierarchy theorem. By definition of DTIME, it follows that $\mathsf{DTIME}(n^{k_1})$ is contained in $\mathsf{DTIME}(n^{k_2})$ if $k_1\leq k_2$, since $O(n^{k_1})\subseteq O(n^{k_2})$ if $k_1\leq k_2$. However, this definition gives no indication of whether the inclusion is strict, that is, whether allotting a Turing machine more time strictly increases the set of problems it can solve. The time and space hierarchy theorems answer questions of this kind: roughly, they show that a sufficiently large increase in the time or space bound yields a strictly larger class.
The time hierarchy theorem states that, for a time-constructible function $f$, $\mathsf{DTIME}\big(f(n)\big)\subsetneq\mathsf{DTIME}\big(f(n)\cdot\log^2 f(n)\big).$
The space hierarchy theorem states that, for a space-constructible function $f$, $\mathsf{DSPACE}\big(f(n)\big)\subsetneq\mathsf{DSPACE}\big(f(n)\cdot\log f(n)\big).$
The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem establishes that P is strictly contained in EXPTIME, and the space hierarchy theorem establishes that L is strictly contained in PSPACE.
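For instance, the first of these separations can be derived as follows (a standard argument, included here for concreteness): every problem in P is in $\mathsf{DTIME}(2^n)$, and applying the time hierarchy theorem with $f(n)=2^n$ (for which $f(n)\cdot\log^2 f(n)=2^n\cdot n^2$) gives
$\mathsf{P}\subseteq\mathsf{DTIME}(2^n)\subsetneq\mathsf{DTIME}(2^n\cdot n^2)\subseteq\mathsf{EXPTIME},$
so some language in EXPTIME lies outside $\mathsf{DTIME}(2^n)$ and hence outside P.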
While deterministic and non-deterministic Turing machines are the most commonly used models of computation, many complexity classes are defined in terms of other computational models. In particular, probabilistic Turing machines give rise to randomized complexity classes, interactive proof systems define classes such as IP and AM, Boolean circuits underlie circuit complexity classes, and quantum Turing machines define quantum complexity classes. These are explained in greater detail below.
See main article: Randomized computation.
A number of important complexity classes are defined using the probabilistic Turing machine, a variant of the Turing machine that can toss random coins. These classes help to better describe the complexity of randomized algorithms.
A probabilistic Turing machine is similar to a deterministic Turing machine, except rather than following a single transition function (a set of rules for how to proceed at each step of the computation) it probabilistically selects between multiple transition functions at each step. The standard definition of a probabilistic Turing machine specifies two transition functions, so that the selection of transition function at each step resembles a coin flip. The randomness introduced at each step of the computation introduces the potential for error; that is, strings that the Turing machine is meant to accept may on some occasions be rejected and strings that the Turing machine is meant to reject may on some occasions be accepted. As a result, the complexity classes based on the probabilistic Turing machine are defined in large part around the amount of error that is allowed. Formally, they are defined using an error probability $\epsilon$: a probabilistic Turing machine $M$ is said to recognize a language $L$ with error probability $\epsilon$ if:
1. for every string $w$ in $L$, $\Pr[M \text{ accepts } w]\geq 1-\epsilon$
2. for every string $w$ not in $L$, $\Pr[M \text{ rejects } w]\geq 1-\epsilon$
The fundamental randomized time complexity classes are ZPP, RP, co-RP, BPP, and PP.
The strictest class is ZPP (zero-error probabilistic polynomial time), the class of problems solvable in polynomial time by a probabilistic Turing machine with error probability 0. Intuitively, this is the strictest class of probabilistic problems because it demands no error whatsoever.
A slightly looser class is RP (randomized polynomial time), which maintains no error for strings not in the language but allows bounded error for strings in the language. More formally, a language is in RP if there is a probabilistic polynomial-time Turing machine $M$ such that if a string is in the language then $M$ accepts it with probability at least 1/2, and if a string is not in the language then $M$ always rejects it. The class co-RP is defined similarly, but with the error allowances reversed: no error is permitted for strings in the language, while strings not in the language may be erroneously accepted with probability less than 1/2.
Loosening the error requirements further to allow for two-sided error yields the class BPP (bounded-error probabilistic polynomial time), the class of problems solvable in polynomial time by a probabilistic Turing machine with error probability less than 1/3 (for both strings in the language and not in the language). BPP is the most practically relevant of the probabilistic complexity classes—problems in BPP have efficient randomized algorithms that can be run quickly on real computers. BPP is also at the center of the important unsolved problem in computer science over whether P=BPP, which if true would mean that randomness does not increase the computational power of computers, i.e. any probabilistic Turing machine could be simulated by a deterministic Turing machine with at most polynomial slowdown.
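As a concrete illustration (a standard textbook example, not specific to this article), Freivalds' algorithm checks a claimed matrix product with one-sided bounded error; a few independent repetitions drive the error below 1/3, so the corresponding decision problem is in BPP (in fact in co-RP, since the error is one-sided).

```python
import random

def freivalds(A, B, C, trials: int = 2) -> bool:
    """Randomized check that A @ B == C for n x n matrices (lists of lists).

    Each trial picks a random 0/1 vector r and compares A(Br) with Cr in
    O(n^2) time -- much cheaper than computing A @ B outright.  If AB == C
    the test always accepts; if AB != C each trial wrongly accepts with
    probability at most 1/2, so `trials` independent runs give one-sided
    error at most 2**-trials.
    """
    n = len(A)
    mat_vec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if mat_vec(A, mat_vec(B, r)) != mat_vec(C, r):
            return False  # certain: AB != C
    return True  # probably AB == C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]                      # the true product
print(freivalds(A, B, C))                     # True
print(freivalds(A, B, [[19, 22], [43, 51]]))  # almost surely False
```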
The broadest class of efficiently-solvable probabilistic problems is PP (probabilistic polynomial time), the set of languages solvable by a probabilistic Turing machine in polynomial time with an error probability of less than 1/2 for all strings.
ZPP, RP and co-RP are all subsets of BPP, which in turn is a subset of PP. The reason for this is intuitive: the classes allowing zero error and only one-sided error are all contained within the class that allows two-sided error, and PP simply relaxes the error probability of BPP. ZPP relates to RP and co-RP in the following way:
$\mathsf{ZPP}=\mathsf{RP}\cap\mathsf{co\text{-}RP}$
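The containment $\mathsf{RP}\cap\mathsf{co\text{-}RP}\subseteq\mathsf{ZPP}$ has a simple algorithmic reading, sketched below with hypothetical callables standing in for the two machines: one kind of answer from each machine is always trustworthy, so alternating them yields a zero-error algorithm running in expected polynomial time.

```python
def zpp_decide(w, rp_run, corp_run):
    """Zero-error decision built from an RP and a co-RP algorithm
    (hypothetical randomized callables supplied by the caller).

    rp_run(w)   -> True only if w is certainly in the language
                   (an RP machine never accepts a non-member).
    corp_run(w) -> False only if w is certainly not in the language
                   (a co-RP machine never rejects a member).

    Each round resolves with probability >= 1/2, so the loop terminates
    after an expected O(1) number of rounds -- expected polynomial time
    overall, with zero probability of a wrong answer.
    """
    while True:
        if rp_run(w):
            return True   # trustworthy "yes"
        if not corp_run(w):
            return False  # trustworthy "no"
        # both answers were the untrustworthy kind; flip coins again
```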
Important randomized space complexity classes include BPL, RL, and RLP.
See main article: Interactive proof system. A number of complexity classes are defined using interactive proof systems. Interactive proofs generalize the proof-verification definition of the complexity class NP and yield insights into cryptography, approximation algorithms, and formal verification.
Interactive proof systems are abstract machines that model computation as the exchange of messages between two parties: a prover $P$ and a verifier $V$. The parties exchange messages, and an input string is accepted by the system if the verifier decides to accept the input on the basis of the messages it has received from the prover. The prover $P$ has unbounded computational resources and tries to convince the verifier that the input is in the language, while the verifier, which has bounded computational resources, must reliably reject inputs that are not in the language even when the prover tries to mislead it.
The class NP is a simple proof system in which the verifier is restricted to being a deterministic polynomial-time Turing machine and the procedure is restricted to one round (that is, the prover sends only a single, full proof—typically referred to as the certificate—to the verifier). Put another way, the class NP (the set of decision problems for which the problem instances, when the answer is "YES", have proofs verifiable in polynomial time by a deterministic Turing machine) is a proof system in which the proof is constructed by an unmentioned prover and the deterministic Turing machine is the verifier. For this reason, NP can also be called dIP (deterministic interactive proof), though it is rarely referred to as such.
It turns out that NP captures the full power of interactive proof systems with deterministic (polynomial-time) verifiers because it can be shown that for any proof system with a deterministic verifier it is never necessary to need more than a single round of messaging between the prover and the verifier. Interactive proof systems that provide greater computational power over standard complexity classes thus require probabilistic verifiers, which means that the verifier's questions to the prover are computed using probabilistic algorithms. As noted in the section above on randomized computation, probabilistic algorithms introduce error into the system, so complexity classes based on probabilistic proof systems are defined in terms of an error probability $\epsilon$.
The most general complexity class arising out of this characterization is the class IP (interactive polynomial time), which is the class of all problems solvable by an interactive proof system $(P,V)$, where the prover $P$ is all-powerful and the verifier $V$ is a probabilistic polynomial-time machine. Formally, a language $L$ is in IP if there is a verifier $V$ such that for every input string $w$:
1. if $w$ is in $L$, then there exists a prover $P$ such that $\Pr[V \text{ accepts } w \text{ after interacting with } P]\ge\tfrac{2}{3}$
2. if $w$ is not in $L$, then for every prover $P$, $\Pr[V \text{ accepts } w \text{ after interacting with } P]\le\tfrac{1}{3}$
An important feature of IP is that it equals PSPACE. In other words, any problem that can be solved by a polynomial-time interactive proof system can also be solved by a deterministic Turing machine with polynomial space resources, and vice versa.
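As a toy illustration of a private-coin interactive proof, the classic protocol for graph non-isomorphism can be sketched as follows (the brute-force prover is only feasible for tiny graphs, and the edge-set encoding is an illustrative choice): the verifier secretly picks one of the two graphs, scrambles it, and challenges the prover to say which graph it came from. If the graphs really are non-isomorphic, an all-powerful prover always answers correctly; if they are isomorphic, no prover can beat a coin flip, so a false claim is caught with probability 1/2 per round.

```python
import itertools, random

def permute(graph, perm):
    """Relabel a graph (frozenset of 2-element edge frozensets) by perm."""
    return frozenset(frozenset(perm[v] for v in e) for e in graph)

def isomorphic(g, h, n):
    """Brute-force isomorphism test (prover-side; exponential in n)."""
    return any(permute(g, p) == h for p in itertools.permutations(range(n)))

def verifier_rejects_claim(g0, g1, n, rounds=10):
    """Private-coin protocol for the claim 'g0 and g1 are NOT isomorphic'."""
    for _ in range(rounds):
        i = random.randint(0, 1)                      # private coin
        perm = list(range(n)); random.shuffle(perm)   # private coin
        challenge = permute((g0, g1)[i], perm)
        # All-powerful prover: answer whichever graph matches the challenge.
        answer = 0 if isomorphic(challenge, g0, n) else 1
        if answer != i:
            return True   # prover caught lying -> reject the claim
    return False          # accept: prover answered every challenge

# Triangle vs. path on 3 vertices: genuinely non-isomorphic.
tri  = frozenset({frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})})
path = frozenset({frozenset({0, 1}), frozenset({1, 2})})
print(verifier_rejects_claim(tri, path, 3))  # False: the claim survives
```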
A modification of the protocol for IP produces another important complexity class: AM (Arthur–Merlin protocol). In the definition of interactive proof systems used by IP, the prover was not able to see the coins utilized by the verifier in its probabilistic computation—it was only able to see the messages that the verifier produced with these coins. For this reason, the coins are called private random coins. The interactive proof system can be constrained so that the coins used by the verifier are public random coins; that is, the prover is able to see the coins. Formally, AM is defined as the class of languages with an interactive proof in which the verifier sends a random string to the prover, the prover responds with a message, and the verifier either accepts or rejects by applying a deterministic polynomial-time function to the message from the prover. AM can be generalized to AM[k], where k is the number of messages exchanged (so in the generalized form the standard AM defined above is AM[2]). However, it is the case that for all $k\geq 2$, $\mathsf{AM}[k]\subseteq\mathsf{IP}[k]$; moreover, $\mathsf{AM}[k]=\mathsf{AM}[2]$ for every constant $k\geq 2$, so additional rounds of public-coin interaction beyond two add no power.
Other complexity classes defined using interactive proof systems include MIP (multiprover interactive polynomial time) and QIP (quantum interactive polynomial time).
See main article: Circuit complexity. An alternative model of computation to the Turing machine is the Boolean circuit, a simplified model of the digital circuits used in modern computers. Not only does this model provide an intuitive connection between computation in theory and computation in practice, but it is also a natural model for non-uniform computation (computation in which different input sizes within the same problem use different algorithms).
Formally, a Boolean circuit $C$ is a directed acyclic graph in which edges represent wires (which carry the bit values 0 and 1), the input bits are represented by input vertices with no incoming edges, and all other vertices represent logic gates (generally the AND, OR, and NOT gates). One vertex is designated the output vertex. A circuit $C$ with $n$ input vertices computes the Boolean function $f_C:\{0,1\}^n\to\{0,1\}$: on input bits $x_1,x_2,\ldots,x_n$, the circuit outputs the bit $b=f_C(x_1,x_2,\ldots,x_n)$ at its output vertex. The circuit $C$ is said to compute the function $f_C$.
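For concreteness, a circuit given as a labeled DAG can be evaluated as in the following sketch (the dictionary representation is an illustrative assumption, not a standard encoding):

```python
def eval_circuit(circuit, output, inputs):
    """Evaluate a Boolean circuit given as a DAG.

    circuit: dict mapping vertex name -> ("AND"|"OR", left, right),
             ("NOT", child), or ("IN", i) for the i-th input vertex.
    Returns the bit carried by `output`, i.e. f_C(x_1, ..., x_n).
    """
    memo = {}
    def value(v):
        if v in memo:
            return memo[v]
        node = circuit[v]
        if node[0] == "IN":
            b = inputs[node[1]]
        elif node[0] == "NOT":
            b = 1 - value(node[1])
        else:
            l, r = value(node[1]), value(node[2])
            b = (l & r) if node[0] == "AND" else (l | r)
        memo[v] = b
        return b
    return value(output)

# XOR of two bits built from AND/OR/NOT gates:
xor = {
    "x0": ("IN", 0), "x1": ("IN", 1),
    "n0": ("NOT", "x0"), "n1": ("NOT", "x1"),
    "a":  ("AND", "x0", "n1"), "b": ("AND", "n0", "x1"),
    "out": ("OR", "a", "b"),
}
print([eval_circuit(xor, "out", [i, j]) for i in (0, 1) for j in (0, 1)])
# [0, 1, 1, 0]
```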
Any particular circuit has a fixed number of input vertices, so it can only act on inputs of that size. Languages (the formal representations of decision problems), however, contain strings of differing lengths, so languages cannot be fully captured by a single circuit (this contrasts with the Turing machine model, in which a language is fully described by a single Turing machine that can act on any input size). A language is thus represented by a circuit family. A circuit family is an infinite list of circuits $(C_0,C_1,C_2,\ldots)$, in which each circuit $C_n$ has $n$ input vertices. A language $L$ is decided by a circuit family if, for every string $w$, $w$ is in the language $L$ if and only if $C_n(w)=1$, where $n$ is the length of $w$. In other words, a string $w$ of length $n$ is in the language represented by the circuit family $(C_0,C_1,C_2,\ldots)$ if the circuit $C_n$ (the circuit with the same number of input vertices as there are characters in $w$) evaluates to 1 when $w$ is its input.
While complexity classes defined using Turing machines are described in terms of time complexity, circuit complexity classes are defined in terms of circuit size, the number of vertices in the circuit. The size complexity of a circuit family $(C_0,C_1,C_2,\ldots)$ is the function $f:\mathbb{N}\to\mathbb{N}$, where $f(n)$ is the circuit size of $C_n$. A polynomial-size circuit family, for instance, is one in which the function $f$ is a polynomial.
The complexity class P/poly is the set of languages that are decidable by polynomial-size circuit families. It turns out that there is a natural connection between circuit complexity and time complexity. Intuitively, a language with small time complexity (that is, one that requires relatively few sequential operations on a Turing machine) also has a small circuit complexity (that is, requires relatively few Boolean operations). Formally, it can be shown that if a language is in $\mathsf{DTIME}(t(n))$, where $t:\mathbb{N}\to\mathbb{N}$ is a time-constructible function, then it has circuits of size $O(t^2(n))$. It follows that $\mathsf{P}\subseteq\mathsf{P/poly}$. In fact, the containment is strict, $\mathsf{P}\subsetneq\mathsf{P/poly}$: P/poly even contains undecidable languages, since every unary language, decidable or not, has a polynomial-size circuit family (each $C_n$ can simply have the correct constant answer hardwired).
P/poly has a number of properties that make it highly useful in the study of the relationships between complexity classes. In particular, it is helpful in investigating problems related to P versus NP. For example, if there is any language in NP that is not in P/poly, then $\mathsf{P}\neq\mathsf{NP}$. P/poly is also helpful in investigating properties of the polynomial hierarchy: if $\mathsf{NP}\subseteq\mathsf{P/poly}$, then the polynomial hierarchy collapses to its second level, $\Sigma_2^{\mathsf{P}}$ (this is the Karp–Lipton theorem).
Two subclasses of P/poly that have interesting properties in their own right are NC and AC. These classes are defined not only in terms of their circuit size but also in terms of their depth. The depth of a circuit is the length of the longest directed path from an input node to the output node. The class NC is the set of languages that can be solved by circuit families that are restricted not only to having polynomial size but also to having polylogarithmic depth. The class AC is defined similarly to NC; however, gates are allowed to have unbounded fan-in (that is, the AND and OR gates can be applied to more than two bits). NC is a notable class because it can be equivalently defined as the class of languages that have efficient parallel algorithms.
The classes BQP and QMA, which are of key importance in quantum information science, are defined using quantum Turing machines.
While most complexity classes studied by computer scientists are sets of decision problems, there are also a number of complexity classes defined in terms of other types of problems. In particular, there are complexity classes consisting of counting problems, function problems, and promise problems. These are explained in greater detail below.
See main article: Counting problem (complexity). A counting problem asks not only whether a solution exists (as with a decision problem), but how many solutions exist. For example, the decision problem $\mathtt{CYCLE}$ asks whether a graph $G$ has a simple cycle; the corresponding counting problem $\#\mathtt{CYCLE}$ asks how many simple cycles $G$ has.
Thus, whereas decision problems are represented mathematically as formal languages, counting problems are represented mathematically as functions: a counting problem is formalized as a function $f:\{0,1\}^*\to\mathbb{N}$ such that for every input $w\in\{0,1\}^*$, $f(w)$ is the number of solutions for $w$. For example, $\#\mathtt{CYCLE}$ is formalized as the function that, for every encoding of a graph $G\in\{0,1\}^*$, outputs $f(G)$, the number of simple cycles in $G$.
Counting problems arise in a number of fields, including statistical estimation, statistical physics, network design, and economics.
See main article: ♯P.
#P is the set of all functions $f:\{0,1\}^*\to\mathbb{N}$ for which there is a polynomial-time nondeterministic Turing machine $M$ such that for every $w\in\{0,1\}^*$, $f(w)$ equals the number of accepting branches in $M$'s computation tree on $w$.
And just as NP can be defined both in terms of nondeterminism and in terms of a verifier (i.e. as an interactive proof system), so too can #P be equivalently defined in terms of a verifier. Recall that a decision problem is in NP if there exists a polynomial-time checkable certificate to a given problem instance—that is, NP asks whether there exists a proof of membership (a certificate) for the input that can be checked for correctness in polynomial time. The class #P asks how many such certificates exist. In this context, #P is defined as follows:
#P is the set of functions $f:\{0,1\}^*\to\mathbb{N}$ for which there exists a polynomial $p:\mathbb{N}\to\mathbb{N}$ and a polynomial-time deterministic Turing machine $V$ (the verifier), such that for every $w\in\{0,1\}^*$, $f(w)=\big|\{c\in\{0,1\}^{p(|w|)} : V(w,c)=1\}\big|$. In other words, $f(w)$ equals the size of the set containing all of the polynomial-size certificates for $w$.
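Tying this back to the SUBSET-SUM verifier sketched earlier, a #P function simply counts accepted certificates. The brute-force enumeration below is of course exponential; the point is the definition of $f$, not an efficient algorithm (the bit-vector certificate encoding is an illustrative choice).

```python
from itertools import product

def count_certificates(w, verifier, cert_len):
    """Brute-force evaluation of a #P function: f(w) = |{c : V(w, c) = 1}|.

    Enumerates all 2**cert_len certificates, so this takes exponential
    time -- it spells out the definition, nothing more.
    """
    return sum(1 for c in product((0, 1), repeat=cert_len)
               if verifier(w, c))

# #SUBSET-SUM: certificates are bit-vectors selecting a subset.
def verify(w, c):
    numbers, target = w
    return sum(x for x, bit in zip(numbers, c) if bit) == target

w = ([3, 34, 4, 12, 5, 2], 9)
print(count_certificates(w, verify, len(w[0])))  # 2: {3, 4, 2} and {4, 5}
```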
A function problem is a computational problem in which a single output (of a total function $f:A\to B$) is expected for every input, but the output is more complex than that of a decision problem; that is, the output is not simply "yes" or "no". Formally, a function problem is defined as a relation $R$ over an alphabet $\Sigma$, with $R\subseteq\Sigma^*\times\Sigma^*$. An algorithm solves the function problem $f$ if, for every input $x$ for which there exists a $y$ with $(x,y)\in R$, it outputs one such $y$; this output on input $x$ is denoted $f(x)$ for every $x\in\Sigma^*$.
An important function complexity class is FP, the class of efficiently solvable functions. More specifically, FP is the set of function problems that can be solved by a deterministic Turing machine in polynomial time. FP can be thought of as the function problem equivalent of P. Importantly, FP provides some insight into both counting problems and P versus NP. If #P=FP, then the functions that determine the number of certificates for problems in NP are efficiently solvable. And since computing the number of certificates is at least as hard as determining whether a certificate exists, it must follow that if #P=FP then P=NP (it is not known whether this holds in the reverse, i.e. whether P=NP implies #P=FP).
Just as FP is the function problem equivalent of P, FNP is the function problem equivalent of NP. Importantly, FP=FNP if and only if P=NP.
See main article: Promise problem. Promise problems are a generalization of decision problems in which the input to a problem is guaranteed ("promised") to be from a particular subset of all possible inputs. Recall that with a decision problem $L\subseteq\{0,1\}^*$, an algorithm $M$ for $L$ must act (correctly) on every string $w\in\{0,1\}^*$; that is, $M$ must give the correct answer on the whole of $\{0,1\}^*$. A promise problem relaxes this requirement: the algorithm need only be correct on inputs that satisfy the promise.
Specifically, a promise problem is defined as a pair of non-intersecting sets $(\Pi_{\text{ACCEPT}},\Pi_{\text{REJECT}})$, where $\Pi_{\text{ACCEPT}}\subseteq\{0,1\}^*$ is the set of inputs that are to be accepted and $\Pi_{\text{REJECT}}\subseteq\{0,1\}^*$ is the set of inputs that are to be rejected. The input to an algorithm $M$ for the promise problem $(\Pi_{\text{ACCEPT}},\Pi_{\text{REJECT}})$ is promised to lie in $\Pi_{\text{ACCEPT}}\cup\Pi_{\text{REJECT}}$; on inputs outside $\Pi_{\text{ACCEPT}}\cup\Pi_{\text{REJECT}}$ the behavior of $M$ is unconstrained. The algorithm must accept every string in $\Pi_{\text{ACCEPT}}$ and reject every string in $\Pi_{\text{REJECT}}$, and the two sets must not overlap, i.e. $\Pi_{\text{ACCEPT}}\cap\Pi_{\text{REJECT}}=\emptyset$.
Within this formulation, it can be seen that decision problems are just the subset of promise problems with the trivial promise $\Pi_{\text{ACCEPT}}\cup\Pi_{\text{REJECT}}=\{0,1\}^*$. A decision problem $L$ can thus be represented as the promise problem in which $\Pi_{\text{ACCEPT}}=L$ and $\Pi_{\text{REJECT}}=\{0,1\}^*\setminus L$.
Promise problems make for a more natural formulation of many computational problems. For instance, a computational problem could be something like "given a planar graph, determine whether or not..." This is often stated as a decision problem, where it is assumed that there is some translation schema that takes every string $s\in\{0,1\}^*$ to a planar graph. However, it is more straightforward to formulate it as a promise problem in which the input is promised to be a planar graph.
Promise problems provide an alternate definition for standard complexity classes of decision problems. P, for instance, can be defined as a promise problem:
P is the class of promise problems that are solvable in deterministic polynomial time. That is, the promise problem $(\Pi_{\text{ACCEPT}},\Pi_{\text{REJECT}})$ is in P if there exists a polynomial-time algorithm $M$ such that:
1. for every $x\in\Pi_{\text{ACCEPT}}$, $M(x)=1$
2. for every $x\in\Pi_{\text{REJECT}}$, $M(x)=0$
Classes of decision problems—that is, classes of problems defined as formal languages—thus translate naturally to promise problems, where a language $L$ in the class is represented as the promise problem with $\Pi_{\text{ACCEPT}}=L$ and $\Pi_{\text{REJECT}}=\{0,1\}^*\setminus L$.
Formulating many basic complexity classes like P as promise problems provides little additional insight into their nature. However, there are some complexity classes for which the promise-problem formulation has been useful to computer scientists. Promise problems have, for instance, played a key role in the study of SZK (statistical zero-knowledge).
The following table shows some of the classes of problems that are considered in complexity theory. If class X is a strict subset of Y, then X is shown below Y with a dark line connecting them. If X is a subset, but it is unknown whether they are equal sets, then the line is lighter and dotted. Technically, the breakdown into decidable and undecidable pertains more to the study of computability theory, but is useful for putting the complexity classes in perspective.