The PCP theorem should not be confused with the Post correspondence problem.
In computational complexity theory, the PCP theorem (also known as the PCP characterization theorem) states that every decision problem in the NP complexity class has probabilistically checkable proofs (proofs that can be checked by a randomized algorithm) of constant query complexity and logarithmic randomness complexity (uses a logarithmic number of random bits).
The PCP theorem says that for some universal constant K, for every n, any mathematical proof for a statement of length n can be rewritten as a different proof of length poly(n) that is formally verifiable with 99% accuracy by a randomized algorithm that inspects only K letters of that proof.
The PCP theorem is the cornerstone of the theory of computational hardness of approximation, which investigates the inherent difficulty in designing efficient approximation algorithms for various optimization problems. It has been described by Ingo Wegener as "the most important result in complexity theory since Cook's theorem"[1] and by Oded Goldreich as "a culmination of a sequence of impressive works […] rich in innovative ideas".[2]
The PCP theorem states that

NP = PCP[''O''(log ''n''), ''O''(1)],

where PCP[''r''(''n''), ''q''(''n'')] is the class of problems for which a probabilistically checkable proof of a solution can be given, such that the proof can be checked in polynomial time using r(n) bits of randomness and by reading q(n) bits of the proof, correct proofs are always accepted, and incorrect proofs are rejected with probability at least 1/2. Here n is the length in bits of the description of a problem instance. Note further that the verification algorithm is non-adaptive: the choice of bits of the proof to check depends only on the random bits and the description of the problem instance, not on the actual bits of the proof.
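The interface of such a verifier can be sketched in code. The following is an illustrative toy, not a real PCP construction; the names `verify`, `queries`, and `predicate` are hypothetical. It shows the two features just described: non-adaptivity (the positions to read are fixed from the random bits before any proof bit is inspected), and the fact that a check rejecting bad proofs with probability at least 1/2 can be repeated a constant number of times to drive the error probability down (t independent trials leave error at most 2^-t).

```python
import random

def verify(proof, queries, predicate, num_random_bits, trials=1):
    """Toy non-adaptive probabilistic verifier (hypothetical interface).

    proof           -- the proof string, a sequence of 0/1 bits
    queries(r)      -- maps a random string r to the positions to read
    predicate(r, b) -- decides acceptance from r and the bits b that were read
    """
    for _ in range(trials):
        r = tuple(random.randint(0, 1) for _ in range(num_random_bits))
        positions = queries(r)   # chosen from r alone, before reading the proof
        bits = tuple(proof[i] for i in positions)
        if not predicate(r, bits):
            return False         # reject as soon as one check fails
    return True
```

As a toy usage, take a length-8 proof claimed to be constant: `queries` reads position 0 and one random other position, and `predicate` checks that the two bits agree. An all-zero proof is always accepted, while a proof that disagrees with position 0 almost everywhere is rejected with high probability per trial, so a constant number of trials suffices.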
An alternative formulation of the PCP theorem states that the maximum fraction of satisfiable constraints of a constraint satisfaction problem is NP-hard to approximate within some constant factor.[3]
Formally, for some constants q and α < 1, the following promise problem (''L''<sub>yes</sub>, ''L''<sub>no</sub>) is an NP-hard decision problem:

''L''<sub>yes</sub> = {Φ : all of Φ's constraints are simultaneously satisfiable}
''L''<sub>no</sub> = {Φ : every assignment satisfies fewer than an α fraction of Φ's constraints},

where Φ is a constraint satisfaction problem (CSP) over a Boolean alphabet with at most q variables per constraint.
The connection to the class PCP mentioned above can be seen by noticing that checking a constant number q of bits in a proof amounts to evaluating a constraint in q Boolean variables on those bits of the proof. Since the verification algorithm uses O(log n) bits of randomness, it can be represented as a CSP as described above with poly(n) constraints. The other characterization of the PCP theorem then guarantees the promise condition with α = 1/2: if the NP problem's answer is yes, then every constraint (which corresponds to a particular value for the random bits) has a satisfying assignment (an acceptable proof); otherwise, any proof should be rejected with probability at least 1/2, which means any assignment must satisfy fewer than 1/2 of the constraints (which means it will be accepted with probability lower than 1/2). Therefore, an algorithm for the promise problem would be able to solve the underlying NP problem, and hence the promise problem must be NP-hard.
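This unrolling of a verifier into a CSP can be made concrete at toy scale. The sketch below uses hypothetical helper names and brute-force enumeration, feasible only for tiny instances: one constraint is produced per possible random string, and the maximum fraction of simultaneously satisfiable constraints is found by trying every proof.

```python
from itertools import product

def verifier_to_csp(queries, predicate, num_random_bits):
    """One constraint per random string r: (positions read, acceptance check)."""
    constraints = []
    for r in product((0, 1), repeat=num_random_bits):
        # bind r now so each constraint remembers its own random string
        constraints.append((queries(r), lambda bits, r=r: predicate(r, bits)))
    return constraints

def max_satisfiable_fraction(constraints, proof_length):
    """Brute-force the best proof; only feasible for toy proof lengths."""
    best = 0.0
    for proof in product((0, 1), repeat=proof_length):
        satisfied = sum(
            check(tuple(proof[i] for i in pos)) for pos, check in constraints
        )
        best = max(best, satisfied / len(constraints))
    return best
```

For the toy equality verifier (compare position 0 with a random position), an all-equal proof satisfies every constraint, so the maximum fraction is 1; a verifier in the "no" case of the promise problem would instead have maximum fraction below α for every proof.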
As a consequence of this theorem, it can be shown that the solutions to many natural optimization problems including maximum Boolean formula satisfiability, maximum independent set in graphs, and the shortest vector problem for lattices cannot be approximated efficiently unless P = NP. This can be done by reducing the problem of approximating a solution to such problems to a promise problem of the above form. These results are sometimes also called PCP theorems because they can be viewed as probabilistically checkable proofs for NP with some additional structure.
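The core of such a reduction is a short case analysis, sketched below. Here `approx_max_csp` is a stand-in for a hypothetical polynomial-time algorithm whose approximation ratio is strictly better than α; the sketch records only the decision logic, not the construction of the CSP instance. On a "yes" instance the optimum is 1, so the algorithm's output exceeds α; on a "no" instance every assignment satisfies fewer than an α fraction of constraints, so the output is below α.

```python
def decide_promise_instance(csp, approx_max_csp, alpha):
    """Distinguish the two promise cases using a (hypothetical) approximation
    algorithm whose ratio exceeds alpha.

    Yes-case: optimum is 1, so value >= ratio > alpha.
    No-case:  value <= optimum < alpha.
    """
    value = approx_max_csp(csp)
    return value > alpha
```

Since the promise problem is NP-hard, the existence of such an approximation algorithm would imply P = NP, which is exactly the inapproximability conclusion.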
A proof of a weaker result is given in one of the lectures of Dexter Kozen.[4]
The PCP theorem is the culmination of a long line of work on interactive proofs and probabilistically checkable proofs. The first theorem relating standard proofs and probabilistically checkable proofs is the statement that NEXP ⊆ PCP[poly(''n''), poly(''n'')], proved by Babai, Fortnow, and Lund in 1990.
The notation PCP<sub>''c''(''n''), ''s''(''n'')</sub>[''r''(''n''), ''q''(''n'')] is explained at probabilistically checkable proof. It denotes a family of complexity classes parameterized by completeness ''c''(''n''), soundness ''s''(''n''), randomness complexity ''r''(''n''), and query complexity ''q''(''n'').
The name of this theorem (the "PCP theorem") probably comes either from "PCP" meaning "probabilistically checkable proof", or from the notation mentioned above (or both).
Subsequently, the methods used in this work were extended by Babai, Lance Fortnow, Levin, and Szegedy in 1991, by Feige, Goldwasser, Lovász, Safra, and Szegedy (1991), and by Arora and Safra in 1992, yielding a proof of the PCP theorem by Arora, Lund, Motwani, Sudan, and Szegedy in 1998.
The 2001 Gödel Prize was awarded to Sanjeev Arora, Uriel Feige, Shafi Goldwasser, Carsten Lund, László Lovász, Rajeev Motwani, Shmuel Safra, Madhu Sudan, and Mario Szegedy for work on the PCP theorem and its connection to hardness of approximation.
In 2005 Irit Dinur discovered a significantly simpler proof of the PCP theorem, using expander graphs.[5] She received the 2019 Gödel Prize for this.[6]
In 2012, Thomas Vidick and Tsuyoshi Ito published a result[7] that showed a "strong limitation on the ability of entangled provers to collude in a multiplayer game". This could be a step toward proving the quantum analogue of the PCP theorem, since when the result[7] was reported in the media,[8] professor Dorit Aharonov called it "the quantum analogue of an earlier paper on multiprover interactive proofs" that "basically led to the PCP theorem".[9]
In 2018, Thomas Vidick and Anand Natarajan proved[10] a games variant of the quantum PCP theorem under randomized reductions. It states that QMA ⊆ MIP*[log(''n''), 1, 1/2], where MIP*[''f''(''n''), ''c'', ''s''] is a complexity class of multi-prover quantum interactive proof systems with ''f''(''n'')-bit classical communication, completeness ''c'', and soundness ''s''. They also showed that the Hamiltonian version of the quantum PCP conjecture, namely that the local Hamiltonian problem with constant promise gap is QMA-hard, implies the games quantum PCP theorem.
The NLTS (no low-energy trivial states) conjecture was a fundamental unresolved obstacle and precursor to a quantum analogue of the PCP theorem.[11] It was proven in 2022 by Anurag Anshu, Nikolas Breuckmann, and Chinmay Nirkhe.[12]