In mathematics and theoretical computer science, entropy compression is an information-theoretic method for proving that a random process terminates, originally used by Robin Moser to prove an algorithmic version of the Lovász local lemma.[1][2]
To use this method, one proves that the history of the given process can be recorded in an efficient way, such that the state of the process at any past time can be recovered from the current state and this record, and such that the amount of additional information that is recorded at each step of the process is (on average) less than the amount of new information randomly generated at each step. The resulting growing discrepancy in total information content can never exceed the fixed amount of information in the current state, from which it follows that the process must eventually terminate. This principle can be formalized and made rigorous using Kolmogorov complexity.[3]
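The counting at the heart of this argument can be summarized as follows; the symbols used here ($s$ for the number of bits in the state plus any fixed overhead of the record, $c$ for the record cost per step, $k$ for the fresh random bits per step) are illustrative rather than taken from the sources:

\[
\underbrace{fk}_{\text{random bits after } f \text{ steps}} \;\le\; \underbrace{s + fc}_{\text{bits needed to reconstruct them}},
\qquad c < k \;\Longrightarrow\; f \le \frac{s}{k - c},
\]

since, for most random strings, no description shorter than the string itself can reproduce it; hence the process must halt within a bounded number of steps.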
An example given by both Fortnow[3] and Tao[4] concerns the Boolean satisfiability problem for Boolean formulas in conjunctive normal form, with uniform clause size. These problems can be parameterized by two numbers $(k,t)$, where $k$ is the number of variables per clause and $t$ is the maximum number of clauses in which any variable can appear. If a truth assignment is chosen uniformly at random, then each clause is unsatisfied with probability $2^{-k}$, and each clause shares variables with at most $r = k(t-1)$ other clauses. Therefore, by the Lovász local lemma, if $t$ is small enough that $r < 2^k/e$ (where $e$ is the base of the natural logarithm), a solution always exists.
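For reference, this is the symmetric form of the local lemma with illustrative symbols $p$ for the per-clause failure probability and $d$ for the dependency degree; the calculation below is the standard one, not taken verbatim from the cited sources:

\[
p = 2^{-k}, \qquad d = r = k(t-1), \qquad e\,p\,(d+1) \le 1 \;\Longleftrightarrow\; r + 1 \le \frac{2^k}{e},
\]

so with positive probability no clause is unsatisfied whenever $r$ falls below roughly $2^k/e$.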
The following algorithm can be shown, using entropy compression, to find a satisfying truth assignment whenever $r$ is small enough (the analysis below requires $r < 2^k - 2$): choose a random truth assignment, and while there exists an unsatisfied clause $C$, call a recursive subroutine fix with $C$ as its argument. The fix subroutine replaces the variables of $C$ by new random values and then, while there exists a clause $C'$ (possibly $C$ itself) that shares a variable with $C$ and is unsatisfied, calls itself recursively on $C'$.
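The following Python sketch shows one way the procedure just described could look; the clause representation, the helper names, and the use of Python recursion are choices made here for illustration, not taken from the sources:

```python
import random

def moser_solve(num_vars, clauses):
    """Sketch of the resampling algorithm for k-CNF satisfiability.
    Clauses are lists of signed integers (DIMACS style): literal v > 0
    means variable v must be true, v < 0 means it must be false."""
    assignment = {v: bool(random.getrandbits(1)) for v in range(1, num_vars + 1)}

    def unsat(c):
        # A clause is unsatisfied exactly when every one of its literals is false.
        return all(assignment[abs(lit)] != (lit > 0) for lit in c)

    # Neighbors of clause i: clauses sharing a variable, including i itself.
    variables = [set(abs(lit) for lit in c) for c in clauses]
    nbrs = [[j for j in range(len(clauses)) if variables[i] & variables[j]]
            for i in range(len(clauses))]

    def fix(i):
        # Resample: replace the k variables of clause i with fresh random bits.
        for v in variables[i]:
            assignment[v] = bool(random.getrandbits(1))
        # While some clause sharing a variable with clause i (possibly i
        # itself) is unsatisfied, recurse on it.
        while True:
            bad = next((j for j in nbrs[i] if unsat(clauses[j])), None)
            if bad is None:
                return
            fix(bad)

    # Outer loop: repeatedly fix an arbitrary unsatisfied clause.
    while True:
        bad = next((i for i, c in enumerate(clauses) if unsat(c)), None)
        if bad is None:
            return assignment
        fix(bad)
```

For instance, `moser_solve(3, [[1, 2, 3], [-1, 2, 3]])` returns a satisfying assignment for these two clauses. Python's recursion limit makes this sketch unsuitable for large instances; it is meant only to mirror the description above.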
This algorithm cannot terminate unless the input formula is satisfiable, so a proof that it terminates is also a proof that a solution exists. Each iteration of the outer loop reduces the number of unsatisfied clauses (it causes $C$ to become satisfied without causing any other clause to become unsatisfied), so the key question is whether the fix subroutine terminates or whether it can get into an infinite recursion.[3]

To answer this question, consider on the one hand the number of random bits generated in each iteration of the fix subroutine ($k$ bits per iteration) and, on the other hand, the number of bits needed to record the history of the algorithm in such a way that any past state can be regenerated. To record this history, we may store the current truth assignment ($n$ bits, where $n$ is the number of variables), the sequence of clauses given as arguments to each outermost call to the fix subroutine (at most $m \log_2 m$ bits, where $m$ is the number of clauses), and a sequence of records indicating, for each call to fix, either that the call returned or that it made a recursive call, and which of the $r+1$ clauses sharing variables with the current clause was the argument of that call. Each record thus has one of $r+2$ possible values and can be stored in $\log_2(r+2)$ bits.
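In code, the total size of this record could be tallied as follows; this is a self-contained sketch, and the function name and parameters are illustrative:

```python
import math

def history_record_bits(n, m, r, outer_calls, fix_calls):
    """Upper bound, in bits, on the history record described above:
    the current truth assignment, the arguments of the outermost calls
    to fix, and one (r + 2)-valued record per call to fix."""
    return (n                                  # current truth assignment
            + outer_calls * math.log2(m)       # outermost call arguments
            + fix_calls * math.log2(r + 2))    # per-call records
```

Because there are at most $m$ outermost calls, the middle term is at most $m \log_2 m$, matching the bound in the text.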
This information is enough to recover the entire sequence of calls to fix, including the identity of the clause given as an argument to each call. To do so, progress forward through the call sequence, using the stored outermost call arguments and the stored records to infer the argument of each call in turn. Once the sequence of recursive calls and their arguments has been recovered, the truth assignments at each stage of this process can also be recovered from the same information. To do so, start from the final truth assignment and then progress backwards through the sequence of randomly reassigned clauses, using the fact that each clause was unsatisfied immediately before its random reassignment to uniquely determine the values of all of its variables at that point. Thus, after $f$ calls to fix, the algorithm will have generated $fk$ random bits, but its entire history (including the sequence of generated bits) can be recovered from a record of only $m \log_2 m + n + f \log_2(r+2)$ bits. Since random bits cannot (for most random strings) be described by a record shorter than their own length, we must have $fk \le m \log_2 m + n + f \log_2(r+2)$, so $f \le (m \log_2 m + n)/(k - \log_2(r+2))$. It follows that, when $r$ is small enough that $\log_2(r+2) < k$ (that is, $r < 2^k - 2$), the fix subroutine can only perform $O(m \log_2 m + n)$ recursive calls over the course of the whole algorithm.[3]
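The backward recovery step can be made concrete in a few lines; here is an illustrative sketch (the function name and the DIMACS-style clause representation are choices made here):

```python
def recover_assignments(final_assignment, resampled_clauses):
    """Given the final truth assignment and the ordered list of clauses
    that fix resampled, reconstruct the truth assignment before each
    resampling.  Immediately before being resampled, a clause was
    unsatisfied, i.e. every literal was false, which uniquely determines
    its variables; all other variables were unchanged."""
    a = dict(final_assignment)
    history = [dict(a)]                 # last entry: final assignment
    for clause in reversed(resampled_clauses):
        for lit in clause:
            # A false literal means: variable is true iff the literal is negated.
            a[abs(lit)] = (lit < 0)
        history.append(dict(a))
    history.reverse()                   # history[0] is the initial assignment
    return history
```

Comparing consecutive entries of the result reveals the random values drawn at each resampling, which is how the record reproduces all $fk$ generated bits.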
The name "entropy compression" was given to this method in a blog posting by Terence Tao[4] and has since been used for it by other researchers.[5] [6] [7]
Moser's original version of the algorithmic Lovász local lemma, using this method, achieved weaker bounds than the classical Lovász local lemma, which had been formulated as an existence theorem without a constructive method for finding the object whose existence it proves. Later, Moser and Gábor Tardos used the same method to prove a version of the algorithmic Lovász local lemma that matches the bounds of the original lemma.[8]
The method has also been applied to problems for which it yields stronger bounds than the local lemma provides. For example, in the problem of acyclic edge coloring of graphs with maximum degree $\Delta$, the local lemma was first used to prove that $64\Delta$ colors always suffice; this was later improved to $9.62\Delta$, and an argument based on entropy compression shows that $4\Delta - 4$ colors suffice.