LowerUnits explained

In proof compression, LowerUnits (LU) is an algorithm for compressing propositional resolution proofs. The main idea of LowerUnits is to exploit the following fact:[1]

Theorem: Let φ be a potentially redundant proof, and η be the redundant node. If η’s clause is a unit clause, then φ is redundant.

The algorithm targets exactly the class of global redundancy stemming from multiple resolutions with unit clauses. The algorithm takes its name from the fact that, when this rewriting is done and the resulting proof is displayed as a DAG (directed acyclic graph), the unit node η appears lower (i.e., closer to the root) than it used to appear in the original proof.

A naive implementation exploiting this theorem would require the proof to be traversed and fixed after each unit node is lowered. It is possible, however, to do better by first collecting and removing all the unit nodes in a single traversal, and afterwards fixing the whole proof in a single second traversal. Finally, the collected and fixed unit nodes have to be reinserted at the bottom of the proof.

Care must be taken with cases when a unit node η′ occurs above in the subproof that derives another unit node η. In such cases, η depends on η′. Let ℓ be the single literal of the unit clause of η′. Then any occurrence of ℓ̄ (the dual of ℓ) in the subproof above η will not be cancelled by resolution inferences with η′ anymore. Consequently, ℓ̄ will be propagated downwards when the proof is fixed and will appear in the clause of η. Difficulties with such dependencies can be easily avoided if we reinsert the upper unit node η′ after reinserting the unit node η (i.e. after reinsertion, η′ must appear below η, to cancel the extra literal ℓ̄ from η’s clause). This can be ensured by collecting the unit nodes in a queue during a bottom-up traversal of the proof and reinserting them in the order they were queued.
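To see why this order matters, here is a minimal sketch in Python (the clause contents, the "~a" literal encoding and the helper names are illustrative assumptions, not taken from the paper). The original conclusion is assumed to be the empty clause; the fixed clause of η has picked up the extra literal ~a, and the upper unit is η′ = {a}. Resolving in queue order (η first, then η′) cancels the extra literal, whereas the reverse order would leave ~a in the final clause.

# Hypothetical mini-example; literals are strings, "~a" denotes the dual of "a".
def dual(lit: str) -> str:
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(left: frozenset, right: frozenset, pivot: str) -> frozenset:
    """Resolvent on `pivot`: pivot occurs in `left`, its dual in `right`."""
    return (left - {pivot}) | (right - {dual(pivot)})

eta_fixed = frozenset({"~a", "b"})      # η after fixing: the extra literal ~a appears
eta_prime = frozenset({"a"})            # η′, the upper unit
kappa_prime = frozenset({"~a", "~b"})   # κ′, the root clause of the fixed proof

step1 = resolve(eta_fixed, kappa_prime, "b")   # -> frozenset({'~a'})
step2 = resolve(eta_prime, step1, "a")         # -> frozenset(): the empty clause is recovered
print(step1, step2)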

The algorithm for fixing a proof containing many roots performs a top-down traversal of the proof, recomputing the resolvents and replacing broken nodes (e.g. nodes having deletedNodeMarker as one of their parents) by their surviving parent (i.e. the other parent, in case one parent was deletedNodeMarker).
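A minimal sketch of such a fixing pass, assuming a hypothetical Node representation (clause as a frozenset of string literals, two parent links and a pivot) and a sentinel object standing in for deletedNodeMarker; the recursion repairs each node only after its parents, which gives the effect of a top-down pass over the DAG, and memoizes results so shared subproofs are fixed only once.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    # Hypothetical proof-node layout, not the paper's data structure.
    clause: frozenset                  # e.g. frozenset({"a", "~b"})
    left: Optional["Node"] = None      # parent whose clause contains the pivot
    right: Optional["Node"] = None     # parent whose clause contains the pivot's dual
    pivot: Optional[str] = None

DELETED = Node(frozenset())            # stands in for deletedNodeMarker

def dual(lit: str) -> str:
    return lit[1:] if lit.startswith("~") else "~" + lit

def fix(node: Node, memo=None) -> Node:
    """Recompute resolvents; replace broken nodes by their surviving parent."""
    if memo is None:
        memo = {}
    if node.left is None:              # input clause (or the deletion marker itself)
        return node
    if id(node) in memo:
        return memo[id(node)]
    left, right = fix(node.left, memo), fix(node.right, memo)
    if left is DELETED:                # broken node: keep the surviving parent
        fixed = right
    elif right is DELETED:
        fixed = left
    elif node.pivot not in left.clause:        # pivot vanished: this resolution is obsolete
        fixed = left
    elif dual(node.pivot) not in right.clause:
        fixed = right
    else:                              # recompute the resolvent on the original pivot
        fixed = Node((left.clause - {node.pivot}) | (right.clause - {dual(node.pivot)}),
                     left, right, node.pivot)
    memo[id(node)] = fixed
    return fixed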

When unit nodes are collected and removed from a proof of a clause κ and the proof is fixed, the clause κ′ in the root node of the new proof is not equal to κ anymore, but contains (some of) the duals of the literals of the unit clauses that have been removed from the proof. The reinsertion of unit nodes at the bottom of the proof resolves κ′ with the clauses of (some of) the collected unit nodes, in order to obtain a proof of κ again.

Algorithm

General structure of the algorithm

Input: A proof ψ
Output: A proof ψ′ with no global redundancy stemming from unit redundant nodes

(unitsQueue, ψ_b) ← collectUnits(ψ);
ψ_f ← fix(ψ_b);
fixedUnitsQueue ← fix(unitsQueue);
ψ′ ← reinsertUnits(ψ_f, fixedUnitsQueue);
return ψ′;
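Read as code, this is just three phases wired together. The following Python sketch is hypothetical (the function names and signatures are assumptions, and the phase functions are taken as parameters rather than defined here), but it mirrors the control flow of the pseudocode above.

from collections import deque

def lower_units(proof, collect_units, fix, reinsert_units):
    """Hypothetical driver mirroring the pseudocode: collect, fix, reinsert."""
    units_queue, broken_proof = collect_units(proof)         # one bottom-up traversal
    fixed_proof = fix(broken_proof)                           # one traversal to repair the proof
    fixed_units = deque(fix(unit) for unit in units_queue)    # the collected units are fixed too
    return reinsert_units(fixed_proof, fixed_units)           # resolve the units at the bottom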

We collect the unit clauses as follows:

Input: A proof ψ
Output: A pair containing a queue of all unit nodes (unitsQueue) that are used more than once in ψ and a broken proof ψ_b

ψ_b ← ψ;
traverse ψ_b bottom-up and foreach node η in ψ_b do
  if η is unit and η has more than one child then
    add η to unitsQueue;
    remove η from ψ_b;
  end
end
return (unitsQueue, ψ_b);
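A possible Python rendering of this pass, assuming a hypothetical Node type that also records its children (the node layout, field names and the DELETED sentinel are assumptions of this sketch, not the paper's interface). Nodes are visited children-before-parents, so lower units are queued before the units they depend on, as the dependency discussion above requires.

from collections import deque
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    # Hypothetical proof-node layout; `children` lists the nodes that use this one.
    clause: frozenset
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    pivot: Optional[str] = None
    children: List["Node"] = field(default_factory=list)

DELETED = Node(frozenset())            # marker left behind where a unit was disconnected

def collect_units(root: Node):
    """Bottom-up pass: queue every unit node used more than once and disconnect it."""
    units_queue = deque()
    remaining = {}                     # id(node) -> children of that node not yet visited
    ready = deque([root])              # the root has no children, so it is visited first
    while ready:
        eta = ready.popleft()
        if len(eta.clause) == 1 and len(eta.children) > 1:
            units_queue.append(eta)    # η is a lowerable unit: queue it ...
            for child in eta.children: # ... and remove it, leaving its children broken
                if child.left is eta:
                    child.left = DELETED
                if child.right is eta:
                    child.right = DELETED
        for parent in (eta.left, eta.right):
            if parent is None or parent is DELETED:
                continue
            left_count = remaining.setdefault(id(parent), len(parent.children)) - 1
            remaining[id(parent)] = left_count
            if left_count == 0:        # all children of `parent` seen: it is next in line
                ready.append(parent)
    return units_queue, root           # root now heads the broken proof ψ_b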

Then we reinsert the units:

Input: A proof ψ_f (with a single root) and a queue q of root nodes
Output: A proof ψ′

ψ′ ← ψ_f;
while q ≠ ∅ do
  η ← first element of q;
  q ← tail of q;
  if η is resolvable with the root of ψ′ then
    ψ′ ← resolvent of η with the root of ψ′;
  end
end
return ψ′;
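A sketch of this step in Python, simplified (an assumption of the sketch, not of the algorithm) to track only the clauses involved, with clauses as frozensets of string literals: a unit is resolvable with the current root exactly when the dual of its literal occurs there, and units that are not resolvable are skipped.

from collections import deque

def dual(lit: str) -> str:
    return lit[1:] if lit.startswith("~") else "~" + lit

def reinsert_units(root_clause: frozenset, q: deque) -> frozenset:
    """Resolve the fixed root clause with each queued unit clause, in queue order."""
    result = root_clause
    while q:
        (lit,) = q.popleft()               # a unit clause has exactly one literal
        if dual(lit) in result:            # resolvable with the current root?
            result = result - {dual(lit)}  # resolvent of the unit with the root
    return result

# Hypothetical example: κ′ = {~a, ~b}, lowered unit clauses {b} and {a}.
q = deque([frozenset({"b"}), frozenset({"a"})])
print(reinsert_units(frozenset({"~a", "~b"}), q))   # frozenset(): the original κ again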

Notes and References

  1. Fontaine, Pascal; Merz, Stephan; Woltzenlogel Paleo, Bruno. Compression of Propositional Resolution Proofs via Partial Regularization. 23rd International Conference on Automated Deduction, 2011.