In backtracking algorithms, backjumping is a technique that reduces the search space, therefore increasing efficiency. While backtracking always goes up one level in the search tree when all values for a variable have been tested, backjumping may go up more levels. In this article, a fixed order of evaluation of the variables $x_1,\ldots,x_n$ is used, but the same considerations apply to a dynamic order of evaluation.
Whenever backtracking has tried all values for a variable without finding any solution, it reconsiders the last of the previously assigned variables, changing its value or further backtracking if no other values are to be tried. If $x_1=a_1,\ldots,x_k=a_k$ is the current partial assignment and all values for $x_{k+1}$ have been tried without finding a solution, the algorithm concludes that no solution extending $x_1=a_1,\ldots,x_k=a_k$ exists. It therefore goes back to $x_k$, changing the value of $x_k$ if possible and backtracking further otherwise.
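For reference, here is a minimal sketch of plain chronological backtracking over a fixed variable order, written in Python; the interface (a list `domains` of candidate values and a `consistent` predicate over partial assignments) is a hypothetical one chosen for illustration, not taken from the article.

```python
def backtrack(domains, consistent, assignment=None, k=0):
    """Chronological backtracking over variables x_1, ..., x_n in a fixed order.

    domains:    list of iterables; domains[k] is the domain of the (k+1)-th variable
    consistent: predicate taking a partial assignment (list of values for
                x_1, ..., x_j) and returning True iff no constraint is violated
    Returns a complete assignment as a list, or None if no solution exists.
    """
    if assignment is None:
        assignment = []
    if k == len(domains):              # all variables assigned: solution found
        return list(assignment)
    for value in domains[k]:           # try every value of the current variable
        assignment.append(value)
        if consistent(assignment):     # only consistent prefixes are extended
            result = backtrack(domains, consistent, assignment, k + 1)
            if result is not None:
                return result
        assignment.pop()               # undo the choice and try the next value
    return None                        # dead end: the caller retries the previous variable
```

When the loop ends without success, the function returns to its caller, which corresponds to going up exactly one level; backjumping replaces this single step with a jump of possibly several levels.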
The partial assignment is not always necessary in full to prove that no value of $x_{k+1}$ leads to a solution. A prefix of it may have the same property: there may exist an index $j<k$ such that the assignment $x_1,\ldots,x_j=a_1,\ldots,a_j$ cannot be extended to a solution for any value of $x_{k+1}$. If the algorithm can prove this fact, it can directly reconsider $x_j$ instead of going back to $x_k$ as regular backtracking would do.
The efficiency of a backjumping algorithm depends on how high it is able to backjump. Ideally, the algorithm could jump from $x_{k+1}$ directly to whichever variable $x_j$ is such that the current assignment to $x_1,\ldots,x_j$ cannot be extended to form a solution with any value of $x_{k+1}$. If this is the case, $x_j$ is called a safe jump; the lower the index $j$ of a safe jump, the higher the jump and the larger the part of the search tree that is skipped.
Establishing whether a jump is safe is not always feasible, as safe jumps are defined in terms of the set of solutions, which is what the algorithm is trying to find. In practice, backjumping algorithms use the lowest index they can efficiently prove to be a safe jump. Different algorithms use different methods for determining whether a jump is safe. These methods have different costs, but the higher cost of finding a higher safe jump may be traded off against the reduced amount of search due to skipping parts of the search tree.
The simplest condition in which backjumping is possible is when all values of a variable have been proved inconsistent without further branching. In constraint satisfaction, a partial evaluation is consistent if and only if it satisfies all constraints involving the assigned variables, and inconsistent otherwise. It might be the case that a consistent partial solution cannot be extended to a consistent complete solution because some of the unassigned variables cannot be assigned without violating other constraints.
The condition in which all values of a given variable $x_{k+1}$ are inconsistent with the current partial solution $x_1,\ldots,x_k=a_1,\ldots,a_k$ is called a leaf dead end: every value of $x_{k+1}$ produces a leaf of the search tree, with no branching below it.
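As a small illustrative example (not from the original text), consider three variables $x_1,x_2,x_3$ with domain $\{0,1\}$ and the constraints $x_1 \neq x_3$ and $x_1 + x_3 \neq 1$. Under the partial assignment $x_1=0, x_2=1$, the value $x_3=0$ violates the first constraint and $x_3=1$ violates the second, so this is a leaf dead end. Since both constraints involve only $x_1$ and $x_3$, the prefix $x_1=0$ alone already rules out every value of $x_3$, and jumping back to $x_1$, skipping $x_2$, is a safe jump.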
The backjumping algorithm by John Gaschnig does a backjump only in leaf dead ends. In other words, it works differently from backtracking only when every possible value of $x_{k+1}$ has been tested and has proved inconsistent without the need of branching over another variable.
A safe jump can be found by simply evaluating, for every value $a_{k+1}$, the shortest prefix of $x_1,\ldots,x_k=a_1,\ldots,a_k$ that is inconsistent with $x_{k+1}=a_{k+1}$. In other words, if $a_{k+1}$ is a possible value for $x_{k+1}$, the algorithm checks the consistency of the following evaluations:

$x_1=a_1, \ldots, x_{k-1}=a_{k-1}, x_k=a_k, x_{k+1}=a_{k+1}$

$x_1=a_1, \ldots, x_{k-1}=a_{k-1}, x_{k+1}=a_{k+1}$

$\vdots$

$x_1=a_1, x_{k+1}=a_{k+1}$
The smallest index (lowest in the listing above) for which the evaluation is inconsistent would be a safe jump if $x_{k+1}=a_{k+1}$ were the only possible value for $x_{k+1}$. Since $x_{k+1}$ usually has more than one value, the algorithm takes, for every value, the smallest such index, and jumps to the maximum of these indexes, which is a safe jump.
In practice, the algorithm can check the evaluations above at the same time it is checking the consistency of $x_{k+1}=a_{k+1}$.
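The sketch below illustrates this computation (a simplified rendering of the idea, not Gaschnig's original code): for each value of $x_{k+1}$ it finds the shortest inconsistent prefix and returns the maximum of these lengths as the level to jump back to. The predicate `consistent_prefix`, which tests a prefix of the current assignment together with one candidate value, is a hypothetical interface.

```python
def leaf_backjump_level(assignment, candidates, consistent_prefix):
    """Compute a safe backjump level at a leaf dead end.

    assignment:        values already chosen for x_1, ..., x_k
    candidates:        the domain of x_{k+1}; at a leaf dead end every one of
                       these values is inconsistent with the full assignment
    consistent_prefix: hypothetical predicate; consistent_prefix(prefix, v) is
                       True iff the prefix extended with x_{k+1} = v violates
                       no constraint
    Returns the 1-based level of the variable to jump back to.
    """
    jump = 0
    for value in candidates:
        # find the shortest prefix of x_1, ..., x_k that rules out this value
        for j in range(1, len(assignment) + 1):
            if not consistent_prefix(assignment[:j], value):
                jump = max(jump, j)    # this value is only ruled out at level j
                break
    return jump
```

In an actual implementation the inner loop disappears: as noted above, the length of the failing prefix can be recorded at the moment the consistency check of $x_{k+1}=a_{k+1}$ first fails.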
The previous algorithm only backjumps when the values of a variable can be shown inconsistent with the current partial solution without further branching. In other words, it allows for a backjump only at leaf nodes in the search tree.
An internal node of the search tree represents an assignment of a variable that is consistent with the previous ones. If no solution extends this assignment, the previous algorithm always backtracks: no backjump is done in this case.
Backjumping at internal nodes cannot be done as for leaf nodes. Indeed, if some values of $x_{k+1}$ required further branching, it is because they are consistent with the current assignment; as a result, searching for a prefix that is inconsistent with the values of $x_{k+1}$ does not succeed.
In such cases, what proved an evaluation $x_{k+1}=a_{k+1}$ not to be part of a solution together with the current partial evaluation of $x_1,\ldots,x_k$ is the search below it, which returned without finding a solution.
This return is due to a number of dead ends, points where the algorithm has proved a partial solution inconsistent. In order to further backjump, the algorithm has to take into account that the impossibility of finding solutions is due to these dead ends. In particular, the safe jumps are the indexes of prefixes that still make these dead ends inconsistent partial solutions.
In other words, when all values of $x_{k+1}$ have been tried without success, the algorithm can backjump to a previous variable $x_i$ provided that the current truth evaluation of $x_1,\ldots,x_i$ is inconsistent with all the truth evaluations of $x_{k+1},x_{k+2},\ldots$ in the leaf nodes that are descendants of the node of $x_{k+1}$.
Due to the potentially high number of nodes that are in the subtree of $x_{k+1}$, the information that is relevant to backjumping from $x_{k+1}$ is collected during the visit of its subtree. Finding a safe jump can be simplified by two considerations. The first is that the algorithm needs a safe jump, but still works if the jump is not the highest safe jump that could be taken.
The second simplification is that nodes in the subtree of a node $x_l$ that have been skipped by backjumping can be ignored while looking for a further backjump from $x_l$. More precisely, if the algorithm backjumped from a node $x_m$ up to $x_l$, then all nodes between $x_l$ and $x_m$, and the subtrees rooted at them, are irrelevant to the unsolvability of the subtree rooted at $x_m$.
Indeed, if the algorithm went down from node $x_l$ to node $x_m$ and later backjumped from $x_m$ directly back to $x_l$, the jump itself proves that the dead ends found in the subtree of $x_m$ are inconsistent with an assignment that does not involve the variables between $x_l$ and $x_m$. Since the unsolvability of the subtree of $x_m$ is established without those variables, nothing collected at the skipped nodes is needed when deciding how far a further backjump from $x_l$ can go; the information coming from $x_m$ already accounts for all the relevant dead ends.
This fact can be exploited by collecting, in each node, a set of previously assigned variables whose evaluation suffices to prove that no solution exists in the subtree rooted at the node. This set is built during the execution of the algorithm. When retracting from a node, the variable of the node is removed from this set, and the set is merged into the set of the destination of backtracking or backjumping. Since nodes that are skipped by backjumping are never retracted from, their sets are automatically ignored.
The rationale of graph-based backjumping is that a safe jump can be found by checking which of the variables $x_1,\ldots,x_k$ share a constraint with the variables $x_{k+1},x_{k+2},\ldots$ that are instantiated in the subtree. For every variable $x_i$ with $i>k$ instantiated below the current node, only the variables of index at most $k$ that are in a constraint with $x_i$ can be responsible for the inconsistencies found there. As a consequence, the highest index of such a variable is a safe jump for $x_{k+1}$: changing any variable of higher index cannot remove the inconsistencies that caused the search below $x_{k+1}$ to fail.
The fact that nodes skipped by backjumping can be ignored when considering a further backjump can be exploited by the following algorithm. When retracting from a leaf node, the set of previously assigned variables that are in a constraint with it is created and "sent back" to its parent, or to an ancestor in case of backjumping. At every internal node, a set of variables is maintained. Every time a set of variables is received from one of its children or descendants, its variables are added to the maintained set. When further backtracking or backjumping from the node, the variable of the node is removed from this set, and the set is sent to the node that is the destination of backtracking or backjumping. This algorithm works because the set maintained in a node collects all variables that are relevant to proving unsatisfiability in the leaves that are descendants of this node. Since sets of variables are only sent when retracting from nodes, the sets collected at nodes skipped by backjumping are automatically ignored.
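The following recursive sketch shows one way this bookkeeping can be implemented (a simplified illustration under assumed interfaces, not the article's algorithm verbatim). Variables are indexed from 0, `neighbors[k]` is assumed to hold the indices of earlier variables sharing a constraint with variable `k`, and `consistent` checks a partial assignment as before. A failed call returns the set of earlier variables responsible for the failure; an ancestor whose index is not in that set passes it on without trying its remaining values, which is exactly the backjump.

```python
def solve(k, assignment, domains, neighbors, consistent):
    """Backtracking with graph-based backjumping (variables indexed from 0).

    Returns (solution, None) if a solution extending `assignment` exists,
    or (None, conflict) where `conflict` is a set of indices of earlier
    variables whose current values suffice to explain the failure.
    """
    if k == len(domains):
        return list(assignment), None          # every variable is assigned
    conflict = set(neighbors[k])               # earlier variables constraining x_k
    for value in domains[k]:
        assignment.append(value)
        if consistent(assignment):
            solution, below = solve(k + 1, assignment, domains,
                                    neighbors, consistent)
            if solution is not None:
                return solution, None
            if k not in below:
                # the failure below does not depend on x_k: skip its remaining
                # values and pass the conflict set on (this is the backjump)
                assignment.pop()
                return None, below
            conflict |= below - {k}            # merge what the subtree sent back
        assignment.pop()
    return None, conflict                      # dead end at x_k
```

Under these assumptions the top-level call is `solve(0, [], domains, neighbors, consistent)`; a `None` solution at the root means the problem is unsatisfiable.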
A still more refined backjumping algorithm, sometimes able to achieve larger backjumps, is based on checking not only the common presence of two variables in the same constraint but also on whether the constraint actually caused the inconsistency. In particular, this algorithm collects one of the violated constraints in every leaf. At every node, the highest index of a variable assigned above the node that occurs in one of the constraints collected at the leaves of its subtree is a safe jump.
While the violated constraint chosen in each leaf does not affect the safety of the resulting jump, choosing constraints over variables of the lowest possible indexes increases the height of the jump. For this reason, conflict-based backjumping orders constraints in such a way that constraints over variables of lower index are preferred to constraints over variables of higher index.
Formally, a constraint $C$ is preferred to a constraint $D$ if the highest index of a variable occurring in $C$ but not in $D$ is lower than the highest index of a variable occurring in $D$ but not in $C$. In other words, ignoring the variables the two constraints have in common, $C$ is preferred when its remaining variables have lower indexes than those of $D$.
In a leaf node, the algorithm chooses, for each value of $x_{k+1}$, the lowest index $i$ such that the evaluation of $x_1,\ldots,x_i$ violates, together with that value of $x_{k+1}$, one of the constraints. Collecting such a constraint for every value identifies the variables of index below $k+1$ that are responsible for the dead end at $x_{k+1}$, and the highest of these indexes is a safe jump.
In practice, this algorithm is simplified by collecting all these indexes in a single set, instead of creating a separate set for every value of $x_{k+1}$.
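To illustrate how the leaf-level choice differs from the graph-based rule, the helper below (hypothetical code, not taken from any published implementation) picks one violated constraint for a leaf dead end and returns the indexes of its earlier variables; among the violated constraints it prefers one whose earlier variables have the smallest maximal index, a simple approximation of the preference described above.

```python
def leaf_conflict(values, k, constraints):
    """Choose a conflict set for a leaf dead end at variable k (0-based).

    values:      dict mapping variable index -> assigned value, including k
    constraints: list of (scope, predicate) pairs; scope is a tuple of variable
                 indices, predicate takes the corresponding values in order
    Returns the set of indices other than k in the chosen violated constraint.
    At a leaf dead end at least one constraint involving k is violated.
    """
    def is_violated(scope, predicate):
        # only constraints whose variables are all assigned can be evaluated
        return (all(i in values for i in scope)
                and not predicate(*(values[i] for i in scope)))

    violated = [scope for scope, predicate in constraints
                if k in scope and is_violated(scope, predicate)]
    # prefer a violated constraint whose earlier variables have the smallest
    # maximal index: this tends to allow higher jumps
    best = min(violated, key=lambda scope: max((i for i in scope if i != k),
                                               default=-1))
    return {i for i in best if i != k}
```

In a full conflict-directed backjumping implementation, the returned set would be merged into the conflict set of the variable being retracted from, in the same way as in the graph-based sketch above.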
Conflict-directed backjumping was proposed for Constraint Satisfaction Problems by Patrick Prosser in his seminal 1993 paper.