Run-time algorithm specialization
In computer science, run-time algorithm specialization is a methodology for creating efficient algorithms for costly computation tasks of certain kinds. The methodology originates in the field of automated theorem proving and, more specifically, in the Vampire theorem prover project.
The idea is inspired by the use of partial evaluation in optimizing program translation. Many core operations in theorem provers exhibit the following pattern. Suppose that we need to execute some algorithm alg(A, B) in a situation where a value of A is fixed for potentially many different values of B. In order to do this efficiently, we can try to find a specialization of alg for every fixed A, i.e., an algorithm alg_A such that executing alg_A(B) is equivalent to executing alg(A, B).

The specialized algorithm may be more efficient than the generic one, since it can exploit particular properties of the fixed value A. Typically, alg_A(B) can avoid some operations that alg(A, B) would have to perform if they are known to be redundant for this particular parameter A. In particular, we can often identify tests that are always true or false for A, unroll loops and recursion, and so on.
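As a minimal illustration (a sketch in C++, not from the original text; the wildcard-pattern matcher and all names in it are invented for this example), let alg(A, B) test whether a string B matches a pattern A in which '?' matches any single character. Once A is known at run-time, a specialized alg_A can record only the positions where A actually constrains B, so the tests that are known to succeed for this A are never executed again:

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Generic algorithm alg(A, B): does B match pattern A, where '?' in A
// matches any single character?
bool alg(const std::string& A, const std::string& B) {
    if (A.size() != B.size()) return false;
    for (std::size_t i = 0; i < A.size(); ++i)
        if (A[i] != '?' && A[i] != B[i]) return false;
    return true;
}

// Specialized version alg_A, built at run-time for a fixed A: it keeps
// only the positions where A constrains B, skipping the checks that are
// redundant for this particular A (the '?' positions).
class SpecializedMatcher {
public:
    explicit SpecializedMatcher(const std::string& A) : length_(A.size()) {
        for (std::size_t i = 0; i < A.size(); ++i)
            if (A[i] != '?') checks_.emplace_back(i, A[i]);
    }
    // Behaves like alg_A(B): equivalent to alg(A, B) for the fixed A.
    bool operator()(const std::string& B) const {
        if (B.size() != length_) return false;
        for (const auto& [pos, ch] : checks_)
            if (B[pos] != ch) return false;
        return true;
    }
private:
    std::size_t length_;
    std::vector<std::pair<std::size_t, char>> checks_;
};
```

Here the constructor plays the role of the specialization procedure: it runs once per fixed A, after which every call alg_A(B) performs only the non-redundant checks.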
Difference from partial evaluation
The key difference between run-time specialization and partial evaluation is that the values of A on which alg is specialized are not known statically, so the specialization takes place at run-time.
There is also an important technical difference. Partial evaluation is applied to algorithms explicitly represented as code in some programming language. At run-time, we do not need any concrete representation of alg: we only have to imagine alg when we program the specialization procedure. All we need is a concrete representation of the specialized version alg_A. This also means that we cannot use any universal methods for specializing algorithms, as is usually done in partial evaluation. Instead, we have to program a specialization procedure for every particular algorithm alg. An important advantage of doing so is that we can use powerful ad hoc tricks exploiting peculiarities of alg and of the representation of A and B, which are beyond the reach of any universal specialization method.
Specialization with compilation
The specialized algorithm has to be represented in a form that can be interpreted. In many situations, usually when alg_A(B) is to be computed on many values of B in a row, we can write alg_A as code for a special abstract machine, and we often say that A is compiled. The code itself can then be additionally optimized by answer-preserving transformations that rely only on the semantics of the instructions of the abstract machine.
Instructions of the abstract machine can usually be represented as records. One field of such a record stores an integer tag that identifies the instruction type; other fields may store additional parameters of the instruction, for example a pointer to another instruction representing a label, if the semantics of the instruction requires a jump. All the instructions of a code can be stored in an array, a list, or a tree.
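Continuing the wildcard-pattern sketch above (again an illustrative assumption, not a prescribed format), the instructions of a tiny abstract machine for compiled matchers could be represented as follows; a branching instruction would additionally carry a pointer or index identifying its jump target:

```cpp
#include <cstddef>
#include <vector>

// Instruction tags, deliberately drawn from a small interval of integers.
enum Tag { CHECK_LENGTH, CHECK_CHAR, SUCCESS };

// One abstract-machine instruction as a record: the tag identifies the
// instruction type, the remaining fields hold its parameters.
struct Instruction {
    Tag tag;
    std::size_t arg;  // expected length for CHECK_LENGTH, position for CHECK_CHAR
    char ch;          // expected character for CHECK_CHAR
};

// The compiled form of alg_A: all instructions stored in an array.
using Code = std::vector<Instruction>;
```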
Interpretation is done by fetching instructions in some order, identifying their type and executing the actions associated with that type. In C or C++ we can use a switch statement to associate some actions with different instruction tags. Modern compilers usually compile a switch statement with integer labels from a narrow range rather efficiently, by storing the address of the statement corresponding to a value i in the i-th cell of a special array. One can exploit this by taking the values for instruction tags from a small interval of integers.
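A compiler and interpreter for this machine, continuing the sketch above, might then look as follows; since the tags form a small interval starting at zero, the switch below is a good candidate for the indexed-jump compilation just described:

```cpp
#include <cstddef>
#include <string>

// Compile a fixed pattern A into abstract-machine code for alg_A.
// Tests on A are decided here, at specialization time.
Code compile(const std::string& A) {
    Code code;
    code.push_back({CHECK_LENGTH, A.size(), '\0'});
    for (std::size_t i = 0; i < A.size(); ++i)
        if (A[i] != '?')
            code.push_back({CHECK_CHAR, i, A[i]});
    code.push_back({SUCCESS, 0, '\0'});
    return code;
}

// Interpret the compiled code on a concrete value B: fetch instructions
// in order and dispatch on the integer tag.
bool run(const Code& code, const std::string& B) {
    for (const Instruction& ins : code) {
        switch (ins.tag) {
        case CHECK_LENGTH:
            if (B.size() != ins.arg) return false;
            break;
        case CHECK_CHAR:
            if (B[ins.arg] != ins.ch) return false;
            break;
        case SUCCESS:
            return true;
        }
    }
    return false;  // unreachable for well-formed code
}
```

An answer-preserving transformation on such code could, for instance, reorder the CHECK_CHAR instructions by their expected failure rate: the checks form a commutative conjunction, so the result is unchanged.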
Data-and-algorithm specialization
There are situations when many instances of A are intended for long-term storage and the calls of alg(A, B) occur with different values of B in an unpredictable order. For example, we may have to check alg(A1, B1) first, then alg(A2, B2), then alg(A1, B3), and so on. In such circumstances, full-scale specialization with compilation may not be suitable due to excessive memory usage. However, we can sometimes find a compact specialized representation A' for every A that can be stored with, or instead of, A. We also define a variant alg' that works on this representation, and any call to alg(A, B) is replaced by alg'(A', B), intended to do the same job faster.
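As a sketch of this variant (an invented stand-in example, not from the original text), take alg(A, B) to be the test of whether the value B occurs in a collection A. A compact specialized representation A' is then simply a sorted copy of A, stored with, or instead of, A, and alg' is a binary search over it; no interpreted code is kept, so the memory cost stays close to that of A itself:

```cpp
#include <algorithm>
#include <vector>

// Generic alg(A, B): is the value B contained in the collection A?
bool alg(const std::vector<int>& A, int B) {
    for (int x : A)
        if (x == B) return true;  // linear scan over unordered data
    return false;
}

// Build the compact specialized representation A': the same elements,
// sorted once, so it can be stored with, or instead of, A.
std::vector<int> specialize(std::vector<int> A) {
    std::sort(A.begin(), A.end());
    return A;  // this is A'
}

// Variant alg'(A', B): does the same job as alg(A, B), but faster, by
// exploiting the sortedness of A'.
bool alg_prime(const std::vector<int>& A_prime, int B) {
    return std::binary_search(A_prime.begin(), A_prime.end(), B);
}
```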
Further reading
- A. Riazanov and A. Voronkov, "Efficient Checking of Term Ordering Constraints", Proc. IJCAR 2004, Lecture Notes in Artificial Intelligence 3097, 2004. (A compact but self-contained illustration of the method.)
- A. Riazanov and A. Voronkov, "Efficient Instance Retrieval with Standard and Relational Path Indexing", Information and Computation, 199(1-2), 2005. (Contains another illustration of the method.)
- A. Riazanov, "Implementing an Efficient Theorem Prover", PhD thesis, The University of Manchester, 2003. (Contains the most comprehensive description of the method and many examples.)