Anti-unification

Anti-unification is the process of constructing a generalization common to two given symbolic expressions. As in unification, several frameworks are distinguished depending on which expressions (also called terms) are allowed, and which expressions are considered equal. If variables representing functions are allowed in an expression, the process is called "higher-order anti-unification", otherwise "first-order anti-unification". If the generalization is required to have an instance literally equal to each input expression, the process is called "syntactical anti-unification", otherwise "E-anti-unification", or "anti-unification modulo theory".

An anti-unification algorithm should compute for given expressions a complete and minimal generalization set, that is, a set covering all generalizations and containing no redundant members, respectively. Depending on the framework, a complete and minimal generalization set may have one, finitely many, or possibly infinitely many members, or may not exist at all;[1] it cannot be empty, since a trivial generalization exists in any case. For first-order syntactical anti-unification, Gordon Plotkin[2] [3] gave an algorithm that computes a complete and minimal singleton generalization set containing the so-called "least general generalization" (lgg).

Anti-unification should not be confused with dis-unification. The latter means the process of solving systems of inequations, that is of finding values for the variables such that all given inequations are satisfied.[4] This task is quite different from finding generalizations.

Prerequisites

Formally, an anti-unification approach presupposes an equivalence relation ≡ on the set T of terms, indicating which terms are considered equal. For higher-order anti-unification, usually t ≡ u if t and u are alpha equivalent. For first-order E-anti-unification, ≡ reflects the background knowledge about certain function symbols; for example, if a binary function symbol ⊕ is considered commutative, then t ≡ u if u results from t by swapping the arguments of ⊕ at some (possibly all) occurrences.[5] If there is no background knowledge at all, then only literally, or syntactically, identical terms are considered equal.

First-order term

See main article: Term (logic). Given a set V of variable symbols, a set C of constant symbols, and sets Fn of n-ary function symbols (also called operator symbols) for each natural number n ≥ 1, the set of (unsorted first-order) terms T is recursively defined to be the smallest set with the following properties:[6]

  1. every variable symbol is a term: V ⊆ T,
  2. every constant symbol is a term: C ⊆ T,
  3. from every n terms t1, …, tn and every n-ary function symbol f ∈ Fn, a larger term f(t1, …, tn) can be built.

For example, if x ∈ V is a variable symbol, 1 ∈ C is a constant symbol, and add ∈ F2 is a binary function symbol, then x ∈ T, 1 ∈ T, and (hence) add(x,1) ∈ T by the first, second, and third term-building rule, respectively. The latter term is usually written as x+1, using infix notation and the more common operator symbol + for convenience.

Higher-order term

See main article: Lambda calculus.

Substitution

See main article: Substitution (logic). A substitution is a mapping σ: V → T from variables to terms; the notation {x1 ↦ t1, …, xk ↦ tk} refers to a substitution mapping each variable xi to the term ti, for i = 1, …, k, and every other variable to itself. Applying that substitution to a term t is written in postfix notation as t {x1 ↦ t1, …, xk ↦ tk}; it means to (simultaneously) replace every occurrence of each variable xi in the term by ti. The result of applying a substitution to a term is called an instance of that term. As a first-order example, applying the substitution {x ↦ h(a,y), z ↦ b} to the term f(x, a, g(z), y) yields f(h(a,y), a, g(b), y).
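Substitution application is a straightforward recursion; the sketch below assumes the illustrative tuple representation (variables and constants as strings, compound terms as (symbol, arg1, …, argn) tuples), and the example term is likewise illustrative.

```python
def apply_subst(term, subst):
    """Compute the instance term·subst, written t{x1 ↦ t1, …} in the text."""
    if isinstance(term, str):
        # a mapped variable is replaced; every other symbol maps to itself
        return subst.get(term, term)
    f, *args = term
    # replacement is simultaneous: arguments come from the original term,
    # never from already-substituted results
    return (f,) + tuple(apply_subst(a, subst) for a in args)

# applying {x ↦ h(a,y), z ↦ b} to f(x, a, g(z), y)
t = ("f", "x", "a", ("g", "z"), "y")
sigma = {"x": ("h", "a", "y"), "z": "b"}
print(apply_subst(t, sigma))  # ('f', ('h', 'a', 'y'), 'a', ('g', 'b'), 'y')
```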

Generalization, specialization

If a term t has an instance equivalent to a term u, that is, if tσ ≡ u for some substitution σ, then t is called more general than u, and u is called more special than, or subsumed by, t. For example, x ⊕ a is more general than a ⊕ b if ⊕ is commutative, since then (x ⊕ a){x ↦ b} = b ⊕ a ≡ a ⊕ b.

If ≡ is literal (syntactic) identity of terms, a term may be both more general and more special than another one only if both terms differ just in their variable names, not in their syntactic structure; such terms are called variants, or renamings, of each other. For example, f(x1, a, g(z1), y1) is a variant of f(x2, a, g(z2), y2), since

  f(x1, a, g(z1), y1) {x1 ↦ x2, y1 ↦ y2, z1 ↦ z2} = f(x2, a, g(z2), y2) and
  f(x2, a, g(z2), y2) {x2 ↦ x1, y2 ↦ y1, z2 ↦ z1} = f(x1, a, g(z1), y1).

However, f(x1, a, g(z1), y1) is not a variant of f(x2, a, g(x2), x2), since no substitution can transform the latter term into the former one, although {x1 ↦ x2, z1 ↦ x2, y1 ↦ x2} achieves the reverse direction. The latter term is hence properly more special than the former one.
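The subsumption test behind these examples is syntactic matching: t is more general than u iff some substitution maps t onto u, and two terms are variants iff each matches onto the other. The sketch below assumes the illustrative tuple representation, with variables as strings listed in an explicit VARS set.

```python
VARS = {"x1", "y1", "z1", "x2", "y2", "z2"}  # illustrative variable symbols

def match(general, special, subst=None):
    """Return sigma with general·sigma == special, or None if none exists."""
    subst = dict(subst or {})
    if isinstance(general, str) and general in VARS:
        if general in subst and subst[general] != special:
            return None            # variable already bound to a different term
        subst[general] = special
        return subst
    if isinstance(general, str) or isinstance(special, str):
        return subst if general == special else None
    if general[0] != special[0] or len(general) != len(special):
        return None                # different head symbol or arity
    for g, s in zip(general[1:], special[1:]):
        subst = match(g, s, subst)
        if subst is None:
            return None
    return subst

def is_variant(t, u):
    """Variants (renamings): each term is more general than the other."""
    return match(t, u) is not None and match(u, t) is not None

t1 = ("f", "x1", "a", ("g", "z1"), "y1")
t2 = ("f", "x2", "a", ("g", "x2"), "x2")
print(match(t1, t2) is not None)  # True: t1 maps onto t2
print(is_variant(t1, t2))         # False: the reverse direction fails
```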

A substitution σ is more special than, or subsumed by, a substitution τ if xσ is more special than xτ for each variable x. For example, {x ↦ f(u), y ↦ f(f(u))} is more special than {x ↦ z, y ↦ f(z)}, since f(u) and f(f(u)) are more special than z and f(z), respectively.

Anti-unification problem, generalization set

An anti-unification problem is a pair ⟨t1, t2⟩ of terms. A term t is a common generalization, or anti-unifier, of t1 and t2 if tσ1 ≡ t1 and tσ2 ≡ t2 for some substitutions σ1, σ2. For a given anti-unification problem, a set S of anti-unifiers is called complete if each generalization subsumes some term t ∈ S; the set S is called minimal if none of its members subsumes another one.

First-order syntactical anti-unification

The framework of first-order syntactical anti-unification is based on T being the set of first-order terms (over some given set V of variables, a set C of constants, and sets Fn of n-ary function symbols) and on ≡ being syntactic equality. In this framework, each anti-unification problem ⟨t1, t2⟩ has a complete, and obviously minimal, singleton solution set {t}. Its member t is called the least general generalization (lgg) of the problem; it has an instance syntactically equal to t1 and another one syntactically equal to t2. Any common generalization of t1 and t2 subsumes t. The lgg is unique up to variants: if S1 and S2 are both complete and minimal solution sets of the same syntactical anti-unification problem, then S1 = {s1} and S2 = {s2} for some terms s1 and s2 that are renamings of each other.

Plotkin's algorithm presupposes an injective mapping φ: T × T → V, that is, a mapping assigning each pair s, t of terms its own variable φ(s,t), such that no two pairs share the same variable.[7] The algorithm consists of two rules:

  f(s1, …, sn) ⊔ f(t1, …, tn)  ⇝  f(s1 ⊔ t1, …, sn ⊔ tn)
  s ⊔ t  ⇝  φ(s,t), if the previous rule is not applicable

For example, (0*0) ⊔ (4*4) ⇝ (0 ⊔ 4) * (0 ⊔ 4) ⇝ φ(0,4) * φ(0,4) ⇝ x*x, where x = φ(0,4); this least general generalization reflects the common property of both inputs of being square numbers.
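The two rules can be sketched under the illustrative tuple representation (compound terms as (symbol, args…) tuples). Following note [7], φ is built lazily in a dictionary as pairs are encountered; the generated variable names v1, v2, … are an assumption.

```python
from itertools import count

def make_lgg():
    phi, fresh = {}, count(1)
    def lgg(s, t):
        # rule 1: same head symbol and arity → recurse into the arguments
        if (isinstance(s, tuple) and isinstance(t, tuple)
                and s[0] == t[0] and len(s) == len(t)):
            return (s[0],) + tuple(lgg(a, b) for a, b in zip(s[1:], t[1:]))
        if s == t:
            return s  # identical constants: rule 1 with zero arguments
        # rule 2: otherwise, map the pair to its own variable phi(s, t);
        # reusing the cached variable makes equal pairs share one variable
        if (s, t) not in phi:
            phi[(s, t)] = "v%d" % next(fresh)
        return phi[(s, t)]
    return lgg

lgg = make_lgg()
print(lgg(("*", "0", "0"), ("*", "4", "4")))  # ('*', 'v1', 'v1'), i.e. x*x
```

Caching φ(0,4) is what makes both argument positions receive the same variable, so the result captures that both inputs are squares.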

Plotkin used his algorithm to compute the "relative least general generalization (rlgg)" of two clause sets in first-order logic, which was the basis of the Golem approach to inductive logic programming.

First-order anti-unification modulo theory

Equational theories

First-order sorted anti-unification

Nominal anti-unification

Applications

Higher-order anti-unification

Notes and References

  1. Complete generalization sets always exist, but it may be the case that every complete generalization set is non-minimal.
  2. Gordon D. Plotkin. A Note on Inductive Generalization. In B. Meltzer and D. Michie (eds.), Machine Intelligence 5, pp. 153–163, 1970.
  3. Gordon D. Plotkin. A Further Note on Inductive Generalization. In B. Meltzer and D. Michie (eds.), Machine Intelligence 6, pp. 101–124, 1971.
  4. Comon referred in 1986 to inequation-solving as "anti-unification", a usage that has since become quite unusual. Hubert Comon. Sufficient Completeness, Term Rewriting Systems and 'Anti-Unification'. In Proc. 8th International Conference on Automated Deduction, LNCS 230, pp. 128–140. Springer, 1986.
  5. E.g. a ⊕ (b ⊕ f(x)) ≡ a ⊕ (f(x) ⊕ b) ≡ (b ⊕ f(x)) ⊕ a ≡ (f(x) ⊕ b) ⊕ a.

  6. C.C. Chang and H. Jerome Keisler. Model Theory. Studies in Logic and the Foundations of Mathematics, vol. 73. North Holland, 1977; here: Sect. 1.3.
  7. From a theoretical viewpoint, such a mapping exists, since both V and T × T are countably infinite sets; for practical purposes, φ can be built up as needed, remembering assigned mappings ⟨s, t, φ(s,t)⟩ in a hash table.