In computer science, a trace is a set of strings, wherein certain letters in the string are allowed to commute, but others are not. It generalizes the concept of a string, by not forcing the letters to always be in a fixed order, but allowing certain reshufflings to take place. Traces were introduced by Pierre Cartier and Dominique Foata in 1969 to give a combinatorial proof of MacMahon's master theorem. Traces are used in theories of concurrent computation, where commuting letters stand for portions of a job that can execute independently of one another, while non-commuting letters stand for locks, synchronization points or thread joins.[1]
The trace monoid or free partially commutative monoid is a monoid of traces. In a nutshell, it is constructed as follows: sets of commuting letters are given by an independency relation, which induces an equivalence relation on strings; the traces are the equivalence classes. The equivalence relation partitions the free monoid (the set of all strings of finite length) into equivalence classes; the result is still a monoid, namely a quotient monoid, called the trace monoid. The trace monoid is universal, in that all dependency-homomorphic (see below) monoids are in fact isomorphic to it.
Trace monoids are commonly used to model concurrent computation, forming the foundation for process calculi. They are the object of study in trace theory. The utility of trace monoids comes from the fact that they are isomorphic to the monoid of dependency graphs; thus allowing algebraic techniques to be applied to graphs, and vice versa. They are also isomorphic to history monoids, which model the history of computation of individual processes in the context of all scheduled processes on one or more computers.
Let $\Sigma^*$ denote the free monoid on a set of generators $\Sigma$, that is, the set of all strings written in the alphabet $\Sigma$. An independency relation $I$ on $\Sigma$ induces a symmetric binary relation $\sim$ on $\Sigma^*$, where $u \sim v$ if and only if there exist strings $x, y \in \Sigma^*$ and a pair $(a,b) \in I$ such that $u = xaby$ and $v = xbay$. Here, $u$, $v$, $x$ and $y$ are understood to be strings (elements of $\Sigma^*$), while $a$ and $b$ are letters (elements of $\Sigma$).
The trace is defined as the reflexive transitive closure of $\sim$. The trace is thus an equivalence relation on $\Sigma^*$, and is denoted by $\equiv_D$, where $D$ is the dependency relation corresponding to $I$, namely

$$D = (\Sigma \times \Sigma) \setminus I$$

and, conversely,

$$I = (\Sigma \times \Sigma) \setminus D.$$

Clearly, different dependencies will give different equivalence relations.
The transitive closure implies that $u \equiv v$ if and only if there exists a sequence of strings $(w_0, w_1, \ldots, w_n)$ such that $u \sim w_0$, $v \sim w_n$, and $w_i \sim w_{i+1}$ for all $0 \le i < n$. The trace is stable under the monoid operation on $\Sigma^*$ (concatenation) and is therefore a congruence on $\Sigma^*$.
The trace monoid, commonly denoted as $M(D)$, is defined as the quotient monoid

$$M(D) = \Sigma^* / {\equiv_D}.$$
The homomorphism $\phi_D : \Sigma^* \to M(D)$
is commonly referred to as the natural homomorphism or canonical homomorphism. That the terms natural or canonical are deserved follows from the fact that this morphism embodies a universal property, as discussed in a later section.
One will also find the trace monoid denoted as $M(\Sigma, I)$, with the independency relation $I$ explicitly noted.
Consider the alphabet $\Sigma = \{a, b, c\}$. A possible dependency relation is

$$\begin{matrix}D &=& \{a,b\} \times \{a,b\} \;\cup\; \{a,c\} \times \{a,c\}\\ &=& \{a,b\}^2 \cup \{a,c\}^2\\ &=& \{(a,b),(b,a),(a,c),(c,a),(a,a),(b,b),(c,c)\}\end{matrix}$$

The corresponding independency is

$$I_D = \{(b,c),\,(c,b)\}.$$

Therefore, the letters $b$ and $c$ commute. Thus, for example, the trace equivalence class for the string $abababbca$ is

$$[abababbca]_D = \{abababbca,\; abababcba,\; ababacbba\}.$$

The equivalence class $[abababbca]_D$ is an element of the trace monoid.
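The equivalence class in this example can be computed mechanically: close the string under swaps of adjacent independent letters until no new strings appear. The following Python sketch (the relation `I` and the helper `trace_class` are illustrative names, not from any library) does this with a worklist search:

```python
# Toy independency for Sigma = {a, b, c}: only b and c commute,
# matching I_D = {(b, c), (c, b)} from the example.
I = {("b", "c"), ("c", "b")}

def trace_class(word: str, indep=I) -> frozenset:
    """Return the equivalence class [word]_D: all strings reachable
    by repeatedly swapping adjacent independent letters."""
    seen = {word}
    frontier = [word]
    while frontier:
        w = frontier.pop()
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in indep:
                swapped = w[:i] + w[i + 1] + w[i] + w[i + 2:]
                if swapped not in seen:
                    seen.add(swapped)
                    frontier.append(swapped)
    return frozenset(seen)

print(sorted(trace_class("abababbca")))
# ['abababbca', 'abababcba', 'ababacbba']
```

Since $\equiv_D$ is a congruence, the trace product $[u]_D \, [v]_D$ can be computed as `trace_class(u + v)` for any choice of representatives $u$ and $v$.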
The cancellation property states that equivalence is maintained under right cancellation. That is, if $w \equiv v$, then $(w \div a) \equiv (v \div a)$, where the notation $w \div a$ denotes the string $w$ with the rightmost occurrence of the letter $a$ removed. Equivalence is also maintained by left cancellation. Several corollaries follow:

- Embedding: $w \equiv v$ if and only if $xwy \equiv xvy$ for strings $x$ and $y$. Thus, the trace monoid is a syntactic monoid.
- Independence: if $ua \equiv vb$ and $a \ne b$, then $a$ is independent of $b$; that is, $(a,b) \in I_D$. Furthermore, there exists a string $w$ such that $u \equiv wb$ and $v \equiv wa$.
- Projection: equivalence is maintained under string projection onto a subalphabet $\Sigma' \subseteq \Sigma$, so that if $w \equiv v$, then $\pi_{\Sigma'}(w) \equiv \pi_{\Sigma'}(v)$.
A strong form of Levi's lemma holds for traces. Specifically, if $uv \equiv xy$ for strings $u, v, x, y$, then there exist strings $z_1, z_2, z_3$ and $z_4$ such that $(w_2, w_3) \in I_D$ for all letters $w_2 \in \Sigma$ occurring in $z_2$ and all letters $w_3 \in \Sigma$ occurring in $z_3$, and

$$u \equiv z_1 z_2, \qquad v \equiv z_3 z_4,$$
$$x \equiv z_1 z_3, \qquad y \equiv z_2 z_4.$$
A dependency morphism (with respect to a dependency $D$) is a morphism $\psi : \Sigma^* \to M$ to some monoid $M$, such that the "usual" trace properties hold, namely:

1. $\psi(w) = \psi(\varepsilon)$ implies that $w = \varepsilon$;
2. $(a,b) \in I_D$ implies that $\psi(ab) = \psi(ba)$;
3. $\psi(ua) = \psi(v)$ implies that $\psi(u) = \psi(v \div a)$;
4. $\psi(ua) = \psi(vb)$ and $a \ne b$ imply that $(a,b) \in I_D$.

Dependency morphisms are universal, in the sense that for a given, fixed dependency $D$, if $\psi : \Sigma^* \to M$ is a dependency morphism to a monoid $M$, then $M$ is isomorphic to the trace monoid $M(D)$. In particular, the natural homomorphism is a dependency morphism.
There are two well-known normal forms for words in trace monoids. One is the lexicographic normal form, due to Anatolij V. Anisimov and Donald Knuth, and the other is the Foata normal form due to Pierre Cartier and Dominique Foata who studied the trace monoid for its combinatorics in the 1960s.[3]
Unicode's Normalization Form Canonical Decomposition (NFD) is an example of a lexicographic normal form: the ordering sorts consecutive characters with non-zero canonical combining class by that class.
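For a small equivalence class, the lexicographic normal form can be obtained by brute force: enumerate the class and take its lexicographically least member. This is only a sketch (the enumeration is exponential in general; practical algorithms avoid it), reusing the toy independency from the earlier example; `trace_class` and `lex_normal_form` are illustrative names:

```python
# Toy independency: only b and c commute.
I = {("b", "c"), ("c", "b")}

def trace_class(word: str, indep=I) -> set:
    """Close word under adjacent swaps of independent letters."""
    seen, frontier = {word}, [word]
    while frontier:
        w = frontier.pop()
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in indep:
                s = w[:i] + w[i + 1] + w[i] + w[i + 2:]
                if s not in seen:
                    seen.add(s)
                    frontier.append(s)
    return seen

def lex_normal_form(word: str) -> str:
    """Lexicographically least representative of [word]_D,
    assuming the usual order on characters."""
    return min(trace_class(word))

print(lex_normal_form("cb"))  # 'bc': b and c commute, and bc < cb
```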
Just as a formal language can be regarded as a subset of $\Sigma^*$, the set of all possible strings, so a trace language is defined as a subset of $M(D)$, the set of all possible traces. Alternatively, but equivalently, a language $L \subseteq \Sigma^*$ is a trace language, or is said to be consistent with dependency $D$, if

$$L = [L]_D,$$

where

$$[L]_D = \bigcup_{w \in L} [w]_D$$

is the trace closure of a set of strings.
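This definition gives a direct, if naive, consistency check for a finite language: compare $L$ with its trace closure. A Python sketch under the same toy independency as the earlier example (`trace_closure` and `is_trace_language` are illustrative names):

```python
# Toy independency: only b and c commute.
I = {("b", "c"), ("c", "b")}

def trace_class(word, indep=I):
    """Close word under adjacent swaps of independent letters."""
    seen, frontier = {word}, [word]
    while frontier:
        w = frontier.pop()
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in indep:
                s = w[:i] + w[i + 1] + w[i] + w[i + 2:]
                if s not in seen:
                    seen.add(s)
                    frontier.append(s)
    return seen

def trace_closure(L, indep=I):
    """[L]_D: the union of [w]_D over all w in L."""
    return set().union(*(trace_class(w, indep) for w in L))

def is_trace_language(L, indep=I):
    """L is consistent with D iff L equals its own trace closure."""
    return set(L) == trace_closure(L, indep)

print(is_trace_language({"abc", "acb"}))  # True: closed under the b/c swap
print(is_trace_language({"abc"}))         # False: the class also contains acb
```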