In mathematics, the Lindström–Gessel–Viennot lemma provides a way to count the number of tuples of non-intersecting lattice paths, or, more generally, paths on a directed graph. It was proved by Gessel and Viennot in 1985, based on previous work of Lindström published in 1973.
Let G be a locally finite directed acyclic graph. This means that each vertex has finite degree, and that G contains no directed cycles. Consider base vertices

A=\{a_1,\ldots,a_n\}

and destination vertices

B=\{b_1,\ldots,b_n\},

and assign a weight \omega_e to each directed edge e. These edge weights are assumed to belong to some commutative ring. For each directed path P between two vertices, let \omega(P) be the product of the weights of the edges of P. For any two vertices a and b, write e(a,b) for the sum

e(a,b)=\sum_{P\colon a\to b}\omega(P)

over all paths from a to b; this is well-defined whenever only finitely many paths lead from a to b. If one assigns the weight 1 to each edge, then e(a,b) counts the number of paths from a to b.
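As a concrete illustration of the quantity e(a,b), here is a minimal sketch on a tiny DAG. The vertex names and integer edge weights are illustrative choices, not taken from the article; the recursion is well-founded precisely because the graph is acyclic.

```python
# A small DAG as weighted adjacency lists: edges[u] = [(v, weight), ...].
# Vertices and weights are hypothetical, chosen only for illustration.
edges = {
    "a": [("x", 2), ("y", 3)],
    "y": [("x", 1), ("b", 7)],
    "x": [("b", 5)],
    "b": [],
}

def e(a, b):
    """Sum of omega(P) over all directed paths P from a to b.

    omega(P) is the product of the edge weights along P; the empty
    path from a to a has weight 1. Terminates because the graph is
    finite and acyclic.
    """
    if a == b:
        return 1
    return sum(w * e(v, b) for v, w in edges[a])

# Three paths from a to b:
#   a->x->b (2*5), a->y->b (3*7), a->y->x->b (3*1*5)
print(e("a", "b"))  # 10 + 21 + 15 = 46
```

With unit weights the same function would simply count the paths from a to b.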
With this setup, write

M=\begin{pmatrix}e(a_1,b_1)&e(a_1,b_2)&\cdots&e(a_1,b_n)\\ e(a_2,b_1)&e(a_2,b_2)&\cdots&e(a_2,b_n)\\ \vdots&\vdots&\ddots&\vdots\\ e(a_n,b_1)&e(a_n,b_2)&\cdots&e(a_n,b_n)\end{pmatrix}.
An n-tuple of non-intersecting paths from A to B means an n-tuple (P_1, \ldots, P_n) of paths in G with the following properties:

- There exists a permutation \sigma of \{1,2,\ldots,n\} such that, for every i, the path P_i is a path from a_i to b_{\sigma(i)}.
- Whenever i \neq j, the paths P_i and P_j have no two vertices in common (not even endpoints).
Given such an n-tuple (P_1, \ldots, P_n), we denote by \sigma(P) the permutation \sigma from the first condition.
The Lindström–Gessel–Viennot lemma then states that the determinant of M is the signed sum over all n-tuples P = (P_1, \ldots, P_n) of non-intersecting paths from A to B:

\det(M)=\sum_{(P_1,\ldots,P_n)\colon A\to B}\mathrm{sign}(\sigma(P))\prod_{i=1}^{n}\omega(P_i).
That is, the determinant of M counts the weights of all n-tuples of non-intersecting paths starting at A and ending at B, each affected with the sign of the corresponding permutation of (1,2,\ldots,n), given by P_i taking a_i to b_{\sigma(i)}.
In particular, if the only permutation possible is the identity (i.e., every n-tuple of non-intersecting paths from A to B takes ai to bi for each i) and we take the weights to be 1, then det(M) is exactly the number of non-intersecting n-tuples of paths starting at A and ending at B.
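The identity-permutation case can be checked directly on a small lattice example. The sketch below (endpoint coordinates are illustrative choices, not from the article) takes n = 2 monotone lattice paths with unit weights; because the endpoints are ordered so that crossed connections a_1 → b_2, a_2 → b_1 always share a vertex, the determinant equals the plain count of non-intersecting pairs.

```python
from itertools import product
from math import comb

def paths(start, end):
    """All monotone lattice paths (right/up steps) as tuples of points."""
    (x, y), (X, Y) = start, end
    if (x, y) == (X, Y):
        return [((x, y),)]
    out = []
    if x < X:
        out += [((x, y),) + p for p in paths((x + 1, y), end)]
    if y < Y:
        out += [((x, y),) + p for p in paths((x, y + 1), end)]
    return out

# Hypothetical endpoints, ordered so that only the identity
# permutation admits non-intersecting connections.
a1, a2 = (0, 1), (1, 0)
b1, b2 = (2, 3), (3, 2)

# Brute force: vertex-disjoint pairs (P1 : a1->b1, P2 : a2->b2).
count = sum(
    1
    for P1, P2 in product(paths(a1, b1), paths(a2, b2))
    if not set(P1) & set(P2)
)

def e(a, b):
    # With unit weights, e(a, b) is the number of monotone paths,
    # i.e. a binomial coefficient.
    return comb(b[0] - a[0] + b[1] - a[1], b[0] - a[0])

det = e(a1, b1) * e(a2, b2) - e(a1, b2) * e(a2, b1)
print(count, det)  # both equal 20, as the lemma predicts
```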
To prove the Lindström–Gessel–Viennot lemma, we first introduce some notation.
An n-path from an n-tuple (a_1,a_2,\ldots,a_n) of vertices of G to an n-tuple (b_1,b_2,\ldots,b_n) of vertices of G will mean an n-tuple (P_1,P_2,\ldots,P_n) of paths in G, with each P_i leading from a_i to b_i. This n-path will be called non-intersecting just in case the paths P_i and P_j have no two vertices in common (including endpoints) whenever i \neq j.
Given an n-path P=(P_1,P_2,\ldots,P_n), the weight \omega(P) of this n-path is defined as the product \omega(P_1)\omega(P_2)\cdots\omega(P_n).
A twisted n-path from an n-tuple (a_1,a_2,\ldots,a_n) to an n-tuple (b_1,b_2,\ldots,b_n) will mean an n-path from (a_1,a_2,\ldots,a_n) to \left(b_{\sigma(1)},b_{\sigma(2)},\ldots,b_{\sigma(n)}\right) for some permutation \sigma in the symmetric group S_n. This permutation \sigma will be called the twist of the twisted n-path, and denoted by \sigma(P) (where P is the n-path); this, of course, generalises the notation \sigma(P) used before. A twisted n-path that is not non-intersecting will be called entangled.
Recalling the definition of M, we can expand det M as a signed sum of permutations; thus we obtain

\begin{array}{rcl}
\det M &=& \sum_{\sigma\in S_n} \mathrm{sign}(\sigma) \prod_{i=1}^{n} e(a_i,b_{\sigma(i)}) \\
&=& \sum_{\sigma\in S_n} \mathrm{sign}(\sigma) \prod_{i=1}^{n} \sum_{P_i\colon a_i\to b_{\sigma(i)}} \omega(P_i) \\
&=& \sum_{\sigma\in S_n} \mathrm{sign}(\sigma) \sum\left\{\omega(P) : P \text{ an } n\text{-path from } \left(a_1,a_2,\ldots,a_n\right) \text{ to } \left(b_{\sigma(1)},b_{\sigma(2)},\ldots,b_{\sigma(n)}\right)\right\} \\
&=& \sum\left\{\mathrm{sign}(\sigma(P))\,\omega(P) : P \text{ a twisted } n\text{-path from } \left(a_1,a_2,\ldots,a_n\right) \text{ to } \left(b_1,b_2,\ldots,b_n\right)\right\} \\
&=& \sum\left\{\mathrm{sign}(\sigma(P))\,\omega(P) : P \text{ a non-intersecting twisted } n\text{-path from } \left(a_1,a_2,\ldots,a_n\right) \text{ to } \left(b_1,b_2,\ldots,b_n\right)\right\} \\
&& {}+ \sum\left\{\mathrm{sign}(\sigma(P))\,\omega(P) : P \text{ an entangled twisted } n\text{-path from } \left(a_1,a_2,\ldots,a_n\right) \text{ to } \left(b_1,b_2,\ldots,b_n\right)\right\} \\
&=& \sum_{(P_1,\ldots,P_n)\colon A\to B} \mathrm{sign}(\sigma(P))\,\omega(P) \\
&& {}+ \underbrace{\sum\left\{\mathrm{sign}(\sigma(P))\,\omega(P) : P \text{ an entangled twisted } n\text{-path from } \left(a_1,a_2,\ldots,a_n\right) \text{ to } \left(b_1,b_2,\ldots,b_n\right)\right\}}_{=\,0?} \\
\end{array}
It remains to show that the sum of sign(\sigma(P))\omega(P) over all entangled twisted n-paths vanishes. Let \mathcal{E} denote the set of entangled twisted n-paths from (a_1,a_2,\ldots,a_n) to (b_1,b_2,\ldots,b_n). To establish this, we shall construct an involution f\colon\mathcal{E}\longrightarrow\mathcal{E} satisfying \omega(f(P))=\omega(P) and \mathrm{sign}(\sigma(f(P)))=-\mathrm{sign}(\sigma(P)) for all P\in\mathcal{E}. Given such an involution, the term

\sum\left\{\mathrm{sign}(\sigma(P))\,\omega(P) : P \text{ an entangled twisted } n\text{-path from } \left(a_1,a_2,\ldots,a_n\right) \text{ to } \left(b_1,b_2,\ldots,b_n\right)\right\} = \sum_{P\in\mathcal{E}} \mathrm{sign}(\sigma(P))\,\omega(P)

in the above sum reduces to 0, since its addends cancel each other out (namely, the addend corresponding to each P\in\mathcal{E} cancels the addend corresponding to f(P)).
Construction of the involution: The idea behind the definition of the involution f is to choose two intersecting paths within an entangled twisted n-path P=\left(P_1,P_2,\ldots,P_n\right), and to switch their tails after a point of intersection. Since there are in general several pairs of intersecting paths, which may moreover intersect several times, a careful choice is needed for f(P) to be well-defined. Concretely: since the paths P_1,P_2,\ldots,P_n are not non-intersecting, there is a smallest index i\in\{1,2,\ldots,n\} such that P_i shares a vertex with at least one of the other paths. Let v be the first vertex along P_i that also lies on another path, and let j be the largest index such that P_j passes through v. Write

\begin{array}{rcl}
P_i &\equiv& a_i=u_0\to u_1\to u_2\to\ldots\to u_{\alpha-1}\to\overbrace{u_{\alpha}\to u_{\alpha+1}\to\ldots\to u_r=b_{\sigma(i)}}^{\text{tail } i} \\
P_j &\equiv& a_j=v_0\to v_1\to v_2\to\ldots\to v_{\beta-1}\to\underbrace{v_{\beta}\to v_{\beta+1}\to\ldots\to v_s=b_{\sigma(j)}}_{\text{tail } j} \\
\end{array}

where \sigma=\sigma(P), and where \alpha and \beta are chosen so that v=u_{\alpha}=v_{\beta} (that is, v is the \alpha-th vertex of P_i and the \beta-th vertex of P_j). Set \alpha_P=\alpha, \beta_P=\beta, i_P=i and j_P=j. Define f(P) to be the same twisted n-path as P, except that the paths with indices i and j have their tails switched:

\begin{array}{rcl}
P'_i &\equiv& a_i=u_0\to u_1\to u_2\to\ldots\to u_{\alpha-1}\to\overbrace{v_{\beta_P}\to v_{\beta_P+1}\to\ldots\to v_s=b_{\sigma(j)}}^{\text{tail } j} \\
P'_j &\equiv& a_j=v_0\to v_1\to v_2\to\ldots\to v_{\beta-1}\to\underbrace{u_{\alpha_P}\to u_{\alpha_P+1}\to\ldots\to u_r=b_{\sigma(i)}}_{\text{tail } i} \\
\end{array}
It is immediately clear that f(P) is again an entangled twisted n-path. Moreover, going through the construction again, one sees that i_{f(P)}=i_P, j_{f(P)}=j_P, \alpha_{f(P)}=\alpha_P and \beta_{f(P)}=\beta_P, so that applying f to f(P) again switches the tails of f(P)_i, f(P)_j at v. Hence f(f(P))=P, and f is an involution.
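The tail-switching construction can be sketched in code. The helper below is hypothetical (not from the article): it represents each path as a tuple of vertices, locates the smallest index i, the first shared vertex v on P_i, and the largest index j with v on P_j, and swaps the tails, mirroring the definition of f. On a small entangled 2-path one can check the involution property and the conservation of edges (hence of weights).

```python
def tail_swap(paths):
    """One application of the tail-switching involution f on an
    entangled tuple of paths (each path a tuple of hashable vertices).
    Hypothetical sketch of the construction in the proof.
    """
    paths = list(paths)
    for i, Pi in enumerate(paths):
        others = set().union(*(set(p) for k, p in enumerate(paths) if k != i))
        shared = [t for t, u in enumerate(Pi) if u in others]
        if shared:
            alpha = shared[0]          # v is the alpha-th vertex of P_i
            v = Pi[alpha]
            j = max(k for k, p in enumerate(paths) if k != i and v in p)
            beta = paths[j].index(v)   # v is the beta-th vertex of P_j
            Pj = paths[j]
            paths[i] = Pi[:alpha] + Pj[beta:]  # head of P_i + tail j
            paths[j] = Pj[:beta] + Pi[alpha:]  # head of P_j + tail i
            return tuple(paths)
    raise ValueError("tuple is non-intersecting; f is undefined")

def path_edges(p):
    return sorted((p[t], p[t + 1]) for t in range(len(p) - 1))

# Two entangled lattice paths sharing only the vertex (1, 1).
P = (((0, 1), (1, 1), (1, 2)),
     ((1, 0), (1, 1), (2, 1)))
Q = tail_swap(P)
assert tail_swap(Q) == P   # f is an involution: f(f(P)) = P
# Both tuples use the same multiset of edges, so omega(f(P)) = omega(P).
assert sorted(path_edges(Q[0]) + path_edges(Q[1])) == \
       sorted(path_edges(P[0]) + path_edges(P[1]))
```

Note how the swap also transposes the endpoints b_{\sigma(i)} and b_{\sigma(j)}, which is exactly the sign flip of the twist.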
From the construction one can see that \sigma(f(P)) is the same permutation as \sigma=\sigma(P), except that the values \sigma(i) and \sigma(j) are interchanged; consequently, \mathrm{sign}(\sigma(f(P)))=-\mathrm{sign}(\sigma(P)). Furthermore, \omega(f(P))=\omega(P), since the two switched paths together use exactly the same edges as before:
\begin{array}{rcl}
\omega(P'_i)\,\omega(P'_j) &=& \left(\prod_{t=0}^{\alpha-1}\omega(u_t,u_{t+1}) \cdot \prod_{t=\beta}^{s-1}\omega(v_t,v_{t+1})\right) \cdot \left(\prod_{t=0}^{\beta-1}\omega(v_t,v_{t+1}) \cdot \prod_{t=\alpha}^{r-1}\omega(u_t,u_{t+1})\right) \\
&=& \prod_{t=0}^{r-1}\omega(u_t,u_{t+1}) \cdot \prod_{t=0}^{s-1}\omega(v_t,v_{t+1}) \\
&=& \omega(P_i)\,\omega(P_j). \\
\end{array}
Hence

\omega(f(P))=\prod_{k=1}^{n}\omega(f(P)_k)=\prod_{k=1,\ k\neq i,j}^{n}\omega(P_k)\cdot\omega(P'_i)\,\omega(P'_j)=\prod_{k=1,\ k\neq i,j}^{n}\omega(P_k)\cdot\omega(P_i)\,\omega(P_j)=\prod_{k=1}^{n}\omega(P_k)=\omega(P).
Thus we have found an involution with the desired properties and completed the proof of the Lindström–Gessel–Viennot lemma.
Remark. Arguments similar to the one above appear in several sources, with variations regarding the choice of which tails to switch. A version with j smallest (unequal to i) rather than largest appears in the 1989 Gessel–Viennot preprint (proof of Theorem 1).
The Lindström–Gessel–Viennot lemma can be used to prove the equivalence of the following two different definitions of Schur polynomials. Given a partition \lambda=\lambda_1+\cdots+\lambda_r of n, the Schur polynomial s_{\lambda}(x_1,\ldots,x_n) can be defined as

s_{\lambda}(x_1,\ldots,x_n)=\sum_T w(T),

where the sum is over all semistandard Young tableaux T of shape \lambda, and the weight w(T) of a tableau T is the monomial obtained by taking the product of the x_i indexed by the entries i of T. For instance, a tableau with entries 1, 3, 4, 4, 4, 5, 6, 7 has weight

x_1x_3x_4^3x_5x_6x_7.
It can also be defined as the determinant

s_{\lambda}(x_1,\ldots,x_n)=\det\left(\left(h_{\lambda_i+j-i}\right)_{1\le i,j\le r}\right),

where h_k is the complete homogeneous symmetric polynomial of degree k (with the convention that h_0=1 and h_k=0 for k<0). For instance, for the partition (3,2,2,1), the corresponding determinant is
s_{(3,2,2,1)}=\begin{vmatrix}h_3&h_4&h_5&h_6\\ h_1&h_2&h_3&h_4\\ 1&h_1&h_2&h_3\\ 0&0&1&h_1\end{vmatrix}.
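The agreement of the two definitions can be checked numerically on a small case. The sketch below uses the partition \lambda=(2,1) with three variables evaluated at the illustrative values (1,2,3): it sums weights over semistandard tableaux directly, and compares with the 2×2 determinant \det\begin{pmatrix}h_2&h_3\\h_0&h_1\end{pmatrix}.

```python
from itertools import combinations_with_replacement, product

x = [1, 2, 3]          # evaluate at concrete (illustrative) values
n = len(x)

def h(k):
    """Complete homogeneous symmetric polynomial h_k(x_1,...,x_n)."""
    if k < 0:
        return 0
    total = 0
    for idx in combinations_with_replacement(range(n), k):
        prod = 1
        for i in idx:
            prod *= x[i]
        total += prod
    return total

# Definition 1: sum of weights over semistandard Young tableaux of
# shape (2, 1); a tableau (a b / c) requires a <= b and a < c.
tableaux_sum = sum(
    x[a] * x[b] * x[c]
    for a, b, c in product(range(n), repeat=3)
    if a <= b and a < c
)

# Definition 2: det of (h_{lambda_i + j - i}) for lambda = (2, 1),
# i.e. det [[h_2, h_3], [h_0, h_1]].
determinant = h(2) * h(1) - h(3) * h(0)

print(tableaux_sum, determinant)  # prints "60 60": the two agree
```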
To prove the equivalence, given any partition \lambda as above, one considers the r starting points a_i=(r+1-i,1) and the r ending points b_i=(\lambda_i+r+1-i,n), as points in the lattice \mathbb{Z}^2, which acquires the structure of a directed graph by declaring that the only allowed steps are one step to the right or one step up; the weight associated to any horizontal edge at height i is x_i, and the weight associated to any vertical edge is 1. With this definition, r-tuples of non-intersecting paths from A to B correspond exactly to semistandard Young tableaux of shape \lambda, and the weight of such an r-tuple is the corresponding monomial summand in the first definition of the Schur polynomials. On the other hand, the matrix M is exactly the matrix appearing in the determinantal definition above. This shows the required equivalence. (See also §4.5 in Sagan's book, or the First Proof of Theorem 7.16.1 in Stanley's EC2, or §3.3 in Fulmek's arXiv preprint, or §9.13 in Martin's lecture notes, for slight variations on this argument.)
One can also use the Lindström–Gessel–Viennot lemma to prove the Cauchy–Binet formula, and in particular the multiplicativity of the determinant.
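While the lattice-path proof of Cauchy–Binet is combinatorial, the identity itself is easy to verify directly on small matrices. Below is a minimal sketch with illustrative integer matrices: \det(AB) is compared against the sum, over all 2-element column sets S, of \det(A_{\cdot,S})\det(B_{S,\cdot}).

```python
from itertools import combinations, permutations

def det(M):
    """Leibniz-formula determinant of a small square matrix."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:   # count inversions for the sign
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

# Arbitrary small matrices (values are illustrative): A is 2x3, B is 3x2.
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[7, 8],
     [9, 1],
     [2, 4]]

AB = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(2)]
      for i in range(2)]

# Cauchy-Binet: det(AB) = sum over 2-element index sets S of
# det(columns S of A) * det(rows S of B).
rhs = sum(
    det([[A[i][s] for s in S] for i in range(2)])
    * det([[B[s][j] for j in range(2)] for s in S])
    for S in combinations(range(3), 2)
)

print(det(AB), rhs)  # prints "21 21": the formula holds
```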
The acyclicity of G is an essential assumption in the Lindström–Gessel–Viennot lemma; it guarantees (in reasonable situations) that the sums e(a,b) are well-defined, and it enters the proof (if G is not acyclic, then f may transform a self-intersection of a path into an intersection of two distinct paths, which breaks the argument that f is an involution).