Complete-linkage clustering is one of several methods of agglomerative hierarchical clustering. At the beginning of the process, each element is in a cluster of its own. The clusters are then sequentially combined into larger clusters until all elements end up being in the same cluster. The method is also known as farthest neighbour clustering. The result of the clustering can be visualized as a dendrogram, which shows the sequence of cluster fusion and the distance at which each fusion took place.[1][2][3]
At each step, the two clusters separated by the shortest distance are combined. The definition of 'shortest distance' is what differentiates between the different agglomerative clustering methods. In complete-linkage clustering, the link between two clusters contains all element pairs, and the distance between clusters equals the distance between those two elements (one in each cluster) that are farthest away from each other. The shortest of these links that remains at any step causes the fusion of the two clusters whose elements are involved.
Mathematically, the complete-linkage function, i.e. the distance $D(X,Y)$ between clusters $X$ and $Y$, is described by the following expression:

$D(X,Y) = \max_{x \in X,\, y \in Y} d(x,y)$

where $d(x,y)$ is the distance between elements $x \in X$ and $y \in Y$, and $X$ and $Y$ are two sets of elements (clusters).
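As an illustration, the following short Python sketch (the function name and example data are ours, not from the article) computes this farthest-pair distance between two clusters for a given element-level metric $d$:

```python
from itertools import product

def complete_linkage_distance(X, Y, d):
    """Distance between clusters X and Y under complete linkage:
    the largest pairwise distance d(x, y) with x in X and y in Y."""
    return max(d(x, y) for x, y in product(X, Y))

# Example with points on a line and d = absolute difference:
# the farthest pair is (1, 9), so the cluster distance is 8.
print(complete_linkage_distance([1, 2], [7, 9], lambda x, y: abs(x - y)))  # 8
```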
The following algorithm is an agglomerative scheme that erases rows and columns in a proximity matrix as old clusters are merged into new ones. The $N \times N$ proximity matrix $D$ contains all distances $d(i,j)$. The clusterings are assigned sequence numbers $0, 1, \ldots, (n-1)$, and $L(k)$ is the level of the $k$-th clustering. A cluster with sequence number $m$ is denoted $(m)$, and the proximity between clusters $(r)$ and $(s)$ is denoted $d[(r),(s)]$.
The complete linkage clustering algorithm consists of the following steps:
1. Begin with the disjoint clustering having level $L(0)=0$ and sequence number $m=0$.
2. Find the pair of clusters with the smallest inter-cluster distance in the current clustering, say pair $(r)$, $(s)$, according to $d[(r),(s)] = \min d[(i),(j)]$, where the minimum is taken over all pairs of clusters in the current clustering.
3. Increment the sequence number: $m = m + 1$. Merge clusters $(r)$ and $(s)$ into a single cluster to form the next clustering $m$. Set the level of this clustering to $L(m) = d[(r),(s)]$.
4. Update the proximity matrix $D$ by deleting the rows and columns corresponding to clusters $(r)$ and $(s)$ and adding a row and column for the newly formed cluster. The proximity between the new cluster, denoted $(r,s)$, and an old cluster $(k)$ is defined as $d[(r,s),(k)] = \max\{d[(k),(r)],\, d[(k),(s)]\}$.
5. If all elements are in a single cluster, stop. Otherwise, go to step 2.
The algorithm explained above is easy to understand but of complexity $O(n^3)$: at each of the $n-1$ merge steps, the entire proximity matrix must be scanned to find the closest pair of clusters. More efficient algorithms of complexity $O(n^2)$ exist for complete linkage, such as the CLINK algorithm.
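The naive scheme above can be written down almost verbatim. The sketch below is a minimal Python implementation (our own function and variable names, not a reference implementation); it stores inter-cluster distances in a dictionary, repeatedly merges the closest pair, and applies the complete-linkage update of step 4:

```python
def naive_complete_linkage(labels, dist):
    """Naive O(n^3) agglomerative complete-linkage clustering.

    labels -- element names, e.g. ['a', 'b', 'c', 'd', 'e']
    dist   -- dict mapping frozenset({x, y}) to the distance between x and y
    Returns the merges as a list of (new_cluster, fusion_level) pairs.
    """
    clusters = [(x,) for x in labels]                     # singleton clusters
    D = {frozenset({(x,), (y,)}): dist[frozenset({x, y})]
         for i, x in enumerate(labels) for y in labels[i + 1:]}
    merges = []
    while len(clusters) > 1:
        # Step 2: pick the pair of clusters with the smallest distance.
        pair, level = min(D.items(), key=lambda item: item[1])
        r, s = tuple(pair)
        del D[pair]
        # Step 3: merge the pair and record the fusion level L(m).
        new = r + s
        merges.append((new, level))
        clusters = [c for c in clusters if c not in (r, s)]
        # Step 4: complete-linkage update, d[(r,s),(k)] = max(d[(k),(r)], d[(k),(s)]).
        for k in clusters:
            D[frozenset({new, k})] = max(D.pop(frozenset({k, r})),
                                         D.pop(frozenset({k, s})))
        clusters.append(new)
    return merges
```

Run on the distance matrix of the worked example below, it reproduces the fusion levels 17, 23, 28 and 43 obtained by hand.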
The working example is based on a JC69 genetic distance matrix computed from the 5S ribosomal RNA sequence alignment of five bacteria: Bacillus subtilis ($a$), Bacillus stearothermophilus ($b$), Lactobacillus viridescens ($c$), Acholeplasma modicum ($d$), and Micrococcus luteus ($e$).
Let us assume that we have five elements $(a,b,c,d,e)$ and the following matrix $D_1$ of pairwise distances between them:
| | a | b | c | d | e |
|---|---|---|---|---|---|
| a | 0 | 17 | 21 | 31 | 23 |
| b | 17 | 0 | 30 | 34 | 21 |
| c | 21 | 30 | 0 | 28 | 39 |
| d | 31 | 34 | 28 | 0 | 43 |
| e | 23 | 21 | 39 | 43 | 0 |
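Readers who want to reproduce the example programmatically can use the following sketch (assuming SciPy is installed; the variable names are ours). Its merge heights, 17, 23, 28 and 43, match the fusion levels obtained step by step below.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# Distance matrix D1 for the five elements a, b, c, d, e.
D1 = np.array([
    [ 0, 17, 21, 31, 23],
    [17,  0, 30, 34, 21],
    [21, 30,  0, 28, 39],
    [31, 34, 28,  0, 43],
    [23, 21, 39, 43,  0],
], dtype=float)

# linkage() expects a condensed distance matrix; 'complete' selects
# the farthest-neighbour (complete-linkage) criterion.
Z = linkage(squareform(D1), method='complete')
print(Z[:, 2])  # fusion levels: [17. 23. 28. 43.]
```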
In this example, $D_1(a,b)=17$ is the lowest value of $D_1$, so we cluster elements $a$ and $b$.

Let $u$ denote the node to which $a$ and $b$ are now connected. Setting $\delta(a,u)=\delta(b,u)=D_1(a,b)/2$ ensures that elements $a$ and $b$ are equidistant from $u$. This corresponds to the expectation of the ultrametricity hypothesis. The branches joining $a$ and $b$ to $u$ then have lengths $\delta(a,u)=\delta(b,u)=17/2=8.5$ (see the final dendrogram).
We then proceed to update the initial proximity matrix $D_1$ into a new proximity matrix $D_2$ (see below), reduced in size by one row and one column because of the clustering of $a$ with $b$. Bold values in $D_2$ correspond to the new distances, calculated by retaining the maximum distance between each element of the first cluster $(a,b)$ and each of the remaining elements:

$D_2((a,b),c)=\max(D_1(a,c),D_1(b,c))=\max(21,30)=30$

$D_2((a,b),d)=\max(D_1(a,d),D_1(b,d))=\max(31,34)=34$

$D_2((a,b),e)=\max(D_1(a,e),D_1(b,e))=\max(23,21)=23$

Italicized values in $D_2$ are not affected by the matrix update, as they correspond to distances between elements not involved in the first cluster.
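The same update can be checked with a throwaway Python snippet (not part of any library): each new entry is simply the larger of the two old distances.

```python
D1 = {('a', 'c'): 21, ('b', 'c'): 30,
      ('a', 'd'): 31, ('b', 'd'): 34,
      ('a', 'e'): 23, ('b', 'e'): 21}

# Complete-linkage update for the new cluster (a,b) against c, d and e.
D2 = {k: max(D1[('a', k)], D1[('b', k)]) for k in 'cde'}
print(D2)  # {'c': 30, 'd': 34, 'e': 23}
```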
We now reiterate the three previous steps, starting from the new distance matrix $D_2$:
| | (a,b) | c | d | e |
|---|---|---|---|---|
| (a,b) | 0 | **30** | **34** | **23** |
| c | **30** | 0 | *28* | *39* |
| d | **34** | *28* | 0 | *43* |
| e | **23** | *39* | *43* | 0 |
Here, $D_2((a,b),e)=23$ is the lowest value of $D_2$, so we cluster $(a,b)$ with element $e$.

Let $v$ denote the node to which $(a,b)$ and $e$ are now connected. Because of the ultrametricity constraint, the branches joining $a$ or $b$ to $v$, and $e$ to $v$, are equal and have the following total length:

$\delta(a,v)=\delta(b,v)=\delta(e,v)=23/2=11.5$

We deduce the missing branch length:

$\delta(u,v)=\delta(e,v)-\delta(a,u)=\delta(e,v)-\delta(b,u)=11.5-8.5=3$

(see the final dendrogram).
We then proceed to update the $D_2$ matrix into a new distance matrix $D_3$ (see below), reduced in size by one row and one column because of the clustering of $(a,b)$ with $e$:

$D_3(((a,b),e),c)=\max(D_2((a,b),c),D_2(e,c))=\max(30,39)=39$

$D_3(((a,b),e),d)=\max(D_2((a,b),d),D_2(e,d))=\max(34,43)=43$
We again reiterate the three previous steps, starting from the updated distance matrix $D_3$:
| | ((a,b),e) | c | d |
|---|---|---|---|
| ((a,b),e) | 0 | **39** | **43** |
| c | **39** | 0 | *28* |
| d | **43** | *28* | 0 |
Here, $D_3(c,d)=28$ is the lowest value of $D_3$, so we cluster elements $c$ and $d$.

Let $w$ denote the node to which $c$ and $d$ are now connected. The branches joining $c$ and $d$ to $w$ then have lengths $\delta(c,w)=\delta(d,w)=28/2=14$ (see the final dendrogram).
There is a single entry to update:

$D_4((c,d),((a,b),e))=\max(D_3(c,((a,b),e)),\,D_3(d,((a,b),e)))=\max(39,43)=43$
The final matrix $D_4$ is:
| | ((a,b),e) | (c,d) |
|---|---|---|
| ((a,b),e) | 0 | **43** |
| (c,d) | **43** | 0 |
So we join clusters $((a,b),e)$ and $(c,d)$.

Let $r$ denote the (root) node to which $((a,b),e)$ and $(c,d)$ are now connected. The branches joining $((a,b),e)$ and $(c,d)$ to $r$ then have lengths:

$\delta(((a,b),e),r)=\delta((c,d),r)=43/2=21.5$

We deduce the two remaining branch lengths:

$\delta(v,r)=\delta(((a,b),e),r)-\delta(e,v)=21.5-11.5=10$

$\delta(w,r)=\delta((c,d),r)-\delta(c,w)=21.5-14=7.5$
The dendrogram is now complete. It is ultrametric because all tips ($a$ to $e$) are equidistant from the root $r$:

$\delta(a,r)=\delta(b,r)=\delta(e,r)=\delta(c,r)=\delta(d,r)=21.5$

The dendrogram is therefore rooted by $r$, its deepest node.
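As a quick sanity check, the following Python sketch (our own encoding of the tree, not library code) sums the branch lengths derived above along each tip-to-root path and confirms that every tip sits at depth 21.5:

```python
# Branch lengths from the worked example: child node -> (parent node, length).
parent = {
    'a': ('u', 8.5), 'b': ('u', 8.5),    # a and b join at u
    'e': ('v', 11.5), 'u': ('v', 3.0),   # (a,b) and e join at v
    'c': ('w', 14.0), 'd': ('w', 14.0),  # c and d join at w
    'v': ('r', 10.0), 'w': ('r', 7.5),   # ((a,b),e) and (c,d) join at the root r
}

def depth(node):
    """Total branch length from a node up to the root r."""
    total = 0.0
    while node != 'r':
        node, length = parent[node]
        total += length
    return total

print({tip: depth(tip) for tip in 'abcde'})  # every tip is at 21.5
```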
Alternative linkage schemes include single linkage clustering and average linkage clustering; implementing a different linkage in the naive algorithm is simply a matter of using a different formula to calculate inter-cluster distances in the initial computation of the proximity matrix and in step 4 of the above algorithm. For complete linkage, it is the $\max$ in the update formula of step 4 that must be replaced. An optimally efficient algorithm is, however, not available for arbitrary linkages.
Complete linkage clustering avoids a drawback of the alternative single linkage method, the so-called chaining phenomenon, where clusters formed via single linkage clustering may be forced together due to single elements being close to each other, even though many of the elements in each cluster may be very distant from each other. Complete linkage tends to find compact clusters of approximately equal diameters.[7]
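To illustrate the chaining effect, the sketch below (assuming SciPy is available; the evenly spaced points are a made-up example, not data from the article) clusters ten points on a line with both criteria. Single linkage keeps absorbing the next point along the chain at height 1, while each complete-linkage merge happens at the diameter of the cluster it creates, so compact groups are favoured.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

# Ten evenly spaced points on a line form a "chain": each point is at
# distance 1 from its neighbours but up to 9 from the far end.
points = np.arange(10, dtype=float).reshape(-1, 1)

single = linkage(points, method='single')      # nearest-neighbour criterion
complete = linkage(points, method='complete')  # farthest-neighbour criterion

print(single[:, 2])    # all merges happen at height 1.0 (chaining)
print(complete[:, 2])  # each merge height equals the diameter of the new cluster
```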
Comparison of dendrograms: single-linkage clustering, complete-linkage clustering, average linkage clustering (WPGMA), and average linkage clustering (UPGMA).