Connected-component labeling (CCL), connected-component analysis (CCA), blob extraction, region labeling, blob discovery, or region extraction is an algorithmic application of graph theory, where subsets of connected components are uniquely labeled based on a given heuristic. Connected-component labeling is not to be confused with segmentation.
Connected-component labeling is used in computer vision to detect connected regions in binary digital images, although color images and data with higher dimensionality can also be processed.[1] [2] When integrated into an image recognition system or human-computer interaction interface, connected component labeling can operate on a variety of information.[3] [4] Blob extraction is generally performed on the resulting binary image from a thresholding step, but it can be applicable to gray-scale and color images as well. Blobs may be counted, filtered, and tracked.
Blob extraction is related to but distinct from blob detection.
A graph, containing vertices and connecting edges, is constructed from relevant input data. The vertices contain information required by the comparison heuristic, while the edges indicate connected 'neighbors'. An algorithm traverses the graph, labeling the vertices based on the connectivity and relative values of their neighbors. Connectivity is determined by the medium; image graphs, for example, can use 4-connected or 8-connected neighborhoods.[5]
Following the labeling stage, the graph may be partitioned into subsets, after which the original information can be recovered and processed.
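As an illustration (not part of any cited implementation), the neighbor offsets that define 4-connected and 8-connected image graphs can be written in Python as follows; the names are illustrative:

    # Neighbor offsets (row, column) for the two standard image connectivities.
    # 4-connectivity: north, south, west and east neighbors only.
    OFFSETS_4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]

    # 8-connectivity adds the four diagonal neighbors.
    OFFSETS_8 = OFFSETS_4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

    def neighbors(row, col, height, width, offsets=OFFSETS_4):
        """Yield the in-bounds neighbors of (row, col) for the chosen connectivity."""
        for dr, dc in offsets:
            r, c = row + dr, col + dc
            if 0 <= r < height and 0 <= c < width:
                yield r, c

Enumerating these offsets for every pixel enumerates the edges of the implicit image graph.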
The term connected-component labeling (CCL) and its definition are quite consistent in the academic literature, whereas the usage of connected-component analysis (CCA) varies in both terminology and problem definition.
Rosenfeld et al.[6] define connected components labeling as the “[c]reation of a labeled image in which the positions associated with the same connected component of the binary input image have a unique label.” Shapiro et al.[7] define CCL as an operator whose “input is a binary image and [...] output is a symbolic image in which the label assigned to each pixel is an integer uniquely identifying the connected component to which that pixel belongs.”[8]
There is no consensus on the definition of CCA in the academic literature. It is often used interchangeably with CCL.[9] [10] A more extensive definition is given by Shapiro et al.: “Connected component analysis consists of connected component labeling of the black pixels followed by property measurement of the component regions and decision making.” The definition of connected-component analysis presented here is more general, taking these thoughts into account.
The algorithms discussed can be generalized to arbitrary dimensions, albeit with increased time and space complexity.
This is a fast and very simple method to implement and understand. It is based on graph traversal methods in graph theory. In short, once the first pixel of a connected component is found, all the connected pixels of that connected component are labelled before going on to the next pixel in the image. This algorithm is part of Vincent and Soille's watershed segmentation algorithm;[11] other implementations also exist.[12]
To do this, a linked list is formed that keeps the indexes of the pixels that are connected to each other (steps (2) and (3) below). How the linked list is defined determines whether a depth-first or a breadth-first search is used; for this particular application, it makes no difference which strategy is chosen. The simplest kind of last-in-first-out queue, implemented as a singly linked list, results in a depth-first search strategy.
It is assumed that the input image is a binary image, with pixels being either background or foreground, and that the connected components among the foreground pixels are desired. The algorithm steps can be written as:

1. Start from the first pixel in the image. Set the current label to 1. Go to (2).
2. If this pixel is a foreground pixel and it is not already labelled, give it the current label and add it as the first element of a queue, then go to (3). If it is a background pixel or it is already labelled, repeat (2) for the next pixel in the image.
3. Pop an element from the queue and look at its neighbours (based on the chosen connectivity). If a neighbour is a foreground pixel and is not already labelled, give it the current label and add it to the queue. Repeat (3) until there are no more elements in the queue.
4. Go to (2) for the next pixel in the image and increment the current label by 1.
Note that the pixels are labelled before being put into the queue. The queue will only keep a pixel to check its neighbours and add them to the queue if necessary. This algorithm only needs to check the neighbours of each foreground pixel once and doesn't check the neighbours of background pixels.
The pseudocode is:

    algorithm OneComponentAtATime(data)
        input: imageData[xDim][yDim]
        initialization: label = 0, labelArray[xDim][yDim] = 0, statusArray[xDim][yDim] = false, queue1, queue2
        for i = 0 to xDim do
            for j = 0 to yDim do
                if imageData[i][j] has not been processed do
                    if imageData[i][j] is a foreground pixel do
                        check its four neighbors (north, south, east, west):
                            if neighbor is not processed do
                                if neighbor is a foreground pixel do
                                    add it to queue1
                                else
                                    update its status as processed
                                end if
                            end if
                        labelArray[i][j] = label        (give label)
                        statusArray[i][j] = true        (update status)
                        while queue1 is not empty do
                            for each pixel in queue1 do
                                check its four neighbors:
                                    if neighbor is not processed do
                                        if neighbor is a foreground pixel do
                                            add it to queue2
                                        else
                                            update its status as processed
                                        end if
                                    end if
                                give it the current label
                                update its status as processed
                                remove the current element from queue1
                            copy queue2 into queue1
                        end while
                        increase the label
                    else
                        update its status as processed
                    end if
                end if
            end for
        end for
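A minimal Python sketch of the same one-component-at-a-time strategy is given below; it uses a FIFO queue (i.e. a breadth-first search), labels pixels before enqueueing them as noted above, and reserves the label 0 for background. The function name and conventions are illustrative, not taken from a specific library:

    from collections import deque

    def one_component_at_a_time(image):
        """Label the 4-connected components of a binary image (list of lists of 0/1).
        Returns an array of labels of the same size; background pixels keep label 0."""
        height, width = len(image), len(image[0])
        labels = [[0] * width for _ in range(height)]
        current_label = 0
        for i in range(height):
            for j in range(width):
                # Skip background pixels and foreground pixels that are already labelled.
                if image[i][j] == 0 or labels[i][j] != 0:
                    continue
                # First pixel of a new component: flood the whole component
                # before moving on to the next pixel of the raster scan.
                current_label += 1
                labels[i][j] = current_label
                queue = deque([(i, j)])
                while queue:
                    r, c = queue.popleft()
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < height and 0 <= nc < width
                                and image[nr][nc] == 1 and labels[nr][nc] == 0):
                            labels[nr][nc] = current_label  # label before enqueueing
                            queue.append((nr, nc))
        return labels

Replacing the deque with a stack (popping from the end) turns the traversal into a depth-first search without changing the result.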
Relatively simple to implement and understand, the two-pass algorithm[13] (also known as the Hoshen–Kopelman algorithm) iterates through 2-dimensional binary data. The algorithm makes two passes over the image: the first pass to assign temporary labels and record equivalences, and the second pass to replace each temporary label by the smallest label of its equivalence class.
The input data can be modified in situ (which carries the risk of data corruption), or labeling information can be maintained in an additional data structure.
Connectivity checks are carried out by checking the labels of neighbor pixels (neighbor elements whose labels are not yet assigned are ignored), that is, the north-east, north, north-west and west neighbors of the current pixel (assuming 8-connectivity); 4-connectivity uses only the north and west neighbors of the current pixel. The following conditions are checked to determine the value of the label to be assigned to the current pixel (4-connectivity is assumed).
Conditions to check:

1. If neither the west nor the north neighbor has the same value as the current pixel, create a new label and assign it to the current pixel.
2. If exactly one of the two neighbors has the same value as the current pixel, assign that neighbor's label to the current pixel.
3. If both neighbors have the same value as the current pixel and carry the same label, assign that label to the current pixel.
4. If both neighbors have the same value as the current pixel but carry different labels, assign the smaller of the two labels to the current pixel and record that the two labels are equivalent.
The algorithm continues this way, and creates new region labels whenever necessary. The key to a fast algorithm, however, is how this merging is done. This algorithm uses the union-find data structure, which provides excellent performance for keeping track of equivalence relationships.[14] Union-find essentially stores labels which correspond to the same blob in a disjoint-set data structure, making it easy to remember the equivalence of two labels through an interface method, e.g. findSet(l), which returns the minimum label value that is equivalent to the function argument 'l'.
Once the initial labeling and equivalence recording is completed, the second pass merely replaces each pixel label with its equivalent disjoint-set representative element.
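A minimal disjoint-set (union-find) sketch in Python is shown below; the class and method names (including find, standing in for the findSet(l) interface mentioned above) are illustrative. Because a union always keeps the smaller label as the representative, find returns the minimum label equivalent to its argument:

    class UnionFind:
        """Disjoint-set structure for recording label equivalences."""

        def __init__(self):
            self.parent = {}

        def make_set(self, label):
            # A new label initially forms its own equivalence class.
            self.parent.setdefault(label, label)

        def find(self, label):
            # Follow parent pointers to the representative, compressing the path.
            root = label
            while self.parent[root] != root:
                root = self.parent[root]
            while self.parent[label] != root:
                self.parent[label], label = root, self.parent[label]
            return root

        def union(self, a, b):
            # Merge two equivalence classes, keeping the smaller label as the root.
            root_a, root_b = self.find(a), self.find(b)
            if root_a != root_b:
                if root_b < root_a:
                    root_a, root_b = root_b, root_a
                self.parent[root_b] = root_a

For example, after make_set(1), make_set(2) and union(1, 2), find(2) returns 1.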
A faster-scanning algorithm for connected-region extraction is presented below.[15]
On the first pass:

1. Iterate through each element of the data by column, then by row (raster scanning).
2. If the element is not the background:
   1. Get the neighboring elements of the current element.
   2. If there are no neighbors, uniquely label the current element and continue.
   3. Otherwise, find the neighbor with the smallest label and assign it to the current element.
   4. Store the equivalence between neighboring labels.
On the second pass:

1. Iterate through each element of the data by column, then by row.
2. If the element is not the background, relabel the element with the lowest equivalent label.
Here, the background is a classification, specific to the data, used to distinguish salient (foreground) elements from the rest. If the background variable is omitted, the two-pass algorithm will treat the background as just another region.
1. The array from which connected regions are to be extracted is given below (8-connectivity based).
We first assign different binary values to the elements in the graph. The values "0" and "1" at the center of each element in the following graph are the elements' values, whereas the values "1, 2, ..., 7" in the next two graphs are the elements' labels. The two concepts should not be confused.
2. After the first pass, the following labels are generated:
A total of 7 labels are generated in accordance with the conditions highlighted above.
The label equivalence relationships generated are:
| Set ID | Equivalent labels |
|---|---|
| 1 | 1, 2 |
| 2 | 1, 2 |
| 3 | 3, 4, 5, 6, 7 |
| 4 | 3, 4, 5, 6, 7 |
| 5 | 3, 4, 5, 6, 7 |
| 6 | 3, 4, 5, 6, 7 |
| 7 | 3, 4, 5, 6, 7 |
3. Array generated after the merging of labels is carried out. Here, the label value that was the smallest for a given region "floods" throughout the connected region and gives two distinct labels, and hence two distinct regions.
4. Final result in color to clearly see two different regions that have been found in the array.
The pseudocode is:
    algorithm TwoPass(data) is
        linked = []
        labels = structure with dimensions of data, initialized with the value of Background
        NextLabel = 0

        First pass
        for row in data do
            for column in row do
                if data[row][column] is not Background then
                    neighbors = connected elements with the current element's value
                    if neighbors is empty then
                        linked[NextLabel] = set containing NextLabel
                        labels[row][column] = NextLabel
                        NextLabel += 1
                    else
                        Find the smallest label
                        L = neighbors labels
                        labels[row][column] = min(L)
                        for label in L do
                            linked[label] = union(linked[label], L)

        Second pass
        for row in data do
            for column in row do
                if data[row][column] is not Background then
                    labels[row][column] = find(labels[row][column])

        return labels
The find and union algorithms are implemented as described in union-find.
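As a concrete illustration, the two-pass procedure can be written in Python for 4-connectivity (only the west and north neighbors are examined on the first pass), with a small embedded union-find replacing the generic find and union calls. This is a sketch under those assumptions rather than an optimized implementation, and all names are illustrative:

    def two_pass_label(data, background=0):
        """Two-pass connected-component labeling of a 2-D array (4-connectivity).
        Background cells keep the label 0; foreground components receive 1, 2, ..."""
        height, width = len(data), len(data[0])
        labels = [[0] * width for _ in range(height)]
        parent = {}  # union-find parent pointers for label equivalences

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        def union(a, b):
            root_a, root_b = find(a), find(b)
            if root_a != root_b:
                if root_b < root_a:
                    root_a, root_b = root_b, root_a
                parent[root_b] = root_a  # keep the smaller label as representative

        next_label = 1

        # First pass: assign provisional labels and record equivalences.
        for r in range(height):
            for c in range(width):
                if data[r][c] == background:
                    continue
                west = labels[r][c - 1] if c > 0 and data[r][c - 1] == data[r][c] else 0
                north = labels[r - 1][c] if r > 0 and data[r - 1][c] == data[r][c] else 0
                neighbor_labels = [l for l in (west, north) if l != 0]
                if not neighbor_labels:
                    parent[next_label] = next_label
                    labels[r][c] = next_label
                    next_label += 1
                else:
                    smallest = min(neighbor_labels)
                    labels[r][c] = smallest
                    for l in neighbor_labels:
                        union(smallest, l)

        # Second pass: replace each provisional label by its representative.
        for r in range(height):
            for c in range(width):
                if labels[r][c] != 0:
                    labels[r][c] = find(labels[r][c])
        return labels

Because every union keeps the smaller label as the representative, the second pass replaces each provisional label with the minimum label of its equivalence class, matching the "flooding" behavior described in the worked example above.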
1. Create a region counter.
2. Scan the image (in the following example, it is assumed that scanning is done from left to right and from top to bottom):
   - For every pixel, check the north and west pixels (when considering 4-connectivity) or the north-east, north, north-west and west pixels (when considering 8-connectivity) against the region criterion (i.e. an intensity value of 1 in a binary image, or a similar intensity to connected pixels in a gray-scale image).
   - If none of the neighbors fits the criterion, assign the pixel the current value of the region counter and increment the counter.
   - If only one neighbor fits the criterion, assign the pixel to that region.
   - If multiple neighbors match and all belong to the same region, assign the pixel to their region.
   - If multiple neighbors match and belong to different regions, assign the pixel to one of them and record that these regions are equivalent.
3. Scan the image again, assigning all equivalent regions the same region value.
Some of the steps present in the two-pass algorithm can be merged for efficiency, allowing for a single sweep through the image. Multi-pass algorithms also exist, some of which run in linear time relative to the number of image pixels.[16]
In the early 1990s, there was considerable interest in parallelizing connected-component algorithms in image analysis applications, due to the bottleneck of sequentially processing each pixel.[17]
Interest in these algorithms has been renewed with the widespread use of CUDA.
Algorithm:
    One-Component-at-a-Time(image)
        [M, N] := size(image)
        connected := zeros(M, N)
        mark := value
        difference := increment
        offsets := [-1; M; 1; -M]
        index := []
        no_of_objects := 0
        for i: 1:M do
            for j: 1:N do
                if (image(i, j)
The run time of the algorithm depends on the size of the image and the amount of foreground. The time complexity is comparable to that of the two-pass algorithm if the foreground covers a significant part of the image; otherwise it is lower. However, memory access is less structured than for the two-pass algorithm, which tends to increase the run time in practice.
In the last two decades many novel approaches to connected-component labeling have been proposed, but almost none of them have been subjected to a comparative performance assessment using the same data set. YACCLAB[18] [19] (an acronym for Yet Another Connected Components Labeling Benchmark) is an example of an open-source C++ framework that collects, runs, and tests connected-component labeling algorithms.
The emergence of FPGAs with enough capacity to perform complex image processing tasks also led to high-performance architectures for connected-component labeling.[20] [21] Most of these architectures utilize the single-pass variant of this algorithm, because of the limited memory resources available on an FPGA. These types of connected-component labeling architectures can process several image pixels in parallel, thereby achieving high throughput and low processing latency.