Primary clustering
In computer programming, primary clustering is a phenomenon that causes performance degradation in linear-probing hash tables. As elements are added to a linear-probing hash table, they tend to cluster together into long runs (i.e., long contiguous regions of the hash table that contain no free slots). If the hash table is at a load factor of $1 - 1/x$ for some parameter $x$, then the expected length of the run containing a given element is $\Theta(x^2)$. This causes insertions and negative queries to take expected time $\Theta(x^2)$ in a linear-probing hash table.
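For illustration, linear probing can be sketched as follows. This is a minimal sketch, not a production design: the class name and fixed capacity are arbitrary choices for this example, and there is no resizing or deletion.

```python
class LinearProbingTable:
    """Minimal open-addressing hash table with linear probing (sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = [None] * capacity  # None marks a free slot

    def _probe(self, key):
        # Scan forward from the key's hash position until we find either
        # the key or a free slot. This walk is exactly what becomes slow
        # inside a long run.
        i = hash(key) % self.capacity
        while self.slots[i] is not None and self.slots[i] != key:
            i = (i + 1) % self.capacity
        return i

    def insert(self, key):
        self.slots[self._probe(key)] = key

    def contains(self, key):
        return self.slots[self._probe(key)] == key
```

A negative lookup must scan all the way to the first free slot after the run containing its hash position, which is why long runs are costly.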
Causes of primary clustering
Primary clustering has two causes:
- Winner keeps winning: The longer that a run becomes, the more likely it is to accrue additional elements. This causes a positive feedback loop that contributes to the clustering effect. However, this alone would not cause the quadratic blowup.[1] [2]
- Joining of runs: A single insertion may not only increase the length of the run that it is in by one, but may instead connect together two runs that were already relatively long. This is what causes the quadratic blowup in expected run length.
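The quadratic blowup in run length can be observed empirically. The following sketch (the function name and parameters are illustrative choices) fills a table with uniformly random hashes and reports the size-weighted average run length, i.e., the expected length of the run containing a randomly chosen element:

```python
import random

def average_run_length(table_size, load, seed=0):
    """Fill a linear-probing table to the given load factor with uniformly
    random hashes, then return the expected length of the run containing a
    random element (each run weighted by its size)."""
    rng = random.Random(seed)
    occupied = [False] * table_size
    for _ in range(int(load * table_size)):
        i = rng.randrange(table_size)
        while occupied[i]:          # linear probing: walk to a free slot
            i = (i + 1) % table_size
        occupied[i] = True
    # Collect run lengths (contiguous occupied stretches; wraparound is
    # ignored for simplicity in this sketch).
    runs, current = [], 0
    for full in occupied:
        if full:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    # The run containing a random *element* is size-biased, so weight
    # each run by its length.
    return sum(r * r for r in runs) / sum(runs)
```

Comparing load factor $1 - 1/2$ (so $x = 2$) against $1 - 1/8$ (so $x = 8$) shows the average growing far faster than the load factor alone would suggest, consistent with $\Theta(x^2)$.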
Another way to understand primary clustering is by examining the standard deviation of the number of items that hash to a given region within the hash table. Consider a sub-region of the hash table of size $x^2$. The expected number of items that hash into the region is $x^2 - x$. On the other hand, the standard deviation of the number of such items is $\Theta(x)$. It follows that, with probability $\Theta(1)$, the number of items that hash into the region will exceed the size $x^2$ of the region. Intuitively, this means that regions of size $x^2$ will often overflow, while larger regions typically will not. This intuition is often used as the starting point for formal analyses of primary clustering.[3] [4]
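This standard-deviation intuition can be checked numerically. In the sketch below, the function name and the table size of $100 x^2$ slots are arbitrary experimental choices; the experiment simply counts how often a fixed window receives more hashes than it has slots:

```python
import random

def overflow_rate(x, window, trials=500, seed=0):
    """Estimate the probability that more than `window` items hash into a
    fixed window of `window` slots, when n = (1 - 1/x) * table_size items
    are hashed uniformly into the table."""
    rng = random.Random(seed)
    table_size = 100 * x * x                  # arbitrary: large vs. the window
    n = int((1 - 1 / x) * table_size)
    overflows = 0
    for _ in range(trials):
        # Count hashes landing in the window [0, window).
        count = sum(1 for _ in range(n) if rng.randrange(table_size) < window)
        if count > window:
            overflows += 1
    return overflows / trials
```

A window of size $x^2$ overflows a constant fraction of the time, while a window ten times larger essentially never does, matching the intuition above.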
Effect on performance
Primary clustering causes performance degradation for both insertions and queries in a linear-probing hash table. Insertions must travel to the end of a run, and therefore take expected time $\Theta(x^2)$. Negative queries (i.e., queries that are searching for an element that turns out not to be present) must also travel to the end of a run, and thus also take expected time $\Theta(x^2)$. Positive queries can terminate as soon as they find the element that they are searching for. As a result, the expected time to query a random element in the hash table is $\Theta(x)$. However, positive queries to recently inserted elements (e.g., an element that was just inserted) take expected time $\Theta(x^2)$.
These bounds also hold for linear probing with lazy deletions (i.e., using tombstones for deletions), as long as the hash table is rebuilt (and the tombstones are cleared out) semi-frequently. It suffices to perform such a rebuild at least once every $\Theta(n / x)$ insertions, where $n$ is the number of elements.
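Lazy deletion with tombstones, plus the rebuild step that clears them out, might be sketched as follows (the class and method names are hypothetical, and resizing is omitted):

```python
TOMBSTONE = object()  # marker left behind by a lazy deletion

class LazyDeleteTable:
    """Linear probing with tombstone deletions (illustrative sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = [None] * capacity

    def _scan(self, key):
        # Probing must step over tombstones: the key may lie beyond one.
        i = hash(key) % self.capacity
        while self.slots[i] is not None and self.slots[i] != key:
            i = (i + 1) % self.capacity
        return i

    def insert(self, key):
        i = hash(key) % self.capacity
        first_tomb = None
        while self.slots[i] is not None and self.slots[i] != key:
            if self.slots[i] is TOMBSTONE and first_tomb is None:
                first_tomb = i          # remember the first reusable slot
            i = (i + 1) % self.capacity
        if self.slots[i] != key:        # not already present
            self.slots[first_tomb if first_tomb is not None else i] = key

    def delete(self, key):
        i = self._scan(key)
        if self.slots[i] == key:
            self.slots[i] = TOMBSTONE   # lazy deletion: leave a marker

    def contains(self, key):
        return self.slots[self._scan(key)] == key

    def rebuild(self):
        # Semi-regular rebuilds clear accumulated tombstones.
        keys = [k for k in self.slots if k is not None and k is not TOMBSTONE]
        self.slots = [None] * self.capacity
        for k in keys:
            self.insert(k)
```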
Common misconceptions
Many textbooks describe the winner-keeps-winning effect (in which the longer a run becomes, the more likely it is to accrue additional elements) as the sole cause of primary clustering.[5] [6] [7] [8] [9] [10] [11] However, as noted by Knuth, this is not the main cause of primary clustering.
Some textbooks state that the expected time for a positive query is $\Theta(x)$,[12] typically citing Knuth. This is true for a query to a random element. Some positive queries may have much larger expected running times, however. For example, if one inserts an element and then immediately queries that element, the query will take the same amount of time as did the insertion, which is $\Theta(x^2)$ in expectation.
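The gap between these two kinds of positive queries can be seen in a simulation. The sketch below (function name and the 10% cutoff are arbitrary choices) records the probe distance paid by each insertion; querying a random element later costs its own insertion's probe distance, so the average over all insertions approximates the random-element query cost, while the average over the most recent insertions approximates the cost of querying a just-inserted element:

```python
import random

def displacement_stats(table_size, x, seed=0):
    """Fill a table to load factor 1 - 1/x with random hashes and return
    (average probe distance over all insertions,
     average probe distance over the last ~10% of insertions)."""
    rng = random.Random(seed)
    occupied = [False] * table_size
    costs = []  # probe distance paid by each insertion, in order
    for _ in range(int((1 - 1 / x) * table_size)):
        i = rng.randrange(table_size)
        steps = 0
        while occupied[i]:
            i = (i + 1) % table_size
            steps += 1
        occupied[i] = True
        costs.append(steps)
    recent = costs[-(len(costs) // 10):]
    return sum(costs) / len(costs), sum(recent) / len(recent)
```

With $x = 8$, the recent insertions are markedly more expensive than the historical average, reflecting the $\Theta(x)$ versus $\Theta(x^2)$ distinction.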
Techniques for avoiding primary clustering
Ordered linear probing[13] (often referred to as Robin Hood hashing[14]) is a technique for reducing the effects of primary clustering on queries. Ordered linear probing sorts the elements within each run by their hash. Thus, a query can terminate as soon as it encounters any element whose hash is larger than that of the element being queried. This results in both positive and negative queries taking expected time $\Theta(x)$.
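The idea can be sketched as follows. This is a simplified illustration (the class name is hypothetical, and hash ordering across the table's wraparound boundary is glossed over): insertions displace any entry with a larger hash, keeping each run sorted, and lookups stop at the first larger hash.

```python
class OrderedLinearProbing:
    """Ordered linear probing sketch: runs are kept sorted by hash value."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = [None] * capacity  # each slot holds a (hash, key) pair

    def insert(self, key):
        h = hash(key) % self.capacity
        entry = (h, key)
        i = h
        while self.slots[i] is not None:
            # Swap out any incumbent with a larger hash, then keep walking
            # with the displaced entry; this keeps each run sorted by hash.
            if self.slots[i][0] > entry[0]:
                self.slots[i], entry = entry, self.slots[i]
            i = (i + 1) % self.capacity
        self.slots[i] = entry

    def contains(self, key):
        h = hash(key) % self.capacity
        i = h
        while self.slots[i] is not None:
            if self.slots[i] == (h, key):
                return True
            if self.slots[i][0] > h:
                return False  # run is sorted: the key cannot appear later
            i = (i + 1) % self.capacity
        return False
```

The early `return False` is the payoff: a negative query no longer needs to walk to the end of the run.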
Graveyard hashing is a variant of ordered linear probing that eliminates the asymptotic effects of primary clustering for all operations. Graveyard hashing strategically leaves gaps within runs that future insertions can make use of. These gaps, which can be thought of as tombstones (like those created by lazy deletions), are inserted into the table during semi-regular rebuilds. The gaps then speed up the insertions that take place until the next semi-regular rebuild occurs. Every operation in a graveyard hash table takes expected time $\Theta(x)$.
Many sources recommend the use of quadratic probing as an alternative to linear probing that empirically avoids the effects of primary clustering.
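For example, one common quadratic probing scheme uses triangular-number offsets $0, 1, 3, 6, 10, \ldots$, which are guaranteed to visit every slot when the table size is a power of two (the function name below is illustrative):

```python
def quadratic_probe_sequence(h, table_size, probes):
    """Return the first `probes` slots examined by triangular-number
    quadratic probing starting from hash position h. The i-th probe
    lands at h + i*(i+1)/2 (mod table_size); for power-of-two table
    sizes this sequence is a permutation of all slots."""
    return [(h + (i + i * i) // 2) % table_size for i in range(probes)]
```

Because consecutive probes jump by growing strides rather than by one, elements that hash near each other no longer pile into a single contiguous run, although elements with the *same* initial hash still share a probe sequence (so-called secondary clustering).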
Notes and References
- Knuth, Donald E. (1997). The Art of Computer Programming, Volume 3: Sorting and Searching. Reading, Mass.: Addison-Wesley. pp. 527–528. ISBN 0-201-89683-4.
- Bender, Michael A.; Kuszmaul, Bradley C.; Kuszmaul, William (February 2022). "Linear Probing Revisited: Tombstones Mark the Demise of Primary Clustering". 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS). IEEE. pp. 1171–1182. doi:10.1109/FOCS52979.2021.00115. ISBN 978-1-6654-2055-6.
- Pagh, Anna; Pagh, Rasmus; Ruzic, Milan (2007). "Linear Probing with Constant Independence". Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing. New York: ACM. pp. 318–327. doi:10.1145/1250790.1250839. ISBN 978-1-59593-631-8.
- Thorup, Mikkel; Zhang, Yin (January 2012). "Tabulation-Based 5-Independent Hashing with Applications to Linear Probing and Second Moment Estimation". SIAM Journal on Computing 41 (2): 293–331. doi:10.1137/100800774. ISSN 0097-5397.
- Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2022). Introduction to Algorithms (4th ed.). Cambridge, Mass. ISBN 978-0-262-04630-5.
- Drozdek, Adam (1995). Data Structures in C. PWS Publishing. ISBN 0-534-93495-1.
- Kruse, Robert L. (1987). Data Structures and Program Design (2nd ed.). Englewood Cliffs, N.J.: Prentice-Hall. ISBN 0-13-195884-4.
- McMillan, Michael (2014). Data Structures and Algorithms with JavaScript. Sebastopol, CA: O'Reilly. ISBN 978-1-4493-6493-9.
- Smith, Peter (2004). Applied Data Structures with C++. Sudbury, Mass.: Jones and Bartlett. ISBN 0-7637-2562-5.
- Tremblay, Jean-Paul; Sorenson, P. G. (1976). An Introduction to Data Structures with Applications. New York: McGraw-Hill. ISBN 0-07-065150-7.
- Handbook of Data Structures and Applications (2020). CRC Press. ISBN 978-0-367-57200-6.
- Sedgewick, Robert (1998). Algorithms in C (3rd ed.). Reading, Mass. ISBN 0-201-31452-5.
- Amble, Ole; Knuth, Donald E. (1974). "Ordered Hash Tables". The Computer Journal 17 (2): 135–142. doi:10.1093/comjnl/17.2.135.
- Celis, Pedro; Larson, Per-Åke; Munro, J. Ian (1985). "Robin Hood Hashing". 26th Annual Symposium on Foundations of Computer Science (FOCS 1985). IEEE.