Predecessor problem

In computer science, the predecessor problem involves maintaining a set of items so that, given a query element, one can efficiently find which element of the set precedes or succeeds it in a fixed order. Data structures used to solve the problem include balanced binary search trees, van Emde Boas trees, and fusion trees. In the static predecessor problem, the set of elements does not change, but in the dynamic predecessor problem, insertions into and deletions from the set are allowed.[1]

The predecessor problem is a simple case of the nearest neighbor problem, and data structures that solve it have applications in problems like integer sorting.

Definition

The problem consists of maintaining a set S, which contains a subset of U integers. Each of these integers can be stored with a word size of w, implying that U \le 2^w. Data structures that solve the problem support these operations:[2]

  * predecessor(x), which returns the largest element of S strictly smaller than x
  * successor(x), which returns the smallest element of S strictly larger than x

In addition, data structures which solve the dynamic version of the problem also support these operations:

  * insert(x), which adds x to the set S
  * delete(x), which removes x from the set S

The problem is typically analyzed in a transdichotomous model of computation such as the word RAM.
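
To make the interface concrete, the following is a minimal sketch of these operations in Python, using a plain sorted list as the underlying store (the class name NaivePredecessorSet is illustrative, not from the literature). Queries take O(\log n) time by binary search, but insertions and deletions take O(n) time, so this is only a naive baseline rather than any of the structures discussed in the next section.

    import bisect

    class NaivePredecessorSet:
        """Dynamic predecessor structure backed by a sorted Python list."""

        def __init__(self):
            self._items = []                      # sorted list of distinct integers

        def insert(self, x):
            i = bisect.bisect_left(self._items, x)
            if i == len(self._items) or self._items[i] != x:
                self._items.insert(i, x)          # O(n) shift

        def delete(self, x):
            i = bisect.bisect_left(self._items, x)
            if i < len(self._items) and self._items[i] == x:
                self._items.pop(i)                # O(n) shift

        def predecessor(self, x):
            """Largest stored element strictly smaller than x, or None."""
            i = bisect.bisect_left(self._items, x)
            return self._items[i - 1] if i > 0 else None

        def successor(self, x):
            """Smallest stored element strictly larger than x, or None."""
            i = bisect.bisect_right(self._items, x)
            return self._items[i] if i < len(self._items) else None

    s = NaivePredecessorSet()
    for v in (3, 9, 20):
        s.insert(v)
    assert s.predecessor(10) == 9                 # 9 is the largest element below 10
    assert s.successor(9) == 20                   # 20 is the smallest element above 9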

Data structures

One simple solution to this problem is to use a balanced binary search tree, which achieves (in Big O notation) a running time of O(\log n) for predecessor queries. The van Emde Boas tree achieves a query time of O(\log\log U), but requires O(U) space.[1] Dan Willard proposed an improvement on this space usage with the x-fast trie, which requires O(n \log U) space with the same query time, and the more complicated y-fast trie, which requires only O(n) space.[3] Fusion trees, introduced by Michael Fredman and Willard, achieve O(\log_w n) query time and O(n) space for the static problem.[4] The dynamic problem has been solved using exponential trees with O(\log_w n + \log\log n) query time,[5] and in O(\log_w n) expected time using hashing.[6]
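
To illustrate where the O(\log\log U) query time of the van Emde Boas tree comes from, the following is a minimal sketch supporting insert and predecessor over a universe of 2^k integers. As a simplifying assumption it allocates clusters lazily in a Python dict rather than as the textbook array of all clusters, so its space behaviour differs from the classical O(U)-space structure; successor and delete are omitted for brevity.

    class VEB:
        """van Emde Boas tree over the universe {0, ..., 2**k - 1} (sketch)."""

        def __init__(self, k):
            self.k = k
            self.min = None               # smallest element, never stored in a cluster
            self.max = None               # largest element
            if k > 1:
                self.lo_bits = k // 2     # bits handled inside each cluster
                self.summary = None       # VEB over indices of non-empty clusters
                self.clusters = {}        # cluster index -> VEB(self.lo_bits)

        def _high(self, x):
            return x >> self.lo_bits

        def _low(self, x):
            return x & ((1 << self.lo_bits) - 1)

        def insert(self, x):
            if self.min is None:                  # empty tree
                self.min = self.max = x
                return
            if x == self.min or x == self.max:    # duplicate of a cached extreme
                return
            if x < self.min:
                x, self.min = self.min, x         # new minimum stays lazy; old one recurses
            if x > self.max:
                self.max = x
            if self.k > 1:
                h, l = self._high(x), self._low(x)
                if h not in self.clusters:
                    self.clusters[h] = VEB(self.lo_bits)
                    if self.summary is None:
                        self.summary = VEB(self.k - self.lo_bits)
                    self.summary.insert(h)        # paid for by the O(1) insert into an empty cluster
                self.clusters[h].insert(l)

        def predecessor(self, x):
            """Largest stored element strictly smaller than x, or None."""
            if self.min is None or x <= self.min:
                return None
            if x > self.max:
                return self.max
            if self.k == 1:                       # universe {0, 1}: the answer is min
                return self.min
            h, l = self._high(x), self._low(x)
            cluster = self.clusters.get(h)
            if cluster is not None and cluster.min is not None and cluster.min < l:
                return (h << self.lo_bits) | cluster.predecessor(l)
            if self.summary is not None:
                c = self.summary.predecessor(h)   # nearest non-empty cluster to the left
                if c is not None:
                    return (c << self.lo_bits) | self.clusters[c].max
            return self.min                       # only the lazy minimum is smaller than x

    t = VEB(16)                                   # universe {0, ..., 65535}
    for v in (3, 9, 20, 1000):
        t.insert(v)
    assert t.predecessor(10) == 9
    assert t.predecessor(3) is None
    assert t.predecessor(10000) == 1000

Each query descends into either one cluster or the summary, never both, and the number of relevant bits roughly halves at each level, which gives the O(\log k) = O(\log\log U) query time.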

Mathematical properties

There have been a number of papers proving lower bounds for the predecessor problem, or identifying what the running time of asymptotically optimal solutions would be. For example, Paul Beame and Faith Ellen proved that for all values of w, there exists a value of n with query time (in Big Theta notation) \Theta\left(\tfrac{\log w}{\log\log w}\right), and similarly, for all values of n, there exists a value of w such that the query time is \Theta\left(\sqrt{\tfrac{\log n}{\log\log n}}\right).[1] Other lower-bound proofs rely on the notion of communication complexity.

For the static predecessor problem, Mihai Pătrașcu and Mikkel Thorup showed a lower bound for the optimal search time in the cell-probe model, expressed as the minimum of several terms that depend on the number of keys, the key length, the word length, and the space available to the data structure.[7]

Notes and References

  1. Beame, Paul; Fich, Faith (Faith Ellen). "Optimal Bounds for the Predecessor Problem and Related Problems". Journal of Computer and System Sciences 65 (1), August 2002, pp. 38–72. doi:10.1006/jcss.2002.1822.
  2. Rahman, Naila; Cole, Richard; Raman, Rajeev. "Optimized Predecessor Data Structures for Internal Memory". International Workshop on Algorithm Engineering, 17 August 2001, pp. 67–78.
  3. Willard, Dan. "Log-logarithmic worst-case range queries are possible in space Θ(n)". Information Processing Letters 17 (2), 24 August 1983, pp. 81–84. doi:10.1016/0020-0190(83)90075-3.
  4. Fredman, Michael; Willard, Dan. "Blasting through the information theoretic barrier with fusion trees". Symposium on Theory of Computing, 1990, pp. 1–7.
  5. .
  6. .
  7. Pătraşcu, Mihai; Thorup, Mikkel. "Time-space trade-offs for predecessor search". Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing, 21 May 2006, pp. 232–240. doi:10.1145/1132516.1132551. arXiv:cs/0603043. ISBN 1595931341.