Persistent array

In computer science, and more precisely regarding data structures, a persistent array is a persistent data structure with properties similar to a (non-persistent) array. That is, after a value's update in a persistent array, there exist two persistent arrays: one persistent array in which the update is taken into account, and one which is equal to the array before the update.

Difference between persistent arrays and arrays

An array

    ar = [e_0, ..., e_{n-1}]

is a data structure with a fixed number n of elements e_0, ..., e_{n-1}. It is expected that, given the array ar and an index 0 ≤ i < n, the value e_i can be retrieved quickly. This operation is called a lookup. Furthermore, given the array ar, an index 0 ≤ i < n and a new value v, a new array ar2 with content

    [e_0, ..., e_{i-1}, v, e_{i+1}, ..., e_{n-1}]

can be created quickly. This operation is called an update. The main difference between persistent and non-persistent arrays is that, in non-persistent arrays, the array ar is destroyed during the creation of ar2.

For example, consider the following pseudocode:

    array = [0, 0, 0]
    updated_array = array.update(0, 8)
    other_array = array.update(1, 3)
    last_array = updated_array.update(2, 5)

At the end of execution, the value of array is still [0, 0, 0], the value of updated_array is [8, 0, 0], the value of other_array is [0, 3, 0], and the value of last_array is [8, 0, 5].
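As an illustration of these semantics, here is a naive copy-on-write sketch in OCaml (the function name update is hypothetical): every update copies the whole backing array, which is correct but costs O(n) time and space per update.

```ocaml
(* Naive fully persistent array: update copies the whole backing array,
   so every version stays valid, at O(n) time and space per update. *)
let update (a : 'a array) (i : int) (v : 'a) : 'a array =
  let a' = Array.copy a in
  a'.(i) <- v;
  a'

let () =
  let array = [| 0; 0; 0 |] in
  let updated_array = update array 0 8 in
  let other_array = update array 1 3 in
  let last_array = update updated_array 2 5 in
  assert (array = [| 0; 0; 0 |]);
  assert (updated_array = [| 8; 0; 0 |]);
  assert (other_array = [| 0; 3; 0 |]);
  assert (last_array = [| 8; 0; 5 |])
```

The implementations discussed later in this article avoid this per-update copy.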

There exist two kinds of persistent arrays. A persistent array may be either partially or fully persistent. A fully persistent array may be updated an arbitrary number of times, while a partially persistent array may be updated at most once. In our previous example, if array were only partially persistent, the creation of other_array would be forbidden; however, the creation of last_array would still be valid. Indeed, updated_array is an array distinct from array and has never been updated before the creation of last_array.

Lower Bound on Persistent Array Lookup Time

Given that non-persistent arrays support both updates and lookups in constant time, it is natural to ask whether the same is possible with persistent arrays. The following theorem shows that, under mild assumptions about the space complexity of the array, lookups must take Ω(log log n) time in the worst case, regardless of update time, in the cell-probe model.

Implementations

In this section, n is the number of elements of the array, and m is the number of updates.

Worst case log-time

The most straightforward implementation of a fully persistent array uses an arbitrary persistent map, whose keys are the numbers from 0 to n − 1. A persistent map may be implemented using a persistent balanced tree, in which case both updates and lookups take O(log n) time. This implementation is optimal in the pointer machine model.
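A minimal sketch of this approach, assuming OCaml's standard Map module (a balanced binary search tree) as the persistent map; the names make, get and update are hypothetical:

```ocaml
(* A persistent "array" as a persistent map from indices to values.
   Map.Make produces a balanced tree, so get and update are both
   O(log n), and update returns a new version, leaving the old intact. *)
module IntMap = Map.Make (Int)

type 'a parray = 'a IntMap.t

let make (n : int) (v : 'a) : 'a parray =
  let rec fill i m = if i >= n then m else fill (i + 1) (IntMap.add i v m) in
  fill 0 IntMap.empty

let get (a : 'a parray) (i : int) : 'a = IntMap.find i a

let update (a : 'a parray) (i : int) (v : 'a) : 'a parray = IntMap.add i v a
```

Because IntMap.add shares all untouched subtrees with the old tree, each update allocates only O(log n) new nodes.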

Shallow binding

A fully persistent array may be implemented using an array and the so-called Baker's trick.[1] This implementation is used in the OCaml module parray.ml[2] by Jean-Christophe Filliâtre.

In order to define this implementation, a few other definitions must be given. An initial array is an array that is not generated by an update on another array. A child of an array ar is an array of the form ar.update(i, v), and ar is the parent of ar.update(i, v). A descendant of an array ar is either ar or a descendant of a child of ar. The initial array of an array ar is either ar if ar is initial, or the initial array of the parent of ar. That is, the initial array of ar is the unique array init such that

    ar = init.update(i_0, v_0). ... .update(i_m, v_m),

with init initial, i_0, ..., i_m an arbitrary sequence of indexes and v_0, ..., v_m an arbitrary sequence of values. A family of arrays is thus a set of arrays containing an initial array and all of its descendants. Finally, the tree of a family of arrays is the tree whose nodes are the arrays, and with an edge e from ar to each of its children ar.update(i, v).

A persistent array using Baker's trick consists of a pair of an actual array called array and the tree of arrays. This tree admits an arbitrary root, not necessarily the initial array. The root may be moved to an arbitrary node of the tree. Changing the root from root to an arbitrary node ar takes time proportional to the depth of ar, that is, to the distance between root and ar. Similarly, looking up a value takes time proportional to the distance between the array and the root of its family. Thus, if the same array ar may be looked up multiple times, it is more efficient to move the root to ar before doing the lookups. Finally, updating an array only takes constant time.

Technically, given two adjacent arrays ar1 and ar2, with ar1 closer to the root than ar2, the edge from ar1 to ar2 is labelled by (i, ar2[i]), where i is the only position whose value differs between ar1 and ar2.

Accessing an element i of an array ar is done as follows. If ar is the root, then ar[i] equals root[i]. Otherwise, let e be the edge leaving ar toward the root. If the label of e is (i, v), then ar[i] equals v. Otherwise, let ar2 be the other node of the edge e. Then ar[i] equals ar2[i]. The computation of ar2[i] is done recursively using the same definition.

The creation of ar.update(i, v) consists of adding a new node ar2 to the tree, and an edge e from ar to ar2 labelled by (i, v).

Finally, moving the root to a node ar is done as follows. If ar is already the root, there is nothing to do. Otherwise, let e be the edge leaving ar toward the current root, (i, v) its label, and ar2 the other end of e. Moving the root to ar is done by first moving the root to ar2, changing the label of e to (i, ar2[i]), and changing array[i] to v.
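The scheme above can be sketched in OCaml in the spirit of Filliâtre's parray.ml; this is a simplified illustration under the assumptions just described, not the module's actual code. A version is a mutable reference that is either the root, holding the one real array, or a Diff recording the single index at which it differs from the version one step closer to the root:

```ocaml
(* Shallow binding (Baker's trick), simplified sketch. *)
type 'a t = 'a data ref
and 'a data =
  | Arr of 'a array           (* this version is the current root *)
  | Diff of int * 'a * 'a t   (* same as parent, except index i holds v *)

let make (n : int) (v : 'a) : 'a t = ref (Arr (Array.make n v))

(* O(1): the new version is a one-cell diff hanging off [t]. *)
let update (t : 'a t) (i : int) (v : 'a) : 'a t = ref (Diff (i, v, t))

(* Move the root to [t] by walking to the current root and reversing
   each edge on the way back: the actual array's cell is overwritten,
   and the displaced old value is stored on the reversed edge.
   Cost: proportional to the distance from [t] to the root. *)
let rec reroot (t : 'a t) : unit =
  match !t with
  | Arr _ -> ()
  | Diff (i, v, parent) ->
      reroot parent;
      (match !parent with
       | Arr a ->
           let old = a.(i) in
           a.(i) <- v;
           t := Arr a;
           parent := Diff (i, old, t)
       | Diff _ -> assert false)  (* parent is the root after reroot *)

(* Lookup reroots first, so repeated lookups on one version are O(1). *)
let get (t : 'a t) (i : int) : 'a =
  reroot t;
  match !t with Arr a -> a.(i) | Diff _ -> assert false
```

With this representation, update is constant time, while get costs the distance to the root, matching the bounds stated next.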

Updates take O(1) time. Lookups take O(1) time if the root is the array being looked up, but Θ(m) time in the worst case.

Expected amortized log-log-time

In 1989, Dietz[3] gave an implementation of fully persistent arrays using O(m + n) space such that lookups can be done in O(log log m) worst-case time, and updates can be done in O(log log m) expected amortized time. By the lower bound from the previous section, this time complexity for lookup is optimal when m = n^γ for γ ∈ (1, 2]. This implementation is related to the order-maintenance problem and involves vEB trees, one for the entire array and one for each index.

Straka showed that the times for both operations can be (slightly) improved to O(log log min(m, n)).

Worst case log-log-time

Straka showed how to achieve O((log log m)^2 / log log log m) worst-case time and linear (O(m + n)) space, or O(log log m) worst-case time and super-linear space. It remains open whether it is possible to achieve worst-case time O(log log m) subject to linear space.


Notes and References

  1. Conchon, Sylvain; Filliâtre, Jean-Christophe. "A Persistent Union-Find Data Structure". 2007. ACM, New York, NY, USA. pp. 37–46. ISBN 978-1-59593-676-9.
  2. Filliâtre, Jean-Christophe. "Persistent-array implementation" (web page).
  3. Dietz, Paul F. "Fully persistent arrays". 1989. Proceedings of the Workshop on Algorithms and Data Structures. pp. 67–74. doi:10.1007/3-540-51542-9_8. CiteSeerX 10.1.1.621.1599.