Iterated logarithm explained

In computer science, the iterated logarithm of $n$, written $\log^* n$ (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1. The simplest formal definition is the result of this recurrence relation:

$$\log^* n := \begin{cases} 0 & \text{if } n \le 1; \\ 1 + \log^*(\log n) & \text{if } n > 1 \end{cases}$$
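For example, a minimal sketch of this recurrence in Python (log_star is a hypothetical helper name chosen here, not a standard-library function) simply iterates the logarithm until the value drops to 1 or below and counts the steps:

    import math

    def log_star(n, base=2):
        # Count how many times the base-`base` logarithm must be applied to n
        # before the result is <= 1.  Mathematically well defined for any
        # base greater than e**(1/e) ~= 1.444667; base 2 is the common choice
        # in computer science.
        count = 0
        x = n
        while x > 1:
            # math.log2 is used for base 2 so that exact powers of two are
            # handled without floating-point drift.
            x = math.log2(x) if base == 2 else math.log(x, base)
            count += 1
        return count

    # log*(1) = 0, log*(2) = 1, log*(16) = 3 (16 -> 4 -> 2 -> 1), log*(65536) = 4
    print([log_star(n) for n in (1, 2, 16, 65536)])   # [0, 1, 3, 4]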

In computer science, $\lg^*$ is often used to indicate the binary iterated logarithm, which iterates the binary logarithm (with base 2) instead of the natural logarithm (with base $e$). Mathematically, the iterated logarithm is well defined for any base greater than $e^{1/e} \approx 1.444667$, not only for base 2 and base $e$. The "super-logarithm" function $\mathrm{slog}_b(n)$ is "essentially equivalent" to the base-$b$ iterated logarithm (although differing in minor details of rounding) and forms an inverse to the operation of tetration.[1]
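As a concrete illustration of that inverse relationship (the particular numbers below are chosen here for exposition, not taken from the source), the height-4 base-2 tower gives

$${}^{4}2 = 2^{2^{2^{2}}} = 65536, \qquad \mathrm{slog}_2(65536) = 4 = \log_2^* 65536,$$

since applying $\log_2$ four times yields $65536 \to 16 \to 4 \to 2 \to 1$.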

Analysis of algorithms

The iterated logarithm is useful in the analysis of algorithms and computational complexity, appearing in the time and space complexity bounds of some algorithms, such as:

  * finding the Delaunay triangulation of a set of points when its Euclidean minimum spanning tree is already known, in randomized $O(n \log^* n)$ time;[2]
  * Alon and Azar's parallel algorithm for finding an approximate maximum;[3]
  * Cole and Vishkin's distributed algorithm ("deterministic coin tossing") for 3-coloring an $n$-cycle, which uses $O(\log^* n)$ synchronous communication rounds.[4]

The iterated logarithm grows at an extremely slow rate, much slower than the logarithm itself or any fixed number of repeated applications of it. This is because tetration grows much faster than iterated exponentiation:

$${}^{y}b = \underbrace{b^{b^{\cdot^{\cdot^{b}}}}}_{y} \;\gg\; \underbrace{b^{b^{\cdot^{\cdot^{b^{x}}}}}}_{n}$$

so the inverse grows correspondingly more slowly:

$$\log_b^{*} x \;\ll\; \underbrace{\log_b \log_b \cdots \log_b}_{n}\, x.$$

For all values of $n$ relevant to counting the running times of algorithms implemented in practice (i.e., $n \le 2^{65536}$, which is far more than the estimated number of atoms in the known universe), the iterated logarithm with base 2 has a value no more than 5.
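For instance, using the log_star sketch above (a hypothetical helper, not part of any library), this bound can be checked directly: $\log_2(2^{65536}) = 65536$, and four further applications reach 1.

    # Continuing the hypothetical log_star sketch from the definition section:
    print(log_star(2**65536))   # 5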

The base-2 iterated logarithm:

    x                     lg* x
    (−∞, 1]               0
    (1, 2]                1
    (2, 4]                2
    (4, 16]               3
    (16, 65536]           4
    (65536, 2^65536]      5

Higher bases give smaller iterated logarithms.

Other applications

The iterated logarithm is closely related to the generalized logarithm function used in symmetric level-index arithmetic. The additive persistence of a number, the number of times one must replace the number by the sum of its digits before reaching its digital root, is $O(\log^* n)$.
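A minimal sketch of additive persistence in Python (additive_persistence is a hypothetical helper name, in the same style as the earlier sketch):

    def additive_persistence(n):
        # Count how many times n must be replaced by the sum of its decimal
        # digits before only a single digit (its digital root) remains.
        steps = 0
        while n >= 10:
            n = sum(int(d) for d in str(n))
            steps += 1
        return steps

    # 199 -> 19 -> 10 -> 1, so the additive persistence of 199 is 3.
    print(additive_persistence(199))   # 3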

In computational complexity theory, Santhanam[5] shows that the computational resources DTIME (computation time for a deterministic Turing machine) and NTIME (computation time for a non-deterministic Turing machine) are distinct up to $n\sqrt{\log^* n}$.

Notes and References

  1. Furuya, Isamu; Kida, Takuya (2019). "Compaction of Church numerals". Algorithms 12 (8): 159. doi:10.3390/a12080159.
  2. Devillers, Olivier (March 1992). "Randomization yields simple O(n log* n) algorithms for difficult Ω(n) problems". International Journal of Computational Geometry & Applications 2 (1): 97–111. doi:10.1142/S021819599200007X. arXiv:cs/9810007.
  3. Alon, Noga; Azar, Yossi (April 1989). "Finding an approximate maximum". SIAM Journal on Computing 18 (2): 258–267. doi:10.1137/0218017.
  4. Cole, Richard; Vishkin, Uzi (July 1986). "Deterministic coin tossing with applications to optimal parallel list ranking". Information and Control 70 (1): 32–53. doi:10.1016/S0019-9958(86)80023-7.
  5. Santhanam, Rahul (2001). "On separators, segregators and time versus space". Proceedings of the 16th Annual IEEE Conference on Computational Complexity (Chicago, Illinois, USA, June 18–21, 2001), pp. 286–294. doi:10.1109/CCC.2001.933895. ISBN 0-7695-1053-1. https://scholar.archive.org/work/jsi2cizbpbcsrkq3annbprthsm/access/wayback/http://homepages.inf.ed.ac.uk/rsanthan/Papers/segsepfinal.pdf
  5. Santhanam . Rahul . On separators, segregators and time versus space . https://scholar.archive.org/work/jsi2cizbpbcsrkq3annbprthsm/access/wayback/http://homepages.inf.ed.ac.uk/rsanthan/Papers/segsepfinal.pdf . 10.1109/CCC.2001.933895 . 286–294 . . Proceedings of the 16th Annual IEEE Conference on Computational Complexity, Chicago, Illinois, USA, June 18-21, 2001 . Computational Complexity Conference . 2001. 0-7695-1053-1 .