In mathematics, the Gauss–Kuzmin distribution is a discrete probability distribution that arises as the limit probability distribution of the coefficients in the continued fraction expansion of a random variable uniformly distributed in (0, 1). The distribution is named after Carl Friedrich Gauss, who derived it around 1800,[1] and Rodion Kuzmin, who gave a bound on the rate of convergence in 1928.[2][3] It is given by the probability mass function
p(k) = -\log_2\left(1 - \frac{1}{(1+k)^2}\right)~.
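As a quick numerical sketch of this mass function (the function name is my own), the probabilities can be computed directly and their partial sums checked against 1:

```python
import math

def gauss_kuzmin_pmf(k: int) -> float:
    """P{coefficient = k} = -log2(1 - 1/(1+k)^2) for k = 1, 2, ..."""
    return -math.log2(1.0 - 1.0 / (1.0 + k) ** 2)

# p(1) = log2(4/3) ~ 0.4150, p(2) = log2(9/8) ~ 0.1699, ...
# The terms decay like 1/k^2, and the partial sums telescope toward 1.
total = sum(gauss_kuzmin_pmf(k) for k in range(1, 100000))
```

Writing p(k) = log2((k+1)/k) + log2((k+1)/(k+2)) shows the sum over all k telescopes to exactly 1, so `total` falls just short of 1 at any finite cutoff.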
Let
x = \cfrac{1}{k_1 + \cfrac{1}{k_2 + \cdots}}
be the continued fraction expansion of a random number x uniformly distributed in (0, 1). Then
\lim_{n \to \infty} P\left\{ k_n = k \right\} = -\log_2\left(1 - \frac{1}{(k+1)^2}\right)~.
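This limit law can be checked by Monte Carlo simulation. The sketch below (helper name my own) extracts continued-fraction coefficients of uniform random numbers by repeatedly taking floor and reciprocal, then compares the empirical frequency of a deep coefficient equaling 1 with the predicted log2(4/3) ≈ 0.415; the depth is kept modest so double-precision error stays negligible.

```python
import math
import random

def cf_coefficients(x: float, n: int):
    """First n continued-fraction coefficients k_1..k_n of x in (0, 1)."""
    ks = []
    for _ in range(n):
        if x == 0.0:
            break                  # rational tail exhausted
        k = int(1.0 / x)           # k_i = floor(1/x)
        ks.append(k)
        x = 1.0 / x - k            # keep the fractional part (the Gauss map)
    return ks

random.seed(0)
trials = 20000
depth = 10                         # inspect the 10th coefficient
counts = {}
for _ in range(trials):
    ks = cf_coefficients(random.random(), depth)
    if len(ks) == depth:
        counts[ks[-1]] = counts.get(ks[-1], 0) + 1
freq_1 = counts[1] / trials        # empirical P{k_10 = 1}
# Gauss-Kuzmin predicts -log2(1 - 1/4) = log2(4/3) ~ 0.4150
```

Already at n = 10 the empirical frequency agrees with the limit to within Monte Carlo noise, consistent with the exponential convergence rates discussed below.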
Equivalently, let
x_n = \cfrac{1}{k_{n+1} + \cfrac{1}{k_{n+2} + \cdots}}~;
then
\Delta_n(s) = P\left\{ x_n \leq s \right\} - \log_2(1+s)
tends to zero as n tends to infinity.
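Since x_n is obtained by applying the Gauss map T(x) = 1/x − ⌊1/x⌋ to x a total of n times, Δ_n(s) can be estimated empirically. The sketch below (names my own) starts from uniform samples, where Δ_0(s) is visibly nonzero, and watches the discrepancy shrink after a few iterations:

```python
import math
import random

def gauss_map_iterate(x: float, n: int) -> float:
    """x_n from the text: apply T(x) = 1/x - floor(1/x) to x n times."""
    for _ in range(n):
        if x == 0.0:               # a rational x terminates; stay at 0
            return 0.0
        x = 1.0 / x
        x -= math.floor(x)
    return x

random.seed(1)
s = 0.5
samples = [random.random() for _ in range(50000)]
deltas = {}
for n in (0, 1, 5):
    hits = sum(gauss_map_iterate(x, n) <= s for x in samples)
    deltas[n] = hits / len(samples) - math.log2(1 + s)
# deltas[0] ~ 0.5 - log2(1.5) ~ -0.085, while |deltas[n]| shrinks rapidly
```

At n = 5 the discrepancy is already below the Monte Carlo noise floor, in line with the exponential bounds of Kuzmin and Lévy stated next.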
In 1928, Kuzmin gave the bound
|\Delta_n(s)| \leq C \exp(-\alpha\sqrt{n})~.
In 1929, Paul Lévy[4] improved it to
|\Delta_n(s)| \leq C\,(0.7)^n~.
Later, Eduard Wirsing showed[5] that, for λ = 0.30366... (the Gauss–Kuzmin–Wirsing constant), the limit
\Psi(s) = \lim_{n \to \infty} \frac{\Delta_n(s)}{(-\lambda)^n}
exists for every s in [0, 1], and the function Ψ(s) is analytic and satisfies Ψ(0) = Ψ(1) = 0. Further bounds were proved by K. I. Babenko.[6]