The ratio of uniforms is a method initially proposed by Kinderman and Monahan in 1977[1] for pseudo-random number sampling, that is, for drawing random samples from a statistical distribution. Like rejection sampling and inverse transform sampling, it is an exact simulation method. The basic idea of the method is to use a change of variables to create a bounded set, which can then be sampled uniformly to generate random variables following the original distribution. One feature of this method is that the distribution to sample is only required to be known up to an unknown multiplicative factor, a common situation in computational statistics and statistical physics.
A convenient technique to sample a statistical distribution is rejection sampling. When the probability density function of the distribution is bounded and has finite support, one can define a bounding box around it (a uniform proposal distribution), draw uniform samples in the box and return only the x coordinates of the points that fall below the function (see graph). As a direct consequence of the fundamental theorem of simulation,[2] the returned samples are distributed according to the original distribution.
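As an illustration, the box-based rejection step can be sketched as follows (a minimal sketch, not tied to any particular distribution from the article; the density, its support and its upper bound are supplied by the caller):

```python
import random

def box_rejection_sample(pdf, x_min, x_max, pdf_max, n):
    """Draw n samples from a bounded pdf supported on [x_min, x_max]
    by uniform sampling in the bounding box [x_min, x_max] x [0, pdf_max]."""
    samples = []
    while len(samples) < n:
        x = random.uniform(x_min, x_max)   # horizontal coordinate of the point
        y = random.uniform(0.0, pdf_max)   # vertical coordinate of the point
        if y <= pdf(x):                    # keep only points under the graph of the pdf
            samples.append(x)
    return samples

# Example: triangular density on [0, 2], with maximum 1 at x = 1.
triangle = lambda x: x if x <= 1 else 2 - x
xs = box_rejection_sample(triangle, 0.0, 2.0, 1.0, 1000)
```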
When the support of the distribution is infinite, it is impossible to draw a rectangular bounding box containing the graph of the function. One can still use rejection sampling, but with a non-uniform proposal distribution. It can be delicate to choose an appropriate proposal distribution,[3] and one also has to know how to efficiently sample this proposal distribution.
The method of the ratio of uniforms offers a solution to this problem: it essentially amounts to using as proposal distribution the distribution of the ratio of two uniform random variables.
The statement and the proof are adapted from the presentation by Gobet.[4]
Theorem. Let X = (X_1, X_2, \ldots, X_d) be a random variable with density p(x_1, x_2, \ldots, x_d), and let f be a non-negative function proportional to p (for instance, p itself known only up to a multiplicative constant). For r > 0, define the set

A_{f,r} = \left\{ (u, v_1, \ldots, v_d) \in \mathbb{R}^{d+1} : 0 < u \leq f\left( \frac{v_1}{u^r}, \ldots, \frac{v_d}{u^r} \right)^{\frac{1}{rd+1}} \right\}.

If (U, V_1, \ldots, V_d) is uniformly distributed in A_{f,r}, then \left( \frac{V_1}{U^r}, \ldots, \frac{V_d}{U^r} \right) has density p, that is, it is distributed as X.

The set A_{f,r} is bounded as soon as f(x_1, x_2, \ldots, x_d)^{\frac{1}{rd+1}} and x_i\, f(x_1, x_2, \ldots, x_d)^{\frac{r}{rd+1}} are bounded for every i. In that case A_{f,r} is contained in the rectangular bounding box

\tilde{A}_{f,r} = \left[ 0, \; \sup_x f(x)^{\frac{1}{rd+1}} \right] \times \prod_{i=1}^{d} \left[ \inf_x x_i\, f(x)^{\frac{r}{rd+1}}, \; \sup_x x_i\, f(x)^{\frac{r}{rd+1}} \right].

Since \tilde{A}_{f,r} is easy to sample uniformly, uniform samples in A_{f,r} can be obtained by rejection sampling: draw points uniformly in \tilde{A}_{f,r} and keep only those that fall inside A_{f,r}. The parameter r changes the shape of A_{f,r}, and can therefore be tuned to improve the acceptance rate of this rejection step.

In the one-dimensional case, the boundaries of \tilde{A}_{f,r} are obtained from the extrema, as x spans the support of X, of

(u, v) = \left( f(x)^{\frac{1}{r+1}}, \; x\, f(x)^{\frac{r}{r+1}} \right).
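In the one-dimensional case, this rejection scheme can be sketched as follows (an illustrative implementation, not from the original presentation; the caller is assumed to supply the bounds of \tilde{A}_{f,r}, and f only needs to be proportional to the target density):

```python
import math
import random

def ratio_of_uniforms_sample(f, r, u_max, v_min, v_max, n):
    """Draw n samples from the (possibly unnormalized) density f with the
    ratio-of-uniforms method, given the bounding box
    [0, u_max] x [v_min, v_max] of the set A_{f,r}."""
    samples = []
    while len(samples) < n:
        u = random.uniform(0.0, u_max)
        v = random.uniform(v_min, v_max)
        # Accept (u, v) if it lies in A_{f,r}: u <= f(v / u^r)^(1 / (r + 1)).
        if u > 0 and u <= f(v / u**r) ** (1.0 / (r + 1)):
            samples.append(v / u**r)  # accepted ratios follow f / integral of f
    return samples

# Example: unnormalized standard normal density, r = 1.
# sup sqrt(f) = 1 and sup |x| sqrt(f(x)) = sqrt(2 / e) give the bounding box.
f = lambda x: math.exp(-x * x / 2.0)
xs = ratio_of_uniforms_sample(f, 1, 1.0, -math.sqrt(2 / math.e), math.sqrt(2 / math.e), 1000)
```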
The statement above is parametrized only by r. In dimension d = 1, a more general version exists: let g : \mathbb{R}_+ \to \mathbb{R}_+ be a strictly increasing, differentiable function with g(0) = 0, and define

A_{f,g} = \left\{ (u, v) : 0 < u, \; g(u) \leq f\left( \frac{v}{g'(u)} \right) \right\}.

If (U, V) is uniformly distributed in A_{f,g}, then \frac{V}{g'(U)} has density p.
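For instance, taking g(u) = \frac{u^{r+1}}{r+1} (so that g'(u) = u^r), the condition g(u) \leq f\left( \frac{v}{g'(u)} \right) becomes u \leq \left( (r+1)\, f\left( \frac{v}{u^r} \right) \right)^{\frac{1}{r+1}}, which is the defining condition of A_{f,r} (with d = 1) applied to the rescaled function (r+1) f; since f only matters up to a multiplicative constant, one recovers the parametrization by r, with \frac{V}{g'(U)} = \frac{V}{U^r}.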
Assume that we want to sample the exponential distribution p(x) = \lambda e^{-\lambda x} with the ratio of uniforms method, taking r = 1.
We can start by constructing the set

A_{f,1} = \left\{ (u, v) \in \mathbb{R}^2 : 0 \leq u \leq \sqrt{ p\left( \frac{v}{u} \right) } \right\}.
The condition 0 \leq u \leq \sqrt{ p\left( \frac{v}{u} \right) } is equivalent to

0 \leq v \leq -\frac{u}{\lambda} \ln\left( \frac{u^2}{\lambda} \right).
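Indeed, for u > 0 and v \geq 0, squaring both sides gives u^2 \leq \lambda e^{-\lambda v / u}, that is \ln\left( \frac{u^2}{\lambda} \right) \leq -\frac{\lambda v}{u}, and multiplying by \frac{u}{\lambda} > 0 yields the upper bound on v; the constraint v \geq 0 comes from the support of the exponential distribution, since p\left( \frac{v}{u} \right) = 0 for \frac{v}{u} < 0.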
This inequality also allows us to determine the rectangular bounding box \tilde{A}_{f,1} containing A_{f,1}. Writing

g(u) = -\frac{u}{\lambda} \ln\left( \frac{u^2}{\lambda} \right),

we have g\left( \sqrt{\lambda} \right) = 0 and g'\left( \frac{\sqrt{\lambda}}{e} \right) = 0, so that g reaches its maximum g\left( \frac{\sqrt{\lambda}}{e} \right) = \frac{2}{e \sqrt{\lambda}} on \left[ 0, \sqrt{\lambda} \right]. Therefore

\tilde{A}_{f,1} = \left[ 0, \sqrt{\lambda} \right] \times \left[ 0, \frac{2}{e \sqrt{\lambda}} \right].
From here, we can draw pairs of uniform random variables U \sim \mathrm{Unif}\left( 0, \sqrt{\lambda} \right) and V \sim \mathrm{Unif}\left( 0, \frac{2}{e \sqrt{\lambda}} \right), and keep only the pairs satisfying

u \leq \sqrt{ \lambda e^{-\lambda v / u} },

i.e. the pairs that fall inside A_{f,1}. By the theorem above, the ratios V / U of the accepted pairs are distributed according to the exponential distribution of parameter \lambda.
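A direct translation of this procedure into code could look as follows (an illustrative sketch; the rate lam = 2.0 and the sample size are arbitrary choices):

```python
import math
import random

def sample_exponential_rou(lam, n):
    """Sample n values from the exponential distribution of rate lam
    with the ratio-of-uniforms method, using r = 1."""
    u_max = math.sqrt(lam)                    # sup of u over A_{f,1}
    v_max = 2.0 / (math.e * math.sqrt(lam))   # sup of v over A_{f,1}
    samples = []
    while len(samples) < n:
        u = random.uniform(0.0, u_max)
        v = random.uniform(0.0, v_max)
        # Accept (u, v) if it lies in A_{f,1}: u <= sqrt(lam * exp(-lam * v / u)).
        if u > 0 and u <= math.sqrt(lam * math.exp(-lam * v / u)):
            samples.append(v / u)             # accepted ratios are Exp(lam)-distributed
    return samples

xs = sample_exponential_rou(lam=2.0, n=10_000)
print(sum(xs) / len(xs))  # empirical mean, should be close to 1 / lam = 0.5
```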
Consider the mixture of two normal distributions

\mathcal{D} = 0.6\, \mathcal{N}(\mu = -1, \sigma = 2) + 0.4\, \mathcal{N}(\mu = 3, \sigma = 1).
To apply the method of the ratio of uniforms with a given value of r, one first determines the rectangular bounding box \tilde{A}_{f,r} enclosing A_{f,r}. This can be done numerically, by computing the extrema of

u(x) = f(x)^{\frac{1}{r+1}} \quad \text{and} \quad v(x) = x\, f(x)^{\frac{r}{r+1}}

over the values of x where f is not negligible. One then draws pairs (u, v) uniformly in \tilde{A}_{f,r}, keeps the pairs that fall inside A_{f,r}, and returns the corresponding ratios v / u^r, which are distributed according to \mathcal{D}.
It is possible to optimize the acceptance ratio by adjusting the value of r.
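The following sketch illustrates this procedure for the mixture above (an illustrative implementation; the grid used to bound \tilde{A}_{f,r} numerically, the sample size and the tested values of r are arbitrary choices):

```python
import math
import random

def f(x):
    """Unnormalized density of the mixture 0.6 N(mu=-1, sigma=2) + 0.4 N(mu=3, sigma=1)."""
    d1 = (x + 1.0) / 2.0
    d2 = x - 3.0
    n1 = math.exp(-d1 * d1 / 2.0) / (2.0 * math.sqrt(2.0 * math.pi))
    n2 = math.exp(-d2 * d2 / 2.0) / math.sqrt(2.0 * math.pi)
    return 0.6 * n1 + 0.4 * n2

def bounding_box(f, r, grid):
    """Bound A_{f,r} numerically via u(x) = f(x)^(1/(r+1)) and v(x) = x f(x)^(r/(r+1))."""
    u_max = max(f(x) ** (1.0 / (r + 1)) for x in grid)
    vs = [x * f(x) ** (r / (r + 1)) for x in grid]
    return u_max, min(vs), max(vs)

def sample_mixture(r, n, grid):
    """Sample n values from the mixture and report the observed acceptance ratio."""
    u_max, v_min, v_max = bounding_box(f, r, grid)
    samples, tries = [], 0
    while len(samples) < n:
        tries += 1
        u = random.uniform(0.0, u_max)
        v = random.uniform(v_min, v_max)
        if u > 0 and u <= f(v / u**r) ** (1.0 / (r + 1)):  # (u, v) inside A_{f,r}
            samples.append(v / u**r)
    return samples, len(samples) / tries

grid = [x / 100.0 for x in range(-1500, 1501)]  # x from -15 to 15
for r in (0.5, 1.0, 2.0):
    _, acceptance = sample_mixture(r, 5000, grid)
    print(f"r = {r}: acceptance ratio ~ {acceptance:.2f}")
```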