In probability theory and statistics, the continuous uniform distributions or rectangular distributions are a family of symmetric probability distributions. Such a distribution describes an experiment where there is an arbitrary outcome that lies between certain bounds.[1] The bounds are defined by the parameters, a and b, which are the minimum and maximum values. The interval can either be closed (i.e. [a,b]) or open (i.e. (a,b)). Therefore, the distribution is often abbreviated U(a,b), where U stands for uniform distribution. The difference between the bounds defines the interval length; all intervals of the same length on the distribution's support are equally probable. It is the maximum entropy probability distribution for a random variable X under no constraint other than that it is contained in the distribution's support.
The probability density function of the continuous uniform distribution is
f(x) = \begin{cases} \dfrac{1}{b-a} & \text{for } a \le x \le b, \\[8pt] 0 & \text{for } x < a \text{ or } x > b. \end{cases}
The values of f(x) at the two boundaries a and b are usually unimportant, because they do not alter the value of \int_c^d f(x)\,dx over any interval [c,d], nor of \int x f(x)\,dx or any higher moment. Sometimes they are chosen to be zero, and sometimes chosen to be \tfrac{1}{b-a}. The latter is appropriate in the context of estimation by the method of maximum likelihood. In the context of Fourier analysis, one may take the value of f(a) or f(b) to be \tfrac{1}{2(b-a)}, since then the inverse transform of many integral transforms of this uniform function will yield back the function itself, rather than a function which is equal "almost everywhere". Any probability density function integrates to 1, so the probability density function of the continuous uniform distribution is graphically portrayed as a rectangle where b-a is the base length and \tfrac{1}{b-a} is the height. As the base length increases, the height (the density at any particular value within the distribution boundaries) decreases.
In terms of mean \mu and variance \sigma^2, the probability density function of the continuous uniform distribution is:

f(x) = \begin{cases} \dfrac{1}{2\sigma\sqrt{3}} & \text{for } -\sigma\sqrt{3} \le x-\mu \le \sigma\sqrt{3}, \\[8pt] 0 & \text{otherwise}. \end{cases}
The cumulative distribution function of the continuous uniform distribution is:
F(x) = \begin{cases} 0 & \text{for } x < a, \\[8pt] \dfrac{x-a}{b-a} & \text{for } a \le x \le b, \\[8pt] 1 & \text{for } x > b. \end{cases}
Its inverse is:
F^{-1}(p) = a + p(b-a) \quad \text{for } 0 < p < 1.
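These three formulas are short to implement; the following Python sketch (function names are illustrative, not from any particular library) evaluates the density, distribution function, and quantile function of U(a,b):

    def uniform_pdf(x, a, b):
        # Density of U(a, b): 1/(b - a) on [a, b], zero elsewhere.
        return 1.0 / (b - a) if a <= x <= b else 0.0

    def uniform_cdf(x, a, b):
        # Distribution function of U(a, b).
        if x < a:
            return 0.0
        if x > b:
            return 1.0
        return (x - a) / (b - a)

    def uniform_inv_cdf(p, a, b):
        # Quantile function: F^{-1}(p) = a + p(b - a) for 0 < p < 1.
        return a + p * (b - a)

    # Round trip: the quantile function inverts the CDF on (0, 1).
    assert abs(uniform_cdf(uniform_inv_cdf(0.25, 2.0, 5.0), 2.0, 5.0) - 0.25) < 1e-12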
In terms of mean \mu and variance \sigma^2, the cumulative distribution function of the continuous uniform distribution is:

F(x) = \begin{cases} 0 & \text{for } x-\mu < -\sigma\sqrt{3}, \\ \dfrac{1}{2}\left(\dfrac{x-\mu}{\sigma\sqrt{3}} + 1\right) & \text{for } -\sigma\sqrt{3} \le x-\mu < \sigma\sqrt{3}, \\ 1 & \text{for } x-\mu \ge \sigma\sqrt{3}; \end{cases}
its inverse is:
F^{-1}(p) = \sigma\sqrt{3}\,(2p-1) + \mu \quad \text{for } 0 \le p \le 1.
For a random variable X \sim U(0,23), find P(2 < X < 18):

P(2 < X < 18) = (18-2) \cdot \dfrac{1}{23-0} = \dfrac{16}{23}.
In a graphical representation of the continuous uniform distribution function [f(x) vs x], the area under the curve within the specified bounds, displaying the probability, is a rectangle; for this example, the base is 16 and the height is \tfrac{1}{23}, giving area \tfrac{16}{23}.
For a random variable X \sim U(0,23), find P(X > 12 \mid X > 8):

P(X > 12 \mid X > 8) = (23-12) \cdot \dfrac{1}{23-8} = \dfrac{11}{15}.
The example above is a conditional probability case for the continuous uniform distribution: given that X > 8 is true, what is the probability that X > 12? Conditional probability changes the sample space, so a new interval length b - a' has to be calculated, where b = 23 and a' = 8.
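Both worked examples reduce to an interval length divided by the (possibly conditioned) support length; a minimal plain-Python sketch (the helper interval_prob is illustrative) reproduces them:

    def interval_prob(lo, hi, a, b):
        # P(lo < X < hi) for X ~ U(a, b), clipping the interval to the support.
        lo, hi = max(lo, a), min(hi, b)
        return max(hi - lo, 0.0) / (b - a)

    # Example 1: X ~ U(0, 23), P(2 < X < 18) = 16/23.
    print(interval_prob(2, 18, 0, 23))   # 0.6956...

    # Example 2: P(X > 12 | X > 8) restricts the support to (8, 23).
    print(interval_prob(12, 23, 8, 23))  # 11/15 = 0.7333...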
The moment-generating function of the continuous uniform distribution is:
M_X = E(e^{tX}) = \int_a^b e^{tx} \, \frac{dx}{b-a} = \frac{e^{tb} - e^{ta}}{t(b-a)} = \frac{B^t - A^t}{t(b-a)},

where B = e^b and A = e^a, from which we may calculate the raw moments m_k:

m_1 = \frac{a+b}{2},

m_2 = \frac{a^2 + ab + b^2}{3},

m_k = \frac{\sum_{i=0}^{k} a^i b^{k-i}}{k+1}.
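The closed form for m_k can be checked against direct numerical integration of x^k against the density; a minimal sketch, assuming SciPy is available (the function name raw_moment is illustrative):

    from scipy.integrate import quad

    def raw_moment(k, a, b):
        # m_k = (sum_{i=0}^{k} a^i b^(k-i)) / (k + 1)
        return sum(a**i * b**(k - i) for i in range(k + 1)) / (k + 1)

    a, b = 1.0, 4.0
    for k in (1, 2, 3):
        numeric, _ = quad(lambda x: x**k / (b - a), a, b)  # integral of x^k times the density
        print(k, raw_moment(k, a, b), numeric)             # the two values agree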
For a random variable following the continuous uniform distribution, the expected value is m_1 = \tfrac{a+b}{2}, and the variance is m_2 - m_1^2 = \tfrac{(b-a)^2}{12}.
For the special case a = -b, that is, for

f(x) = \begin{cases} \dfrac{1}{2b} & \text{for } -b \le x \le b, \\[8pt] 0 & \text{otherwise}; \end{cases}
the moment-generating function reduces to the simple form:
M_X = \frac{\sinh bt}{bt}.
For n \ge 2, the n-th cumulant of the continuous uniform distribution on the interval [-\tfrac{1}{2}, \tfrac{1}{2}] is \tfrac{B_n}{n}, where B_n is the n-th Bernoulli number.
The continuous uniform distribution with parameters a = 0 and b = 1, i.e. U(0,1), is called the standard uniform distribution.
One interesting property of the standard uniform distribution is that if u_1 has a standard uniform distribution, then so does 1 - u_1. This property can be used for generating antithetic variates, among other things. Furthermore, if u_1 is a value sampled from the standard uniform distribution U(0,1), then x = F^{-1}(u_1) generates a random variate x from any distribution with cumulative distribution function F.
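A minimal NumPy sketch of both facts, taking the exponential distribution as an assumed example target (its inverse CDF is -\ln(1-u)/\lambda):

    import numpy as np

    rng = np.random.default_rng(0)
    u = rng.uniform(size=100_000)        # u ~ U(0, 1)

    # 1 - u is again standard uniform: both empirical means are near 1/2.
    print(u.mean(), (1 - u).mean())

    # Inverse transform: x = F^{-1}(u) follows F. For Exponential(lam),
    # F^{-1}(u) = -ln(1 - u) / lam, so the sample mean should be near 1/lam.
    lam = 2.0
    x = -np.log(1 - u) / lam
    print(x.mean())                      # close to 0.5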
As long as the same conventions are followed at the transition points, the probability density function of the continuous uniform distribution may also be expressed in terms of the Heaviside step function as:
f(x) = \frac{\operatorname{H}(x-a) - \operatorname{H}(x-b)}{b-a},
or in terms of the rectangle function as:
f(x) = \frac{1}{b-a} \operatorname{rect}\left(\frac{x - \frac{a+b}{2}}{b-a}\right).
There is no ambiguity at the transition point of the sign function. Using the half-maximum convention at the transition points, the continuous uniform distribution may be expressed in terms of the sign function as:
f(x) = \frac{\operatorname{sgn}(x-a) - \operatorname{sgn}(x-b)}{2(b-a)}.
The mean (first raw moment) of the continuous uniform distribution is:
E(X) = \int_a^b x \, \frac{dx}{b-a} = \frac{b^2 - a^2}{2(b-a)} = \frac{b+a}{2}.
The second raw moment of this distribution is:
E(X^2) = \int_a^b x^2 \, \frac{dx}{b-a} = \frac{b^3 - a^3}{3(b-a)}.
In general, the n-th raw moment of this distribution is:

E(X^n) = \int_a^b x^n \, \frac{dx}{b-a} = \frac{b^{n+1} - a^{n+1}}{(n+1)(b-a)}.
The variance (second central moment) of this distribution is:
V(X) = E\left((X - E(X))^2\right) = \int_a^b \left(x - \frac{a+b}{2}\right)^2 \frac{dx}{b-a} = \frac{(b-a)^2}{12}.
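These closed forms are easy to confirm by simulation; a minimal NumPy check (the endpoints are chosen arbitrarily):

    import numpy as np

    a, b = 2.0, 10.0
    rng = np.random.default_rng(1)
    x = rng.uniform(a, b, size=1_000_000)

    print(x.mean(), (a + b) / 2)        # both near 6.0
    print(x.var(), (b - a) ** 2 / 12)   # both near 5.333...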
Let X_1, \ldots, X_n be an i.i.d. sample from U(0,1), and let X_{(k)} be the k-th order statistic from this sample. Then the probability distribution of X_{(k)} is a beta distribution with parameters k and n-k+1. The expected value is:

\operatorname{E}(X_{(k)}) = \frac{k}{n+1}.
This fact is useful when making Q–Q plots.
The variance is:
\operatorname{V}(X_{(k)}) = \frac{k(n-k+1)}{(n+1)^2 (n+2)}.
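A quick NumPy simulation (sample size and k chosen arbitrarily) comparing the empirical mean and variance of X_{(k)} with the formulas above:

    import numpy as np

    n, k = 10, 3                               # sample size and order-statistic index
    rng = np.random.default_rng(2)
    samples = rng.uniform(size=(200_000, n))
    x_k = np.sort(samples, axis=1)[:, k - 1]   # k-th smallest value in each row

    print(x_k.mean(), k / (n + 1))                               # ~0.2727
    print(x_k.var(), k * (n - k + 1) / ((n + 1)**2 * (n + 2)))   # ~0.01653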
The probability that a continuously uniformly distributed random variable falls within any interval of fixed length is independent of the location of the interval itself (but it is dependent on the interval size \ell), so long as the interval is contained in the distribution's support. Indeed, if X \sim U(a,b) and if [x, x+\ell] is a subinterval of [a,b] with fixed \ell > 0, then:

P(X \in [x, x+\ell]) = \int_x^{x+\ell} \frac{dy}{b-a} = \frac{\ell}{b-a},

which is independent of x.
This distribution can be generalized to more complicated sets than intervals. Let S be a Borel set of positive, finite Lebesgue measure \lambda(S), i.e. 0 < \lambda(S) < +\infty. The uniform distribution on S can be specified by defining the probability density function to be zero outside S and constantly equal to \tfrac{1}{\lambda(S)} on S.
See main article: German tank problem. Given a uniform distribution on [0,b] with unknown b, the minimum-variance unbiased estimator (UMVUE) for the maximum is:

\hat{b}_{UMVU} = \frac{k+1}{k} m = m + \frac{m}{k},

where m is the sample maximum and k is the sample size.
The method of moments estimator is:
\hat{b}_{MM} = 2\bar{X},

where \bar{X} is the sample mean.
The maximum likelihood estimator is:
\hat{b}_{ML} = m,

where m is the sample maximum, also denoted as m = X_{(n)}, the maximum order statistic of the sample.
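The three estimators can be compared empirically; the following NumPy sketch (illustrative, not from the cited sources) estimates their mean and spread for a known true maximum:

    import numpy as np

    b_true, k = 10.0, 20              # true maximum and sample size
    rng = np.random.default_rng(3)
    samples = rng.uniform(0.0, b_true, size=(100_000, k))

    m = samples.max(axis=1)           # sample maximum
    b_umvu = m * (k + 1) / k          # minimum-variance unbiased estimator
    b_mm = 2 * samples.mean(axis=1)   # method of moments
    b_ml = m                          # maximum likelihood (biased low)

    for name, est in [("UMVU", b_umvu), ("MM", b_mm), ("ML", b_ml)]:
        print(name, est.mean(), est.std())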
Given a uniform distribution on [a,b] with unknown a, the maximum likelihood estimator for a is the sample minimum, \hat{a}_{ML} = \min\{X_1, \ldots, X_n\}.
The midpoint of the distribution, \tfrac{a+b}{2}, is both the mean and the median of the uniform distribution. Although both the sample mean and the sample median are unbiased estimators of the midpoint, neither is as efficient as the sample mid-range, i.e. the arithmetic mean of the sample maximum and the sample minimum, which is the UMVU estimator of the midpoint (and also the maximum likelihood estimate).
Let X_1, X_2, X_3, \ldots, X_n be a sample from U[0,L], where L is the population maximum. Then X_{(n)} = \max(X_1, X_2, X_3, \ldots, X_n) has the Lebesgue-Borel density f = \frac{d\Pr_{X_{(n)}}}{d\lambda}:

f(t) = n \frac{1}{L} \left(\frac{t}{L}\right)^{n-1} = n \frac{t^{n-1}}{L^n} \, \mathbb{1}_{[0,L]}(t),

where \mathbb{1}_{[0,L]} is the indicator function of [0,L].
The confidence interval given before is mathematically incorrect, as \Pr([\hat{\theta}, \hat{\theta}+\varepsilon] \ni \theta) \ge 1-\alpha cannot be solved for \varepsilon without knowledge of \theta. However, \Pr([\hat{\theta}, \hat{\theta}(1+\varepsilon)] \ni \theta) \ge 1-\alpha can be solved for \varepsilon \ge \alpha^{-1/n} - 1 for any unknown but valid \theta; one can then choose the smallest \varepsilon satisfying the condition above. Note that the interval length depends upon the random variable \hat{\theta}.
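A minimal NumPy sketch of this interval and a Monte Carlo check of its coverage (the fixed θ is for testing only; in practice it is unknown):

    import numpy as np

    n, alpha = 20, 0.05
    eps = alpha ** (-1 / n) - 1                # smallest eps with guaranteed coverage

    theta = 7.0                                # unknown in practice; fixed for the check
    rng = np.random.default_rng(4)
    theta_hat = rng.uniform(0, theta, size=(100_000, n)).max(axis=1)

    covered = (theta_hat <= theta) & (theta <= theta_hat * (1 + eps))
    print(eps, covered.mean())                 # coverage is at least 1 - alpha = 0.95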
The probabilities for the uniform distribution are simple to calculate because of the simplicity of the function's form. Therefore, this distribution can be used in a variety of applications, as described below: hypothesis testing situations, random sampling cases, finance, etc. Furthermore, experiments of physical origin generally follow a uniform distribution (e.g. emission of radioactive particles). However, it is important to note that in any application, there is the unchanging assumption that the probability of falling in an interval of fixed length is constant.
In the field of economics, demand and replenishment often do not follow the expected normal distribution. As a result, other distribution models are used to better predict probabilities and trends, such as the Bernoulli process.[9] But according to Wanke (2008), in the particular case of investigating lead-time for inventory management at the beginning of the life cycle, when a completely new product is being analyzed, the uniform distribution proves to be more useful. In this situation, other distributions may not be viable since there is no existing data on the new product, or the demand history is unavailable, so there isn't really an appropriate or known distribution. The uniform distribution is ideal in this situation since the random variable of lead-time (related to demand) is unknown for the new product, but the results are likely to fall within a plausible range of two values. The lead-time would thus represent the random variable. From the uniform distribution model, other factors related to lead-time could be calculated, such as cycle service level and shortage per cycle. It was also noted that the uniform distribution was used due to the simplicity of the calculations.
See main article: Inverse transform sampling. The uniform distribution is useful for sampling from arbitrary distributions. A general method is the inverse transform sampling method, which uses the cumulative distribution function (CDF) of the target random variable. This method is very useful in theoretical work. Since simulations using this method require inverting the CDF of the target variable, alternative methods have been devised for the cases where the CDF is not known in closed form. One such method is rejection sampling.
The normal distribution is an important example where the inverse transform method is not efficient. However, there is an exact method, the Box–Muller transformation, which uses the inverse transform to convert two independent uniform random variables into two independent normally distributed random variables.
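A compact NumPy sketch of the Box–Muller transformation (variable names are illustrative):

    import numpy as np

    rng = np.random.default_rng(5)
    u1, u2 = rng.uniform(size=(2, 100_000))

    # Box-Muller: two independent U(0,1) variates -> two independent N(0,1) variates.
    # Using 1 - u1 keeps the argument of the logarithm strictly positive.
    r = np.sqrt(-2.0 * np.log(1.0 - u1))
    z0 = r * np.cos(2.0 * np.pi * u2)
    z1 = r * np.sin(2.0 * np.pi * u2)

    print(z0.mean(), z0.std())   # near 0 and 1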
In analog-to-digital conversion, a quantization error occurs. This error is either due to rounding or truncation. When the original signal is much larger than one least significant bit (LSB), the quantization error is not significantly correlated with the signal, and has an approximately uniform distribution. The RMS error therefore follows from the variance of this distribution.
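Concretely, writing \Delta for the width of one LSB (a notational assumption), the quantization error is approximately uniform on (-\Delta/2, \Delta/2), so the RMS error is the standard deviation of that distribution:

\sqrt{\frac{\Delta^2}{12}} = \frac{\Delta}{2\sqrt{3}} \approx 0.289\,\Delta.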
There are many applications in which it is useful to run simulation experiments. Many programming languages come with implementations to generate pseudo-random numbers which are effectively distributed according to the standard uniform distribution.
On the other hand, the uniformly distributed numbers are often used as the basis for non-uniform random variate generation.
If u is a value sampled from the standard uniform distribution, then the value a + (b-a)u follows the uniform distribution parameterized by a and b, as described above.
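In code this is a single affine rescaling; a minimal sketch using Python's standard library (the function name uniform_ab is illustrative):

    import random

    def uniform_ab(a, b):
        # Sample U(a, b) by rescaling a standard uniform variate.
        u = random.random()       # u ~ U(0, 1)
        return a + (b - a) * u

    print(uniform_ab(2.0, 5.0))   # a value in [2.0, 5.0)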
While the historical origins in the conception of uniform distribution are inconclusive, it is speculated that the term "uniform" arose from the concept of equiprobability in dice games (note that the dice games would have a discrete, not continuous, uniform sample space). Equiprobability was mentioned in Gerolamo Cardano's Liber de Ludo Aleae, a manual written in the 16th century that details advanced probability calculus in relation to dice.[10]
The likelihood function for a sample X_1, \ldots, X_n from the uniform distribution U(a,b) is:

L_n(a,b) = \prod_{i=1}^{n} f(X_i) = \frac{1}{(b-a)^n} \mathbb{1}_{[a,b]}(X_1, \ldots, X_n) = \frac{1}{(b-a)^n} \mathbb{1}_{\{a \le \min\{X_1,\ldots,X_n\}\}} \mathbb{1}_{\{\max\{X_1,\ldots,X_n\} \le b\}}.

For n \ge 1, the factor \frac{1}{(b-a)^n} increases as the interval [a,b] shrinks, so L_n(a,b) is maximized by making the interval as short as the indicator constraints allow: taking a = \min\{X_1,\ldots,X_n\} and b = \max\{X_1,\ldots,X_n\} maximizes L_n(a,b).
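In code, the joint maximum likelihood estimates are therefore just the sample extremes; a minimal NumPy illustration:

    import numpy as np

    rng = np.random.default_rng(6)
    x = rng.uniform(2.0, 5.0, size=1_000)

    a_hat, b_hat = x.min(), x.max()   # joint MLE of (a, b)
    print(a_hat, b_hat)               # close to (but inside) (2.0, 5.0)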