In statistics, a truncated distribution is a conditional distribution that results from restricting the domain of some other probability distribution. Truncated distributions arise in practical statistics in cases where the ability to record, or even to know about, occurrences is limited to values which lie above or below a given threshold or within a specified range. For example, if the dates of birth of children in a school are examined, these would typically be subject to truncation relative to those of all children in the area given that the school accepts only children in a given age range on a specific date. There would be no information about how many children in the locality had dates of birth before or after the school's cutoff dates if only a direct approach to the school were used to obtain information.
Where the sampling retains knowledge that items fall outside the required range, without recording their actual values, this is known as censoring, as opposed to truncation.[1]
The following discussion is in terms of a random variable having a continuous distribution although the same ideas apply to discrete distributions. Similarly, the discussion assumes that truncation is to a semi-open interval y ∈ (a,b] but other possibilities can be handled straightforwardly.
Suppose we have a random variable, $X$, distributed according to some probability density function, $f(x)$, with cumulative distribution function $F(x)$. Suppose we wish to know the probability density of the random variable after restricting the support to lie between two constants, so that the support is $y = (a, b]$. That is to say, suppose we wish to know how $X$ is distributed given $a < X \leq b$:

$$f(x \mid a < X \leq b) = \frac{g(x)}{F(b) - F(a)} = \frac{f(x)\cdot I(\{a < x \leq b\})}{F(b) - F(a)} \propto f(x)\cdot I(\{a < x \leq b\}),$$

where $g(x) = f(x)$ for all $a < x \leq b$ and $g(x) = 0$ everywhere else. That is, $g(x) = f(x)\cdot I(\{a < x \leq b\})$, where $I$ is the indicator function.
Notice that $f(x \mid a < X \leq b)$ is in fact a density:

$$\int_a^b f(x \mid a < X \leq b)\,dx = \frac{1}{F(b) - F(a)} \int_a^b g(x)\,dx = 1.$$
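This normalisation is easy to verify numerically. The sketch below (not part of the original text) assumes a standard normal parent distribution and the illustrative truncation interval $(a, b] = (-1, 2]$, and checks that the truncated density integrates to one:

```python
# Minimal numerical check, assuming a standard normal parent and (a, b] = (-1, 2].
from scipy import stats
from scipy.integrate import quad

a, b = -1.0, 2.0
f, F = stats.norm.pdf, stats.norm.cdf

def truncated_pdf(x):
    """Density of X given a < X <= b: f(x) / (F(b) - F(a)) on (a, b], zero elsewhere."""
    return f(x) / (F(b) - F(a)) if a < x <= b else 0.0

total, _ = quad(truncated_pdf, a, b)
print(total)  # approximately 1.0
```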
Truncated distributions need not have parts removed from both the top and the bottom. A truncated distribution where just the bottom of the distribution has been removed is as follows:
$$f(x \mid X > y) = \frac{g(x)}{1 - F(y)},$$

where $g(x) = f(x)$ for all $y < x$ and $g(x) = 0$ everywhere else, and $F(x)$ is the cumulative distribution function.
A truncated distribution where the top of the distribution has been removed is as follows:
$$f(x \mid X \leq y) = \frac{g(x)}{F(y)},$$

where $g(x) = f(x)$ for all $x \leq y$ and $g(x) = 0$ everywhere else, and $F(x)$ is the cumulative distribution function.
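A common way to draw samples from such a lower-truncated distribution is the inverse-transform trick $X = F^{-1}\bigl(F(y) + U\,(1 - F(y))\bigr)$ with $U$ uniform on $(0, 1)$. The sketch below is an illustration only, assuming a standard exponential parent and the truncation point $y = 2$:

```python
# Inverse-transform sampling from the lower-truncated distribution f(x | X > y),
# assuming an exponential(1) parent and y = 2 purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = 2.0
dist = stats.expon()  # parent with CDF F = dist.cdf and quantile F^{-1} = dist.ppf

u = rng.uniform(size=100_000)
x = dist.ppf(dist.cdf(y) + u * (1.0 - dist.cdf(y)))  # map U into (F(y), 1], then invert

print(x.min())   # never falls below y
print(x.mean())  # close to y + 1, by the memoryless property of the exponential
```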
Suppose we wish to find the expected value of a random variable distributed according to the density $f(x)$ and a cumulative distribution of $F(x)$, given that the random variable, $X$, is greater than some known value $y$. The expectation of a truncated random variable is thus:

$$E(X \mid X > y) = \frac{\int_y^{\infty} x\,g(x)\,dx}{1 - F(y)},$$

where again $g(x) = f(x)$ for all $x > y$ and $g(x) = 0$ everywhere else.
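For a standard normal parent this expectation has the well-known closed form $E(X \mid X > y) = \varphi(y)/(1 - \Phi(y))$, the inverse Mills ratio. The sketch below (not from the original text) evaluates the integral definition numerically and compares it with that closed form, using the illustrative value $y = 1$:

```python
# E(X | X > y) for a standard normal parent, by numerical integration,
# compared with the inverse Mills ratio phi(y) / (1 - Phi(y)); y = 1 is illustrative.
import numpy as np
from scipy import stats
from scipy.integrate import quad

y = 1.0
f, F = stats.norm.pdf, stats.norm.cdf

numerator, _ = quad(lambda x: x * f(x), y, np.inf)  # integral of x g(x) over (y, infinity)
print(numerator / (1.0 - F(y)))  # E(X | X > y) from the definition
print(f(y) / (1.0 - F(y)))       # closed form for the standard normal; the two agree
```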
Letting $a$ and $b$ be the lower and upper limits, respectively, of support for the original density function $f$ (which we assume is continuous), properties of $E(u(X) \mid X > y)$, where $u$ is some continuous function with a continuous derivative, include:

$$\lim_{y \to a} E(u(X) \mid X > y) = E(u(X)),$$

$$\lim_{y \to b} E(u(X) \mid X > y) = u(b),$$

$$\frac{\partial}{\partial y}\bigl[E(u(X) \mid X > y)\bigr] = \frac{f(y)}{1 - F(y)}\bigl[E(u(X) \mid X > y) - u(y)\bigr],$$

and

$$\frac{\partial}{\partial y}\bigl[E(u(X) \mid X < y)\bigr] = \frac{f(y)}{F(y)}\bigl[-E(u(X) \mid X < y) + u(y)\bigr],$$

$$\lim_{y \to a} \frac{\partial}{\partial y}\bigl[E(u(X) \mid X > y)\bigr] = f(a)\bigl[E(u(X)) - u(a)\bigr],$$

$$\lim_{y \to b} \frac{\partial}{\partial y}\bigl[E(u(X) \mid X > y)\bigr] = \frac{1}{2}\,u'(b),$$

provided that the limits exist, that is, $\lim_{y \to c} u'(y) = u'(c)$, $\lim_{y \to c} u(y) = u(c)$ and $\lim_{y \to c} f(y) = f(c)$, where $c$ represents either $a$ or $b$.
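The derivative identity for $E(u(X) \mid X > y)$ can be checked numerically. The sketch below is an illustration only, using $u(x) = x^2$, a standard normal parent, and a central finite difference at $y = 0.5$ (all chosen here, not taken from the text):

```python
# Finite-difference check of d/dy E(u(X) | X > y) = f(y)/(1 - F(y)) * [E(u(X)|X>y) - u(y)],
# assuming u(x) = x**2 and a standard normal parent (both chosen for illustration).
import numpy as np
from scipy import stats
from scipy.integrate import quad

f, F = stats.norm.pdf, stats.norm.cdf
u = lambda x: x**2

def cond_mean(y):
    """E(u(X) | X > y), computed by numerical integration."""
    num, _ = quad(lambda x: u(x) * f(x), y, np.inf)
    return num / (1.0 - F(y))

y, h = 0.5, 1e-5
lhs = (cond_mean(y + h) - cond_mean(y - h)) / (2 * h)   # numerical derivative
rhs = f(y) / (1.0 - F(y)) * (cond_mean(y) - u(y))       # right-hand side of the identity
print(lhs, rhs)  # the two values agree to several decimal places
```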
The truncated normal distribution is an important example.[2]
The Tobit model employs truncated distributions. Other examples include the binomial distribution truncated at x = 0 and the Poisson distribution truncated at x = 0.
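SciPy provides the truncated normal directly as scipy.stats.truncnorm, which takes the truncation bounds in standardized form. A brief illustration (the parameter values below are arbitrary examples, not from the original text):

```python
# Truncated normal via SciPy: N(mu, sigma) restricted to [low, high].
# truncnorm expects the bounds standardized as (low - mu)/sigma and (high - mu)/sigma.
from scipy.stats import truncnorm

mu, sigma = 1.0, 2.0
low, high = 0.0, 3.0
a, b = (low - mu) / sigma, (high - mu) / sigma

tn = truncnorm(a, b, loc=mu, scale=sigma)
print(tn.mean(), tn.var())        # moments of the truncated distribution
print(tn.cdf(low), tn.cdf(high))  # 0.0 and 1.0: all probability mass lies in [low, high]
```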
Suppose we have the following set up: a truncation value, $t$, is selected at random from a density, $g(t)$, but this value is not observed. Then a value, $x$, is selected at random from the truncated distribution, $f(x \mid t) = \mathrm{Tr}(x)$. Suppose we observe $x$ and wish to update our belief about the density of $t$ given the observation.
First, by definition:
$$f(x) = \int_x^{\infty} f(x \mid t)\,g(t)\,dt,$$

and

$$F(a) = \int_{-\infty}^{a} \left[\int_x^{\infty} f(x \mid t)\,g(t)\,dt\right] dx.$$
Notice that $t$ must be greater than $x$, hence when we integrate over $t$, we set a lower bound of $x$. The functions $f(x)$ and $F(x)$ are the unconditional density and unconditional cumulative distribution function of $x$, respectively.
By Bayes' rule,

$$g(t \mid x) = \frac{f(x \mid t)\,g(t)}{f(x)},$$
which expands to
$$g(t \mid x) = \frac{f(x \mid t)\,g(t)}{\int_x^{\infty} f(x \mid t)\,g(t)\,dt}.$$
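The expanded Bayes formula is straightforward to evaluate numerically. The sketch below is an illustration only; the densities are chosen for the example rather than taken from the text, assuming $t \sim \mathrm{Exponential}(1)$ and $x \mid t \sim \mathrm{Uniform}(0, t)$, and it confirms that the resulting $g(t \mid x)$ integrates to one over $t > x$:

```python
# Numerical posterior g(t | x) = f(x | t) g(t) / f(x), assuming (for illustration)
# t ~ Exponential(1) and x | t ~ Uniform(0, t).
import numpy as np
from scipy.integrate import quad

g = lambda t: np.exp(-t)                                  # prior density of t
f_cond = lambda x, t: 1.0 / t if 0.0 <= x <= t else 0.0   # density of x given t

x_obs = 0.7
f_x, _ = quad(lambda t: f_cond(x_obs, t) * g(t), x_obs, np.inf)  # f(x) = integral over t > x

posterior = lambda t: f_cond(x_obs, t) * g(t) / f_x

check, _ = quad(posterior, x_obs, np.inf)
print(check)  # approximately 1.0: the posterior is a proper density on t > x
```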
Suppose we know that $t$ is uniformly distributed on $[0, T]$ and that $x \mid t$ is distributed uniformly on $[0, t]$. Let $g(t)$ and $f(x \mid t)$ be the densities that describe $t$ and $x$ respectively. Suppose we observe a value of $x$ and wish to know the distribution of $t$ given that value of $x$:
$$g(t \mid x) = \frac{f(x \mid t)\,g(t)}{f(x)} = \frac{1}{t\,(\ln(T) - \ln(x))} \quad \text{for all } t > x.$$
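As a quick check (with the illustrative values $T = 10$ and $x = 2$, which are not from the original text), the closed-form posterior integrates to one over $x < t \leq T$:

```python
# Check that 1 / (t (ln T - ln x)) integrates to one over (x, T],
# using the illustrative values T = 10 and x = 2.
import numpy as np
from scipy.integrate import quad

T, x = 10.0, 2.0
posterior = lambda t: 1.0 / (t * (np.log(T) - np.log(x)))

total, _ = quad(posterior, x, T)
print(total)  # approximately 1.0
```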