Truncated distribution explained

In statistics, a truncated distribution is a conditional distribution that results from restricting the domain of some other probability distribution. Truncated distributions arise in practical statistics in cases where the ability to record, or even to know about, occurrences is limited to values which lie above or below a given threshold or within a specified range. For example, if the dates of birth of children in a school are examined, these would typically be subject to truncation relative to those of all children in the area given that the school accepts only children in a given age range on a specific date. There would be no information about how many children in the locality had dates of birth before or after the school's cutoff dates if only a direct approach to the school were used to obtain information.

Where sampling is such as to retain knowledge of items that fall outside the required range, without recording the actual values, this is known as censoring, as opposed to the truncation here.[1]

Definition

The following discussion is in terms of a random variable having a continuous distribution although the same ideas apply to discrete distributions. Similarly, the discussion assumes that truncation is to a semi-open interval y ∈ (a,b] but other possibilities can be handled straightforwardly.

Suppose we have a random variable, $X$, that is distributed according to some probability density function $f(x)$, with cumulative distribution function $F(x)$, both of which have infinite support. Suppose we wish to know the probability density of the random variable after restricting the support to be between two constants, so that the support is $y = (a, b]$. That is to say, suppose we wish to know how $X$ is distributed given $a < X \leq b$.

$$f(x \mid a < X \leq b) = \frac{g(x)}{F(b) - F(a)} = \frac{f(x)\, I(\{a < x \leq b\})}{F(b) - F(a)} \propto f(x)\, I(\{a < x \leq b\})$$

where $g(x) = f(x)$ for all $a < x \leq b$ and $g(x) = 0$ everywhere else. That is, $g(x) = f(x)\, I(\{a < x \leq b\})$, where $I$ is the indicator function. Note that the denominator in the truncated distribution is constant with respect to $x$.

Notice that $f(x \mid a < X \leq b)$ is in fact a density:

$$\int_a^b f(x \mid a < X \leq b)\, dx = \frac{1}{F(b) - F(a)} \int_a^b g(x)\, dx = 1.$$
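As a quick numerical check, the sketch below applies the formula above to a standard normal truncated to $(a, b]$ and verifies that the resulting density integrates to one; the use of SciPy and the particular interval are illustrative assumptions, not part of the source.

```python
# A minimal sketch of the two-sided truncation formula, using a standard
# normal for f and F (any continuous distribution with a CDF would do).
import numpy as np
from scipy import stats, integrate

a, b = -1.0, 2.0          # illustrative truncation interval (a, b]
f, F = stats.norm.pdf, stats.norm.cdf

def truncated_pdf(x):
    """f(x | a < X <= b) = f(x) * I(a < x <= b) / (F(b) - F(a))."""
    inside = (x > a) & (x <= b)
    return np.where(inside, f(x), 0.0) / (F(b) - F(a))

# The truncated density integrates to 1 over (a, b].
total, _ = integrate.quad(truncated_pdf, a, b)
print(total)              # ~1.0

# SciPy's built-in truncated normal agrees (for a standard normal, its
# shape parameters are the truncation points themselves).
x = 0.5
print(truncated_pdf(x), stats.truncnorm.pdf(x, a, b))
```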

Truncated distributions need not have parts removed from both the top and the bottom. A truncated distribution where just the bottom of the distribution has been removed is as follows:

$$f(x \mid X > y) = \frac{g(x)}{1 - F(y)},$$

where $g(x) = f(x)$ for all $y < x$ and $g(x) = 0$ everywhere else, and $F(x)$ is the cumulative distribution function.

A truncated distribution where the top of the distribution has been removed is as follows:

$$f(x \mid X \leq y) = \frac{g(x)}{F(y)},$$

where $g(x) = f(x)$ for all $x \leq y$ and $g(x) = 0$ everywhere else, and $F(x)$ is the cumulative distribution function.
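One common way to draw samples from a lower-truncated distribution (an assumed technique, not described in the source) is inverse-transform sampling restricted to the retained tail: draw $U \sim \mathrm{Uniform}(F(y), 1)$ and return $F^{-1}(U)$. A minimal sketch:

```python
# Sampling from f(x | X > y) = f(x) / (1 - F(y)) by inverse transform.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def sample_left_truncated(dist, y, size):
    """Draw samples of X | X > y for a frozen scipy distribution `dist`."""
    u = rng.uniform(dist.cdf(y), 1.0, size=size)   # U ~ Uniform(F(y), 1)
    return dist.ppf(u)                             # invert the CDF

samples = sample_left_truncated(stats.norm(), y=1.0, size=100_000)
print(samples.min())    # every sample exceeds y = 1.0
print(samples.mean())   # close to the truncated mean, about 1.525
```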

Expectation of truncated random variable

Suppose we wish to find the expected value of a random variable distributed according to the density $f(x)$, with cumulative distribution function $F(x)$, given that the random variable, $X$, is greater than some known value $y$. The expectation of a truncated random variable is thus:

$$E(X \mid X > y) = \frac{\int_y^\infty x\, g(x)\, dx}{1 - F(y)},$$

where again $g(x) = f(x)$ for all $x > y$ and $g(x) = 0$ everywhere else.
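For a concrete illustration (an assumption of this sketch, not part of the source), the expectation of a left-truncated standard normal can be computed by numerical integration of the formula above and compared with the closed form $\varphi(y)/(1 - \Phi(y))$, the inverse Mills ratio:

```python
# E(X | X > y) for a standard normal: numerical integration of
# x * g(x) / (1 - F(y)) versus the closed-form inverse Mills ratio.
import numpy as np
from scipy import stats, integrate

y = 1.0
f, F = stats.norm.pdf, stats.norm.cdf

numer, _ = integrate.quad(lambda x: x * f(x), y, np.inf)
expectation_numeric = numer / (1.0 - F(y))

expectation_closed = f(y) / (1.0 - F(y))   # phi(y) / (1 - Phi(y))

print(expectation_numeric, expectation_closed)   # both ~1.5251
```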

Letting $a$ and $b$ be the lower and upper limits respectively of the support of the original density function $f$ (which we assume is continuous), properties of $E(u(X) \mid X > y)$, where $u$ is some continuous function with a continuous derivative, include:

$$\lim_{y \to a^+} E(u(X) \mid X > y) = E(u(X))$$

$$\lim_{y \to b^-} E(u(X) \mid X > y) = u(b)$$

$$\frac{\partial}{\partial y} \left[ E(u(X) \mid X > y) \right] = \frac{f(y)}{1 - F(y)} \left[ E(u(X) \mid X > y) - u(y) \right]$$

and

$$\frac{\partial}{\partial y} \left[ E(u(X) \mid X < y) \right] = \frac{f(y)}{F(y)} \left[ -E(u(X) \mid X < y) + u(y) \right]$$

$$\lim_{y \to a^+} \frac{\partial}{\partial y} \left[ E(u(X) \mid X > y) \right] = f(a) \left[ E(u(X)) - u(a) \right]$$

$$\lim_{y \to b^-} \frac{\partial}{\partial y} \left[ E(u(X) \mid X > y) \right] = \tfrac{1}{2} u'(b)$$
These hold provided that the limits exist, that is:

$$\lim_{y \to c} u'(y) = u'(c), \qquad \lim_{y \to c} u(y) = u(c) \qquad \text{and} \qquad \lim_{y \to c} f(y) = f(c),$$

where $c$ represents either $a$ or $b$.
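The derivative identity above can be checked numerically; the sketch below does so for the assumed test case $u(x) = x$ and a standard normal $X$, comparing a finite-difference estimate of $\partial/\partial y\, E(X \mid X > y)$ with the right-hand side.

```python
# Numerical check of
#   d/dy E(u(X) | X > y) = f(y)/(1 - F(y)) * [E(u(X) | X > y) - u(y)]
# for u(x) = x and a standard normal X (an assumed test case).
import numpy as np
from scipy import stats

f, F = stats.norm.pdf, stats.norm.cdf
u = lambda x: x

def E_trunc(y):
    # For u(x) = x and a standard normal, E(X | X > y) = f(y) / (1 - F(y)).
    return f(y) / (1.0 - F(y))

y, h = 0.5, 1e-5
lhs = (E_trunc(y + h) - E_trunc(y - h)) / (2 * h)      # finite difference
rhs = f(y) / (1.0 - F(y)) * (E_trunc(y) - u(y))        # identity's right side
print(lhs, rhs)                                        # the two agree closely
```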

Examples

The truncated normal distribution is an important example.[2]

The Tobit model employs truncated distributions. Other examples include the binomial distribution truncated at $x = 0$ and the Poisson distribution truncated at $x = 0$.
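For instance, the zero-truncated Poisson mentioned above simply rescales the ordinary Poisson probabilities by $1 - P(X = 0)$; a minimal sketch (the rate $\lambda = 2$ is an arbitrary assumption):

```python
# Zero-truncated Poisson: P(X = k | X > 0) = P(X = k) / (1 - P(X = 0)), k >= 1.
import numpy as np
from scipy import stats

lam = 2.0                    # assumed rate, for illustration only
k = np.arange(1, 30)         # support of the zero-truncated variable
pmf_truncated = stats.poisson.pmf(k, lam) / (1.0 - stats.poisson.pmf(0, lam))

print(pmf_truncated[:4])     # probabilities for k = 1, 2, 3, 4
print(pmf_truncated.sum())   # ~1.0: the renormalised pmf sums to one
```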

Random truncation

Suppose we have the following setup: a truncation value, $t$, is selected at random from a density, $g(t)$, but this value is not observed. Then a value, $x$, is selected at random from the truncated distribution, $f(x \mid t) = \mathrm{Tr}(x)$. Suppose we observe $x$ and wish to update our belief about the density of $t$ given the observation.

First, by definition:

$$f(x) = \int_x^\infty f(x \mid t)\, g(t)\, dt,$$

and

$$F(a) = \int_{-\infty}^a \left[ \int_x^\infty f(x \mid t)\, g(t)\, dt \right] dx.$$

Notice that $t$ must be greater than $x$; hence when we integrate over $t$, we set a lower bound of $x$. The functions $f(x)$ and $F(x)$ are the unconditional density and unconditional cumulative distribution function, respectively.

By Bayes' rule,

$$g(t \mid x) = \frac{f(x \mid t)\, g(t)}{f(x)},$$

which expands to

$$g(t \mid x) = \frac{f(x \mid t)\, g(t)}{\int_x^\infty f(x \mid t)\, g(t)\, dt}.$$

Two uniform distributions (example)

Suppose we know that $t$ is uniformly distributed on $[0, T]$ and that $x \mid t$ is distributed uniformly on $[0, t]$. Let $g(t)$ and $f(x \mid t)$ be the densities that describe $t$ and $x$, respectively. Suppose we observe a value of $x$ and wish to know the distribution of $t$ given that value of $x$.

$$g(t \mid x) = \frac{f(x \mid t)\, g(t)}{f(x)} = \frac{1}{t\, (\ln T - \ln x)} \qquad \text{for all } t > x.$$
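This posterior can be checked by simulation (an illustrative Monte Carlo sketch, not part of the source): draw $t$ and then $x \mid t$, keep the draws whose $x$ falls near the observed value, and compare the empirical mean of the retained $t$'s with the analytic posterior mean $(T - x)/(\ln T - \ln x)$.

```python
# Monte Carlo check of g(t | x) = 1 / (t (ln T - ln x)) for the two-uniform setup.
import numpy as np

rng = np.random.default_rng(0)
T, x_obs, eps = 1.0, 0.2, 0.005      # assumed values for illustration

t = rng.uniform(0.0, T, size=2_000_000)
x = rng.uniform(0.0, t)              # x | t ~ Uniform(0, t)
kept = t[np.abs(x - x_obs) < eps]    # condition on x falling near x_obs

print(kept.mean())                                   # empirical mean, ~0.497
print((T - x_obs) / (np.log(T) - np.log(x_obs)))     # analytic mean of t | x
```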


References

  1. Dodge, Y. (2003) The Oxford Dictionary of Statistical Terms. OUP.
  2. Johnson, N.L., Kotz, S., Balakrishnan, N. (1994) Continuous Univariate Distributions, Volume 1, Wiley. (Section 10.1)