Conditional probability distribution
In probability theory and statistics, the conditional probability distribution is a probability distribution that describes the probability of an outcome given the occurrence of a particular event. Given two jointly distributed random variables X and Y, the conditional probability distribution of Y given X is the probability distribution of Y when X is known to be a particular value; in some cases the conditional probabilities may be expressed as functions containing the unspecified value x of X as a parameter. When both X and Y are categorical variables, a conditional probability table is typically used to represent the conditional probability. The conditional distribution contrasts with the marginal distribution of a random variable, which is its distribution without reference to the value of the other variable.
If the conditional distribution of Y given X is a continuous distribution, then its probability density function is known as the conditional density function.[1] The properties of a conditional distribution, such as the moments, are often referred to by corresponding names such as the conditional mean and conditional variance.
More generally, one can refer to the conditional distribution of a subset of a set of more than two variables; this conditional distribution is contingent on the values of all the remaining variables, and if more than one variable is included in the subset then this conditional distribution is the conditional joint distribution of the included variables.
Conditional discrete distributions
For discrete random variables, the conditional probability mass function of Y given X = x can be written according to its definition as:

p_{Y\mid X}(y\mid x) \triangleq P(Y=y\mid X=x) = \frac{P(\{X=x\}\cap\{Y=y\})}{P(X=x)}.

Due to the occurrence of P(X=x) in the denominator, this is defined only for non-zero (hence strictly positive) P(X=x).

The relation with the probability distribution of X given Y is:

P(Y=y\mid X=x)\,P(X=x)=P(\{X=x\}\cap\{Y=y\})=P(X=x\mid Y=y)\,P(Y=y).
Example
Consider the roll of a fair die and let X = 1 if the number is even (i.e., 2, 4, or 6) and X = 0 otherwise. Furthermore, let Y = 1 if the number is prime (i.e., 2, 3, or 5) and Y = 0 otherwise.

D | 1 | 2 | 3 | 4 | 5 | 6
--|---|---|---|---|---|---
X | 0 | 1 | 0 | 1 | 0 | 1
Y | 0 | 1 | 1 | 0 | 1 | 0

Then the unconditional probability that X = 1 is 3/6 = 1/2 (since there are six possible rolls of the die, of which three are even), whereas the probability that X = 1 conditional on Y = 1 is 1/3 (since there are three possible prime number rolls, namely 2, 3, and 5, of which one is even).
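The same numbers can be checked with a short enumeration. This is only an illustrative sketch; the names used below (faces, evens, primes) are not from the article.

    from fractions import Fraction

    # Fair die: X = 1 on the even faces, Y = 1 on the prime faces.
    faces = range(1, 7)
    evens = {d for d in faces if d % 2 == 0}       # X = 1
    primes = {d for d in faces if d in (2, 3, 5)}  # Y = 1

    def prob(event):
        """Probability of a set of faces under the uniform measure."""
        return Fraction(len(event), 6)

    print(prob(evens))                          # unconditional P(X = 1) = 1/2
    print(prob(evens & primes) / prob(primes))  # P(X = 1 | Y = 1) = 1/3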
Conditional continuous distributions
Similarly for continuous random variables, the conditional probability density function of Y given the occurrence of the value x of X can be written as

f_{Y\mid X}(y\mid x)=\frac{f_{X,Y}(x,y)}{f_X(x)},

[2] where f_{X,Y}(x,y) gives the joint density of X and Y, while f_X(x) gives the marginal density for X. Also in this case it is necessary that f_X(x) > 0.

The relation with the probability distribution of X given Y is given by:

f_{Y\mid X}(y\mid x)\,f_X(x)=f_{X,Y}(x,y)=f_{X\mid Y}(x\mid y)\,f_Y(y).
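The definition can be illustrated numerically. The joint density f_{X,Y}(x, y) = x + y on the unit square used below is an assumption chosen for this sketch, not a density discussed in the article.

    import numpy as np

    # Assumed joint density on [0, 1]^2 (it integrates to 1): f_{X,Y}(x, y) = x + y.
    def f_xy(x, y):
        return x + y

    y = np.linspace(0.0, 1.0, 2001)
    x = 0.3                                  # any fixed x with f_X(x) > 0

    f_x = np.trapz(f_xy(x, y), y)            # marginal density f_X(x) = x + 1/2
    f_y_given_x = f_xy(x, y) / f_x           # conditional density f_{Y|X}(y | x)

    print(f_x)                               # ~0.8
    print(np.trapz(f_y_given_x, y))          # ~1.0: a conditional density integrates to 1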
The concept of the conditional distribution of a continuous random variable is not as intuitive as it might seem: Borel's paradox shows that conditional probability density functions need not be invariant under coordinate transformations.
Example
The graph shows a bivariate normal joint density for random variables X and Y. To see the distribution of Y conditional on X = x, one can first visualize the line X = x in the X,Y plane, and then visualize the plane containing that line and perpendicular to the X,Y plane. The intersection of that plane with the joint normal density, once rescaled to give unit area under the intersection, is the relevant conditional density of Y.
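For reference, with the usual parametrization (means \mu_X, \mu_Y, standard deviations \sigma_X, \sigma_Y, correlation \rho; these symbols are introduced here for illustration and are not specified in the text above), the rescaled slice is again a normal density:

Y\mid X=x \;\sim\; \mathcal{N}\!\left(\mu_Y+\rho\frac{\sigma_Y}{\sigma_X}(x-\mu_X),\;(1-\rho^2)\sigma_Y^2\right).

Conditioning a bivariate normal on X therefore shifts the mean linearly in x and shrinks the variance by the factor 1 - \rho^2.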
Relation to independence
Random variables X, Y are independent if and only if the conditional distribution of Y given X is, for all possible realizations of X, equal to the unconditional distribution of Y. For discrete random variables this means P(Y=y\mid X=x)=P(Y=y) for all possible x and y with P(X=x)>0. For continuous random variables X and Y, having a joint density function, it means f_Y(y\mid X=x)=f_Y(y) for all possible x and y with f_X(x)>0.
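A quick discrete check of this criterion (a sketch only; the coin-flip setup and function names are illustrative, not from the article): for two independent fair coin flips the conditional pmf of Y given X = x coincides with the marginal pmf of Y, whereas the die example above fails this test.

    from fractions import Fraction
    from itertools import product

    # Two independent fair coin flips: the joint pmf factors into the marginals.
    joint = {(x, y): Fraction(1, 4) for x, y in product((0, 1), repeat=2)}

    def p_y(y):
        """Marginal pmf of Y."""
        return sum(p for (_, yy), p in joint.items() if yy == y)

    def p_y_given_x(y, x):
        """Conditional pmf P(Y = y | X = x) = P(X = x, Y = y) / P(X = x)."""
        p_x = sum(p for (xx, _), p in joint.items() if xx == x)
        return joint[(x, y)] / p_x

    # Independence: the conditional distribution equals the unconditional one.
    assert all(p_y_given_x(y, x) == p_y(y) for x in (0, 1) for y in (0, 1))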
Properties
Seen as a function of y for given x, P(Y=y\mid X=x) is a probability mass function and so the sum over all y (or integral if it is a conditional probability density) is 1. Seen as a function of x for given y, it is a likelihood function, so that the sum (or integral) over all x need not be 1.

Additionally, a marginal of a joint distribution can be expressed as the expectation of the corresponding conditional distribution. For instance, p_X(x)=\operatorname{E}_Y\!\left[p_{X\mid Y}(x\mid Y)\right].
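These properties can be verified on a small example. The asymmetric joint pmf below is an illustrative assumption, not data from the article: the conditional pmf p_{X|Y}(x|y) sums to 1 over x for each fixed y, the same quantity viewed as a likelihood in y for fixed x need not sum to 1, and the marginal of X is the expectation of the conditional pmf over Y.

    from fractions import Fraction

    # Illustrative joint pmf for X, Y in {0, 1}.
    joint = {(0, 0): Fraction(1, 10), (0, 1): Fraction(2, 10),
             (1, 0): Fraction(3, 10), (1, 1): Fraction(4, 10)}

    p_x = {x: sum(p for (xx, y), p in joint.items() if xx == x) for x in (0, 1)}
    p_y = {y: sum(p for (x, yy), p in joint.items() if yy == y) for y in (0, 1)}
    cond = {(x, y): joint[(x, y)] / p_y[y] for (x, y) in joint}   # p_{X|Y}(x | y)

    # As a function of x for fixed y it is a pmf and sums to 1 ...
    assert all(sum(cond[(x, y)] for x in (0, 1)) == 1 for y in (0, 1))
    # ... but as a likelihood in y for fixed x the sum need not be 1:
    print([sum(cond[(x, y)] for y in (0, 1)) for x in (0, 1)])    # [7/12, 17/12]
    # Marginal as expectation of the conditional: p_X(x) = sum_y p_{X|Y}(x|y) p_Y(y).
    assert all(p_x[x] == sum(cond[(x, y)] * p_y[y] for y in (0, 1)) for x in (0, 1))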
Measure-theoretic formulation
Let (\Omega,\mathcal{F},\operatorname{P}) be a probability space, \mathcal{G}\subseteq\mathcal{F} a \sigma-field in \mathcal{F}. Given A\in\mathcal{F}, the Radon-Nikodym theorem implies that there is[3] a \mathcal{G}-measurable random variable \operatorname{P}(A\mid\mathcal{G}):\Omega\to\mathbb{R}, called the conditional probability, such that

\int_G\operatorname{P}(A\mid\mathcal{G})(\omega)\,d\operatorname{P}(\omega)=\operatorname{P}(A\cap G)

for every G\in\mathcal{G}, and such a random variable is uniquely defined up to sets of probability zero. A conditional probability is called regular if \operatorname{P}(\,\cdot\mid\mathcal{G})(\omega) is a probability measure on (\Omega,\mathcal{F}) for all \omega\in\Omega a.e.
Special cases:
- For the trivial sigma algebra \mathcal{G}=\{\emptyset,\Omega\}, the conditional probability is the constant function \operatorname{P}\left(A\mid\{\emptyset,\Omega\}\right)=\operatorname{P}(A).
- If A\in\mathcal{G}, then \operatorname{P}(A\mid\mathcal{G})=1_A, the indicator function (defined below).

Let X:\Omega\to E be an (E,\mathcal{E})-valued random variable. For each B\in\mathcal{E}, define

\mu_{X\mid\mathcal{G}}(B,\omega)=\operatorname{P}\left(X^{-1}(B)\mid\mathcal{G}\right)(\omega).

For any \omega\in\Omega, the function \mu_{X\mid\mathcal{G}}(\,\cdot\mid\mathcal{G})(\omega):\mathcal{E}\to\mathbb{R} is called the conditional probability distribution of X given \mathcal{G}. If it is a probability measure on (E,\mathcal{E}), then it is called regular.
For a real-valued random variable (with respect to the Borel \sigma-field on \mathbb{R}), every conditional probability distribution is regular.[4] In this case,

\operatorname{E}[X\mid\mathcal{G}]=\int_{-\infty}^{\infty}x\,\mu(dx,\cdot)

almost surely.
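As a small worked case (the partition and its notation are introduced here for illustration): if \mathcal{G} is generated by a countable partition G_1, G_2, \dots of \Omega with \operatorname{P}(G_i)>0, the defining equation forces the conditional probability to be constant on each cell,

\operatorname{P}(A\mid\mathcal{G})(\omega)=\frac{\operatorname{P}(A\cap G_i)}{\operatorname{P}(G_i)}\quad\text{for }\omega\in G_i,

and integrating this constant over G_i returns \operatorname{P}(A\cap G_i), so the defining relation holds for every G\in\mathcal{G} by countable additivity.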
Relation to conditional expectation
For any event A\in\mathcal{F}, define the indicator function:

1_A(\omega)=\begin{cases}1 & \text{if } \omega\in A,\\ 0 & \text{if } \omega\notin A,\end{cases}

which is a random variable. Note that the expectation of this random variable is equal to the probability of A itself:

\operatorname{E}(1_A)=\operatorname{P}(A).

Given a \sigma-field \mathcal{G}\subseteq\mathcal{F}, the conditional probability \operatorname{P}(A\mid\mathcal{G}) is a version of the conditional expectation of the indicator function for A:

\operatorname{P}(A\mid\mathcal{G})=\operatorname{E}(1_A\mid\mathcal{G})
An expectation of a random variable with respect to a regular conditional probability is equal to its conditional expectation.
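A finite-sample-space sketch of this identity (the sample space, partition, and function names below are illustrative, not from the article): take \Omega to be a fair six-sided die, \mathcal{G} the sigma-field generated by the even/odd partition, and A the event that the roll is prime. Then \operatorname{P}(A\mid\mathcal{G})=\operatorname{E}(1_A\mid\mathcal{G}) is constant on each atom and satisfies the defining integral equation.

    from fractions import Fraction

    Omega = [1, 2, 3, 4, 5, 6]
    P = {w: Fraction(1, 6) for w in Omega}     # uniform probability measure
    atoms = [{2, 4, 6}, {1, 3, 5}]             # partition generating G
    A = {2, 3, 5}                              # the event "roll is prime"

    def indicator(event, w):
        return 1 if w in event else 0

    def cond_prob(event, w):
        """P(event | G)(w) = E(1_event | G)(w): average of the indicator over the atom containing w."""
        atom = next(G for G in atoms if w in G)
        return sum(indicator(event, v) * P[v] for v in atom) / sum(P[v] for v in atom)

    print({w: cond_prob(A, w) for w in Omega})   # 1/3 on the evens, 2/3 on the odds

    # Defining property: integrating P(A | G) over each atom G recovers P(A ∩ G).
    for G in atoms:
        assert sum(cond_prob(A, w) * P[w] for w in G) == sum(P[w] for w in A & G)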
Interpretation of conditioning on a Sigma Field
Consider the probability space (\Omega,\mathcal{F},\operatorname{P}) and a sub-sigma field \mathcal{A}\subset\mathcal{F}. The sub-sigma field \mathcal{A} can be loosely interpreted as containing a subset of the information in \mathcal{F}. For example, we might think of \operatorname{P}(B\mid\mathcal{A}) as the probability of the event B given the information in \mathcal{A}.

Also recall that an event B is independent of a sub-sigma field \mathcal{A} if \operatorname{P}(B\cap A)=\operatorname{P}(B)\operatorname{P}(A) for all A\in\mathcal{A}. It is incorrect to conclude in general that the information in \mathcal{A} does not tell us anything about the probability of event B occurring. This can be shown with a counter-example:

Consider a probability space on the unit interval, \Omega=[0,1]. Let \mathcal{G} be the sigma-field of all countable sets and sets whose complement is countable. So each set in \mathcal{G} has measure 0 or 1 and so is independent of each event in \mathcal{F}. However, notice that \mathcal{G} also contains all the singleton events in \Omega (those sets which contain only a single \omega\in\Omega). So knowing which of the events in \mathcal{G} occurred is equivalent to knowing exactly which \omega\in\Omega occurred! So in one sense, \mathcal{G} contains no information about \mathcal{F} (it is independent of it), and in another sense it contains all the information in \mathcal{F}.[5]
References
Sources
- Billingsley, Patrick (1995). Probability and Measure. 3rd ed. New York, NY: John Wiley and Sons.
Notes and References
- Ross, Sheldon M. (1993). Introduction to Probability Models. 5th ed. San Diego: Academic Press. pp. 88–91. ISBN 0-12-598455-3.
- Park, Kun Il (2018). Fundamentals of Probability and Stochastic Processes with Applications to Communications. Springer. ISBN 978-3-319-68074-3.
- Billingsley (1995)
- Billingsley (1995)
- Billingsley, Patrick (2012). Probability and Measure. Hoboken, New Jersey: Wiley. ISBN 978-1-118-12237-2.