Given two random variables that are defined on the same probability space,[1] the joint probability distribution is the corresponding probability distribution on all possible pairs of outputs. The joint distribution can just as well be considered for any given number of random variables. The joint distribution encodes the marginal distributions, i.e. the distributions of each of the individual random variables, as well as the conditional probability distributions, which describe how the outputs of one random variable are distributed when given information on the outputs of the other random variable(s).
In the formal mathematical setup of measure theory, the joint distribution is given by the pushforward measure, by the map obtained by pairing together the given random variables, of the sample space's probability measure.
In the case of real-valued random variables, the joint distribution, as a particular multivariate distribution, may be expressed by a multivariate cumulative distribution function, or by a multivariate probability density function together with a multivariate probability mass function. In the special case of continuous random variables, it is sufficient to consider probability density functions, and in the case of discrete random variables, it is sufficient to consider probability mass functions.
Each of two urns contains twice as many red balls as blue balls, and no others, and one ball is randomly selected from each urn, with the two draws independent of each other. Let A and B be discrete random variables associated with the outcomes of the draw from the first urn and the second urn respectively. The probability of drawing a red ball from either urn is 2/3, and the probability of drawing a blue ball is 1/3. The joint probability distribution is presented in the following table:
| | A=Red | A=Blue | P(B) |
|---|---|---|---|
| B=Red | (2/3)(2/3)=4/9 | (1/3)(2/3)=2/9 | 4/9+2/9=2/3 |
| B=Blue | (2/3)(1/3)=2/9 | (1/3)(1/3)=1/9 | 2/9+1/9=1/3 |
| P(A) | 4/9+2/9=2/3 | 2/9+1/9=1/3 | |
Each of the four inner cells shows the probability of a particular combination of results from the two draws; these probabilities are the joint distribution. In any one cell the probability of a particular combination occurring is (since the draws are independent) the product of the probability of the specified result for A and the probability of the specified result for B. The probabilities in these four cells sum to 1, as with all probability distributions.
Moreover, the final row and the final column give the marginal probability distribution for A and the marginal probability distribution for B respectively. For example, for A the first of these cells gives the sum of the probabilities for A being red, regardless of which possibility for B in the column above the cell occurs, as 2/3. Thus the marginal probability distribution for A gives A's probabilities unconditional on B, in a margin of the table.
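As a quick illustration (not part of the original example), the table above can be reproduced in a few lines of Python; the only assumed inputs are the single-urn probabilities 2/3 for red and 1/3 for blue.

```python
from fractions import Fraction

# Marginal probabilities for a single urn: twice as many red balls as blue.
p_color = {"Red": Fraction(2, 3), "Blue": Fraction(1, 3)}

# Joint distribution of the two independent draws: product of the marginals.
joint = {(a, b): p_color[a] * p_color[b] for a in p_color for b in p_color}

# Marginals recovered by summing the rows and columns of the joint table.
p_A = {a: sum(joint[(a, b)] for b in p_color) for a in p_color}
p_B = {b: sum(joint[(a, b)] for a in p_color) for b in p_color}

print(joint[("Red", "Red")])    # 4/9
print(p_A["Red"], p_B["Red"])   # 2/3 2/3
print(sum(joint.values()))      # 1
```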
Consider the flip of two fair coins; let A and B be discrete random variables associated with the outcomes of the first and second coin flips respectively. If a coin displays "heads", the associated random variable takes the value 1, and it takes the value 0 otherwise. The marginal (unconditional) probability mass functions are

P(A) = 1/2 for A \in \{0,1\};

P(B) = 1/2 for B \in \{0,1\}.
The joint probability mass function of A and B defines probabilities for each pair of outcomes. All possible outcomes are

(A=0, B=0), (A=0, B=1), (A=1, B=0), (A=1, B=1).

Since each outcome is equally likely, the joint probability mass function becomes

P(A,B) = 1/4 for A, B \in \{0,1\}.
Since the coin flips are independent, the joint probability mass function is the product of the marginals:

P(A,B) = P(A) P(B) for A, B \in \{0,1\}.
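A minimal sketch of this check in Python, assuming only the two independent fair coins encoded as 0/1:

```python
from itertools import product

p_A = {0: 0.5, 1: 0.5}   # marginal pmf of the first coin
p_B = {0: 0.5, 1: 0.5}   # marginal pmf of the second coin

# Joint pmf of two independent flips: product of the marginals.
joint = {(a, b): p_A[a] * p_B[b] for a, b in product((0, 1), repeat=2)}

assert all(abs(p - 0.25) < 1e-12 for p in joint.values())   # every pair has probability 1/4
assert abs(sum(joint.values()) - 1.0) < 1e-12               # probabilities sum to 1
```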
Consider the roll of a fair die and let A = 1 if the number is even (i.e. 2, 4, or 6) and A = 0 otherwise. Furthermore, let B = 1 if the number is prime (i.e. 2, 3, or 5) and B = 0 otherwise. The values of A and B for each outcome of the die are:
| | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| A | 0 | 1 | 0 | 1 | 0 | 1 |
| B | 0 | 1 | 1 | 0 | 1 | 0 |
Then, the joint distribution of A and B, expressed as a probability mass function, is

P(A=0, B=0) = P\{1\} = \frac{1}{6}, \quad P(A=1, B=0) = P\{4,6\} = \frac{2}{6},

P(A=0, B=1) = P\{3,5\} = \frac{2}{6}, \quad P(A=1, B=1) = P\{2\} = \frac{1}{6}.
These probabilities necessarily sum to 1, since the probability of some combination of A and B occurring is 1.
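The same joint pmf can be obtained by enumerating the six equally likely faces; the short Python sketch below mirrors the table above.

```python
from fractions import Fraction
from collections import Counter

joint = Counter()
for face in range(1, 7):
    a = 1 if face % 2 == 0 else 0        # A = 1 if the number is even
    b = 1 if face in (2, 3, 5) else 0    # B = 1 if the number is prime
    joint[(a, b)] += Fraction(1, 6)      # each face has probability 1/6

print(joint[(0, 0)], joint[(1, 1)])      # 1/6 1/6
print(joint[(1, 0)], joint[(0, 1)])      # 1/3 1/3
print(sum(joint.values()))               # 1
```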
See main article: Marginal distribution. If more than one random variable is defined in a random experiment, it is important to distinguish between the joint probability distribution of X and Y and the probability distribution of each variable individually. The individual probability distribution of a random variable is referred to as its marginal probability distribution. In general, the marginal probability distribution of X can be determined from the joint probability distribution of X and other random variables.
If the joint probability density function of random variables X and Y is f_{X,Y}(x,y), the marginal probability density functions of X and Y are given by

f_X(x) = \int f_{X,Y}(x,y)\,dy, \qquad f_Y(y) = \int f_{X,Y}(x,y)\,dx,
where the first integral is over all points in the range of (X,Y) for which X=x and the second integral is over all points in the range of (X,Y) for which Y=y.[2]
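As a numerical sketch of this marginalization (the joint density below is an assumed example, not one from the text), integrating f_{X,Y}(x,y) = x + y over y on the unit square recovers the marginal f_X(x) = x + 1/2:

```python
# Assumed joint density on the unit square: f(x, y) = x + y for 0 <= x, y <= 1,
# which integrates to 1 over the square.
def f_xy(x, y):
    return x + y

def marginal_f_X(x, n=10_000):
    """Approximate f_X(x) = integral over y in [0, 1] of f_{X,Y}(x, y) dy (midpoint rule)."""
    dy = 1.0 / n
    return sum(f_xy(x, (j + 0.5) * dy) for j in range(n)) * dy

print(marginal_f_X(0.3))   # ~0.8, matching the exact marginal f_X(0.3) = 0.3 + 1/2
```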
For a pair of random variables X, Y, the joint cumulative distribution function F_{X,Y} is given by

F_{X,Y}(x,y) = \operatorname{P}(X \le x, Y \le y),

where the right-hand side represents the probability that the random variable X takes on a value less than or equal to x and that Y takes on a value less than or equal to y.
For N random variables X_1, \ldots, X_N, the joint cumulative distribution function is given by

F_{X_1,\ldots,X_N}(x_1,\ldots,x_N) = \operatorname{P}(X_1 \le x_1, \ldots, X_N \le x_N).

Interpreting the N random variables as a random vector X = (X_1, \ldots, X_N)^T yields a shorter notation:

F_X(x) = \operatorname{P}(X_1 \le x_1, \ldots, X_N \le x_N).
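For discrete variables the joint CDF can be evaluated by summing the joint pmf; a minimal sketch using the two-coin joint pmf from the example above (the helper name joint_cdf is illustrative, not standard):

```python
# Joint pmf of two independent fair coins, each outcome in {0, 1}.
joint_pmf = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}

def joint_cdf(x, y):
    """F_{A,B}(x, y) = P(A <= x and B <= y), computed by summing the pmf."""
    return sum(p for (a, b), p in joint_pmf.items() if a <= x and b <= y)

print(joint_cdf(0, 0))   # 0.25
print(joint_cdf(0, 1))   # 0.5
print(joint_cdf(1, 1))   # 1.0
```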
The joint probability mass function of two discrete random variables X, Y is

p_{X,Y}(x,y) = P(X = x \text{ and } Y = y),

or written in terms of conditional distributions

p_{X,Y}(x,y) = P(Y = y \mid X = x) \cdot P(X = x) = P(X = x \mid Y = y) \cdot P(Y = y),

where P(Y = y \mid X = x) is the probability of Y = y given that X = x.
The generalization of the preceding two-variable case is the joint probability distribution of n discrete random variables X_1, X_2, \ldots, X_n, which is

p_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = P(X_1 = x_1 \text{ and } \ldots \text{ and } X_n = x_n),

or equivalently

\begin{align}
p_{X_1,\ldots,X_n}(x_1,\ldots,x_n) &= P(X_1 = x_1) \cdot P(X_2 = x_2 \mid X_1 = x_1) \\
&\quad \cdot P(X_3 = x_3 \mid X_1 = x_1, X_2 = x_2) \\
&\quad \cdots \\
&\quad \cdot P(X_n = x_n \mid X_1 = x_1, X_2 = x_2, \ldots, X_{n-1} = x_{n-1}).
\end{align}
This identity is known as the chain rule of probability.
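A small numerical check of the chain rule on an arbitrary joint pmf over three binary variables (the probability table is randomly generated, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary joint pmf p[x1, x2, x3] over three binary variables.
p = rng.random((2, 2, 2))
p /= p.sum()

def marginal(keep):
    """Sum out every axis of p that is not listed in `keep`."""
    drop = tuple(i for i in range(3) if i not in keep)
    return p.sum(axis=drop)

x1, x2, x3 = 1, 0, 1

# Chain rule: P(x1, x2, x3) = P(x1) * P(x2 | x1) * P(x3 | x1, x2).
p_x1 = marginal((0,))[x1]
p_x2_given = marginal((0, 1))[x1, x2] / p_x1
p_x3_given = p[x1, x2, x3] / marginal((0, 1))[x1, x2]

assert np.isclose(p[x1, x2, x3], p_x1 * p_x2_given * p_x3_given)
```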
Since these are probabilities, in the two-variable case
\sumi\sumjP(X=xi and Y=yj)=1,
n
X1,X2,...,Xn
\sumi\sumj...\sumkP(X1=x1i,X2=x2j,...,Xn=xnk)=1.
The joint probability density function f_{X,Y}(x,y) for two continuous random variables is defined as the mixed partial derivative of the joint cumulative distribution function:

f_{X,Y}(x,y) = \frac{\partial^2 F_{X,Y}(x,y)}{\partial x\,\partial y}.

This is equal to:

f_{X,Y}(x,y) = f_{Y\mid X}(y \mid x)\, f_X(x) = f_{X\mid Y}(x \mid y)\, f_Y(y),
where f_{Y\mid X}(y \mid x) and f_{X\mid Y}(x \mid y) are the conditional densities of Y given X = x and of X given Y = y respectively, and f_X(x) and f_Y(y) are the marginal densities of X and Y respectively.
The definition extends naturally to more than two random variables:

f_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = f_{X_1}(x_1)\, f_{X_2 \mid X_1}(x_2 \mid x_1) \cdots f_{X_n \mid X_1,\ldots,X_{n-1}}(x_n \mid x_1,\ldots,x_{n-1}).
Again, since these are probability distributions, one has

\int_x \int_y f_{X,Y}(x,y)\, dy\, dx = 1,

respectively

\int_{x_1} \ldots \int_{x_n} f_{X_1,\ldots,X_n}(x_1,\ldots,x_n)\, dx_n \ldots dx_1 = 1.
The "mixed joint density" may be defined where one or more random variables are continuous and the other random variables are discrete. With one variable of each type
\begin{align} fX,Y(x,y)=fX(x\midy)P(Y=y)=P(Y=y\midX=x)fX(x). \end{align}
Such a pair (X, Y) cannot collectively be assigned either a probability density function or a probability mass function; formally, f_{X,Y}(x,y) is the density of (X, Y) with respect to the product measure on the respective supports of X and Y. Either of these two decompositions can then be used to recover the joint cumulative distribution function:

F_{X,Y}(x,y) = \sum_{t \le y} \int_{s=-\infty}^{x} f_{X,Y}(s,t)\, ds.
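A sketch of a mixed joint density with X continuous and Y binary, assuming (purely for illustration) that X is standard normal and that P(Y = 1 | X = x) follows a logistic curve:

```python
import math

def f_X(x):
    """Assumed marginal density of X: standard normal."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def p_Y_given_X(y, x):
    """Assumed conditional pmf of the binary Y given X = x (logistic curve)."""
    p1 = 1.0 / (1.0 + math.exp(-x))
    return p1 if y == 1 else 1.0 - p1

def mixed_density(x, y):
    """f_{X,Y}(x, y) = P(Y = y | X = x) * f_X(x)."""
    return p_Y_given_X(y, x) * f_X(x)

# Summing over y and integrating over x (crude Riemann sum) gives ~1.
xs = [i * 0.01 for i in range(-600, 600)]
total = sum(mixed_density(x, y) * 0.01 for x in xs for y in (0, 1))
print(round(total, 4))   # ~1.0
```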
In general two random variables X and Y are independent if and only if the joint cumulative distribution function satisfies

F_{X,Y}(x,y) = F_X(x) \cdot F_Y(y)

for all x and y.
Two discrete random variables X and Y are independent if and only if the joint probability mass function satisfies

P(X = x \text{ and } Y = y) = P(X = x) \cdot P(Y = y)

for all x and y.
As the number of independent random events grows, the related joint probability value decreases rapidly to zero, according to a negative exponential law.
Similarly, two absolutely continuous random variables are independent if and only if

f_{X,Y}(x,y) = f_X(x) \cdot f_Y(y)

for all x and y.
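A minimal sketch of checking independence for a discrete joint pmf by comparing it with the product of its marginals, using the die-roll example above (where A and B turn out not to be independent):

```python
from fractions import Fraction

# Joint pmf of (A, B) from the die-roll example.
joint = {(0, 0): Fraction(1, 6), (1, 0): Fraction(2, 6),
         (0, 1): Fraction(2, 6), (1, 1): Fraction(1, 6)}

# Marginal pmfs obtained by summing the joint pmf.
p_A = {a: sum(p for (x, _), p in joint.items() if x == a) for a in (0, 1)}
p_B = {b: sum(p for (_, y), p in joint.items() if y == b) for b in (0, 1)}

independent = all(joint[(a, b)] == p_A[a] * p_B[b] for a in (0, 1) for b in (0, 1))
print(independent)   # False: e.g. P(A=0, B=0) = 1/6 but P(A=0) * P(B=0) = 1/4
```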
If a subset A of the variables X_1, \ldots, X_n is conditionally dependent given another subset B of these variables, then the probability mass function of the joint distribution, P(X_1,\ldots,X_n), is equal to P(B) \cdot P(A \mid B). Therefore, it can be efficiently represented by the lower-dimensional probability distributions P(B) and P(A \mid B).
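As an illustration of this factorization, a joint pmf over two binary variables can be stored as the pair P(B) and P(A \mid B) and recovered as their product; the numbers below are made up for the sketch.

```python
# Assumed marginal pmf of B and conditional pmf of A given B (illustrative numbers).
p_B = {0: 0.4, 1: 0.6}
p_A_given_B = {0: {0: 0.9, 1: 0.1},   # P(A = a | B = 0)
               1: {0: 0.3, 1: 0.7}}   # P(A = a | B = 1)

# Recover the joint pmf as P(A = a, B = b) = P(B = b) * P(A = a | B = b).
joint = {(a, b): p_B[b] * p_A_given_B[b][a] for b in (0, 1) for a in (0, 1)}

print(joint[(0, 0)], joint[(1, 1)])   # approximately 0.36 and 0.42
print(sum(joint.values()))            # 1.0 (up to floating-point rounding)
```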
When two or more random variables are defined on a probability space, it is useful to describe how they vary together; that is, it is useful to measure the relationship between the variables. A common measure of the relationship between two random variables is the covariance. Covariance is a measure of the linear relationship between the random variables. If the relationship between the random variables is nonlinear, the covariance might not be sensitive to the relationship, since it only captures the linear association between the two variables.
The covariance between the random variables X and Y, denoted cov(X, Y), is

\sigma_{XY} = E[(X - \mu_X)(Y - \mu_Y)] = E(XY) - \mu_X \mu_Y.
There is another measure of the relationship between two random variables that is often easier to interpret than the covariance.
The correlation just scales the covariance by the product of the standard deviation of each variable. Consequently, the correlation is a dimensionless quantity that can be used to compare the linear relationships between pairs of variables in different units. If the points in the joint probability distribution of X and Y that receive positive probability tend to fall along a line of positive (or negative) slope, ρXY is near +1 (or −1). If ρXY equals +1 or −1, it can be shown that the points in the joint probability distribution that receive positive probability fall exactly along a straight line. Two random variables with nonzero correlation are said to be correlated. Similar to covariance, the correlation is a measure of the linear relationship between random variables.
The correlation between random variables X and Y, denoted \rho_{XY}, is

\rho_{XY} = \frac{\operatorname{cov}(X,Y)}{\sqrt{V(X)\, V(Y)}}.
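A short sketch computing the covariance and correlation of a discrete joint pmf directly from these formulas, reusing the die-roll joint distribution from the examples above:

```python
from fractions import Fraction
from math import sqrt

# Joint pmf of (A, B) from the die-roll example.
joint = {(0, 0): Fraction(1, 6), (1, 0): Fraction(2, 6),
         (0, 1): Fraction(2, 6), (1, 1): Fraction(1, 6)}

E_A  = sum(a * p for (a, _), p in joint.items())
E_B  = sum(b * p for (_, b), p in joint.items())
E_AB = sum(a * b * p for (a, b), p in joint.items())

cov   = E_AB - E_A * E_B                                        # sigma_AB = E(AB) - mu_A * mu_B
var_A = sum(a * a * p for (a, _), p in joint.items()) - E_A**2
var_B = sum(b * b * p for (_, b), p in joint.items()) - E_B**2
rho   = float(cov) / sqrt(float(var_A) * float(var_B))

print(cov, rho)   # -1/12 and about -0.33: A (even) and B (prime) are negatively correlated
```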
Named joint distributions that arise frequently in statistics include the multivariate normal distribution, the multivariate stable distribution, the multinomial distribution, the negative multinomial distribution, the multivariate hypergeometric distribution, and the elliptical distribution.