In probability theory, the conditional expectation, conditional expected value, or conditional mean of a random variable is its expected value evaluated with respect to the conditional probability distribution. If the random variable can take on only a finite number of values, the "conditions" are that the variable can only take on a subset of those values. More formally, in the case when the random variable is defined over a discrete probability space, the "conditions" are a partition of this probability space.
Depending on the context, the conditional expectation can be either a random variable or a function. The random variable is denoted E(X\mid Y) analogously to conditional probability. The function form is either denoted E(X\mid Y=y) or a separate function symbol such as f(y) is introduced with the meaning E(X\mid Y)=f(Y).
Consider the roll of a fair die and let A = 1 if the number is even (i.e., 2, 4, or 6) and A = 0 otherwise. Furthermore, let B = 1 if the number is prime (i.e., 2, 3, or 5) and B = 0 otherwise.
|   | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| A | 0 | 1 | 0 | 1 | 0 | 1 |
| B | 0 | 1 | 1 | 0 | 1 | 0 |
The unconditional expectation of A is

\operatorname{E}[A]=(0+1+0+1+0+1)/6=1/2.

But the expectation of A conditional on B = 1 (i.e., conditional on the die roll being 2, 3, or 5) is

\operatorname{E}[A\mid B=1]=(1+0+0)/3=1/3,

and the expectation of A conditional on B = 0 (i.e., conditional on the die roll being 1, 4, or 6) is

\operatorname{E}[A\mid B=0]=(0+1+1)/3=2/3.

Likewise, the expectation of B conditional on A = 1 is

\operatorname{E}[B\mid A=1]=(1+0+0)/3=1/3,

and the expectation of B conditional on A = 0 is

\operatorname{E}[B\mid A=0]=(0+1+1)/3=2/3.
Suppose we have daily rainfall data (mm of rain each day) collected by a weather station on every day of the ten-year (3652-day) period from January 1, 1990, to December 31, 1999. The unconditional expectation of rainfall for an unspecified day is the average of the rainfall amounts for those 3652 days. The conditional expectation of rainfall for an otherwise unspecified day known to be (conditional on being) in the month of March is the average of daily rainfall over all 310 days of the ten-year period that fall in March. And the conditional expectation of rainfall conditional on days dated March 2 is the average of the rainfall amounts that occurred on the ten days with that specific date.
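In code, each of these conditional expectations is just the mean of the rainfall values within the corresponding group of days. The sketch below uses synthetic, randomly generated rainfall (the real station records are not reproduced here) purely to show the computation pattern:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the station's records, for illustration only.
rng = np.random.default_rng(0)
days = pd.date_range("1990-01-01", "1999-12-31", freq="D")   # 3652 days
rain = pd.Series(rng.gamma(shape=0.3, scale=8.0, size=len(days)), index=days)

print(rain.mean())                          # unconditional expectation
print(rain[rain.index.month == 3].mean())   # conditional on "the day is in March"
print(rain[(rain.index.month == 3) &
           (rain.index.day == 2)].mean())   # conditional on "the day is March 2"
```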
The related concept of conditional probability dates back at least to Laplace, who calculated conditional distributions. It was Andrey Kolmogorov who, in 1933, formalized it using the Radon–Nikodym theorem. In works of Paul Halmos and Joseph L. Doob from 1953, conditional expectation was generalized to its modern definition using sub-σ-algebras.[1]
If A is an event in \mathcal{F} with nonzero probability, and X is a discrete random variable, the conditional expectation of X given A is

\operatorname{E}(X\mid A)=\sum_x x\,P(X=x\mid A)=\sum_x x\,\frac{P(\{X=x\}\cap A)}{P(A)},

where the sum is taken over all possible outcomes x of X. If P(A)=0, the expression is undefined.
If X and Y are discrete random variables, the conditional expectation of X given Y is

\operatorname{E}(X\mid Y=y)=\sum_x x\,P(X=x\mid Y=y)=\sum_x x\,\frac{P(X=x,Y=y)}{P(Y=y)},

where P(X=x,Y=y) is the joint probability mass function of X and Y, and the sum is taken over all possible values of x. As above, the expression is undefined if P(Y=y)=0.
Conditioning on a discrete random variable is the same as conditioning on the corresponding event:

\operatorname{E}(X\mid Y=y)=\operatorname{E}(X\mid A),

where A is the event \{Y=y\}.
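A minimal sketch of these discrete formulas, using a small made-up joint probability mass function (the numbers below are purely illustrative):

```python
from fractions import Fraction as F

# Hypothetical joint pmf P(X = x, Y = y), chosen only for illustration.
joint = {
    (0, 0): F(1, 8), (0, 1): F(1, 4),
    (1, 0): F(1, 8), (1, 1): F(1, 2),
}

def cond_expectation_X_given_Y(y):
    """E(X | Y = y) = sum_x x * P(X = x, Y = y) / P(Y = y)."""
    p_y = sum(p for (_, yy), p in joint.items() if yy == y)
    if p_y == 0:
        raise ValueError("undefined: P(Y = y) = 0")
    return sum(x * p for (x, yy), p in joint.items() if yy == y) / p_y

print(cond_expectation_X_given_Y(0))   # 1/2
print(cond_expectation_X_given_Y(1))   # 2/3
```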
Let X and Y be continuous random variables with joint density f_{X,Y}(x,y), let f_Y(y) denote the density of Y, and let

f_{X\mid Y}(x\mid y)=\frac{f_{X,Y}(x,y)}{f_Y(y)}

be the conditional density of X given the event Y=y. The conditional expectation of X given Y=y is

\operatorname{E}(X\mid Y=y)=\int_{-\infty}^{\infty}x\,f_{X\mid Y}(x\mid y)\,dx=\frac{1}{f_Y(y)}\int_{-\infty}^{\infty}x\,f_{X,Y}(x,y)\,dx.
Conditioning on a continuous random variable is not the same as conditioning on the event \{Y=y\}, which has probability zero; not respecting this distinction can lead to contradictory conclusions, as illustrated by the Borel–Kolmogorov paradox.
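As a numerical illustration, consider a bivariate standard normal pair with correlation \rho, for which the conditional mean is known in closed form to be \operatorname{E}(X\mid Y=y)=\rho y. The sketch below (not from the original text) approximates the integrals above on a grid:

```python
import numpy as np

# Bivariate standard normal with correlation rho: a convenient test case,
# since E(X | Y = y) = rho * y is known in closed form.
rho = 0.6
y = 1.5

def joint_density(x, y):
    z = (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
    return np.exp(-z / 2) / (2 * np.pi * np.sqrt(1 - rho**2))

x, dx = np.linspace(-10, 10, 20001, retstep=True)
f_xy = joint_density(x, y)
f_y = np.sum(f_xy) * dx                   # marginal density f_Y(y)
cond_mean = np.sum(x * f_xy) * dx / f_y   # E(X | Y = y)

print(cond_mean, rho * y)                 # both approximately 0.9
```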
All random variables in this section are assumed to be in L^2, that is, square integrable. In its full generality, the conditional expectation is developed without this assumption (see the definition with respect to a sub-σ-algebra below); the L^2 theory is, however, more intuitive and admits important generalizations.
Let (\Omega,\mathcal{F},P) be a probability space and let X\colon\Omega\to\mathbb{R} be a random variable in L^2 with mean \mu_X and variance \sigma_X^2. The expectation \mu_X minimizes the mean squared error:

\min_{x\in\mathbb{R}}\operatorname{E}\left((X-x)^2\right)=\operatorname{E}\left((X-\mu_X)^2\right)=\sigma_X^2.
The conditional expectation of X is defined analogously, except that instead of a single number \mu_X the result is a function e_X(y). Let Y\colon\Omega\to\mathbb{R}^n be a random vector. The conditional expectation e_X\colon\mathbb{R}^n\to\mathbb{R} is a measurable function such that

\min_{g\,\text{measurable}}\operatorname{E}\left((X-g(Y))^2\right)=\operatorname{E}\left((X-e_X(Y))^2\right).

Note that, unlike \mu_X, the conditional expectation e_X is not generally unique: there may be multiple minimizers of the mean squared error.
Example 1: Consider the case where Y is the constant random variable that is always 1. Then the mean squared error is minimized by any function of the form

e_X(y)=\begin{cases}\mu_X&\text{if }y=1\\\text{any number}&\text{otherwise.}\end{cases}

Example 2: Consider the case where Y is the two-dimensional random vector (X,2X). Then clearly \operatorname{E}(X\mid Y)=X, but in terms of functions it can be expressed as e_X(y_1,y_2)=3y_1-y_2 or as e'_X(y_1,y_2)=y_2-y_1, among infinitely many other possibilities.

Conditional expectation is unique only up to a set of measure zero in \mathbb{R}^n, where the measure used is the pushforward measure induced by Y. In the first example this pushforward measure is a Dirac distribution at 1; in the second it is concentrated on the "diagonal" \{y:y_2=2y_1\}, so that any set not intersecting it has measure 0.
The existence of a minimizer for \min_g\operatorname{E}\left((X-g(Y))^2\right) is non-trivial. It can be shown that

M:=\{g(Y):g\text{ is measurable and }\operatorname{E}(g(Y)^2)<\infty\}=L^2(\Omega,\sigma(Y))

is a closed subspace of the Hilbert space L^2(\Omega). By the Hilbert projection theorem, the necessary and sufficient condition for e_X to be a minimizer is that for all f(Y) in M we have

\langle X-e_X(Y),f(Y)\rangle=0.

In words, the residual X-e_X(Y) is orthogonal to the space M of all functions of Y. It is this orthogonality condition, applied to the indicator functions f(Y)=1_{\{Y\in H\}}, that is used below to extend conditional expectation to the case where X and Y are not necessarily in L^2.
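The orthogonality condition and the minimizing property can both be observed numerically. The following Monte Carlo sketch uses a made-up model (Y uniform on [0,1] and X=Y^2 plus independent noise, so that e_X(Y)=Y^2); neither the model nor the code comes from the original presentation:

```python
import numpy as np

# Made-up model: Y ~ Uniform(0, 1), X = Y**2 + noise, so e_X(Y) = Y**2.
rng = np.random.default_rng(1)
n = 1_000_000
Y = rng.uniform(0, 1, n)
X = Y**2 + rng.normal(0, 0.1, n)

residual = X - Y**2

# Orthogonality: the residual is (approximately) orthogonal to functions of Y.
for f in (lambda y: np.ones_like(y), lambda y: y, np.sin):
    print(np.mean(residual * f(Y)))      # all close to 0

# Minimality: Y**2 attains a smaller mean squared error than other g(Y).
print(np.mean((X - Y**2) ** 2))          # about 0.01 (the noise variance)
print(np.mean((X - Y) ** 2))             # larger
print(np.mean((X - np.mean(X)) ** 2))    # larger still
```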
The conditional expectation is often approximated in applied mathematics and statistics due to the difficulties in analytically calculating it, and for interpolation.[4]
The Hilbert subspace M=\{g(Y):\operatorname{E}(g(Y)^2)<\infty\} defined above can be replaced by subsets of it obtained by restricting the functional form of g rather than allowing any measurable function; examples are linear regression, where g is required to be affine, and decision tree regression, where g is required to be a simple function. These generalizations of conditional expectation come at the cost of many of its properties no longer holding. For example, let M be the space of all linear functions of Y and let \mathcal{E}_M denote this generalized conditional expectation, i.e. the L^2 projection onto M. If M does not contain the constant functions, the tower property \operatorname{E}(\mathcal{E}_M(X))=\operatorname{E}(X) will not hold.
An important special case is when X and Y are jointly normally distributed. In this case it can be shown that the conditional expectation is equivalent to linear regression:

e_X(Y)=\alpha_0+\sum_i\alpha_i Y_i

for suitable coefficients \{\alpha_i\}_i determined by the joint distribution.
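For a single jointly normal pair (X,Y), the coefficients reduce to \alpha_1=\operatorname{Cov}(X,Y)/\operatorname{Var}(Y) and \alpha_0=\operatorname{E}(X)-\alpha_1\operatorname{E}(Y). The sketch below uses made-up parameters and a Monte Carlo check; it is illustrative only:

```python
import numpy as np

# Made-up parameters of a jointly normal pair (X, Y).
mu = np.array([1.0, -2.0])          # means of (X, Y)
cov = np.array([[2.0, 0.8],
                [0.8, 1.5]])        # covariance matrix of (X, Y)

# E(X | Y) = alpha_0 + alpha_1 * Y with alpha_1 = Cov(X, Y) / Var(Y).
alpha_1 = cov[0, 1] / cov[1, 1]
alpha_0 = mu[0] - alpha_1 * mu[1]

# Monte Carlo check: average X over samples with Y near a fixed value y0.
rng = np.random.default_rng(2)
X, Y = rng.multivariate_normal(mu, cov, size=2_000_000).T
y0 = 0.5
near = np.abs(Y - y0) < 0.01
print(X[near].mean(), alpha_0 + alpha_1 * y0)   # approximately equal
```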
For the general, measure-theoretic definition, consider the following: a probability space (\Omega,\mathcal{F},P); a random variable X\colon\Omega\to\mathbb{R}^n on that probability space with finite expectation; and a sub-σ-algebra \mathcal{H}\subseteq\mathcal{F}.

Since \mathcal{H} is a sub-σ-algebra of \mathcal{F}, the function X\colon\Omega\to\mathbb{R}^n is usually not \mathcal{H}-measurable; thus the existence of integrals of the form \int_H X\,dP|_{\mathcal{H}}, where H\in\mathcal{H} and P|_{\mathcal{H}} is the restriction of P to \mathcal{H}, cannot be stated in general. However, the local averages \int_H X\,dP can be recovered in (\Omega,\mathcal{H},P|_{\mathcal{H}}) with the help of the conditional expectation.
A conditional expectation of X given \mathcal{H}, denoted \operatorname{E}(X\mid\mathcal{H}), is any \mathcal{H}-measurable function \Omega\to\mathbb{R}^n which satisfies

\int_H\operatorname{E}(X\mid\mathcal{H})\,dP=\int_H X\,dP

for each H\in\mathcal{H}.

As noted in the L^2 discussion above, this condition says exactly that the residual X-\operatorname{E}(X\mid\mathcal{H}) is orthogonal to the indicator functions 1_H:

\langle X-\operatorname{E}(X\mid\mathcal{H}),1_H\rangle=0

for all H\in\mathcal{H}.
The existence of \operatorname{E}(X\mid\mathcal{H}) can be established by noting that the measure \mu^X defined by \mu^X(F)=\int_F X\,dP for F\in\mathcal{F} is a finite measure on (\Omega,\mathcal{F}) that is absolutely continuous with respect to P. If h is the natural injection from \mathcal{H} to \mathcal{F}, then \mu^X\circ h=\mu^X|_{\mathcal{H}} is the restriction of \mu^X to \mathcal{H}, and P\circ h=P|_{\mathcal{H}} is the restriction of P to \mathcal{H}. Furthermore, \mu^X\circ h is absolutely continuous with respect to P\circ h, because

P\circ h(H)=0\iff P(h(H))=0

implies

\mu^X(h(H))=0\iff\mu^X\circ h(H)=0.

Thus, we have

\operatorname{E}(X\mid\mathcal{H})=\frac{d\mu^X|_{\mathcal{H}}}{dP|_{\mathcal{H}}},

the Radon–Nikodym derivative of the restricted measures.
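When \mathcal{H} is generated by a finite partition, the defining property can be checked directly: \operatorname{E}(X\mid\mathcal{H}) is constant on each atom of the partition and equals the P-weighted average of X there. The following sketch uses a made-up six-point probability space (all numbers are illustrative):

```python
from fractions import Fraction as F

# A made-up finite probability space Omega = {0,...,5} with non-uniform
# weights, a random variable X, and a partition generating the
# sub-sigma-algebra H.
P = {0: F(1, 12), 1: F(1, 6), 2: F(1, 4), 3: F(1, 6), 4: F(1, 6), 5: F(1, 6)}
X = {0: 3, 1: -1, 2: 4, 3: 0, 4: 2, 5: 7}
partition = [{0, 1}, {2, 3}, {4, 5}]          # atoms of H

# E(X | H) is constant on each atom, equal to the P-weighted average there.
cond_exp = {}
for atom in partition:
    p_atom = sum(P[w] for w in atom)
    avg = sum(X[w] * P[w] for w in atom) / p_atom
    for w in atom:
        cond_exp[w] = avg

# Defining property: the integrals over every H in the sigma-algebra agree.
# It suffices to check the atoms, since every H in H is a union of atoms.
for atom in partition:
    lhs = sum(cond_exp[w] * P[w] for w in atom)
    rhs = sum(X[w] * P[w] for w in atom)
    assert lhs == rhs
print(cond_exp)
```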
Consider, in addition to the above, a measurable space (U,\Sigma) and a random variable Y\colon\Omega\to U. The conditional expectation of X given Y is defined by applying the above construction to the σ-algebra generated by Y:

\operatorname{E}[X\mid Y]:=\operatorname{E}[X\mid\sigma(Y)].

By the Doob–Dynkin lemma, there exists a function e_X\colon U\to\mathbb{R}^n such that

\operatorname{E}[X\mid Y]=e_X(Y).
The notation \operatorname{E}(X\mid\mathcal{H}) may resemble \operatorname{E}(X\mid H) for an event H, but the two are very different objects: the former is an \mathcal{H}-measurable function \Omega\to\mathbb{R}^n, while the latter is an element of \mathbb{R}^n. They are related by

\operatorname{E}(X\mid H)\,P(H)=\int_H X\,dP=\int_H\operatorname{E}(X\mid\mathcal{H})\,dP

for H\in\mathcal{H}. The σ-algebra \mathcal{H} controls the "granularity" of the conditioning: a conditional expectation \operatorname{E}(X\mid\mathcal{H}) over a finer (larger) σ-algebra \mathcal{H} retains information about a larger class of events, while conditioning over a coarser (smaller) σ-algebra averages over more events.
See main article: Regular conditional probability.
For a Borel subset B in \mathcal{B}(\mathbb{R}^n), one can consider the collection of random variables

\kappa_{\mathcal{H}}(\omega,B):=\operatorname{E}(1_{X\in B}\mid\mathcal{H})(\omega).

It can be shown that, for almost every \omega, \kappa_{\mathcal{H}}(\omega,-) is a probability measure; the family \kappa_{\mathcal{H}} forms a Markov kernel. The law of the unconscious statistician then takes the form

\operatorname{E}[f(X)\mid\mathcal{H}]=\int f(x)\,\kappa_{\mathcal{H}}(-,dx),

so conditional expectations are, like their unconditional counterparts, integrals against a (conditional) measure.
In full generality, consider: a probability space (\Omega,\mathcal{A},P); a Banach space (E,\|\cdot\|_E); a Bochner-integrable random variable X\colon\Omega\to E; and a sub-σ-algebra \mathcal{H}\subseteq\mathcal{A}.

The conditional expectation of X given \mathcal{H} is the (up to a P-null set unique) integrable, E-valued, \mathcal{H}-measurable random variable \operatorname{E}(X\mid\mathcal{H}) satisfying

\int_H\operatorname{E}(X\mid\mathcal{H})\,dP=\int_H X\,dP

for all H\in\mathcal{H}.

In this setting the conditional expectation is sometimes also written in operator notation as \operatorname{E}^{\mathcal{H}}X.
The conditional expectation has a number of basic properties. All the following formulas are to be understood in an almost sure sense. The σ-algebra \mathcal{H} can be replaced throughout by a random variable Z, i.e., \mathcal{H}=\sigma(Z).
Pulling out independent factors: if X is independent of \mathcal{H}, then \operatorname{E}(X\mid\mathcal{H})=\operatorname{E}(X).

Proof: let B\in\mathcal{H}. Then X is independent of the indicator function 1_B, so

\int_B X\,dP=\operatorname{E}(X 1_B)=\operatorname{E}(X)\operatorname{E}(1_B)=\operatorname{E}(X)P(B)=\int_B\operatorname{E}(X)\,dP.

Thus \operatorname{E}(X\mid\mathcal{H})=\operatorname{E}(X) almost surely (in fact everywhere, since the constant \operatorname{E}(X) is \mathcal{H}-measurable). \square
If X is independent of \sigma(Y,\mathcal{H}), then \operatorname{E}(XY\mid\mathcal{H})=\operatorname{E}(X)\operatorname{E}(Y\mid\mathcal{H}). Note that this is not necessarily the case if X is only independent of \mathcal{H} and of Y separately.

If X,Y are independent, \mathcal{G},\mathcal{H} are independent, X is independent of \mathcal{H} and Y is independent of \mathcal{G}, then

\operatorname{E}(\operatorname{E}(XY\mid\mathcal{G})\mid\mathcal{H})=\operatorname{E}(X)\operatorname{E}(Y)=\operatorname{E}(\operatorname{E}(XY\mid\mathcal{H})\mid\mathcal{G}).
Stability: if X is \mathcal{H}-measurable, then \operatorname{E}(X\mid\mathcal{H})=X.

Proof: for each H\in\mathcal{H} we have \int_H\operatorname{E}(X\mid\mathcal{H})\,dP=\int_H X\,dP, or equivalently

\int_H(\operatorname{E}(X\mid\mathcal{H})-X)\,dP=0.

Since this holds for every H\in\mathcal{H}, and both \operatorname{E}(X\mid\mathcal{H}) and X are \mathcal{H}-measurable (the former by definition, the latter by assumption), one obtains

\int_H|\operatorname{E}(X\mid\mathcal{H})-X|\,dP=0,

and so \operatorname{E}(X\mid\mathcal{H})=X almost everywhere. \square
In particular, for sub-σ-algebras \mathcal{H}_1\subset\mathcal{H}_2\subset\mathcal{F} we have \operatorname{E}(\operatorname{E}(X\mid\mathcal{H}_1)\mid\mathcal{H}_2)=\operatorname{E}(X\mid\mathcal{H}_1), since \operatorname{E}(X\mid\mathcal{H}_1) is \mathcal{H}_2-measurable. If Z is a random variable, then \operatorname{E}(f(Z)\mid Z)=f(Z); in its simplest form, \operatorname{E}(Z\mid Z)=Z.
Pulling out known factors: if X is \mathcal{H}-measurable, then \operatorname{E}(XY\mid\mathcal{H})=X\operatorname{E}(Y\mid\mathcal{H}).
Proof: all random variables here are assumed without loss of generality to be non-negative; the general case can be treated with the decomposition X=X^+-X^-.

Fix A\in\mathcal{H} and let X=1_A. Then for any H\in\mathcal{H},

\int_H\operatorname{E}(1_A Y\mid\mathcal{H})\,dP=\int_H 1_A Y\,dP=\int_{A\cap H}Y\,dP=\int_{A\cap H}\operatorname{E}(Y\mid\mathcal{H})\,dP=\int_H 1_A\operatorname{E}(Y\mid\mathcal{H})\,dP,

so \operatorname{E}(1_A Y\mid\mathcal{H})=1_A\operatorname{E}(Y\mid\mathcal{H}) almost everywhere. Any simple function is a finite linear combination of indicator functions, so by linearity the above property holds for simple functions: if X_n is a simple \mathcal{H}-measurable function, then \operatorname{E}(X_n Y\mid\mathcal{H})=X_n\operatorname{E}(Y\mid\mathcal{H}).

Now let X be \mathcal{H}-measurable. Then there exists a sequence of simple \mathcal{H}-measurable functions \{X_n\}_{n\geq1} such that X_n\leq X_{n+1} and X_n\uparrow X pointwise. Since Y\geq0, the sequence \{X_n Y\}_{n\geq1} increases pointwise to XY. Also, since \operatorname{E}(Y\mid\mathcal{H})\geq0, the sequence \{X_n\operatorname{E}(Y\mid\mathcal{H})\}_{n\geq1} increases pointwise to X\operatorname{E}(Y\mid\mathcal{H}).

Combining the special case proved for simple functions, the definition of conditional expectation, and the monotone convergence theorem:

\int_H X\operatorname{E}(Y\mid\mathcal{H})\,dP=\int_H\lim_n X_n\operatorname{E}(Y\mid\mathcal{H})\,dP=\lim_n\int_H X_n\operatorname{E}(Y\mid\mathcal{H})\,dP=\lim_n\int_H\operatorname{E}(X_n Y\mid\mathcal{H})\,dP=\lim_n\int_H X_n Y\,dP=\int_H\lim_n X_n Y\,dP=\int_H XY\,dP=\int_H\operatorname{E}(XY\mid\mathcal{H})\,dP.

This holds for all H\in\mathcal{H}, whence X\operatorname{E}(Y\mid\mathcal{H})=\operatorname{E}(XY\mid\mathcal{H}) almost everywhere. \square
In particular, if Z is a random variable, then \operatorname{E}(f(Z)Y\mid Z)=f(Z)\operatorname{E}(Y\mid Z).
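On a finite probability space this property can be verified exactly. The sketch below (a made-up example, not from the source) takes X constant on the atoms of a partition, hence \mathcal{H}-measurable, and checks \operatorname{E}(XY\mid\mathcal{H})=X\operatorname{E}(Y\mid\mathcal{H}):

```python
from fractions import Fraction as F

# "Pulling out known factors" on a made-up finite space: H is generated by
# the partition below, X is constant on each atom (hence H-measurable),
# and Y is arbitrary.
P = {w: F(1, 6) for w in range(6)}            # uniform weights
partition = [{0, 1, 2}, {3, 4, 5}]            # atoms of H
X = {0: 2, 1: 2, 2: 2, 3: -1, 4: -1, 5: -1}   # H-measurable
Y = {0: 5, 1: 0, 2: 3, 3: 1, 4: 4, 5: 2}

def cond_exp(Z):
    """E(Z | H): on each atom, the P-weighted average of Z."""
    out = {}
    for atom in partition:
        avg = sum(Z[w] * P[w] for w in atom) / sum(P[w] for w in atom)
        out.update({w: avg for w in atom})
    return out

lhs = cond_exp({w: X[w] * Y[w] for w in P})   # E(XY | H)
EY = cond_exp(Y)                              # E(Y | H)
rhs = {w: X[w] * EY[w] for w in P}            # X * E(Y | H)
assert lhs == rhs
print(lhs)
```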
Law of total expectation: \operatorname{E}(\operatorname{E}(X\mid\mathcal{H}))=\operatorname{E}(X).

Tower property: for sub-σ-algebras \mathcal{H}_1\subset\mathcal{H}_2\subset\mathcal{F} we have

\operatorname{E}(\operatorname{E}(X\mid\mathcal{H}_2)\mid\mathcal{H}_1)=\operatorname{E}(X\mid\mathcal{H}_1).

The special case \mathcal{H}_1=\{\emptyset,\Omega\} recovers the law of total expectation: \operatorname{E}(\operatorname{E}(X\mid\mathcal{H}_2))=\operatorname{E}(X).
Another special case is when Z is an \mathcal{H}-measurable random variable. Then \sigma(Z)\subset\mathcal{H} and thus

\operatorname{E}(\operatorname{E}(X\mid\mathcal{H})\mid Z)=\operatorname{E}(X\mid Z).

Doob martingale property: taking Z=\operatorname{E}(X\mid\mathcal{H}) above (which is \mathcal{H}-measurable), and using \operatorname{E}(Z\mid Z)=Z, gives

\operatorname{E}(X\mid\operatorname{E}(X\mid\mathcal{H}))=\operatorname{E}(X\mid\mathcal{H}).

For random variables X,Y we have \operatorname{E}(\operatorname{E}(X\mid Y)\mid f(Y))=\operatorname{E}(X\mid f(Y)). For random variables X,Y,Z we have \operatorname{E}(\operatorname{E}(X\mid Y,Z)\mid Y)=\operatorname{E}(X\mid Y).
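The last identity can be checked exactly on a finite space: conditioning on (Y,Z) and then averaging within each value of Y gives the same result as conditioning on Y directly. The example below is made up for illustration:

```python
from fractions import Fraction as F
from itertools import product

# Tower property E(E(X | Y, Z) | Y) = E(X | Y) on a made-up finite space:
# outcomes are triples w = (y, z, u), and X depends on all three coordinates.
outcomes = list(product(range(2), range(3), range(2)))   # (y, z, u)
P = {w: F(1, len(outcomes)) for w in outcomes}           # uniform weights
X = {(y, z, u): 3 * y + z * z - 2 * u for (y, z, u) in outcomes}

def cond_exp(Z, label):
    """E(Z | sigma(label)): average Z over each level set of `label`."""
    out = {}
    for w in P:
        cell = [v for v in P if label(v) == label(w)]
        out[w] = sum(Z[v] * P[v] for v in cell) / sum(P[v] for v in cell)
    return out

inner = cond_exp(X, label=lambda w: (w[0], w[1]))   # E(X | Y, Z)
outer = cond_exp(inner, label=lambda w: w[0])       # E(E(X | Y, Z) | Y)
direct = cond_exp(X, label=lambda w: w[0])          # E(X | Y)
assert outer == direct
print(direct)
```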
Linearity: \operatorname{E}(X_1+X_2\mid\mathcal{H})=\operatorname{E}(X_1\mid\mathcal{H})+\operatorname{E}(X_2\mid\mathcal{H}) and \operatorname{E}(aX\mid\mathcal{H})=a\operatorname{E}(X\mid\mathcal{H}) for a\in\mathbb{R}.
Positivity: if X\ge0, then \operatorname{E}(X\mid\mathcal{H})\ge0.

Monotonicity: if X_1\le X_2, then \operatorname{E}(X_1\mid\mathcal{H})\le\operatorname{E}(X_2\mid\mathcal{H}).

Monotone convergence: if 0\le X_n\uparrow X, then \operatorname{E}(X_n\mid\mathcal{H})\uparrow\operatorname{E}(X\mid\mathcal{H}).

Dominated convergence: if X_n\to X and |X_n|\le Y with Y\in L^1, then \operatorname{E}(X_n\mid\mathcal{H})\to\operatorname{E}(X\mid\mathcal{H}).

Fatou's lemma: if \operatorname{E}(\inf_n X_n\mid\mathcal{H})>-\infty, then \operatorname{E}(\liminf_{n\to\infty}X_n\mid\mathcal{H})\le\liminf_{n\to\infty}\operatorname{E}(X_n\mid\mathcal{H}).

Jensen's inequality: if f\colon\mathbb{R}\to\mathbb{R} is a convex function, then f(\operatorname{E}(X\mid\mathcal{H}))\le\operatorname{E}(f(X)\mid\mathcal{H}).
Conditional variance: by analogy with the definition of the variance as the mean square deviation from the mean, the conditional variance is defined as

\operatorname{Var}(X\mid\mathcal{H})=\operatorname{E}\bigl((X-\operatorname{E}(X\mid\mathcal{H}))^2\mid\mathcal{H}\bigr).

As in the unconditional case,

\operatorname{Var}(X\mid\mathcal{H})=\operatorname{E}(X^2\mid\mathcal{H})-\bigl(\operatorname{E}(X\mid\mathcal{H})\bigr)^2,

and the law of total variance states that

\operatorname{Var}(X)=\operatorname{E}(\operatorname{Var}(X\mid\mathcal{H}))+\operatorname{Var}(\operatorname{E}(X\mid\mathcal{H})).
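A quick Monte Carlo check of the law of total variance, using a made-up two-component model (Y is a fair coin, and X given Y=y is normal with mean 2y and standard deviation 1+y); the numbers are purely illustrative:

```python
import numpy as np

# Made-up model: Y ~ Bernoulli(1/2); X | Y = y ~ Normal(2*y, (1 + y)**2).
rng = np.random.default_rng(3)
n = 1_000_000
Y = rng.integers(0, 2, n)
X = rng.normal(loc=2 * Y, scale=1 + Y)

cond_mean = np.where(Y == 1, 2.0, 0.0)   # E(X | Y)
cond_var = np.where(Y == 1, 4.0, 1.0)    # Var(X | Y)

lhs = X.var()                            # Var(X)
rhs = cond_var.mean() + cond_mean.var()  # E(Var(X|Y)) + Var(E(X|Y))
print(lhs, rhs)                          # both approximately 3.5
```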
Martingale convergence: for a random variable X with finite expectation, we have \operatorname{E}(X\mid\mathcal{H}_n)\to\operatorname{E}(X\mid\mathcal{H}) if either \mathcal{H}_1\subset\mathcal{H}_2\subset\dotsb is an increasing sequence of sub-σ-algebras and \mathcal{H}=\sigma\bigl(\bigcup_{n=1}^{\infty}\mathcal{H}_n\bigr), or \mathcal{H}_1\supset\mathcal{H}_2\supset\dotsb is a decreasing sequence of sub-σ-algebras and \mathcal{H}=\bigcap_{n=1}^{\infty}\mathcal{H}_n.
Conditional expectation as L^2-projection: if X,Y are in L^2 and Y is \mathcal{H}-measurable, then

\operatorname{E}(Y(X-\operatorname{E}(X\mid\mathcal{H})))=0,

i.e. the conditional expectation \operatorname{E}(X\mid\mathcal{H}) is, in the sense of the L^2(P) inner product, the orthogonal projection of X onto the linear subspace of \mathcal{H}-measurable functions. (This allows one to define and prove the existence of the conditional expectation via the Hilbert projection theorem.) The map X\mapsto\operatorname{E}(X\mid\mathcal{H}) is self-adjoint:

\operatorname{E}(X\operatorname{E}(Y\mid\mathcal{H}))=\operatorname{E}\bigl(\operatorname{E}(X\mid\mathcal{H})\operatorname{E}(Y\mid\mathcal{H})\bigr)=\operatorname{E}(\operatorname{E}(X\mid\mathcal{H})Y).
Conditioning is a contractive projection of L^p spaces, L^p(\Omega,\mathcal{F},P)\to L^p(\Omega,\mathcal{H},P), i.e.

\operatorname{E}(|\operatorname{E}(X\mid\mathcal{H})|^p)\le\operatorname{E}(|X|^p)

for any p\ge1.
Doob's conditional independence property: if X,Y are conditionally independent given Z, then

P(X\in B\mid Y,Z)=P(X\in B\mid Z)

for any Borel set B; equivalently, \operatorname{E}(1_{\{X\in B\}}\mid Y,Z)=\operatorname{E}(1_{\{X\in B\}}\mid Z).