In probability theory and statistics, two real-valued random variables, X, Y, are said to be uncorrelated if their covariance, \operatorname{cov}[X,Y]=\operatorname{E}[XY]-\operatorname{E}[X]\operatorname{E}[Y], is zero.
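As a quick numerical illustration of this identity (an added sketch, not part of the original text; the variable names, distributions, and sample size are arbitrary), the covariance can be estimated from samples both directly as \operatorname{E}[XY]-\operatorname{E}[X]\operatorname{E}[Y] and with NumPy's built-in estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 2 * x + rng.normal(size=100_000)   # linearly related, so correlated

# Sample version of cov[X, Y] = E[XY] - E[X]E[Y]
cov_manual = np.mean(x * y) - np.mean(x) * np.mean(y)

# NumPy's estimator; bias=True uses the 1/n normalization matching the identity above
cov_numpy = np.cov(x, y, bias=True)[0, 1]

print(cov_manual, cov_numpy)  # both close to 2 = 2 * var(X)
```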
Uncorrelated random variables have a Pearson correlation coefficient, when it exists, of zero, except in the trivial case when either variable has zero variance (is a constant). In this case the correlation is undefined.
In general, uncorrelatedness is not the same as orthogonality, except in the special case where at least one of the two random variables has an expected value of 0. In this case, the covariance is the expectation of the product, and X and Y are uncorrelated if and only if \operatorname{E}[XY]=0.
If X and Y are independent, with finite second moments, then they are uncorrelated. However, not all uncorrelated variables are independent.
Two random variables X, Y are called uncorrelated if their covariance \operatorname{Cov}[X,Y]=\operatorname{E}[(X-\operatorname{E}[X])(Y-\operatorname{E}[Y])] is zero.
Two complex random variables Z, W are called uncorrelated if their covariance \operatorname{K}_{ZW}=\operatorname{E}[(Z-\operatorname{E}[Z])\overline{(W-\operatorname{E}[W])}] and their pseudo-covariance \operatorname{J}_{ZW}=\operatorname{E}[(Z-\operatorname{E}[Z])(W-\operatorname{E}[W])] are both zero, i.e.

Z,W \text{ uncorrelated} \iff \operatorname{E}[Z\overline{W}]=\operatorname{E}[Z]\cdot\operatorname{E}[\overline{W}] \text{ and } \operatorname{E}[ZW]=\operatorname{E}[Z]\cdot\operatorname{E}[W]
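A minimal NumPy sketch of these two quantities (an added illustration, not from the original article; the construction of Z and W is arbitrary) estimates the covariance with the conjugate and the pseudo-covariance without it:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Two complex random variables built from independent real Gaussians,
# so both K_ZW and J_ZW should be (approximately) zero.
z = rng.normal(size=n) + 1j * rng.normal(size=n)
w = rng.normal(size=n) + 1j * rng.normal(size=n)

zc = z - z.mean()
wc = w - w.mean()

K = np.mean(zc * np.conj(wc))  # covariance        E[(Z-E[Z]) conj(W-E[W])]
J = np.mean(zc * wc)           # pseudo-covariance E[(Z-E[Z])(W-E[W])]

print(K, J)  # both close to 0, so Z and W are (empirically) uncorrelated
```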
A set of two or more random variables X_1,\ldots,X_n is called uncorrelated if each pair of them is uncorrelated. This is equivalent to the requirement that the non-diagonal elements of the autocovariance matrix \operatorname{K}_{XX} of the random vector X=(X_1,\ldots,X_n)^T are zero. The autocovariance matrix is defined as:

\operatorname{K}_{XX}=\operatorname{cov}[X,X]=\operatorname{E}[(X-\operatorname{E}[X])(X-\operatorname{E}[X])^{\rm T}]=\operatorname{E}[XX^T]-\operatorname{E}[X]\operatorname{E}[X]^T
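The following sketch (an added illustration with arbitrary dimensions and distribution) builds the autocovariance matrix from samples via \operatorname{E}[XX^T]-\operatorname{E}[X]\operatorname{E}[X]^T; for a pairwise-uncorrelated random vector the off-diagonal entries should be near zero.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 100_000, 3

# Each row is one realization of X = (X_1, X_2, X_3)^T with independent components,
# so the components are pairwise uncorrelated.
X = rng.normal(size=(n, d))

# K_XX = E[X X^T] - E[X] E[X]^T, estimated with sample means
mean = X.mean(axis=0)
K = (X.T @ X) / n - np.outer(mean, mean)

print(np.round(K, 3))  # off-diagonal entries near 0, diagonal near the variances
```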
See main article: Correlation and dependence.
Let X be a random variable that takes the value 0 with probability 1/2 and the value 1 with probability 1/2.

Let Y be a random variable, independent of X, that takes the value -1 with probability 1/2 and the value 1 with probability 1/2.

Let U be a random variable constructed as U=XY.

The claim is that U and X have zero covariance (and thus are uncorrelated), but are not independent.
Proof:
Taking into account that

\operatorname{E}[U]=\operatorname{E}[XY]=\operatorname{E}[X]\operatorname{E}[Y]=\operatorname{E}[X]\cdot 0=0,

where the middle equality holds because X and Y are independent, one gets
\begin{align} \operatorname{cov}[U,X]&=\operatorname{E}[(U-\operatorname{E}[U])(X-\operatorname{E}[X])]=\operatorname{E}[U(X-\tfrac12)]\\ &=\operatorname{E}[X^2Y-\tfrac12 XY]=\operatorname{E}[(X^2-\tfrac12 X)Y]=\operatorname{E}[(X^2-\tfrac12 X)]\operatorname{E}[Y]=0 \end{align}
Therefore, U and X are uncorrelated.
Independence of U and X means that, for all a and b, \Pr(U=a\mid X=b)=\Pr(U=a). This does not hold here; in particular, it fails for a=1 and b=0:
\Pr(U=1\mid X=0)=\Pr(XY=1\mid X=0)=0

\Pr(U=1)=\Pr(XY=1)=1/4

\Pr(U=1\mid X=0)\ne\Pr(U=1)
so U and X are not independent.
Q.E.D.
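A Monte Carlo check may make the two claims concrete (this simulation is an addition, not part of the proof): the sample covariance of U and X is near zero, while the conditional probability \Pr(U=1\mid X=0) clearly differs from \Pr(U=1).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

x = rng.integers(0, 2, size=n)      # X in {0, 1}, each with probability 1/2
y = rng.choice([-1, 1], size=n)     # Y in {-1, 1}, independent of X
u = x * y                           # U = XY

# Uncorrelated: sample covariance of U and X is close to 0
print(np.cov(u, x, bias=True)[0, 1])

# Not independent: Pr(U=1 | X=0) = 0 while Pr(U=1) = 1/4
print(np.mean(u[x == 0] == 1), np.mean(u == 1))
```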
If X is a continuous random variable uniformly distributed on [-1,1] and Y=X^2, then X and Y are uncorrelated even though X determines Y and a particular value of Y can be produced by only one or two values of X:

f_X(t)={1\over2}I_{[-1,1]}; \qquad f_Y(t)={1\over{2\sqrt{t}}}I_{]0,1]}
on the other hand, f_{X,Y} is zero on the triangle defined by 0<X<Y<1, although f_X\times f_Y is not zero on this region. Therefore f_{X,Y}(X,Y)\ne f_X(X)\times f_Y(Y) and the variables are not independent.
\operatorname{E}[X]=\frac{1^2-(-1)^2}{4}=0; \qquad \operatorname{E}[Y]=\frac{1^3-(-1)^3}{3\times 2}=\frac{1}{3}

\operatorname{Cov}[X,Y]=\operatorname{E}\left[(X-\operatorname{E}[X])(Y-\operatorname{E}[Y])\right]=\operatorname{E}\left[X^3-\frac{X}{3}\right]=\frac{1^4-(-1)^4}{4\times 2}=0
Therefore the variables are uncorrelated.
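As an added numerical check (not part of the original example), the sketch below draws X uniformly on [-1,1], sets Y=X^2, and confirms that the sample covariance is near zero even though Y is a deterministic function of X.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000

x = rng.uniform(-1.0, 1.0, size=n)
y = x ** 2                          # Y is completely determined by X

# Sample covariance is close to 0, so X and Y are uncorrelated
print(np.cov(x, y, bias=True)[0, 1])

# Yet they are clearly dependent: conditioning on X changes the distribution of Y,
# e.g. E[Y | X > 0.9] is far from the unconditional E[Y] = 1/3
print(np.mean(y[x > 0.9]), np.mean(y))
```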
There are cases in which uncorrelatedness does imply independence. One of these cases is the one in which both random variables are two-valued (so each can be linearly transformed to have a Bernoulli distribution).[3] Further, two jointly normally distributed random variables are independent if they are uncorrelated,[4] although this does not hold for variables whose marginal distributions are normal and uncorrelated but whose joint distribution is not joint normal (see Normally distributed and uncorrelated does not imply independent).
Two random vectors X=(X_1,\ldots,X_m)^T and Y=(Y_1,\ldots,Y_n)^T are called uncorrelated if

\operatorname{E}[XY^T]=\operatorname{E}[X]\operatorname{E}[Y]^T.

They are uncorrelated if and only if their cross-covariance matrix \operatorname{K}_{XY} is zero.
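To make the vector condition concrete, here is a short sketch (an added illustration with arbitrary dimensions and distributions) that estimates the cross-covariance matrix \operatorname{K}_{XY}=\operatorname{E}[XY^T]-\operatorname{E}[X]\operatorname{E}[Y]^T; for uncorrelated vectors every entry should be near zero.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, k = 100_000, 2, 3

# Independent random vectors X (length m) and Y (length k), one realization per row,
# so all cross-covariances should vanish.
X = rng.normal(size=(n, m))
Y = rng.normal(size=(n, k))

# K_XY = E[X Y^T] - E[X] E[Y]^T, estimated with sample means
K_XY = (X.T @ Y) / n - np.outer(X.mean(axis=0), Y.mean(axis=0))

print(np.round(K_XY, 3))  # an m-by-k matrix with all entries near 0
```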
Two complex random vectors Z and W are called uncorrelated if their cross-covariance matrix and their pseudo-cross-covariance matrix are both zero, i.e. if \operatorname{K}_{ZW}=\operatorname{J}_{ZW}=0, where

\operatorname{K}_{ZW}=\operatorname{E}[(Z-\operatorname{E}[Z]){(W-\operatorname{E}[W])}^H]

and

\operatorname{J}_{ZW}=\operatorname{E}[(Z-\operatorname{E}[Z]){(W-\operatorname{E}[W])}^T].
Two stochastic processes \left\{X_t\right\} and \left\{Y_t\right\} are called uncorrelated if their cross-covariance

\operatorname{K}_{XY}(t_1,t_2)=\operatorname{E}\left[\left(X(t_1)-\mu_X(t_1)\right)\left(Y(t_2)-\mu_Y(t_2)\right)\right]

is zero for all times. Formally:

\left\{X_t\right\},\left\{Y_t\right\} \text{ uncorrelated} \quad :\iff \quad \forall t_1,t_2\colon \operatorname{K}_{XY}(t_1,t_2)=0.