In probability theory, comonotonicity mainly refers to the perfect positive dependence between the components of a random vector, essentially saying that they can be represented as increasing functions of a single random variable. In two dimensions it is also possible to consider perfect negative dependence, which is called countermonotonicity.
Comonotonicity is also related to the comonotonic additivity of the Choquet integral.
The concept of comonotonicity has applications in financial risk management and actuarial science. In particular, the sum of the components $X_1 + X_2 + \cdots + X_n$ is the riskiest if the joint probability distribution of the random vector $(X_1, X_2, \ldots, X_n)$ is comonotonic. Furthermore, the $\alpha$-quantile of the sum equals the sum of the $\alpha$-quantiles of its components; hence comonotonic random variables are quantile-additive. In practical risk management terms this means that there is minimal (or even no) variance reduction from diversification.
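As an illustration of quantile additivity, the following minimal sketch (assuming NumPy and SciPy; the log-normal and exponential marginals and the confidence level are arbitrary choices made for this example) generates a comonotonic pair and compares the empirical $\alpha$-quantile of the sum with the sum of the marginal $\alpha$-quantiles:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
u = rng.uniform(size=1_000_000)        # one common uniform driver

# Comonotonic pair: both components are increasing transforms of the same U
x = stats.lognorm(s=0.5).ppf(u)        # arbitrary marginal 1
y = stats.expon(scale=2.0).ppf(u)      # arbitrary marginal 2

alpha = 0.99
q_of_sum = np.quantile(x + y, alpha)                       # alpha-quantile of the sum
sum_of_q = np.quantile(x, alpha) + np.quantile(y, alpha)   # sum of the alpha-quantiles
print(q_of_sum, sum_of_q)              # agree up to Monte Carlo error
```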
A subset $S$ of $\mathbb{R}^n$ is called comonotonic (sometimes also nondecreasing[1]) if, for all $(x_1,\ldots,x_n)$ and $(y_1,\ldots,y_n)$ in $S$ with $x_i < y_i$ for some $i\in\{1,\ldots,n\}$, it follows that $x_j \le y_j$ for all $j\in\{1,\ldots,n\}$.
This means that $S$ is a totally ordered set with respect to the componentwise order.
Let $\mu$ be a probability measure on the $n$-dimensional Euclidean space $\mathbb{R}^n$ and let $F$ denote its multivariate cumulative distribution function, that is
F(x_1,\ldots,x_n) := \mu\bigl(\{(y_1,\ldots,y_n)\in\mathbb{R}^n \mid y_1 \le x_1,\ldots,y_n \le x_n\}\bigr), \qquad (x_1,\ldots,x_n)\in\mathbb{R}^n.
Furthermore, let $F_1,\ldots,F_n$ denote the cumulative distribution functions of the one-dimensional marginal distributions of $\mu$, that means
F_i(x) := \mu\bigl(\{(y_1,\ldots,y_n)\in\mathbb{R}^n \mid y_i \le x\}\bigr), \qquad x\in\mathbb{R}
for every $i\in\{1,\ldots,n\}$. Then $\mu$ is called comonotonic if
F(x_1,\ldots,x_n) = \min_{i\in\{1,\ldots,n\}} F_i(x_i), \qquad (x_1,\ldots,x_n)\in\mathbb{R}^n.
Note that the probability measure $\mu$ is comonotonic if and only if its support is comonotonic according to the above definition.[2]
An $\mathbb{R}^n$-valued random vector $X = (X_1,\ldots,X_n)$ is called comonotonic if its multivariate distribution (the pushforward measure) is comonotonic, which means
\Pr(X_1 \le x_1,\ldots,X_n \le x_n) = \min_{i\in\{1,\ldots,n\}} \Pr(X_i \le x_i), \qquad (x_1,\ldots,x_n)\in\mathbb{R}^n.
An $\mathbb{R}^n$-valued random vector $(X_1,\ldots,X_n)$ is comonotonic if and only if it can be represented as
(X_1,\ldots,X_n) =_{\mathrm{d}} \bigl(F_{X_1}^{-1}(U),\ldots,F_{X_n}^{-1}(U)\bigr),
where $=_{\mathrm{d}}$ stands for equality in distribution, the $F_{X_i}^{-1}$ on the right-hand side are the left-continuous generalized inverses of the cumulative distribution functions $F_{X_1},\ldots,F_{X_n}$, and $U$ is a uniformly distributed random variable on the unit interval. More generally, a random vector is comonotonic if and only if it agrees in distribution with a random vector where all components are non-decreasing functions (or all are non-increasing functions) of the same random variable.
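The representation can be mimicked on data. The following sketch (NumPy only; the empirical quantile function defined here plays the role of the left-continuous generalized inverse, and the marginal samples are arbitrary) constructs a comonotonic vector whose components are non-decreasing functions of one uniform random variable:

```python
import numpy as np

def generalized_inverse(sample, u):
    """Left-continuous generalized inverse F^{-1}(u) = inf{x : F(x) >= u}
    of the empirical distribution function of `sample`."""
    xs = np.sort(sample)
    n = len(xs)
    # F(xs[k]) = (k + 1) / n, so F(x) >= u first holds at index ceil(u * n) - 1
    idx = np.maximum(np.ceil(np.asarray(u) * n).astype(int) - 1, 0)
    return xs[idx]

rng = np.random.default_rng(seed=1)
x = rng.normal(size=10_000)            # arbitrary marginal samples
y = rng.gamma(shape=2.0, size=10_000)

u = rng.uniform(size=10_000)           # one common uniform variable
x_star = generalized_inverse(x, u)     # comonotonic vector (X*, Y*):
y_star = generalized_inverse(y, u)     # both are non-decreasing in the same U

# Sorting by U confirms that each component is non-decreasing in U
print(np.all(np.diff(x_star[np.argsort(u)]) >= 0))   # True
print(np.all(np.diff(y_star[np.argsort(u)]) >= 0))   # True
```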
Let $(X_1,\ldots,X_n)$ be an $\mathbb{R}^n$-valued random vector. Then, for every $i\in\{1,\ldots,n\}$,
\Pr(X_1 \le x_1,\ldots,X_n \le x_n) \le \Pr(X_i \le x_i), \qquad (x_1,\ldots,x_n)\in\mathbb{R}^n,
hence
\Pr(X_1 \le x_1,\ldots,X_n \le x_n) \le \min_{i\in\{1,\ldots,n\}} \Pr(X_i \le x_i), \qquad (x_1,\ldots,x_n)\in\mathbb{R}^n,
with equality everywhere if and only if $(X_1,\ldots,X_n)$ is comonotonic.
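Both the bound and the equality in the comonotonic case can be checked directly on samples. A minimal sketch (assuming NumPy and SciPy; the normal and exponential marginals and the evaluation point are arbitrary):

```python
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(seed=2)
n = 200_000
u = rng.uniform(size=n)

# Comonotonic pair with normal and exponential marginals
c1 = norm.ppf(u)
c2 = expon(scale=2.0).ppf(u)

# Independent pair with the same marginals, for comparison
i1 = norm.ppf(rng.uniform(size=n))
i2 = expon(scale=2.0).ppf(rng.uniform(size=n))

pt = (0.3, 1.5)                        # arbitrary evaluation point

def joint_and_bound(a, b):
    """Empirical joint CDF at pt and the minimum of the empirical marginal CDFs."""
    joint = np.mean((a <= pt[0]) & (b <= pt[1]))
    bound = min(np.mean(a <= pt[0]), np.mean(b <= pt[1]))
    return joint, bound

print(joint_and_bound(c1, c2))         # equality: the joint CDF attains the upper bound
print(joint_and_bound(i1, i2))         # strict inequality for the independent pair
```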
Let $(X,Y)$ be a bivariate random vector such that the expected values of $X$, of $Y$, and of the product $XY$ exist. Let $(X^*,Y^*)$ be a comonotonic bivariate random vector with the same one-dimensional marginal distributions as $(X,Y)$.[3] Then it follows from Höffding's formula for the covariance and the upper Fréchet–Hoeffding bound that
\operatorname{Cov}(X,Y) \le \operatorname{Cov}(X^*,Y^*)
and, correspondingly,
\operatorname{E}[XY] \le \operatorname{E}[X^*Y^*]
with equality if and only if $(X,Y)$ is comonotonic.
Note that this result generalizes the rearrangement inequality and Chebyshev's sum inequality.
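On a finite sample, the comonotonic counterpart $(X^*,Y^*)$ is obtained by pairing the sorted values of the two coordinates, which is exactly the coupling singled out by the rearrangement inequality. A small sketch (NumPy only; the dependent pair below is an arbitrary construction for illustration):

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n = 100_000

# An arbitrary dependent pair (X, Y)
x = rng.normal(size=n)
y = 0.3 * x + rng.gamma(shape=2.0, size=n)

# Comonotonic counterpart (X*, Y*) with the same empirical marginals:
# pair the sorted values of x with the sorted values of y
x_star = np.sort(x)
y_star = np.sort(y)

cov_xy = np.cov(x, y)[0, 1]
cov_star = np.cov(x_star, y_star)[0, 1]
print(cov_xy <= cov_star)              # True: the comonotonic coupling maximizes covariance
print(cov_xy, cov_star)
```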