In mathematics, the Bhatia–Davis inequality, named after Rajendra Bhatia and Chandler Davis, is an upper bound on the variance σ² of any bounded probability distribution on the real line.
Let m and M be the lower and upper bounds, respectively, for a set of real numbers a1, ..., an, with a particular probability distribution. Let μ be the expected value of this distribution.
Then the Bhatia–Davis inequality states:
\sigma^2 \le (M - \mu)(\mu - m).
Equality holds if and only if every aj in the set of values is equal either to M or to m.[1]
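As an illustrative numerical sketch (the sample values and probabilities below are assumptions, not from the source), the inequality and its equality condition can be checked directly for a finite distribution:

```python
import random

def bhatia_davis(values, probs):
    """Return (variance, Bhatia-Davis bound) for a finite distribution."""
    mu = sum(p * x for p, x in zip(probs, values))          # expected value
    var = sum(p * (x - mu) ** 2 for p, x in zip(probs, values))
    m, M = min(values), max(values)                         # bounds of the support
    return var, (M - mu) * (mu - m)

# Generic bounded distribution: the variance never exceeds the bound.
xs = [random.uniform(0.0, 10.0) for _ in range(6)]
ps = [1 / 6] * 6
var, bound = bhatia_davis(xs, ps)
assert var <= bound + 1e-12

# Equality case: all probability mass sits on the endpoints m and M.
var2, bound2 = bhatia_davis([0.0, 10.0], [0.3, 0.7])
assert abs(var2 - bound2) < 1e-12
```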
Since m \le A \le M, the product (M - A)(A - m) is nonnegative, so

0 \le E[(M - A)(A - m)] = -E[A^2] - mM + (m + M)\mu.

Thus,

\sigma^2 = E[A^2] - \mu^2 \le -mM + (m + M)\mu - \mu^2 = (M - \mu)(\mu - m).
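A quick numerical check of the key identity in the proof (the distribution below is an assumed example): expanding E[(M - A)(A - m)] term by term gives -E[A^2] - mM + (m + M)μ, and the quantity is nonnegative whenever the support lies in [m, M].

```python
import random

# Assumed example: five values in [2, 8] with arbitrary probabilities summing to 1.
m, M = 2.0, 8.0
xs = [random.uniform(m, M) for _ in range(5)]
ps = [0.1, 0.2, 0.3, 0.25, 0.15]

mu = sum(p * x for p, x in zip(ps, xs))                     # E[A]
lhs = sum(p * (M - x) * (x - m) for p, x in zip(ps, xs))    # E[(M-A)(A-m)]
rhs = -sum(p * x * x for p, x in zip(ps, xs)) - m * M + (m + M) * mu

assert abs(lhs - rhs) < 1e-9   # the algebraic expansion holds
assert lhs >= 0.0              # nonnegative since m <= a_i <= M
```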
If \Phi is a positive and unital linear mapping of a C*-algebra \mathcal{A} into a C*-algebra \mathcal{B}, and A is a self-adjoint element of \mathcal{A} satisfying m \le A \le M, then:

\Phi(A^2) - (\Phi A)^2 \le (M - \Phi A)(\Phi A - m).
If X is a discrete random variable such that

P(X = x_i) = p_i, \quad i = 1, \ldots, n,

then

s_n^2 = \sum_{i=1}^{n} p_i x_i^2 - \left( \sum_{i=1}^{n} p_i x_i \right)^2 \le \left( M - \sum_{i=1}^{n} p_i x_i \right) \left( \sum_{i=1}^{n} p_i x_i - m \right),

where 0 \le p_i \le 1 and \sum_{i=1}^{n} p_i = 1.
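The discrete form can be sketched directly from the moment formula above; the values x_i and probabilities p_i here are an assumed example:

```python
# Assumed example distribution: values x_i with probabilities p_i.
xs = [1.0, 3.0, 4.0, 9.0]
ps = [0.2, 0.3, 0.4, 0.1]
m, M = min(xs), max(xs)

mean = sum(p * x for p, x in zip(ps, xs))                   # sum p_i x_i
s2 = sum(p * x * x for p, x in zip(ps, xs)) - mean ** 2     # variance via moments
bound = (M - mean) * (mean - m)                             # Bhatia-Davis bound

assert abs(sum(ps) - 1.0) < 1e-12   # probabilities sum to 1
assert s2 <= bound + 1e-12          # the inequality holds
```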
The Bhatia–Davis inequality is stronger than Popoviciu's inequality on variances (note, however, that Popoviciu's inequality does not require knowledge of the expectation or mean), as can be seen from the conditions for equality. Equality holds in Popoviciu's inequality if and only if half of the aj are equal to the upper bound M and half are equal to the lower bound m. Additionally, Sharma[2] has made further refinements of the Bhatia–Davis inequality.
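The comparison with Popoviciu's bound (M - m)^2/4 follows from the AM–GM inequality, since (M - \mu)(\mu - m) \le \left(\frac{(M-\mu)+(\mu-m)}{2}\right)^2 = \frac{(M-m)^2}{4}. A minimal sketch, with an assumed interval [0, 10] and sample means:

```python
# Bhatia-Davis bound (M - mu)(mu - m) vs. Popoviciu's bound (M - m)^2 / 4.
m, M = 0.0, 10.0                 # assumed bounds for illustration
popoviciu = (M - m) ** 2 / 4

for mu in [1.0, 2.5, 5.0, 7.5, 9.0]:
    bd = (M - mu) * (mu - m)     # Bhatia-Davis bound depends on the mean
    assert bd <= popoviciu + 1e-12

# The two bounds coincide exactly when the mean is the midpoint (m + M) / 2.
assert abs((M - 5.0) * (5.0 - m) - popoviciu) < 1e-12
```

This makes concrete the sense in which Bhatia–Davis is stronger: it uses the mean to tighten the bound, collapsing to Popoviciu's bound only in the symmetric case.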