Observed information

In statistics, the observed information, or observed Fisher information, is the negative of the second derivative (the Hessian matrix) of the log-likelihood (the logarithm of the likelihood function). It is a sample-based version of the Fisher information.

Definition

Suppose we observe random variables $X_1,\ldots,X_n$, independent and identically distributed with density $f(X;\theta)$, where $\theta$ is a (possibly unknown) vector. Then the log-likelihood of the parameters $\theta$ given the data $X_1,\ldots,X_n$ is

$$\ell(\theta \mid X_1,\ldots,X_n) = \sum_{i=1}^n \log f(X_i \mid \theta).$$

We define the observed information matrix at $\theta^{*}$ as

$$\mathcal{J}(\theta^{*}) = -\left.\nabla\nabla^{\top}\ell(\theta)\right|_{\theta=\theta^{*}}
= -\left.\begin{pmatrix}
\tfrac{\partial^2}{\partial\theta_1^2} & \tfrac{\partial^2}{\partial\theta_1\,\partial\theta_2} & \cdots & \tfrac{\partial^2}{\partial\theta_1\,\partial\theta_p} \\
\tfrac{\partial^2}{\partial\theta_2\,\partial\theta_1} & \tfrac{\partial^2}{\partial\theta_2^2} & \cdots & \tfrac{\partial^2}{\partial\theta_2\,\partial\theta_p} \\
\vdots & \vdots & \ddots & \vdots \\
\tfrac{\partial^2}{\partial\theta_p\,\partial\theta_1} & \tfrac{\partial^2}{\partial\theta_p\,\partial\theta_2} & \cdots & \tfrac{\partial^2}{\partial\theta_p^2}
\end{pmatrix}\ell(\theta)\right|_{\theta=\theta^{*}}.$$
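In practice the observed information matrix is often obtained by numerically differentiating the log-likelihood. The following sketch (hypothetical data and a normal model, not part of the original article) approximates the negative Hessian by central finite differences:

```python
import math

def loglik(theta, data):
    """Log-likelihood of an i.i.d. N(mu, sigma^2) sample; theta = (mu, sigma)."""
    mu, sigma = theta
    return sum(-math.log(sigma) - 0.5 * math.log(2 * math.pi)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)

def observed_information(theta, data, h=1e-5):
    """Negative Hessian of the log-likelihood, by central finite differences."""
    p = len(theta)
    J = [[0.0] * p for _ in range(p)]
    for i in range(p):
        for j in range(p):
            t = [list(theta) for _ in range(4)]
            t[0][i] += h; t[0][j] += h   # theta + h*e_i + h*e_j
            t[1][i] += h; t[1][j] -= h
            t[2][i] -= h; t[2][j] += h
            t[3][i] -= h; t[3][j] -= h
            d2 = (loglik(t[0], data) - loglik(t[1], data)
                  - loglik(t[2], data) + loglik(t[3], data)) / (4 * h * h)
            J[i][j] = -d2                # observed information = minus the Hessian
    return J

data = [1.2, 0.8, 1.5, 0.9, 1.1]         # hypothetical sample
J = observed_information([1.0, 0.5], data)
# Analytically, J[0][0] = n / sigma^2 = 5 / 0.25 = 20 at theta = (1.0, 0.5)
```

The resulting matrix is symmetric, as the Hessian of a twice-differentiable log-likelihood must be.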

Since the inverse of the information matrix is the asymptotic covariance matrix of the corresponding maximum-likelihood estimator, the observed information is often evaluated at the maximum-likelihood estimate for the purpose of significance testing or confidence-interval construction.[1] The invariance property of maximum-likelihood estimators allows the observed information matrix to be evaluated before being inverted.
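For a concrete scalar case (hypothetical count data, not from the source), consider a Poisson model: the observed information at the MLE has the closed form $\mathcal{J}(\hat\theta) = n/\bar{x}$, and an approximate confidence interval follows by inverting it:

```python
import math

data = [3, 5, 2, 4, 6, 3, 4, 5]      # hypothetical Poisson counts
n = len(data)
theta_hat = sum(data) / n             # MLE of the Poisson rate: the sample mean

# Observed information J(theta) = sum(x_i) / theta^2; at the MLE this is n / mean
J = sum(data) / theta_hat ** 2

se = 1.0 / math.sqrt(J)               # asymptotic standard error of theta_hat
ci = (theta_hat - 1.96 * se, theta_hat + 1.96 * se)   # approximate 95% interval
```

Here the inverse observed information $1/\mathcal{J}(\hat\theta) = \bar{x}/n$ plays the role of the estimated variance of $\hat\theta$, matching the asymptotic covariance described above.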

Alternative definition

Andrew Gelman, David Dunson and Donald Rubin[2] define observed information instead in terms of the parameters' posterior probability, $p(\theta \mid y)$:

$$I(\theta) = -\frac{d^2}{d\theta^2} \log p(\theta \mid y).$$
Fisher information

The Fisher information $\mathcal{I}(\theta)$ is the expected value of the observed information given a single observation $X$ distributed according to the hypothetical model with parameter $\theta$:

$$\mathcal{I}(\theta) = \mathrm{E}\bigl(\mathcal{J}(\theta)\bigr).$$
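This identity can be checked directly in a simple case (a single Bernoulli observation; illustrative, not from the source): averaging the observed information over the model distribution recovers the closed-form Fisher information $1/(\theta(1-\theta))$:

```python
def observed_info(x, theta):
    """Observed information of one Bernoulli(theta) observation x in {0, 1}:
    -(d^2/dtheta^2) [x*log(theta) + (1 - x)*log(1 - theta)]."""
    return x / theta ** 2 + (1 - x) / (1 - theta) ** 2

theta = 0.3
# Expected value of the observed information over X ~ Bernoulli(theta)
expected = theta * observed_info(1, theta) + (1 - theta) * observed_info(0, theta)
fisher = 1 / (theta * (1 - theta))   # closed-form Fisher information
```

Unlike the Fisher information, the observed information depends on the realized data $x$; only its expectation is data-free.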

Comparison with the expected information

The comparison between the observed information and the expected information remains an active area of research and debate. Efron and Hinkley[3] provided a frequentist justification for preferring the observed information to the expected information when employing normal approximations to the distribution of the maximum-likelihood estimator in one-parameter families, in the presence of an ancillary statistic that affects the precision of the MLE. Lindsay and Li showed that the observed information matrix gives the minimum mean squared error as an approximation of the true information if an error term of $O(n^{-3/2})$ is ignored.[4] In Lindsay and Li's case, the expected information matrix still requires evaluation at the obtained ML estimates, introducing randomness.

However, when the construction of confidence intervals is the primary focus, there are reported findings that the expected information outperforms the observed counterpart. Yuan and Spall showed that the expected information outperforms the observed counterpart for confidence-interval construction of scalar parameters in the mean-squared-error sense.[5] This finding was later generalized to multiparameter cases, although the claim was weakened to the expected information matrix performing at least as well as the observed information matrix.[6]

Notes and References

  1. Dodge, Y. (2003). The Oxford Dictionary of Statistical Terms. Oxford University Press.
  2. Gelman, A.; Carlin, J.B.; Stern, H.S.; Dunson, D.B.; Vehtari, A.; Rubin, D.B. (2014). Bayesian Data Analysis (3rd ed.), p. 84.
  3. Efron, B.; Hinkley, D.V. (1978). "Assessing the accuracy of the maximum likelihood estimator: Observed versus expected Fisher information". Biometrika, 65(3), 457–487. doi:10.1093/biomet/65.3.457.
  4. Lindsay, B.G.; Li, B. (1997). "On second-order optimality of the observed Fisher information". The Annals of Statistics, 25(5). doi:10.1214/aos/1069362393.
  5. Yuan, X.; Spall, J.C. (2020). "Confidence Intervals with Expected and Observed Fisher Information in the Scalar Case". 2020 American Control Conference (ACC), pp. 2599–2604. doi:10.23919/ACC45564.2020.9147324.
  6. Jiang, S.; Spall, J.C. (2021). "Comparison between Expected and Observed Fisher Information in Interval Estimation". 2021 55th Annual Conference on Information Sciences and Systems (CISS), pp. 1–6. doi:10.1109/CISS50987.2021.9400253.