Matrix of second derivatives of the log-likelihood function
In statistics, the observed information, or observed Fisher information, is the negative of the second derivative (the Hessian matrix) of the "log-likelihood" (the logarithm of the likelihood function). It is a sample-based version of the Fisher information.
Definition
Suppose we observe random variables $X_1,\ldots,X_n$, independent and identically distributed with density $f(X; \theta)$, where $\theta$ is a (possibly unknown) vector. Then the log-likelihood of the parameters $\theta$ given the data $X_1,\ldots,X_n$ is

$$\ell(\theta \mid X_1,\ldots,X_n) = \sum_{i=1}^{n} \log f(X_i \mid \theta) .$$

We define the observed information matrix at $\theta^{*}$ as

$$J(\theta^{*}) = - \left. \nabla \nabla^{\mathsf{T}} \ell(\theta) \right|_{\theta = \theta^{*}}
= - \left. \begin{pmatrix}
\dfrac{\partial^{2}\ell}{\partial\theta_{1}^{2}} & \dfrac{\partial^{2}\ell}{\partial\theta_{1}\,\partial\theta_{2}} & \cdots & \dfrac{\partial^{2}\ell}{\partial\theta_{1}\,\partial\theta_{p}} \\
\vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial^{2}\ell}{\partial\theta_{p}\,\partial\theta_{1}} & \dfrac{\partial^{2}\ell}{\partial\theta_{p}\,\partial\theta_{2}} & \cdots & \dfrac{\partial^{2}\ell}{\partial\theta_{p}^{2}}
\end{pmatrix} \right|_{\theta = \theta^{*}} .$$
Since the inverse of the information matrix is the asymptotic covariance matrix of the corresponding maximum-likelihood estimator, the observed information is often evaluated at the maximum-likelihood estimate for the purpose of significance testing or confidence-interval construction. The invariance property of maximum-likelihood estimators allows the observed information matrix to be evaluated before being inverted.
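The definition can be sketched numerically. The following is a minimal illustration, assuming an i.i.d. Exponential(rate $\theta$) sample (a model chosen here only for concreteness), for which $\ell(\theta) = n\log\theta - \theta\sum_i x_i$ and hence $J(\theta) = n/\theta^{2}$:

```python
import numpy as np

# Illustrative model (an assumption, not from the definition above):
# i.i.d. Exponential with rate theta, so l(theta) = n*log(theta) - theta*sum(x).
def loglik(theta, x):
    return len(x) * np.log(theta) - theta * x.sum()

def observed_information(theta, x):
    # J(theta) = -l''(theta) = n / theta**2 for this model
    return len(x) / theta**2

rng = np.random.default_rng(0)
x = rng.exponential(scale=0.5, size=500)   # true rate theta = 2
theta_hat = len(x) / x.sum()               # MLE of the rate

J = observed_information(theta_hat, x)     # observed information at the MLE

# Cross-check with a central second difference of the log-likelihood.
h = 1e-4
J_numeric = -(loglik(theta_hat + h, x) - 2 * loglik(theta_hat, x)
              + loglik(theta_hat - h, x)) / h**2

print(J, J_numeric)   # the analytic and finite-difference values agree closely
```

The finite-difference check is useful in practice because, for less tractable models, the observed information is often obtained by numerically differentiating the log-likelihood at the MLE.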
Alternative definition
Andrew Gelman, David Dunson and Donald Rubin[2] define observed information instead in terms of the parameters' posterior probability, $p(\theta \mid y)$:

$$I(\theta) = - \frac{d^{2}}{d\theta^{2}} \log p(\theta \mid y) .$$
Fisher information
The Fisher information $\mathcal{I}(\theta)$ is the expected value of the observed information given a single observation $X$ distributed according to the hypothetical model with parameter $\theta$:

$$\mathcal{I}(\theta) = \mathrm{E}\left[ J(\theta) \right] .$$
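This expectation can be checked by Monte Carlo. The sketch below assumes an i.i.d. Poisson($\lambda$) sample of size $n$ (an illustrative choice): there $\ell''(\lambda) = -\sum_i x_i/\lambda^{2}$, so $J(\lambda) = \sum_i x_i/\lambda^{2}$ varies with the data, while its expectation is $\mathcal{I}(\lambda) = n/\lambda$:

```python
import numpy as np

# Monte Carlo check that E[J(lam)] = I(lam) for the Poisson model
# (the model and all parameter values are illustrative assumptions).
rng = np.random.default_rng(1)
lam, n, reps = 3.0, 50, 20000

samples = rng.poisson(lam, size=(reps, n))
J_values = samples.sum(axis=1) / lam**2    # observed information, one per sample

fisher_exact = n / lam                     # I(lam) = n/lam for this model
fisher_mc = J_values.mean()                # Monte Carlo estimate of E[J(lam)]

print(fisher_exact, fisher_mc)
```

Each simulated sample yields a different observed information; averaging over many samples recovers the Fisher information, which is exactly the relationship the formula states.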
Comparison with the expected information
The comparison between the observed information and the expected information remains an active area of research and debate. Efron and Hinkley[3] provided a frequentist justification for preferring the observed information to the expected information when employing normal approximations to the distribution of the maximum-likelihood estimator in one-parameter families in the presence of an ancillary statistic that affects the precision of the MLE. Lindsay and Li showed that the observed information matrix gives the minimum mean squared error as an approximation of the true information if an error term of $O(n^{-3/2})$ is ignored.[4] In Lindsay and Li's case, the expected information matrix still requires evaluation at the obtained ML estimates, introducing randomness.
However, when the construction of confidence intervals is the primary focus, there are reported findings that the expected information can outperform the observed counterpart. Specifically, Yuan and Spall showed that the expected information outperforms the observed information for confidence-interval construction of scalar parameters in the mean squared error sense.[5] This finding was later generalized to multiparameter cases, although the claim was weakened to the expected information matrix performing at least as well as the observed information matrix.[6]
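The contrast can be explored by simulation. The sketch below is only an illustration (it does not reproduce the cited studies): it compares Wald intervals for the Cauchy location parameter built from the observed information $J(\hat\theta)$ with those built from the expected information $\mathcal{I}(\hat\theta) = n/2$, a model in which the two genuinely differ from sample to sample:

```python
import numpy as np

# Illustrative Monte Carlo comparison (an assumption-laden sketch, not a
# reproduction of the cited studies): Wald intervals for the Cauchy location
# parameter using the observed information J(theta_hat) versus the expected
# information I(theta_hat) = n/2.
rng = np.random.default_rng(2)
theta0, n, reps, z = 0.0, 20, 1000, 1.96
grid = np.linspace(-4.0, 4.0, 4001)        # crude grid search for the MLE

cover_obs = cover_exp = 0
for _ in range(reps):
    x = rng.standard_cauchy(n) + theta0
    # log-likelihood over the grid; the maximizer is taken as the MLE
    ll = -np.log1p((x[None, :] - grid[:, None]) ** 2).sum(axis=1)
    theta_hat = grid[np.argmax(ll)]
    u = x - theta_hat
    # l''(theta) = sum 2(u^2 - 1)/(1 + u^2)^2, so J = -l''
    J = -np.sum(2.0 * (u**2 - 1.0) / (1.0 + u**2) ** 2)
    I = n / 2.0                            # expected information for Cauchy location
    if J > 0:                              # J > 0 at an interior maximum
        cover_obs += abs(theta_hat - theta0) <= z / np.sqrt(J)
    cover_exp += abs(theta_hat - theta0) <= z / np.sqrt(I)

print(cover_obs / reps, cover_exp / reps)  # empirical coverage of each interval
```

Both interval types target the same nominal level, but because the observed information fluctuates with the sample while the expected information is fixed at $n/2$, their realized widths and coverages differ, which is the kind of trade-off the studies above quantify.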
See also
References