Score test
In statistics, the score test (also known as the Lagrange multiplier test in econometrics) assesses constraints on statistical parameters based on the gradient of the likelihood function (known as the score) evaluated at the hypothesized parameter value under the null hypothesis. Intuitively, if the restricted estimator is near the maximum of the likelihood function, the score should not differ from zero by more than sampling error. While the finite-sample distributions of score tests are generally unknown, the statistic has an asymptotic chi-squared distribution under the null hypothesis, as first proved by C. R. Rao in 1948, a fact that can be used to determine statistical significance.
Since function maximization subject to equality constraints is most conveniently done using a Lagrangean expression of the problem, the score test can be equivalently understood as a test of the magnitude of the Lagrange multipliers associated with the constraints: if the constraints are non-binding at the maximum likelihood, the vector of Lagrange multipliers should not differ from zero by more than sampling error. The equivalence of these two approaches was first shown by S. D. Silvey in 1959, which led to the name Lagrange multiplier (LM) test.
The main advantage of the score test over the Wald test and the likelihood ratio test is that the score test only requires the computation of the restricted estimator. This makes testing feasible even when the unconstrained maximum likelihood estimate is difficult to compute or lies on the boundary of the parameter space.
Single-parameter test
The statistic
Let $L(\theta \mid x)$ be the likelihood function which depends on a univariate parameter $\theta$ and let $x$ be the data. The score $U(\theta)$ is defined as

$$U(\theta) = \frac{\partial \log L(\theta \mid x)}{\partial \theta}.$$
The Fisher information is[6]

$$I(\theta) = -\operatorname{E}\left[\left.\frac{\partial^2}{\partial\theta^2} \log f(X;\theta)\,\right|\,\theta\right],$$

where $f$ is the probability density.
The statistic to test $H_0 : \theta = \theta_0$ is

$$S(\theta_0) = \frac{U(\theta_0)^2}{I(\theta_0)},$$
which has an asymptotic distribution of $\chi^2_1$ when $H_0$ is true. While asymptotically identical, calculating the LM statistic using the outer-gradient-product estimator of the Fisher information matrix can lead to bias in small samples.[7]
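As a concrete illustration, consider the score test for a binomial proportion under $H_0 : p = p_0$. The following minimal sketch is not from the article; the model choice, function name, and numbers are assumptions made here for illustration. With $L(p) \propto p^x (1-p)^{n-x}$, the score is $U(p) = x/p - (n-x)/(1-p)$ and the Fisher information is $I(p) = n/\bigl(p(1-p)\bigr)$.

```python
# A minimal sketch (illustrative example, not from the article): the score
# test for a binomial proportion, H0: p = p0.
from scipy.stats import chi2

def score_test_binomial(x, n, p0):
    """Return the score statistic S(p0) and its asymptotic chi2_1 p-value."""
    U = x / p0 - (n - x) / (1 - p0)   # score evaluated at the null value
    I = n / (p0 * (1 - p0))           # Fisher information at p0
    S = U**2 / I                      # simplifies to (x - n*p0)^2 / (n*p0*(1-p0))
    return S, chi2.sf(S, df=1)

# Example: 62 successes in 100 trials against H0: p = 0.5
S, pval = score_test_binomial(62, 100, 0.5)
print(S, pval)  # S = 5.76
```

Note that here $S(p_0) = (x - np_0)^2 / \bigl(np_0(1-p_0)\bigr)$, which coincides with Pearson's chi-squared statistic for the two-cell table, anticipating the binary-data special case discussed below.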
Note on notation
Note that some texts use an alternative notation, in which the statistic $S^*(\theta) = \sqrt{S(\theta)}$ is tested against a normal distribution. This approach is equivalent and gives identical results.
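A short sketch of this equivalence, continuing the binomial example above (again an illustration, not from the article): since $P(\chi^2_1 > s) = P(|Z| > \sqrt{s})$ for a standard normal $Z$, the two-sided normal p-value of $S^*$ matches the chi-squared p-value of $S$.

```python
# The signed square root S* = U / sqrt(I) can be referred to a standard
# normal; its two-sided p-value equals the chi-squared p-value of S = (S*)^2.
import math
from scipy.stats import chi2, norm

S = 5.76                             # statistic from the binomial example
p_chi2 = chi2.sf(S, df=1)            # chi-squared form
p_norm = 2 * norm.sf(math.sqrt(S))   # normal form, two-sided
print(abs(p_chi2 - p_norm) < 1e-12)  # True: the two approaches agree
```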
As most powerful test for small deviations
The score test rejects $H_0$ when

$$\left(\frac{\partial \log L(\theta \mid x)}{\partial \theta}\right)_{\theta = \theta_0} \geq C,$$

where $L$ is the likelihood function, $\theta_0$ is the value of the parameter of interest under the null hypothesis, and $C$ is a constant set depending on the size of the test desired (i.e. the probability of rejecting $H_0$ if $H_0$ is true; see Type I error).
The score test is the most powerful test for small deviations from $\theta_0$. To see this, consider testing $\theta = \theta_0$ versus $\theta = \theta_0 + h$. By the Neyman–Pearson lemma, the most powerful test has the form

$$\frac{L(\theta_0 + h \mid x)}{L(\theta_0 \mid x)} \geq K.$$
Taking the log of both sides yields

$$\log L(\theta_0 + h \mid x) - \log L(\theta_0 \mid x) \geq \log K.$$
The score test follows upon making the substitution (by Taylor series expansion)

$$\log L(\theta_0 + h \mid x) \approx \log L(\theta_0 \mid x) + h \left(\frac{\partial \log L(\theta \mid x)}{\partial \theta}\right)_{\theta = \theta_0}$$
and identifying the $C$ above with $\log K$.
Relationship with other hypothesis tests
If the null hypothesis is true, the likelihood ratio test, the Wald test, and the score test are asymptotically equivalent tests of hypotheses.[8][9] When testing nested models, the statistics for each test then converge to a chi-squared distribution with degrees of freedom equal to the difference in degrees of freedom in the two models. If the null hypothesis is not true, however, the statistics converge to a noncentral chi-squared distribution with possibly different noncentrality parameters.
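The following sketch illustrates this asymptotic equivalence in a setting chosen here for simplicity (the i.i.d. Poisson model and all numbers are assumptions of this example, not from the article). For $H_0 : \lambda = \lambda_0$ with $\hat\lambda = \bar x$, all three statistics are simple closed forms and take similar values in moderate samples.

```python
# A minimal sketch: likelihood ratio, Wald, and score statistics for
# H0: lambda = lam0 with i.i.d. Poisson(lambda) data.  Each is asymptotically
# chi2_1 under H0.  Log-likelihood: l(lam) = sum(x)*log(lam) - n*lam + const.
import numpy as np

def three_tests_poisson(x, lam0):
    n, s = len(x), np.sum(x)
    lam_hat = s / n
    lr    = 2 * (s * np.log(lam_hat / lam0) - n * (lam_hat - lam0))  # 2*(l(hat) - l(0))
    wald  = (lam_hat - lam0)**2 * n / lam_hat                        # uses I(lam_hat) = n/lam_hat
    score = (lam_hat - lam0)**2 * n / lam0                           # uses I(lam0)    = n/lam0
    return lr, wald, score

rng = np.random.default_rng(0)
x = rng.poisson(1.2, size=200)
print(three_tests_poisson(x, lam0=1.0))  # three nearby values
```

Note that the Wald statistic evaluates the information at the unrestricted estimate while the score statistic evaluates it at the null value; the likelihood ratio statistic uses both fits. This is exactly why the three can differ in finite samples yet agree asymptotically.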
Multiple parameters
A more general score test can be derived when there is more than one parameter. Suppose that $\hat{\theta}_0$ is the maximum likelihood estimate of $\theta$ under the null hypothesis $H_0$, while $U$ and $I$ are respectively the score vector and the Fisher information matrix. Then

$$U^{\mathsf{T}}(\hat{\theta}_0)\, I^{-1}(\hat{\theta}_0)\, U(\hat{\theta}_0) \sim \chi^2_k$$
asymptotically under $H_0$, where $k$ is the number of constraints imposed by the null hypothesis and

$$U(\hat{\theta}_0) = \frac{\partial \log L(\hat{\theta}_0 \mid x)}{\partial \theta}$$
and

$$I(\hat{\theta}_0) = -\operatorname{E}\left(\frac{\partial^2 \log L(\hat{\theta}_0 \mid x)}{\partial \theta \, \partial \theta^{\mathsf{T}}}\right).$$
This can be used to test $H_0$.
The actual formula for the test statistic depends on which estimator of the Fisher information matrix is being used.[10]
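A minimal sketch of the multi-parameter statistic $U^{\mathsf{T}} I^{-1} U$ follows; the logistic-regression setting, function names, and data below are assumptions of this example, not from the article. Testing $H_0$: all slope coefficients equal zero (intercept unrestricted) is convenient because the restricted MLE sets every fitted probability to $\bar y$, so no iterative fitting is needed.

```python
# A sketch of the LM (score) test in logistic regression for H0: all slopes = 0.
# At the restricted MLE the fitted probability is constant, p0 = mean(y), so the
# score vector and expected information have closed forms.
import numpy as np
from scipy.stats import chi2

def lm_test_logistic(X, y):
    """X: (n, k) covariates without intercept column; y: (n,) binary 0/1."""
    n, k = X.shape
    Z = np.column_stack([np.ones(n), X])  # design matrix with intercept
    p0 = y.mean()                         # restricted fit: constant probability
    U = Z.T @ (y - p0)                    # score vector at the restricted MLE
    I = p0 * (1 - p0) * (Z.T @ Z)         # expected Fisher information at the restricted MLE
    S = U @ np.linalg.solve(I, U)         # quadratic form U^T I^{-1} U
    return S, chi2.sf(S, df=k)            # k constraints -> chi2_k reference

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = (rng.random(300) < 1 / (1 + np.exp(-(0.2 + 0.5 * X[:, 0])))).astype(float)
print(lm_test_logistic(X, y))
```

The intercept component of $U$ is exactly zero at the restricted fit (since the residuals $y_i - \bar y$ sum to zero), so the quadratic form reduces to the efficient-score form in the tested slopes, with degrees of freedom equal to the number of constraints $k$, consistent with the general result above.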
Special cases
In many situations, the score statistic reduces to another commonly used statistic.[11]
In linear regression, the Lagrange multiplier test can be expressed as a function of the F-test.[12]
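A simulation sketch of this relationship follows (the data-generating process and the helper `rss` are illustrations made here, not from the article). With $q$ exclusion restrictions, $k$ unrestricted regressors, and $n$ observations, $F = \frac{(RSS_r - RSS_u)/q}{RSS_u/(n-k)}$ and the LM statistic $n(RSS_r - RSS_u)/RSS_r$ satisfy the exact monotone relation $LM = \frac{nqF}{n - k + qF}$.

```python
# A minimal sketch: the LM statistic in linear regression as an exact
# monotone function of the F statistic for exclusion restrictions.
import numpy as np

def rss(X, y):
    """Residual sum of squares from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

rng = np.random.default_rng(2)
n, q = 120, 2
x1, x2, x3 = rng.normal(size=(3, n))
y = 1.0 + 0.5 * x1 + rng.normal(size=n)

X_u = np.column_stack([np.ones(n), x1, x2, x3])  # unrestricted: k = 4 regressors
X_r = np.column_stack([np.ones(n), x1])          # restricted: x2, x3 excluded (q = 2)
rss_u, rss_r = rss(X_u, y), rss(X_r, y)
k = X_u.shape[1]
F  = ((rss_r - rss_u) / q) / (rss_u / (n - k))
LM = n * (rss_r - rss_u) / rss_r
print(np.isclose(LM, n * q * F / (n - k + q * F)))  # True
```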
When the data follows a normal distribution, the score statistic is the same as the t statistic.
When the data consists of binary observations, the score statistic is the same as the chi-squared statistic in the Pearson's chi-squared test.
See also
- Fisher information
- Uniformly most powerful test
- Score (statistics)
- Sup-LM test
References
- JSTOR 2237089.
- JSTOR 2297111.
- ISBN 978-3-642-34332-2.
- ISBN 0-262-11235-3.
- Lehmann and Casella, eq. (2.5.16).
- ISBN 978-0-444-86185-6.
- ISBN 978-1-4614-3899-1.
- Taboga, Marco. "Lectures on Probability Theory and Mathematical Statistics". statlect.com. Retrieved 31 May 2022.
- ISBN 978-1-58488-027-1.
Further reading
- Buse, A. (1982). "The Likelihood Ratio, Wald, and Lagrange Multiplier Tests: An Expository Note". The American Statistician.
- ISBN 0-521-26616-5.
- Ma, Jun; Nelson, Charles R. (2016). "The superiority of the LM test in a class of econometric models where the Wald test performs poorly". Unobserved Components and Time Series Econometrics. Oxford University Press. pp. 310–330. ISBN 978-0-19-968366-6.
- Rao, C. R. (2005). "Score Test: Historical Review and Recent Developments". Advances in Ranking and Selection, Multiple Comparisons, and Reliability. Boston: Birkhäuser. pp. 3–20. ISBN 978-0-8176-3232-8.