Parametric statistics
Parametric statistics is a branch of statistics that uses models based on a fixed (finite) set of parameters.[1] By contrast, nonparametric statistics does not assume an explicit (finite-parametric) mathematical form for the distribution when modeling data. It may, however, make some assumptions about that distribution, such as continuity or symmetry, or it may even assume an explicit mathematical shape while modeling a distributional parameter that is not itself finite-parametric.
Most well-known statistical methods are parametric.[2] Regarding nonparametric (and semiparametric) models, Sir David Cox has said, "These typically involve fewer assumptions of structure and distributional form but usually contain strong assumptions about independencies".[3]
Example
Distributions in the normal family all have the same general shape and are parameterized by mean and standard deviation. This means that if the mean and standard deviation are known, and if the distribution is normal, the probability that any future observation lies in a given range is known.
Suppose that we have a sample of 99 test scores with a mean of 100 and a standard deviation of 1. If we assume all 99 test scores are random observations from a normal distribution, then we predict there is a 1% chance that the 100th test score will be higher than 102.33 (that is, the mean plus 2.33 standard deviations), assuming that the 100th test score comes from the same distribution as the others. Parametric statistical methods are used to compute the 2.33 value above, given 99 independent observations from the same normal distribution.
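The 2.33 value above can be reproduced with standard statistical software. The following is a minimal sketch in Python, assuming SciPy is available; it is illustrative and not part of the original example.

```python
# Sketch of the parametric computation described above: given 99 scores
# assumed to be drawn from a normal distribution with mean 100 and
# standard deviation 1, find the threshold that a 100th score from the
# same distribution exceeds with only 1% probability.
from scipy.stats import norm

mean, sd = 100.0, 1.0      # parameters taken from the 99 observed scores
z = norm.ppf(0.99)         # 99th-percentile z-score, approximately 2.33
threshold = mean + z * sd  # approximately 102.33

print(f"z = {z:.2f}, threshold = {threshold:.2f}")
# Check: P(next score > threshold) should be 0.01
print(1 - norm.cdf(threshold, loc=mean, scale=sd))
```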
History
Parametric statistics was mentioned by R. A. Fisher in his 1925 work Statistical Methods for Research Workers, which laid the foundation for modern statistics.
References
- ^ Geisser, S.; Johnson, W. M. (2006), Modes of Parametric Statistical Inference, John Wiley & Sons
- ^ Cox, D. R. (2006), Principles of Statistical Inference, Cambridge University Press
- ^ Cox 2006, p. 2