Mathematical statistics

[Figure: Illustration of linear regression on a data set. Regression analysis is an important part of mathematical statistics.]

Mathematical statistics is the application of probability theory and other mathematical concepts to statistics, as opposed to techniques for collecting statistical data.[1] Specific mathematical techniques that are used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure theory.[2][3]

Introduction

Statistical data collection is concerned with the planning of studies, especially with the design of randomized experiments and with the planning of surveys using random sampling. The initial analysis of the data often follows the study protocol specified prior to the study being conducted. The data from a study can also be analyzed to consider secondary hypotheses inspired by the initial results, or to suggest new studies. A secondary analysis of the data from a planned study uses tools from data analysis, and the process of doing this is mathematical statistics.

Data analysis is divided into:

  • descriptive statistics – the part of statistics that describes data, i.e. summarises the data and their typical properties.
  • inferential statistics – the part of statistics that draws conclusions from data (using some model for the data): for example, inferential statistics involves selecting a model for the data, checking whether the data fulfill the conditions of a particular model, and quantifying the involved uncertainty (e.g. using confidence intervals).
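The distinction between the two kinds of data analysis can be sketched with a minimal Python example (not part of the original article; the sample values are hypothetical): descriptive statistics summarise the sample itself, while inferential statistics quantify uncertainty about the population the sample came from.

```python
import statistics
from statistics import NormalDist

# A small sample of measurements (hypothetical data).
sample = [4.8, 5.1, 5.0, 4.9, 5.3, 5.2, 4.7, 5.0, 5.1, 4.9]

# Descriptive statistics: summarise the sample itself.
mean = statistics.mean(sample)
sd = statistics.stdev(sample)          # sample standard deviation

# Inferential statistics: a 95% confidence interval for the
# population mean, using the normal approximation.
n = len(sample)
z = NormalDist().inv_cdf(0.975)        # roughly 1.96
half_width = z * sd / n ** 0.5
ci = (mean - half_width, mean + half_width)

print(f"mean = {mean:.3f}, sd = {sd:.3f}")
print(f"95% CI for the population mean: ({ci[0]:.3f}, {ci[1]:.3f})")
```

The descriptive summary makes no claim beyond the ten observed values; the confidence interval is an inferential statement about the unobserved population mean, valid only under the model's assumptions.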

While the tools of data analysis work best on data from randomized studies, they are also applied to other kinds of data, for example from natural experiments and observational studies, in which case the inference is dependent on the model chosen by the statistician, and so subjective.[4][5]

Topics

The following are some of the important topics in mathematical statistics:[6][7]

Probability distributions

A probability distribution assigns a probability to each measurable subset of the possible outcomes of a random experiment, survey, or procedure of statistical inference. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures.

A probability distribution can either be univariate or multivariate. A univariate distribution gives the probabilities of a single random variable taking on various alternative values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector, a set of two or more random variables, taking on various combinations of values. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution. The multivariate normal distribution is a commonly encountered multivariate distribution.


Statistical inference

Statistical inference is the process of drawing conclusions from data that are subject to random variation, for example, observational errors or sampling variation.[8] Initial requirements of such a system of procedures for inference and induction are that the system should produce reasonable answers when applied to well-defined situations and that it should be general enough to be applied across a range of situations. Inferential statistics are used to test hypotheses and make estimations using sample data. Whereas descriptive statistics describe a sample, inferential statistics infer predictions about a larger population that the sample represents.

The outcome of statistical inference may be an answer to the question "what should be done next?", where this might be a decision about making further experiments or surveys, or about drawing a conclusion before implementing some organizational or governmental policy. For the most part, statistical inference makes propositions about populations, using data drawn from the population of interest via some form of random sampling. More generally, data about a random process is obtained from its observed behavior during a finite period of time. Given a parameter or hypothesis about which one wishes to make inference, statistical inference most often uses:

  • a statistical model of the random process that is supposed to generate the data, which is known when randomization has been used, and
  • a particular realization of the random process; i.e., a set of data.
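The pairing of a statistical model with one realization of data can be made concrete with a minimal Python sketch (not from the original article; the counts are hypothetical): an exact binomial test of whether a coin is fair.

```python
from math import comb

# Statistical model (assumed): each toss is an independent Bernoulli
# trial with unknown probability p of heads.
# Hypothesis under test: p = 0.5 (the coin is fair).
# Realization of the random process: one observed data set of n tosses.
n, heads = 100, 62

# Two-sided exact binomial p-value under H0: the probability of an
# outcome at least as extreme as the one observed, doubled.
p0 = 0.5
tail = sum(comb(n, k) * p0 ** n for k in range(heads, n + 1))
p_value = min(1.0, 2 * tail)

print(f"observed {heads}/{n} heads, two-sided p-value = {p_value:.4f}")
```

The inference (a small p-value casting doubt on fairness) is licensed by the model; with a different model for how the tosses were generated, the same realization could support a different conclusion.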

Regression

In statistics, regression analysis is a statistical process for estimating the relationships among variables. Most commonly, regression analysis estimates the conditional expectation of the dependent variable given the independent variables, that is, the average value of the dependent variable when the independent variables are fixed. Less commonly, the focus is on a quantile, or other location parameter of the conditional distribution of the dependent variable given the independent variables. In all cases, the estimation target is a function of the independent variables called the regression function. In regression analysis, it is also of interest to characterize the variation of the dependent variable around the regression function, which can be described by a probability distribution.

Many techniques for carrying out regression analysis have been developed. Familiar methods, such as linear regression, are parametric, in that the regression function is defined in terms of a finite number of unknown parameters that are estimated from the data (e.g. using ordinary least squares). Nonparametric regression refers to techniques that allow the regression function to lie in a specified set of functions, which may be infinite-dimensional.

Nonparametric statistics

Nonparametric statistics are values calculated from data in a way that is not based on parameterized families of probability distributions. They include both descriptive and inferential statistics. Typical parameters of such families are the mean, the variance, and so on. Unlike parametric statistics, nonparametric statistics make no assumptions about the probability distributions of the variables being assessed.[9]

Non-parametric methods are widely used for studying populations that take on a ranked order (such as movie reviews receiving one to four stars). The use of non-parametric methods may be necessary when data have a ranking but no clear numerical interpretation, such as when assessing preferences. In terms of levels of measurement, non-parametric methods result in "ordinal" data.

As non-parametric methods make fewer assumptions, their applicability is much wider than the corresponding parametric methods. In particular, they may be applied in situations where less is known about the application in question. Also, due to the reliance on fewer assumptions, non-parametric methods are more robust.

One drawback of non-parametric methods is that, since they do not rely on distributional assumptions, they are generally less powerful than their parametric counterparts.[10] This low power is problematic because a common use of non-parametric methods is precisely when the sample size is small.[10] Many parametric methods are proven to be the most powerful tests available through results such as the Neyman–Pearson lemma and the likelihood-ratio test.

Another justification for the use of non-parametric methods is simplicity. In certain cases, even when the use of parametric methods is justified, non-parametric methods may be easier to use. Due both to this simplicity and to their greater robustness, non-parametric methods are seen by some statisticians as leaving less room for improper use and misunderstanding.
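The simplicity of non-parametric methods can be seen in the sign test, shown here as a minimal Python sketch (not from the original article; the paired scores are hypothetical): it uses only the sign of each paired difference, so it assumes nothing about the shape of the underlying distribution.

```python
from math import comb

# Paired observations (hypothetical): scores before and after a treatment.
before = [72, 65, 80, 71, 69, 75, 68, 74, 70, 77, 66, 73]
after  = [75, 70, 82, 70, 74, 79, 71, 76, 75, 80, 69, 78]

# Sign test: keep only the sign of each nonzero difference.
diffs = [a - b for a, b in zip(after, before) if a != b]
n = len(diffs)
plus = sum(d > 0 for d in diffs)

# Two-sided p-value under H0: positive and negative differences are
# equally likely, so each sign is a fair coin flip.
k = min(plus, n - plus)
tail = sum(comb(n, i) * 0.5 ** n for i in range(k + 1))
p_value = min(1.0, 2 * tail)

print(f"{plus}/{n} positive differences, sign-test p-value = {p_value:.4f}")
```

A parametric alternative such as the paired t-test would typically have more power here, but only under an assumption (approximately normal differences) that the sign test does not need.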

Statistics, mathematics, and mathematical statistics

Mathematical statistics is a key subset of the discipline of statistics. Statistical theorists study and improve statistical procedures with mathematics, and statistical research often raises mathematical questions.

Mathematicians and statisticians like Gauss, Laplace, and C. S. Peirce used decision theory with probability distributions and loss functions (or utility functions). The decision-theoretic approach to statistical inference was reinvigorated by Abraham Wald and his successors, and makes extensive use of scientific computing, analysis, and optimization; for the design of experiments, statisticians use algebra and combinatorics. But while statistical practice often relies on probability and decision theory, their application can be controversial.[5]

References

  1. .
  2. .
  3. .
  4. ^ .
  5. ^ Hogg, R. V., A. Craig, and J. W. McKean. "Intro to Mathematical Statistics." (2005).
  6. ^ Larsen, Richard J. and Marx, Morris L. "An Introduction to Mathematical Statistics and Its Applications" (2012). Prentice Hall.
  7. ^ "Research Nonparametric Methods". Carnegie Mellon University. Retrieved August 30, 2022.
  8. ^ a b "Nonparametric Tests". sphweb.bumc.bu.edu. Retrieved 2022-08-31.
  9. ISBN 0-486-43912-7.
  10. ^ Wald, Abraham (1950). Statistical Decision Functions. John Wiley and Sons, New York.
  11. .
  12. ^ .
  13. ^ Bickel, Peter J.; Doksum, Kjell A. (2001). Mathematical Statistics: Basic and Selected Topics. Vol. 1 (Second (updated printing 2007) ed.). Pearson Prentice-Hall.
  14. .
  15. ^ Liese, Friedrich & Miescke, Klaus-J. (2008). Statistical Decision Theory: Estimation, Testing, and Selection. Springer.