Estimation statistics

Estimation statistics, or simply estimation, is a data analysis framework that uses a combination of effect sizes, confidence intervals, precision planning, and meta-analysis to plan experiments, analyze data and interpret results.[1] It complements hypothesis testing approaches such as null hypothesis significance testing (NHST) by going beyond the question of whether an effect is present or not, and provides information about how large an effect is.[2][3] Estimation statistics is sometimes referred to as the new statistics.[3][4][5]

The primary aim of estimation methods is to report an effect size (a point estimate) along with its confidence interval, the latter of which is related to the precision of the estimate.[6] The confidence interval summarizes a range of likely values of the underlying population effect. Proponents of estimation see reporting a P value as an unhelpful distraction from the important business of reporting an effect size with its confidence intervals,[7] and believe that estimation should replace significance testing for data analysis.[8][9]

History

Starting in 1929, physicist Raymond Thayer Birge published review papers[10] in which he used weighted-averages methods to calculate estimates of physical constants, a procedure that can be seen as the precursor to modern meta-analysis.[11]

In the 1960s, estimation statistics was adopted by the non-physical sciences with the development of the standardized effect size by Jacob Cohen.

In the 1970s, modern research synthesis was pioneered by Gene V. Glass with the first systematic review and meta-analysis for psychotherapy.[12] This pioneering work subsequently influenced the adoption of meta-analyses for medical treatments more generally.

In the 1980s and 1990s, estimation methods were extended and refined by biostatisticians including Larry Hedges, Michael Borenstein, Doug Altman, Martin Gardner, and many others, with the development of the modern (medical) meta-analysis.

Starting in the 1980s, the systematic review, used in conjunction with meta-analysis, became a technique widely used in medical research. There are over 200,000 citations to "meta-analysis" in PubMed.

In the 1990s, editor Kenneth Rothman banned the use of p-values from the journal Epidemiology; compliance was high among authors but this did not substantially change their analytical thinking.[13]

In the 2010s, Geoff Cumming published a textbook dedicated to estimation statistics, along with software in Excel designed to teach effect-size thinking, primarily to psychologists.[14] Also in the 2010s, estimation methods were increasingly adopted in neuroscience.[15][16]

In 2013, the Publication Manual of the American Psychological Association recommended the use of estimation in addition to hypothesis testing.[17] Also in 2013, the Uniform Requirements for Manuscripts Submitted to Biomedical Journals document made a similar recommendation: "Avoid relying solely on statistical hypothesis testing, such as P values, which fail to convey important information about effect size."[18]

In 2019, over 800 scientists signed an open comment calling for the entire concept of statistical significance to be abandoned.[19]

In 2019, the Society for Neuroscience journal eNeuro instituted a policy recommending the use of estimation graphics as the preferred method for data presentation.[20] And in 2022, the International Society of Physiotherapy Journal Editors recommended the use of estimation methods instead of null hypothesis statistical tests.[21]

Despite the widespread adoption of meta-analysis for clinical research, and recommendations by several major publishing institutions, the estimation framework is not routinely used in primary biomedical research.[22]

Methodology

Many significance tests have an estimation counterpart;[23] in almost every case, the test result (or its p-value) can be substituted with an effect size and a precision estimate. For example, instead of using Student's t-test to compare two independent groups, the analyst can report the mean difference between the groups along with its 95% confidence interval. Corresponding estimation methods exist for the paired t-test and multiple comparisons. Similarly, for a regression analysis, an analyst would report the coefficient of determination (R²) and the model equation instead of the model's p-value.
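
For instance, the two-group comparison above can be carried out by computing the mean difference and its 95% confidence interval directly. The following is a minimal sketch in Python, using NumPy and SciPy, with invented example data and a Welch (unequal-variances) interval:

```python
import numpy as np
from scipy import stats

# Invented example data: two independent groups.
control = np.array([4.2, 5.1, 4.8, 5.6, 4.9, 5.3, 4.4, 5.0])
treated = np.array([5.9, 6.3, 5.4, 6.8, 6.1, 5.7, 6.5, 6.0])

# Effect size: the difference between group means.
mean_diff = treated.mean() - control.mean()

# Standard error of the difference (Welch, unequal variances).
v1 = control.var(ddof=1) / control.size
v2 = treated.var(ddof=1) / treated.size
se = np.sqrt(v1 + v2)

# Welch-Satterthwaite degrees of freedom.
df = (v1 + v2) ** 2 / (v1**2 / (control.size - 1) + v2**2 / (treated.size - 1))

# 95% confidence interval from the t distribution.
t_crit = stats.t.ppf(0.975, df)
print(f"Mean difference: {mean_diff:.2f}, "
      f"95% CI [{mean_diff - t_crit * se:.2f}, {mean_diff + t_crit * se:.2f}]")
```

The reported interval conveys both the size of the effect and the precision with which it was estimated, which a bare p-value does not.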

However, proponents of estimation statistics warn against reporting only a few numbers. Rather, it is advised to analyze and present data using data visualization.[2][5][6] Examples of appropriate visualizations include the scatter plot for regression, and Gardner–Altman plots for two independent groups.[24] While historical data-group plots (bar charts, box plots, and violin plots) do not display the comparison, estimation plots add a second axis to explicitly visualize the effect size.[25]

The Gardner–Altman plot. Left: A conventional bar chart, using asterisks to show that the difference is 'statistically significant.' Right: A Gardner–Altman plot that shows all data points, along with the mean difference and its confidence intervals.

Gardner–Altman plot

The Gardner–Altman mean difference plot was first described by Martin Gardner and Doug Altman in 1986;[24] it is a statistical graph designed to display data from two independent groups.[5] There is also a version suitable for paired data. The key instructions to make this chart are as follows: (1) display all observed values for both groups side by side; (2) place a second axis on the right, shifted to show the mean-difference scale; and (3) plot the mean difference with its confidence interval as a marker with error bars.[3] Gardner–Altman plots can be generated with DABEST-Python or dabestr; alternatively, the analyst can use GUI software like the Estimation Stats app.
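
The three instructions above can also be followed by hand. Below is a minimal Matplotlib sketch of a Gardner–Altman-style plot; the data, jitter, and styling choices are invented for illustration, and the dedicated packages named above handle these details more carefully:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(5.0, 1.0, 30)   # invented data
test = rng.normal(6.0, 1.0, 30)

fig, ax = plt.subplots()

# (1) All observed values for both groups, side by side (with jitter).
ax.plot(rng.uniform(-0.05, 0.05, control.size), control, "o", alpha=0.6)
ax.plot(1 + rng.uniform(-0.05, 0.05, test.size), test, "o", alpha=0.6)

# (2) A second axis on the right, shifted so that zero on the
#     mean-difference scale lines up with the control group mean.
diff_ax = ax.twinx()
lo, hi = ax.get_ylim()
diff_ax.set_ylim(lo - control.mean(), hi - control.mean())
diff_ax.set_ylabel("Mean difference")

# (3) The mean difference and its 95% CI as a marker with error bars.
n1, n2 = control.size, test.size
sp = np.sqrt(((n1 - 1) * control.var(ddof=1) + (n2 - 1) * test.var(ddof=1))
             / (n1 + n2 - 2))                  # pooled SD
half_width = stats.t.ppf(0.975, n1 + n2 - 2) * sp * np.sqrt(1/n1 + 1/n2)
diff_ax.errorbar(2, test.mean() - control.mean(), yerr=half_width,
                 fmt="o", color="black")

ax.set_xticks([0, 1, 2])
ax.set_xticklabels(["Control", "Test", "Difference"])
ax.set_xlim(-0.5, 2.5)
ax.set_ylabel("Observed value")
plt.show()
```

Aligning the right-hand axis with the control mean is what lets the reader read the effect size directly off the same figure as the raw observations.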

The Cumming plot. A Cumming plot as rendered by the EstimationStats web application. In the top panel, all observed values are shown. The effect sizes, sampling distribution, and 95% confidence intervals are plotted on a separate axis beneath the raw data. For each group, summary measurements (mean ± standard deviation) are drawn as gapped lines.

Cumming plot

For multiple groups, Geoff Cumming introduced the use of a secondary panel to plot two or more mean differences and their confidence intervals, placed below the observed values panel;[3] this arrangement enables easy comparison of mean differences ('deltas') over several data groupings. Cumming plots can be generated with the ESCI package, DABEST, or the Estimation Stats app.
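
As an illustration of multi-group use, DABEST-Python accepts several comparison pairs at load time. The sketch below uses invented data, and the interface shown (dabest.load with nested idx tuples, followed by mean_diff.plot()) should be checked against the package's current documentation:

```python
import numpy as np
import pandas as pd
import dabest

# Invented wide-format data: one control and two test groups.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "Control": rng.normal(5.0, 1.0, 30),
    "Test 1": rng.normal(5.5, 1.0, 30),
    "Test 2": rng.normal(6.2, 1.0, 30),
})

# Each inner tuple requests one delta; several tuples yield a
# Cumming plot: raw data on top, effect sizes in a panel below.
analysis = dabest.load(df, idx=(("Control", "Test 1"), ("Control", "Test 2")))
analysis.mean_diff.plot()
```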

Other methodologies

In addition to the mean difference, there are numerous other effect size types, all with relative benefits. Major types include effect sizes in the Cohen's d class of standardized metrics, and the coefficient of determination (R²) for regression analysis. For non-normal distributions, there are more robust effect sizes, including Cliff's delta and the Kolmogorov–Smirnov statistic.
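
As a sketch, some of these alternatives can be computed directly; Cohen's d and Cliff's delta are written out by hand below (with invented data), and the Kolmogorov–Smirnov statistic is taken from SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(5.0, 1.0, 40)   # invented samples
b = rng.normal(5.8, 1.2, 40)

# Cohen's d: mean difference standardized by the pooled SD.
pooled_sd = np.sqrt(((a.size - 1) * a.var(ddof=1)
                     + (b.size - 1) * b.var(ddof=1))
                    / (a.size + b.size - 2))
cohens_d = (b.mean() - a.mean()) / pooled_sd

# Cliff's delta: P(b > a) - P(b < a) over all pairs;
# rank-based, so robust to non-normal distributions.
greater = (b[:, None] > a[None, :]).sum()
less = (b[:, None] < a[None, :]).sum()
cliffs_delta = (greater - less) / (a.size * b.size)

# Kolmogorov-Smirnov statistic: maximum distance between
# the two empirical distribution functions.
ks_stat = stats.ks_2samp(a, b).statistic

print(f"Cohen's d = {cohens_d:.2f}, Cliff's delta = {cliffs_delta:.2f}, "
      f"KS = {ks_stat:.2f}")
```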

Flaws in hypothesis testing

In hypothesis testing, the primary objective of statistical calculations is to obtain a p-value, the probability of seeing an obtained result, or a more extreme result, when assuming the null hypothesis is true. If the p-value is low (usually < 0.05), the statistical practitioner is then encouraged to reject the null hypothesis. Proponents of estimation reject the validity of hypothesis testing[3][6] for several reasons: among them, a p-value conveys nothing about the size or precision of an effect, a fixed significance threshold encourages dichotomous yes/no thinking, and p-values vary widely between replicate experiments (the "dance of the p values"[29]).

Benefits of estimation statistics

Quantification

While p-values focus on yes/no answers, estimation directs the analyst's attention to quantification.

Advantages of confidence intervals

Confidence intervals behave in a predictable way. By definition, a 95% confidence interval has a 95% chance of covering the underlying population mean (μ). This feature remains constant with increasing sample size; what changes is that the interval becomes smaller. In addition, 95% confidence intervals are also 83% prediction intervals: one (pre-experimental) confidence interval has an 83% chance of covering any future experiment's mean.[3] As such, knowing a single experiment's 95% confidence interval gives the analyst a reasonable range for the population mean. Nevertheless, confidence distributions and posterior distributions provide far more information than a single point estimate or interval,[30] which can exacerbate dichotomous thinking based on whether the interval does or does not cover a "null" value of interest (i.e. the inductive behavior of Neyman, as opposed to that of Fisher[31]).
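
Both coverage claims can be checked by simulation. The sketch below, with invented population parameters, counts how often a sample's 95% CI covers μ (about 95%) and how often it covers the mean of an independent replicate experiment (about 83%):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu, sigma, n, reps = 10.0, 2.0, 20, 20_000
t_crit = stats.t.ppf(0.975, n - 1)

covers_mu, covers_next_mean = 0, 0
for _ in range(reps):
    x1 = rng.normal(mu, sigma, n)          # first experiment
    x2 = rng.normal(mu, sigma, n)          # independent replication
    half_width = t_crit * x1.std(ddof=1) / np.sqrt(n)
    lo, hi = x1.mean() - half_width, x1.mean() + half_width
    covers_mu += lo <= mu <= hi
    covers_next_mean += lo <= x2.mean() <= hi

print(f"CI covers mu:             {covers_mu / reps:.1%}")        # ~95%
print(f"CI covers replicate mean: {covers_next_mean / reps:.1%}")  # ~83%
```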

Evidence-based statistics

Psychological studies of the perception of statistics reveal that reporting interval estimates leaves a more accurate perception of the data than reporting p-values.[32]

Precision planning

The precision of an estimate is formally defined as 1/variance, and, like power, it increases (improves) with larger sample sizes. Like power, a high level of precision is expensive; research grant applications would ideally include precision/cost analyses. Proponents of estimation believe precision planning should replace power, since statistical power itself is conceptually linked to significance testing.[3] Precision planning can be done with the ESCI web app.
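
In practice, planning for precision amounts to choosing a sample size that makes the expected confidence interval acceptably narrow. A minimal sketch under a normal model, with an invented pilot estimate of the standard deviation:

```python
import numpy as np
from scipy import stats

# Target: a 95% CI with half-width (margin of error) of 0.5 units,
# given a pilot estimate of the population SD. Both values invented.
sd_estimate = 2.0
target_moe = 0.5
z = stats.norm.ppf(0.975)

# The half-width of a 95% CI is roughly z * sd / sqrt(n); solve for n.
n_required = int(np.ceil((z * sd_estimate / target_moe) ** 2))
print(f"Approximate sample size per group: {n_required}")   # ~62
```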

See also

References

  1. ^ Ellis, Paul. "Effect size FAQ".
  2. ^ a b Cohen, Jacob. "The earth is round (p<.05)" (PDF). Archived from the original (PDF) on 2017-10-11. Retrieved 2013-08-22.
  3. ^ Cumming, Geoff (2012). Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. New York: Routledge.
  4. ^ Altman, Douglas (1991). Practical Statistics For Medical Research. London: Chapman and Hall.
  5. ^ a b c Douglas Altman, ed. (2000). Statistics with Confidence. London: Wiley-Blackwell.[page needed]
  6. ^ .
  7. ^ Ellis, Paul (2010-05-31). "Why can't I just judge my result by looking at the p value?". Retrieved 5 June 2013.
  8. S2CID 205424566.
  9. .
  10. .
  11. .
  12. .
  13. .
  14. ^ Cumming, Geoff. "ESCI (Exploratory Software for Confidence Intervals)". Archived from the original on 2013-12-29. Retrieved 2013-05-12.
  15. PMID 26647168.
  16. .
  17. ^ "Publication Manual of the American Psychological Association, Sixth Edition". Archived from the original on 2013-03-05.
  18. ^ "Uniform Requirements for Manuscripts Submitted to Biomedical Journals". Archived from the original on 15 May 2013.
  19. ^ Amrhein, Valentin; Greenland, Sander; McShane, Blake (2019). "Scientists rise up against statistical significance", Nature 567, 305-307.
  20. PMID 31453315.
  21. ^ Elkins, Mark; et al. (2022). "Statistical inference through estimation: recommendations from the International Society of Physiotherapy Journal Editors", Journal of Physiotherapy, 68 (1), 1-4.
  22. PMID 31113309.
  23. .
  24. ^ Gardner, Martin J.; Altman, Douglas G. (1986). "Confidence intervals rather than P values: estimation rather than hypothesis testing". British Medical Journal. 292 (6522): 746–750.
  25. doi:10.1101/377978.
  26. .
  27. ^ Ellis, Paul (2010). The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results. Cambridge: Cambridge University Press.[page needed]
  28. .
  29. ^ Cumming, Geoff. "Dance of the p values". YouTube.
  30. S2CID 3242459.
  31. .
  32. .