Computational statistics

Source: Wikipedia, the free encyclopedia.
[Image: Students working in the Statistics Machine Room of the London School of Economics in 1964]

Computational statistics, or statistical computing, is the bond between statistics and computer science, and refers to the statistical methods that are enabled by using computational methods. It is the area of computational science (or scientific computing) specific to the mathematical science of statistics. This area is developing rapidly, leading to calls that a broader concept of computing should be taught as part of general statistical education.[1]

As in traditional statistics, the goal is to transform raw data into knowledge, but the focus lies on computer-intensive statistical methods, such as cases with very large sample sizes and non-homogeneous data sets.[2]

The terms 'computational statistics' and 'statistical computing' are often used interchangeably, although Carlo Lauro (a former president of the International Association for Statistical Computing) proposed making a distinction, defining 'statistical computing' as "the application of computer science to statistics", and 'computational statistics' as "aiming at the design of algorithm for implementing statistical methods on computers, including the ones unthinkable before the computer age (e.g. simulation), as well as to cope with analytically intractable problems" [sic].[3]

The term 'computational statistics' may also be used to refer to computationally intensive statistical methods, including resampling methods, Markov chain Monte Carlo methods, local regression, kernel density estimation, artificial neural networks, and generalized additive models.

History

Though computational statistics is widely used today, it actually has a relatively short history of acceptance in the statistics community. For the most part, the founders of the field of statistics relied on mathematics and asymptotic approximations in the development of computational statistical methodology.[4]

In the statistical field, the first use of the term "computer" comes in an article in the Journal of the American Statistical Association archives describing the Hollerith tabulating machine, an electromechanical machine designed to assist in summarizing information stored on punched cards. It was invented by Herman Hollerith (February 29, 1860 – November 17, 1929), an American businessman, inventor, and statistician. His punched card tabulating machine was patented in 1884 and later used in the 1890 Census of the United States. The advantages of the technology were immediately apparent: the 1880 Census, with about 50 million people, took over seven years to tabulate, while the 1890 Census, with over 62 million people, took less than a year. This marks the beginning of the era of mechanized computational statistics and semiautomatic data-processing systems.

In 1908, William Sealy Gosset performed his now well-known Monte Carlo simulation, which led to the discovery of the Student's t-distribution.[5] With the help of computational methods, he also produced plots of the empirical distributions overlaid on the corresponding theoretical distributions. The computer has revolutionized simulation and has made the replication of Gosset's experiment little more than an exercise.[6][7]

Later on, scientists put forward computational ways of generating pseudo-random deviates, developed methods to convert uniform deviates into other distributional forms using the inverse cumulative distribution function or acceptance-rejection methods, and developed state-space methodology for Markov chain Monte Carlo.[8] One of the first efforts to generate random digits in a fully automated way was undertaken by the RAND Corporation in 1947. The tables produced were published as a book in 1955, and also as a series of punch cards.
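The inverse-CDF idea mentioned above can be sketched in a few lines of pure-stdlib Python. This is an illustrative example, not from the source: if U is uniform on (0, 1), then applying the inverse cumulative distribution function of a target distribution to U yields a draw from that distribution. For the exponential distribution the inverse CDF has a closed form.

```python
import math
import random

def sample_exponential(rate, n, seed=0):
    """Inverse-transform sampling: if U ~ Uniform(0, 1), then
    -ln(1 - U) / rate follows an Exponential(rate) distribution,
    because that expression is the inverse of the exponential CDF."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / rate for _ in range(n)]

draws = sample_exponential(rate=2.0, n=100_000)
mean = sum(draws) / len(draws)  # should be close to 1 / rate = 0.5
```

The same recipe works for any distribution whose inverse CDF can be evaluated; when it cannot, acceptance-rejection methods are the usual fallback.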

By the mid-1950s, several articles and patents for random number generator devices had been proposed.[9] The development of these devices was motivated by the need for random digits to perform simulations and other fundamental components of statistical analysis. One of the best known of such devices is ERNIE, which produces the random numbers that determine the winners of Premium Bonds, lottery bonds issued in the United Kingdom. In 1958, John Tukey's jackknife was developed as a method to reduce the bias of parameter estimates in samples under nonstandard conditions.[10] It requires computers for practical implementation. By this point, computers had made many tedious statistical studies feasible.[11]
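Tukey's jackknife can be sketched as follows (an illustrative implementation, not from the source): the estimator is recomputed on each leave-one-out subsample, and the replicates are combined so that the first-order bias cancels.

```python
def jackknife_bias_corrected(data, estimator):
    """Tukey's jackknife: recompute the estimator on each leave-one-out
    subsample, then combine replicates to remove first-order bias."""
    n = len(data)
    theta_hat = estimator(data)
    loo = [estimator(data[:i] + data[i + 1:]) for i in range(n)]
    theta_bar = sum(loo) / n
    # Bias-corrected estimate: n * theta_hat - (n - 1) * theta_bar
    return n * theta_hat - (n - 1) * theta_bar

def plugin_var(xs):
    """Plug-in variance (dividing by n), which is biased downward."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

data = [2.1, 2.5, 3.0, 3.3, 4.8, 5.1, 5.9]
corrected = jackknife_bias_corrected(data, plugin_var)
```

For the plug-in variance, the jackknife correction recovers exactly the usual unbiased sample variance (dividing by n − 1), which is a classical check on the method.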

Methods

Maximum likelihood estimation

Maximum likelihood estimation is used to estimate the parameters of an assumed probability distribution, given some observed data. It is achieved by maximizing a likelihood function so that the observed data is most probable under the assumed statistical model.
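As a minimal sketch (illustrative data and a deliberately crude grid search, not from the source), one can maximize the log-likelihood of a normal model numerically and check that the maximizer matches the closed-form MLE, which for the normal mean is the sample average.

```python
import math

def normal_log_likelihood(data, mu, sigma):
    """Log-likelihood of i.i.d. data under a Normal(mu, sigma) model."""
    n = len(data)
    return (-n * math.log(sigma * math.sqrt(2 * math.pi))
            - sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2))

data = [4.9, 5.1, 5.3, 4.7, 5.0, 5.2]

# Crude grid search over mu with sigma held fixed; in practice one
# would use a proper numerical optimizer (e.g. Newton's method).
sigma = 1.0
grid = [i / 1000 for i in range(4000, 6000)]
mu_hat = max(grid, key=lambda mu: normal_log_likelihood(data, mu, sigma))
```

Real problems rarely have closed-form maximizers, which is why numerical optimization of the likelihood is a core computational-statistics task.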

Monte Carlo method

A Monte Carlo method is any method that relies on repeated random sampling to obtain numerical results. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.
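A classic illustration of Monte Carlo integration (a sketch, not from the source) is estimating π: draw uniform points in the unit square and count the fraction landing inside the quarter circle of radius 1.

```python
import random

def monte_carlo_pi(n, seed=42):
    """Estimate pi by drawing uniform points in the unit square and
    counting the fraction that falls inside the quarter circle,
    whose area is pi / 4."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n

estimate = monte_carlo_pi(1_000_000)
```

The error of such estimates shrinks like 1/√n, independently of dimension, which is what makes Monte Carlo attractive for high-dimensional integrals.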

Markov chain Monte Carlo

The Markov chain Monte Carlo (MCMC) method creates samples from a continuous random variable with probability density proportional to a known function. These samples can be used to evaluate an integral over that variable, such as its expected value or variance. The more steps are included, the more closely the distribution of the sample matches the actual desired distribution.
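A random-walk Metropolis sampler, one of the simplest MCMC algorithms, can be sketched as follows (an illustrative implementation, not from the source; note it only needs the target density up to a normalizing constant).

```python
import math
import random

def metropolis(log_density, n_steps, x0=0.0, step=1.0, seed=0):
    """Random-walk Metropolis: propose a Gaussian step and accept it
    with probability min(1, p(x') / p(x)); otherwise stay put."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        # Accept/reject on the log scale for numerical stability.
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, specified only up to a constant.
samples = metropolis(lambda x: -0.5 * x * x, n_steps=50_000)
mean = sum(samples) / len(samples)
```

Because successive states are correlated, MCMC output needs more steps than independent sampling would to reach the same accuracy, which is why chain length and convergence diagnostics matter in practice.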


Bootstrapping

The bootstrap is a resampling technique used to generate samples from an empirical probability distribution defined by an original sample of the population. It can be used to find a bootstrapped estimator of a population parameter. It can also be used to estimate the standard error of an estimator, as well as to generate bootstrapped confidence intervals. The jackknife is a related technique.[12]
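Estimating a standard error by the bootstrap can be sketched as follows (illustrative data, not from the source): resample the data with replacement, recompute the estimator on each resample, and take the spread of the replicates.

```python
import random

def bootstrap_se(data, estimator, n_boot=2000, seed=0):
    """Bootstrap standard error: resample with replacement, recompute
    the estimator, and return the standard deviation of the replicates."""
    rng = random.Random(seed)
    n = len(data)
    reps = []
    for _ in range(n_boot):
        resample = [data[rng.randrange(n)] for _ in range(n)]
        reps.append(estimator(resample))
    mean_rep = sum(reps) / n_boot
    return (sum((r - mean_rep) ** 2 for r in reps) / (n_boot - 1)) ** 0.5

data = [23, 19, 25, 30, 21, 27, 24, 22, 26, 28]
se_mean = bootstrap_se(data, lambda xs: sum(xs) / len(xs))
```

For the sample mean the bootstrap answer can be checked against the analytic formula s/√n; for estimators without a tractable formula, the same code works unchanged, which is the bootstrap's main appeal.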
