Bayes factor


The Bayes factor is a ratio of two competing statistical models represented by their evidence, and is used to quantify the support for one model over the other. The Bayes factor can be thought of as a Bayesian analogue to the likelihood-ratio test, although it uses the integrated (i.e., marginal) likelihood rather than the maximized likelihood. In contrast with null hypothesis significance testing, Bayes factors support evaluation of evidence in favor of a null hypothesis, rather than only allowing the null to be rejected or not rejected.[3]

Although conceptually simple, the computation of the Bayes factor can be challenging depending on the complexity of the model and the hypotheses. Closed-form expressions of the marginal likelihood are generally not available, and numerical approximations based on MCMC samples have been suggested. In small datasets, priors generally matter and must not be improper, since the Bayes factor will be undefined if either of the two integrals in its ratio is not finite.

Definition

The Bayes factor is the ratio of two marginal likelihoods; that is, the likelihoods of two statistical models integrated over the prior probabilities of their parameters.[9]

The posterior probability \Pr(M \mid D) of a model M given data D is given by Bayes' theorem:

\Pr(M \mid D) = \frac{\Pr(D \mid M)\,\Pr(M)}{\Pr(D)}.

The key data-dependent term \Pr(D \mid M) represents the probability that some data are produced under the assumption of the model M; evaluating it correctly is the key to Bayesian model comparison.

Given a model selection problem in which one wishes to choose between two models on the basis of observed data D, the plausibility of the two different models M1 and M2, parametrised by model parameter vectors \theta_1 and \theta_2, is assessed by the Bayes factor K given by

K = \frac{\Pr(D \mid M_1)}{\Pr(D \mid M_2)} = \frac{\int \Pr(\theta_1 \mid M_1)\,\Pr(D \mid \theta_1, M_1)\,d\theta_1}{\int \Pr(\theta_2 \mid M_2)\,\Pr(D \mid \theta_2, M_2)\,d\theta_2} = \frac{\Pr(M_1 \mid D)}{\Pr(M_2 \mid D)}\,\frac{\Pr(M_2)}{\Pr(M_1)}.

When the two models have equal prior probability, so that \Pr(M_1) = \Pr(M_2), the Bayes factor is equal to the ratio of the posterior probabilities of M1 and M2. If, instead of the Bayes factor integral, the likelihood corresponding to the maximum likelihood estimate of the parameter for each statistical model is used, then the test becomes a classical likelihood-ratio test. Unlike a likelihood-ratio test, this Bayesian model comparison does not depend on any single set of parameters, as it integrates over all parameters in each model (with respect to the respective priors). An advantage of the use of Bayes factors is that it automatically, and quite naturally, includes a penalty for including too much model structure.[10] It thus guards against overfitting. For models where an explicit version of the likelihood is not available or is too costly to evaluate numerically, approximate Bayesian computation can be used for model selection in a Bayesian framework,[11] with the caveat that approximate-Bayesian estimates of Bayes factors are often biased.[12]
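When each model has at most one free parameter, the integrals above can be evaluated by straightforward numerical quadrature. The following Python sketch is illustrative rather than part of the article: the model, prior, and data are assumptions chosen for demonstration (a point-null normal-location model against an alternative with a uniform prior on the mean).

    import numpy as np
    from scipy import integrate, stats

    rng = np.random.default_rng(0)
    data = rng.normal(0.3, 1.0, size=20)  # hypothetical sample; sigma = 1 known

    def likelihood(mu):
        # Pr(D | mu): product of N(mu, 1) densities over the sample
        return float(np.prod(stats.norm.pdf(data, loc=mu, scale=1.0)))

    # M1: mu fixed at 0, no free parameters -- the marginal likelihood
    # is just the likelihood itself.
    m1 = likelihood(0.0)

    # M2: mu unknown, uniform prior on [-5, 5] (density 1/10); integrate
    # likelihood * prior over the parameter.
    m2, _ = integrate.quad(lambda mu: likelihood(mu) * 0.1, -5.0, 5.0)

    print(f"K = Pr(D|M1) / Pr(D|M2) = {m1 / m2:.3g}")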

Other approaches are:

- to treat model comparison as a decision problem, computing the expected value or cost of each model choice;
- to use minimum message length (MML).

Interpretation

A value of K > 1 means that M1 is more strongly supported by the data under consideration than M2. Note that classical hypothesis testing gives one hypothesis (or model) preferred status (the 'null hypothesis'), and only considers evidence against it. Harold Jeffreys gave a scale for the interpretation of K:[13]

K                   dHart      bits         Strength of evidence
< 10^0              < 0        < 0          Negative (supports M2)
10^0 to 10^(1/2)    0 to 5     0 to 1.6     Barely worth mentioning
10^(1/2) to 10^1    5 to 10    1.6 to 3.3   Substantial
10^1 to 10^(3/2)    10 to 15   3.3 to 5.0   Strong
10^(3/2) to 10^2    15 to 20   5.0 to 6.6   Very strong
> 10^2              > 20       > 6.6        Decisive

The second column gives the corresponding weights of evidence in decihartleys (also known as decibans); bits are added in the third column for clarity. According to I. J. Good, a change in a weight of evidence of 1 deciban or 1/3 of a bit (i.e. a change in an odds ratio from evens to about 5:4) is about as finely as humans can reasonably perceive their degree of belief in a hypothesis in everyday use.[14]
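The mapping behind the table is direct: the weight of evidence is 10 log10 K decihartleys and log2 K bits. A small Python helper (an illustrative sketch, not part of the article) converts a Bayes factor to these units and to Jeffreys' verbal label:

    import math

    def jeffreys_strength(K):
        """Return (dHart, bits, label) for a Bayes factor K in favour of M1."""
        dhart = 10 * math.log10(K)
        bits = math.log2(K)
        if dhart < 0:
            label = "Negative (supports M2)"
        elif dhart < 5:
            label = "Barely worth mentioning"
        elif dhart < 10:
            label = "Substantial"
        elif dhart < 15:
            label = "Strong"
        elif dhart < 20:
            label = "Very strong"
        else:
            label = "Decisive"
        return dhart, bits, label

    print(jeffreys_strength(1.2))  # ~(0.79, 0.26, 'Barely worth mentioning')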

An alternative table, widely cited, is provided by Kass and Raftery (1995):[10]

log10 K     K            Strength of evidence
0 to 1/2    1 to 3.2     Not worth more than a bare mention
1/2 to 1    3.2 to 10    Substantial
1 to 2      10 to 100    Strong
> 2         > 100        Decisive

Example

Suppose we have a random variable that produces either a success or a failure. We want to compare a model M1 where the probability of success is q = \tfrac{1}{2}, and another model M2 where q is unknown and we take a prior distribution for q that is uniform on [0,1]. We take a sample of 200, and find 115 successes and 85 failures. The likelihood can be calculated according to the binomial distribution:

\binom{200}{115} q^{115} (1-q)^{85}.

Thus we have for M1

\Pr(D \mid M_1) = \binom{200}{115} \left(\tfrac{1}{2}\right)^{200} \approx 0.00595,

whereas for M2 we have

\Pr(D \mid M_2) = \int_0^1 \binom{200}{115} q^{115} (1-q)^{85} \, dq = \frac{1}{201} \approx 0.00497.

The ratio is then K \approx 1.2, which is "barely worth mentioning" even if it points very slightly towards M1.
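Both marginal likelihoods are easy to check numerically; the following sketch (illustrative, using scipy) reproduces the numbers above, including the closed-form value 1/201 for M2:

    from scipy import integrate, stats

    n, k = 200, 115

    # M1: q = 1/2 exactly, so Pr(D|M1) is a single binomial probability.
    m1 = stats.binom.pmf(k, n, 0.5)  # ~0.00595

    # M2: q uniform on [0, 1]; the integral equals 1/201 ~ 0.00497.
    m2, _ = integrate.quad(lambda q: stats.binom.pmf(k, n, q), 0.0, 1.0)

    print(m1, m2, m1 / m2)  # K ~ 1.2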

A frequentist hypothesis test of M1 (here considered as a null hypothesis) would have produced a very different result. Such a test says that M1 should be rejected at the 5% significance level, since the probability of getting 115 or more successes from a sample of 200 if q = \tfrac{1}{2} is 0.02, and as a two-tailed test of getting a figure as extreme as or more extreme than 115 is 0.04. Thus, whereas the frequentist test would yield significance at the 5% level, the Bayes factor hardly considers this to be an extreme result. Note, however, that a non-uniform prior (for example, one that reflects the fact that you expect the number of successes and failures to be of the same order of magnitude) could result in a Bayes factor that is more in agreement with the frequentist hypothesis test.
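For comparison, the frequentist numbers quoted above can be reproduced with scipy's exact binomial test (again an illustrative sketch; binomtest requires scipy 1.7 or later):

    from scipy import stats

    # One-sided: probability of 115 or more successes in 200 trials at q = 1/2.
    p_one_sided = stats.binom.sf(114, 200, 0.5)          # ~0.02

    # Two-sided exact test of q = 1/2.
    p_two_sided = stats.binomtest(115, 200, 0.5).pvalue  # ~0.04

    print(p_one_sided, p_two_sided)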

A classical likelihood-ratio test would have found the maximum likelihood estimate for q, namely \hat{q} = \frac{115}{200} = 0.575, whence

\Pr(D \mid \hat{q}, M_2) = \binom{200}{115} (0.575)^{115} (0.425)^{85} \approx 0.056

(rather than averaging over all possible q). That gives a likelihood ratio of 0.00595/0.056 \approx 0.1 and so points towards M2.

M2 is a more complex model than M1 because it has a free parameter which allows it to model the data more closely. The ability of Bayes factors to take this into account is a reason why Bayesian inference has been put forward as a theoretical justification for and generalisation of Occam's razor, reducing Type I errors.

On the other hand, the modern method of relative likelihood takes into account the number of free parameters in the models, unlike the classical likelihood ratio. The relative likelihood method could be applied as follows. Model M1 has 0 parameters, and so its AIC value is 2 \cdot 0 - 2\ln(0.005956) = 10.2467. Model M2 has 1 parameter, and so its AIC value is 2 \cdot 1 - 2\ln(0.056991) = 7.7297. Hence M1 is about \exp\left(\frac{7.7297 - 10.2467}{2}\right) = 0.284 times as probable as M2 to minimize the information loss. Thus M2 is slightly preferred, but M1 cannot be excluded.
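The AIC comparison can be verified in a few lines (a sketch; the likelihood values are the ones computed in the example above):

    import math

    # Likelihoods from the example: M1 has no free parameters; M2 is
    # maximized at q-hat = 0.575.
    L1, L2 = 0.005956, 0.056991

    aic1 = 2 * 0 - 2 * math.log(L1)  # ~10.25
    aic2 = 2 * 1 - 2 * math.log(L2)  # ~7.73

    # Relative likelihood of M1 versus the AIC-minimizing model M2.
    print(math.exp((aic2 - aic1) / 2))  # ~0.284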
