Fieller's theorem

In statistics, Fieller's theorem allows the calculation of a confidence interval for the ratio of two means.

Approximate confidence interval

Variables a and b may be measured in different units, so there is no way to directly combine the standard errors as they may also be in different units. The most complete discussion of this is given by Fieller (1954).[1]

Fieller showed that if a and b are (possibly correlated) means of two samples with expectations $\mu_a$ and $\mu_b$, and variances $\nu_{11}\sigma^2$ and $\nu_{22}\sigma^2$ and covariance $\nu_{12}\sigma^2$, and if $\nu_{11}$, $\nu_{12}$ and $\nu_{22}$ are all known, then a $(1-\alpha)$ confidence interval $(m_L, m_U)$ for $\mu_a/\mu_b$ is given by

$$
m_{L,U} = \frac{1}{1-g}\left[\frac{a}{b} - \frac{g\,\nu_{12}}{\nu_{22}} \mp \frac{t_{r,\alpha}\, s}{b}\sqrt{\nu_{11} - 2\,\frac{a}{b}\,\nu_{12} + \frac{a^2}{b^2}\,\nu_{22} - g\left(\nu_{11} - \frac{\nu_{12}^2}{\nu_{22}}\right)}\,\right]
$$

where

$$
g = \frac{t_{r,\alpha}^2\, s^2\, \nu_{22}}{b^2}.
$$

Here $s^2$ is an unbiased estimator of $\sigma^2$ based on r degrees of freedom, and $t_{r,\alpha}$ is the $\alpha$-level deviate from the Student's t-distribution based on r degrees of freedom.
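A minimal numerical sketch of the interval above, using SciPy for the Student's t quantile; the function name fieller_interval and its argument layout are illustrative choices rather than anything from the source.

```python
import math

from scipy import stats


def fieller_interval(a, b, v11, v12, v22, s2, r, alpha=0.05):
    """Fieller (1 - alpha) confidence interval (m_L, m_U) for the ratio mu_a / mu_b.

    a, b  : observed means of the two samples
    v11   : multiplier with Var(a)    = v11 * sigma^2
    v22   : multiplier with Var(b)    = v22 * sigma^2
    v12   : multiplier with Cov(a, b) = v12 * sigma^2
    s2    : unbiased estimate of sigma^2 on r degrees of freedom
    alpha : one minus the confidence level
    """
    t = stats.t.ppf(1.0 - alpha / 2.0, r)   # two-sided t critical value on r d.f.
    g = t ** 2 * s2 * v22 / b ** 2          # Fieller's g

    if g >= 1.0:
        # See the discussion below: the confidence set is no longer a finite interval.
        raise ValueError("g >= 1: the Fieller confidence set is not a finite interval")

    ratio = a / b
    radicand = v11 - 2.0 * ratio * v12 + ratio ** 2 * v22 - g * (v11 - v12 ** 2 / v22)
    if radicand < 0.0:
        # A negative expression under the square root would give an imaginary interval.
        raise ValueError("negative expression under the square root")

    half_width = (t * math.sqrt(s2) / b) * math.sqrt(radicand)
    centre = ratio - g * v12 / v22
    m_lower = (centre - half_width) / (1.0 - g)
    m_upper = (centre + half_width) / (1.0 - g)
    return min(m_lower, m_upper), max(m_lower, m_upper)
```

For example, for two independent samples of sizes n_a and n_b with a pooled variance estimate s^2 on r = n_a + n_b − 2 degrees of freedom, one would take v11 = 1/n_a, v22 = 1/n_b and v12 = 0.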

Three features of this formula are important in this context:

a) The expression inside the square root has to be positive, or else the resulting interval will be imaginary.

b) When g is very close to 1, the confidence interval is infinite.

c) When g is greater than 1, the overall divisor outside the square brackets is negative and the confidence interval is exclusive (the behaviour of g is illustrated in the numerical sketch below).
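
The role of g in points (b) and (c) can be seen with made-up numbers: holding s^2 and ν22 fixed while shrinking the denominator mean b relative to its standard error pushes g towards and then past 1. The figures below are purely illustrative.

```python
from scipy import stats

# Made-up numbers: s2 and v22 held fixed, only the denominator mean b changes.
s2, v22, r, alpha = 1.0, 0.04, 20, 0.05
t = stats.t.ppf(1 - alpha / 2, r)

for b in (2.0, 0.6, 0.4):
    g = t ** 2 * s2 * v22 / b ** 2
    print(f"b = {b:3.1f}  ->  g = {g:.2f}")

# Roughly: b = 2.0 gives g ~ 0.04 (well-determined denominator, ordinary interval),
# b = 0.6 gives g ~ 0.48 (the interval widens sharply),
# b = 0.4 gives g ~ 1.09 (g > 1, so there is no finite interval).
```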

Other methods

One problem is that, when g is not small, the confidence interval can blow up when using Fieller's theorem. Andy Grieve has provided a Bayesian solution where the CIs are still sensible, albeit wide.[2] Bootstrapping provides another alternative that does not require the assumption of normality.[3]
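
As a sketch of the bootstrap alternative, assuming paired observations held in NumPy arrays x and y; this is a generic percentile bootstrap for the ratio of means, not necessarily the exact procedure described in reference [3].

```python
import numpy as np


def bootstrap_ratio_ci(x, y, n_boot=10_000, alpha=0.05, seed=None):
    """Percentile-bootstrap CI for mean(x) / mean(y) from paired samples x and y."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)

    # Resample whole pairs with replacement so any correlation between the
    # two measurements is preserved in each bootstrap replicate.
    idx = rng.integers(0, n, size=(n_boot, n))
    ratios = x[idx].mean(axis=1) / y[idx].mean(axis=1)

    lower, upper = np.quantile(ratios, [alpha / 2, 1 - alpha / 2])
    return lower, upper
```

Resampling whole pairs keeps the correlation between numerator and denominator, which plays the role of the covariance term ν12 in Fieller's setup.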

History

Edgar C. Fieller (1907–1960) first started working on this problem while in Karl Pearson's group at University College London. He later carried out operational research during the Second World War, after which he was appointed the first head of the Statistics Section at the National Physical Laboratory.[4]
