Imprecise probability

Source: Wikipedia, the free encyclopedia.

Imprecise probability generalizes probability theory to allow for partial probability specifications, and is applicable when information is scarce, vague, or conflicting, in which case a unique probability distribution may be hard to identify. The theory thereby aims to represent the available knowledge more accurately. Imprecision is useful for dealing with expert elicitation, because:

  • People have a limited ability to determine their own subjective probabilities and might find that they can only provide an interval.
  • As an interval is compatible with a range of opinions, the analysis ought to be more convincing to a range of different people.

Introduction

Uncertainty is traditionally modelled by a probability distribution, as developed by Kolmogorov,[1] Laplace, de Finetti,[2] Ramsey, Cox, Lindley, and many others. However, this approach has not been unanimously accepted by scientists, statisticians, and probabilists: it has been argued that some modification or broadening of probability theory is required, because one may not always be able to provide a probability for every event, particularly when only little information or data is available (an early example of such criticism is Boole's critique[3] of Laplace's work), or when we wish to model the probabilities that a group agrees on, rather than those of a single individual.

Perhaps the most common generalization is to replace a single probability specification with an interval specification. Lower and upper probabilities, denoted by $\underline{P}$ and $\overline{P}$, or more generally, lower and upper expectations (previsions),[4][5][6][7] aim to fill this gap. A lower probability function is superadditive but not necessarily additive, whereas an upper probability is subadditive. To get a general understanding of the theory, consider:

  • the special case with $\underline{P}(A) = \overline{P}(A)$ for all events $A$ is equivalent to a precise probability
  • $\underline{P}(A) = 0$ and $\overline{P}(A) = 1$ for all non-trivial events $A$ represents no constraint at all on the specification of $P(A)$

We then have a flexible continuum of more or less precise models in between.
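As a concrete illustration, the following is a minimal sketch in Python with a hypothetical credal set of two candidate distributions for a possibly biased die (all names and numbers are made up for illustration). It shows how lower and upper probabilities arise as the lower and upper envelopes of a set of distributions, and how the two special cases above correspond to a one-element set and to the set of all distributions:

```python
# Hypothetical credal set: two candidate distributions for a possibly biased six-sided die.
credal_set = [
    {1: 1/6, 2: 1/6, 3: 1/6, 4: 1/6, 5: 1/6, 6: 1/6},        # fair die
    {1: 0.10, 2: 0.10, 3: 0.10, 4: 0.10, 5: 0.10, 6: 0.50},  # die loaded towards six
]

def prob(p, event):
    """Probability of an event (a set of outcomes) under one distribution."""
    return sum(p[x] for x in event)

def lower(event, dists):
    """Lower probability: the minimum of the event's probability over the set."""
    return min(prob(p, event) for p in dists)

def upper(event, dists):
    """Upper probability: the maximum of the event's probability over the set."""
    return max(prob(p, event) for p in dists)

even = {2, 4, 6}
print(round(lower(even, credal_set), 3), round(upper(even, credal_set), 3))  # 0.5 and 0.7

# The lower envelope is superadditive on disjoint events A and B:
A, B = {6}, {2, 4}
assert lower(A | B, credal_set) >= lower(A, credal_set) + lower(B, credal_set)

# Special case 1: a one-element set gives lower == upper, i.e. a precise probability.
precise = credal_set[:1]
assert lower(even, precise) == upper(even, precise)

# Special case 2: the vacuous model (represented here by all point masses, the extreme
# points of the set of all distributions) gives lower 0 and upper 1 for non-trivial events.
vacuous = [{x: 1.0 if x == k else 0.0 for x in range(1, 7)} for k in range(1, 7)]
assert lower(even, vacuous) == 0.0 and upper(even, vacuous) == 1.0
```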

Some approaches, summarized under the name nonadditive probabilities,[8] directly use one of these set functions, assuming the other one to be naturally defined such that $\overline{P}(A) = 1 - \underline{P}(A^c)$, with $A^c$ the complement of $A$. Other related concepts understand the corresponding intervals $[\underline{P}(A), \overline{P}(A)]$ for all events as the basic entity.[9][10]
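A small self-contained check (again with a hypothetical two-distribution set, chosen only for illustration) that lower and upper envelopes automatically satisfy this conjugacy relation:

```python
# Hypothetical two-distribution set over the outcomes {0, 1, 2}.
outcomes = {0, 1, 2}
dists = [
    {0: 0.2, 1: 0.3, 2: 0.5},
    {0: 0.4, 1: 0.4, 2: 0.2},
]

def lower(event):
    return min(sum(p[x] for x in event) for p in dists)

def upper(event):
    return max(sum(p[x] for x in event) for p in dists)

A = {0, 2}
# Conjugacy: the upper probability of A equals one minus the lower probability of its complement.
assert abs(upper(A) - (1.0 - lower(outcomes - A))) < 1e-12
```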

History

The idea to use imprecise probability has a long history. The first formal treatment dates back at least to the middle of the nineteenth century, by George Boole,[3] who aimed to reconcile the theories of logic and probability. In the 1920s, in A Treatise on Probability, Keynes[11] formulated and applied an explicit interval estimate approach to probability. Work on imprecise probability models proceeded fitfully throughout the 20th century, with important contributions by Henry Kyburg, Isaac Levi, and Teddy Seidenfeld.[12]
At the start of the 1990s, the field started to gather some momentum, with the publication of Peter Walley's book Statistical Reasoning with Imprecise Probabilities[7] (which is also where the term "imprecise probability" originates). The 1990s also saw important works by Kuznetsov,[13] and by Weichselberger,[9][10] who both use the term interval probability. Walley's theory extends the traditional subjective probability theory via buying and selling prices for gambles, whereas Weichselberger's approach generalizes Kolmogorov's axioms without imposing an interpretation.

Standard consistency conditions relate upper and lower probability assignments to non-empty closed convex sets of probability distributions. Therefore, as a welcome by-product, the theory also provides a formal framework for models used in robust statistics[14] and non-parametric statistics.[15] Included are also concepts based on Choquet integration,[16] and so-called two-monotone and totally monotone capacities,[17] which have become very popular in artificial intelligence under the name (Dempster–Shafer) belief functions.[18][19] Moreover, there is a strong connection[20] to Shafer and Vovk's notion of game-theoretic probability.[21]
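As an illustration of the belief-function case, the following sketch uses a hypothetical Dempster–Shafer mass assignment (the frozenset-keyed dictionary is just one convenient representation). It computes the belief of an event as the total mass of focal sets contained in it, and the plausibility as the total mass of focal sets intersecting it; belief then plays the role of a lower probability and plausibility that of its conjugate upper probability:

```python
# Hypothetical Dempster–Shafer mass assignment over the frame {a, b, c}:
# non-negative masses on focal sets, summing to one.
frame = frozenset({"a", "b", "c"})
mass = {
    frozenset({"a"}): 0.5,
    frozenset({"a", "b"}): 0.3,
    frame: 0.2,  # mass on the whole frame expresses ignorance
}

def belief(event):
    """Belief: total mass of focal sets wholly contained in the event (a lower probability)."""
    return sum(m for focal, m in mass.items() if focal <= event)

def plausibility(event):
    """Plausibility: total mass of focal sets intersecting the event (the conjugate upper probability)."""
    return sum(m for focal, m in mass.items() if focal & event)

A = frozenset({"a", "c"})
print(round(belief(A), 3), round(plausibility(A), 3))  # 0.5 and 1.0 for this mass assignment

# The conjugacy relation from the introduction holds here as well.
assert abs(plausibility(A) - (1.0 - belief(frame - A))) < 1e-12
```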

Mathematical models

The term "imprecise probability" is somewhat misleading in that precision is often mistaken for accuracy, whereas an imprecise representation may be more accurate than a spuriously precise representation. In any case, the term appears to have become established in the 1990s, and covers a wide range of extensions of the theory of probability, including:

Interpretation of imprecise probabilities

A unification of many of the above-mentioned imprecise probability theories was proposed by Walley,[7] whose formulation is based on the subjective, behavioural interpretation of probability in terms of buying and selling prices for gambles. In simple terms, a decision maker's lower prevision for a gamble is the highest price at which the decision maker is sure they would buy it, and the upper prevision is the lowest price at which they are sure they would sell it. If the upper and lower previsions are equal, then they jointly denote the decision maker's fair price for the gamble, the price at which the decision maker is willing to take either side of the gamble. The existence of a fair price leads to precise probabilities.
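To make this concrete, here is a minimal sketch with a hypothetical gamble and a hypothetical set of distributions (not a full rendering of Walley's framework): lower and upper previsions are computed as the minimum and maximum of the gamble's expectation over the set, and when the set shrinks to a single distribution the two coincide and give the fair price.

```python
# Hypothetical gamble: payoff as a function of the outcome of a coin-like experiment.
outcomes = ["heads", "tails"]
gamble = {"heads": 10.0, "tails": -5.0}

# Hypothetical assessment: the chance of heads is only known to lie between 0.4 and 0.6.
credal_set = [{"heads": p, "tails": 1.0 - p} for p in (0.4, 0.5, 0.6)]

def expectation(dist, f):
    """Expected payoff of gamble f under a single distribution."""
    return sum(dist[w] * f[w] for w in outcomes)

lower_prevision = min(expectation(d, gamble) for d in credal_set)  # supremum acceptable buying price
upper_prevision = max(expectation(d, gamble) for d in credal_set)  # infimum acceptable selling price
print(lower_prevision, upper_prevision)  # 1.0 and 4.0

# With a single distribution the two coincide; this common value is the fair price.
fair_price = expectation({"heads": 0.5, "tails": 0.5}, gamble)
print(fair_price)  # 2.5
```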

The allowance for imprecision, or a gap between a decision maker's upper and lower previsions, is the primary difference between precise and imprecise probability theories. Such gaps arise naturally in betting markets that happen to be financially illiquid due to asymmetric information. This gap was cited repeatedly by Henry Kyburg as motivation for his interval probabilities, though he and Isaac Levi also give other reasons for intervals, or sets of distributions, representing states of belief.

Issues with imprecise probabilities

One issue with imprecise probabilities is that there is often an independent degree of caution or boldness inherent in the use of one interval, rather than a wider or narrower one. This may be a degree of confidence, a degree of fuzzy membership, or a threshold of acceptance. This is less of a problem for intervals whose lower and upper bounds are derived from a set of probability distributions, for example a set of priors followed by conditionalization on each member of the set. However, it does raise the question of why some distributions are included in the set of priors and others are not.
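For instance, a sketch of the "set of priors followed by conditionalization" idea, using hypothetical Beta priors and hypothetical data (conjugate Beta–Binomial updating is used so that no numerical integration is needed): every prior in the set is updated on the same observations, and the spread of the resulting posterior summaries is the imprecision that remains.

```python
# Hypothetical set of Beta(alpha, beta) priors for the chance of success of a Bernoulli process.
priors = [(1.0, 1.0), (2.0, 2.0), (0.5, 2.0)]

# Hypothetical data: 7 successes in 10 trials.
successes, trials = 7, 10

# Conjugate updating: a Beta(a, b) prior with binomial data gives a Beta(a + successes, b + failures) posterior.
posteriors = [(a + successes, b + (trials - successes)) for a, b in priors]

# Posterior mean of the success chance under each prior in the set.
posterior_means = [a / (a + b) for a, b in posteriors]

# The interval spanned by these posterior means is the imprecision that remains after updating.
print(round(min(posterior_means), 3), round(max(posterior_means), 3))  # 0.6 and 0.667
```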

Another issue is how one can be precise about two numbers, a lower bound and an upper bound, when one cannot be precise about a single number, a point probability. This concern may be merely rhetorical, as the robustness of a model with intervals is inherently greater than that of a model with point-valued probabilities. It does, however, raise legitimate concerns about inappropriate claims of precision at the interval endpoints, just as for point values.

A more practical issue is what kind of decision theory can make use of imprecise probabilities.[31] For fuzzy measures, there is the work of Ronald R. Yager.[32] For convex sets of distributions, Levi's works are instructive.[33] Another approach asks whether the threshold controlling the boldness of the interval matters more to a decision than simply taking the average or using a Hurwicz decision rule.[34] Other approaches appear in the literature.[35][36][37][38]
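As a toy illustration of the kind of question involved (hypothetical actions, payoffs, and probability interval; not a faithful rendering of any of the cited proposals), the following compares a cautious rule that ranks actions by their lower expectation with a Hurwicz-style rule that mixes the lower and upper expectations through an optimism parameter:

```python
# Hypothetical decision problem: two actions whose payoff depends on whether an event E occurs.
payoffs = {
    "safe":  {"E": 1.0, "not_E": 1.0},   # constant payoff, insensitive to E
    "risky": {"E": 3.0, "not_E": -1.0},  # pays off only if E occurs
}

# Hypothetical imprecise assessment: the probability of E lies somewhere between 0.3 and 0.7.
p_low, p_high = 0.3, 0.7

def expectation(payoff, p):
    return p * payoff["E"] + (1.0 - p) * payoff["not_E"]

def lower_upper(payoff):
    """Lower and upper expectations; the expectation is linear in p, so the extremes sit at the endpoints."""
    values = [expectation(payoff, p_low), expectation(payoff, p_high)]
    return min(values), max(values)

alpha = 0.5  # Hurwicz optimism parameter (a hypothetical choice of threshold)
for action, payoff in payoffs.items():
    low, high = lower_upper(payoff)
    hurwicz = alpha * high + (1.0 - alpha) * low
    print(action, "lower:", round(low, 2), "upper:", round(high, 2), "Hurwicz:", round(hurwicz, 2))

# The cautious rule (rank by lower expectation) prefers "safe" (1.0 vs 0.2), while the
# Hurwicz rule with alpha = 0.5 is indifferent between the two actions (both score 1.0).
```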

References

  1. ^ Kolmogorov, A. N. (1950). Foundations of the Theory of Probability. New York: Chelsea Publishing Company.
  2. ^ a b de Finetti, Bruno (1974). Theory of Probability. New York: Wiley.
  3. ^ a b c Boole, George (1854). An investigation of the laws of thought on which are founded the mathematical theories of logic and probabilities. London: Walton and Maberly.
  4. ^ Smith, Cedric A. B. (1961). "Consistency in statistical inference and decision". Journal of the Royal Statistical Society. B (23): 1–37.
  5. ^ a b c Williams, Peter M. (1975). Notes on conditional previsions. School of Math. and Phys. Sci., Univ. of Sussex.
  6. ^ .
  7. ^ Walley, Peter (1991). Statistical Reasoning with Imprecise Probabilities. London: Chapman and Hall.
  8. ^ Denneberg, Dieter (1994). Non-additive Measure and Integral. Dordrecht: Kluwer.
  9. ^ .
  10. ^ a b Weichselberger, K. (2001). Elementare Grundbegriffe einer allgemeineren Wahrscheinlichkeitsrechnung I - Intervallwahrscheinlichkeit als umfassendes Konzept. Heidelberg: Physica.
  11. ^ a b c Keynes, John Maynard (1921). A Treatise on Probability. London: Macmillan And Co.
  12. ^ "Imprecise Probabilities > Historical appendix: Theories of imprecise belief (Stanford Encyclopedia of Philosophy)".
  13. ^ Kuznetsov, Vladimir P. (1991). Interval Statistical Models. Moscow: Radio i Svyaz Publ.
  14. ^ Ruggeri, Fabrizio (2000). Robust Bayesian Analysis. D. Ríos Insua. New York: Springer.
  15. .
  16. .
  17. .
  18. ^ .
  19. ^ .
  20. .
  21. ^ Shafer, Glenn; Vladimir Vovk (2001). Probability and Finance: It's Only a Game!. Wiley.
  22. .
  23. ^ Dubois, Didier; Henri Prade (1985). Théorie des possibilités. Paris: Masson.
  24. .
  25. .
  26. .
  27. .
  28. .
  29. ^ Ferson, Scott; Vladik Kreinovich; Lev Ginzburg; David S. Myers; Kari Sentz (2003). "Constructing Probability Boxes and Dempster-Shafer Structures". SAND2002-4015. Sandia National Laboratories. Archived from the original on 2011-07-22. Retrieved 2009-09-23.
  30. .
  31. .
  32. .
  33. .
  34. .
  35. .
  36. .
  37. .
  38. .