Posterior probability
The posterior probability is a type of conditional probability that results from updating the prior probability with the information summarized by the likelihood, via an application of Bayes' rule. In the context of Bayesian statistics, it is the conditional probability distribution of an uncertain quantity given the observed data, and thus captures what is known about that quantity after the data have been taken into account.
Definition in the distributional case
In variational Bayesian methods, the posterior probability is the probability of the parameters $\theta$ given the evidence $X$, and is denoted $p(\theta \mid X)$.
It contrasts with the likelihood function, which is the probability of the evidence given the parameters: $p(X \mid \theta)$.
The two are related as follows:
Given a prior belief that a probability distribution function is $p(\theta)$ and that the observations $x$ have a likelihood $p(x \mid \theta)$, then the posterior probability is defined as

$$p(\theta \mid x) = \frac{p(x \mid \theta)}{p(x)}\, p(\theta),$$[6]

where $p(x)$ is the normalizing constant and is calculated as

$$p(x) = \int p(x \mid \theta)\, p(\theta)\, d\theta$$

for continuous $\theta$, or by summing $p(x \mid \theta)\, p(\theta)$ over all possible values of $\theta$ for discrete $\theta$.[7]
The posterior probability is therefore proportional to the product $\text{Likelihood} \times \text{Prior probability}$.
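The definition above can be checked numerically with a small discrete example. This is a minimal sketch, and the specific prior and likelihood values are illustrative assumptions, not taken from the article:

```python
# For a discrete parameter theta, the posterior is likelihood * prior
# divided by the evidence p(x), which is the sum of likelihood * prior
# over all values of theta. The numbers below are made up for illustration.

def posterior(prior, likelihood):
    """Return p(theta | x) for each theta, given p(theta) and p(x | theta)."""
    unnormalized = [l * p for l, p in zip(likelihood, prior)]
    evidence = sum(unnormalized)          # p(x), the normalizing constant
    return [u / evidence for u in unnormalized]

# A two-valued parameter: prior p(theta) and likelihood p(x | theta).
prior = [0.5, 0.5]
likelihood = [0.8, 0.2]
post = posterior(prior, likelihood)
print(post)  # a valid probability distribution: the entries sum to 1
```

Note that the normalizing constant never needs to be known in advance; it is obtained by summing the unnormalized products, which is why the posterior is often stated only up to proportionality.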
Example
Suppose there is a school with 60% boys and 40% girls as students. The girls wear trousers or skirts in equal numbers; all boys wear trousers. An observer sees a (random) student from a distance; all the observer can see is that this student is wearing trousers. What is the probability this student is a girl? The correct answer can be computed using Bayes' theorem.
The event $G$ is that the student observed is a girl, and the event $T$ is that the student observed is wearing trousers. To compute the posterior probability $P(G \mid T)$, we first need to know:
- $P(G)$, or the probability that the student is a girl regardless of any other information. Since the observer sees a random student, meaning that all students have the same probability of being observed, and the percentage of girls among the students is 40%, this probability equals 0.4.
- $P(B)$, or the probability that the student is not a girl (i.e. a boy) regardless of any other information ($B$ is the complementary event to $G$). This is 60%, or 0.6.
- $P(T \mid G)$, or the probability of the student wearing trousers given that the student is a girl. As girls are as likely to wear skirts as trousers, this is 0.5.
- $P(T \mid B)$, or the probability of the student wearing trousers given that the student is a boy. This is given as 1.
- $P(T)$, or the probability of a (randomly selected) student wearing trousers regardless of any other information. Since $P(T) = P(T \mid G)P(G) + P(T \mid B)P(B)$ (via the law of total probability), this is $P(T) = 0.5 \times 0.4 + 1 \times 0.6 = 0.8$.
Given all this information, the posterior probability of the observer having spotted a girl given that the observed student is wearing trousers can be computed by substituting these values in the formula:

$$P(G \mid T) = \frac{P(T \mid G)\, P(G)}{P(T)} = \frac{0.5 \times 0.4}{0.8} = 0.25.$$
An intuitive way to solve this is to assume the school has $N$ students. The number of boys is $0.6N$ and the number of girls is $0.4N$. If $N$ is sufficiently large, the total number of trouser-wearers is $0.6N + 0.5 \times 0.4N$, and the number of girl trouser-wearers is $0.5 \times 0.4N$. Therefore, among trouser-wearers, the proportion of girls is $(0.5 \times 0.4N)/(0.6N + 0.5 \times 0.4N) = 25\%$. In other words, if you separated out the group of trouser-wearers, a quarter of that group would be girls. Therefore, if you see trousers, the most you can deduce is that you are looking at a single sample from a subset of students of which 25% are girls; by definition, the chance of this random student being a girl is 25%. Every Bayes'-theorem problem can be solved in this way.[9]
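The school example can be verified numerically. All values below come directly from the example above:

```python
# Numerical check of the school example:
# P(G) = 0.4, P(B) = 0.6, P(T|G) = 0.5, P(T|B) = 1.

p_girl = 0.4
p_boy = 0.6
p_trousers_given_girl = 0.5
p_trousers_given_boy = 1.0

# Law of total probability: P(T) = P(T|G) P(G) + P(T|B) P(B)
p_trousers = (p_trousers_given_girl * p_girl
              + p_trousers_given_boy * p_boy)

# Bayes' theorem: P(G|T) = P(T|G) P(G) / P(T)
p_girl_given_trousers = p_trousers_given_girl * p_girl / p_trousers
print(p_girl_given_trousers)  # 0.25
```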
Calculation
The posterior probability distribution of one random variable given the value of another can be calculated with Bayes' theorem by multiplying the prior probability distribution by the likelihood function, and then dividing by the normalizing constant, as follows:

$$f_{X \mid Y=y}(x) = \frac{f_X(x)\, L_{X \mid Y=y}(x)}{\int_{-\infty}^{\infty} f_X(u)\, L_{X \mid Y=y}(u)\, du}$$

gives the posterior probability density function for a random variable $X$ given the data $Y = y$, where
- $f_X(x)$ is the prior density of $X$,
- $L_{X \mid Y=y}(x) = f_{Y \mid X=x}(y)$ is the likelihood function as a function of $x$,
- $\int_{-\infty}^{\infty} f_X(u)\, L_{X \mid Y=y}(u)\, du$ is the normalizing constant, and
- $f_{X \mid Y=y}(x)$ is the posterior density of $X$ given the data $Y = y$.[10]
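The continuous calculation above can be approximated on a grid: evaluate prior density times likelihood at many points, then divide by a numerical estimate of the integral. The uniform prior and binomial likelihood (6 successes in 9 trials of a coin with unknown bias) are illustrative assumptions, not from the article:

```python
# Grid approximation of a posterior density: posterior ∝ prior * likelihood,
# normalized by a Riemann-sum estimate of the integral in the denominator.
from math import comb

grid = [i / 1000 for i in range(1001)]        # candidate values of x on [0, 1]
dx = 1 / 1000
prior = [1.0 for _ in grid]                   # uniform prior density f_X(x)
# Binomial likelihood of observing 6 successes in 9 trials, as a function of x.
likelihood = [comb(9, 6) * x**6 * (1 - x)**3 for x in grid]

unnormalized = [p * l for p, l in zip(prior, likelihood)]
evidence = sum(u * dx for u in unnormalized)  # approximates the integral
posterior_density = [u / evidence for u in unnormalized]

# The resulting posterior density integrates to (approximately) 1.
print(sum(f * dx for f in posterior_density))
```

Grid approximation only scales to low-dimensional parameters; in higher dimensions the integral is typically handled by sampling methods such as Markov chain Monte Carlo.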
Credible interval
The posterior probability is a conditional probability conditioned on randomly observed data, and hence is itself a random variable. For a random variable, it is important to summarize its amount of uncertainty. One way to achieve this is to provide a credible interval of the posterior probability.[11]
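A credible interval can be read directly off a discretized posterior. The sketch below, a hedged illustration rather than a standard library routine, uses the same illustrative coin-flip posterior (6 successes in 9 trials under a uniform prior) and extracts an equal-tailed 95% interval from its cumulative mass:

```python
# Equal-tailed credible interval from a discretized posterior:
# find the grid values where the cumulative mass crosses 2.5% and 97.5%.
from math import comb

grid = [i / 1000 for i in range(1001)]
weights = [comb(9, 6) * x**6 * (1 - x)**3 for x in grid]  # unnormalized posterior
total = sum(weights)
probs = [w / total for w in weights]                      # normalized masses

def equal_tailed_interval(grid, probs, level=0.95):
    """Return (lo, hi) covering the central `level` of the posterior mass."""
    tail = (1 - level) / 2
    cum = 0.0
    lo = hi = grid[0]
    for x, p in zip(grid, probs):
        if cum < tail:
            lo = x            # last point before the lower tail is exceeded
        if cum < 1 - tail:
            hi = x            # last point before the upper tail begins
        cum += p
    return lo, hi

lo, hi = equal_tailed_interval(grid, probs)
print(lo, hi)  # central 95% credible interval for the unknown bias
```

An alternative summary is the highest posterior density (HPD) interval, which is the shortest interval containing the required mass; for skewed posteriors the two generally differ.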
Classification
In classification, posterior probabilities reflect the uncertainty of assigning an observation to a particular class; see also class-membership probabilities. While statistical classification methods by definition generate posterior probabilities, machine learning methods often supply membership values that do not induce any probabilistic confidence.
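As a sketch of how posterior class probabilities arise in classification, the example below applies Bayes' rule to two classes with Gaussian class-conditional densities; the class priors and density parameters are invented for illustration:

```python
# Posterior class probabilities: P(c | x) = P(x | c) P(c) / sum_c' P(x | c') P(c').
from math import exp, pi, sqrt

def normal_pdf(x, mean, sd):
    """Density of a normal distribution at x."""
    return exp(-((x - mean) ** 2) / (2 * sd**2)) / (sd * sqrt(2 * pi))

# Two classes with (made-up) priors and Gaussian class-conditional densities.
priors = {"A": 0.7, "B": 0.3}
params = {"A": (0.0, 1.0), "B": (2.0, 1.0)}   # (mean, sd) per class

def class_posteriors(x):
    joint = {c: priors[c] * normal_pdf(x, *params[c]) for c in priors}
    evidence = sum(joint.values())            # total probability of x
    return {c: j / evidence for c, j in joint.items()}

post = class_posteriors(1.0)
print(post)  # posterior probabilities over the classes, summing to 1
```

At $x = 1.0$, exactly midway between the two class means, the likelihoods are equal and the posterior reduces to the prior, illustrating that the data are uninformative there.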
See also
- Prediction interval
- Bernstein–von Mises theorem
- Probability of success
- Bayesian epistemology
- Metropolis–Hastings algorithm
References
- ISBN 978-1-4739-1636-4.
- hdl:2123/9107.
- ^ Etz, Alex (2015-07-25). "Understanding Bayes: Updating priors via the likelihood". The Etz-Files. Retrieved 2022-08-18.
- ISBN 978-1-4398-6248-3.
- ISBN 0-471-63729-7.
- ISBN 978-0-387-31073-2.
- ISBN 978-1-4398-4095-5.
- ^ Ross, Kevin. Chapter 8 Introduction to Continuous Prior and Posterior Distributions | An Introduction to Bayesian Reasoning and Methods.
- ^ "Bayes' theorem - C o r T e x T". sites.google.com. Retrieved 2022-08-18.
- ^ "Posterior probability - formulasearchengine". formulasearchengine.com. Retrieved 2022-08-19.
- ^ Clyde, Merlise; Çetinkaya-Rundel, Mine; Rundel, Colin; Banks, David; Chai, Christine; Huang, Lizzy. Chapter 1 The Basics of Bayesian Statistics | An Introduction to Bayesian Thinking.
- S2CID 199007973.
Further reading
- Lancaster, Tony (2004). An Introduction to Modern Bayesian Econometrics. Oxford: Blackwell. ISBN 1-4051-1720-6.
- Lee, Peter M. (2004). Bayesian Statistics: An Introduction (3rd ed.). ISBN 0-340-81405-5.