Belief aggregation

Source: Wikipedia, the free encyclopedia.

Belief aggregation,[1] also called risk aggregation,[2] opinion aggregation[3] or probabilistic opinion pooling,[4] is a process in which different probability distributions, produced by different experts, are combined to yield a single probability distribution.

Background

Expert opinions are often uncertain. Rather than saying e.g. "it will rain tomorrow", a weather expert may say "it will rain with probability 70% and be sunny with probability 30%". Such a statement is called a belief. Different experts may have different beliefs; for example, a different weather expert may say "it will rain with probability 60% and be sunny with probability 40%". In other words, each expert has a subjective probability distribution over a given set of outcomes.

A belief aggregation rule is a function that takes as input two or more probability distributions over the same set of outcomes, and returns a single probability distribution over that same set.

Applications

Documented applications of belief aggregation include:

  • During COVID-19, the European Academy of Neurology developed an ad-hoc three-round voting method to aggregate expert opinions and reach a consensus.[9]

Common rules

Common belief aggregation rules include:

  • Linear aggregation (also called average voting rule) - selecting the weighted or unweighted arithmetic mean of the experts' reports.
  • Geometric aggregation - selecting the weighted or unweighted geometric mean of the reports.
  • Multiplicative aggregation - selecting the product of probabilities.
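As an illustration (not part of the source), the three rules above can be sketched in Python; the belief matrix and the choice of two experts over three outcomes are hypothetical:

```python
import numpy as np

# Two hypothetical expert beliefs over three outcomes (each row sums to 1).
beliefs = np.array([
    [0.7, 0.2, 0.1],
    [0.5, 0.3, 0.2],
])

def linear(b, weights=None):
    """Arithmetic mean of the reports (optionally weighted)."""
    return np.average(b, axis=0, weights=weights)

def geometric(b, weights=None):
    """Geometric mean of the reports, renormalized to sum to 1."""
    g = np.exp(np.average(np.log(b), axis=0, weights=weights))
    return g / g.sum()

def multiplicative(b):
    """Product of the reported probabilities, renormalized to sum to 1."""
    m = np.prod(b, axis=0)
    return m / m.sum()

# linear(beliefs) -> [0.6, 0.25, 0.15]
```

Note that geometric and multiplicative aggregation require renormalization, since the pointwise geometric mean or product of probability vectors does not generally sum to 1.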

Dietrich and List[4] present axiomatic characterizations of each class of rules. They argue that linear aggregation can be justified “procedurally” but not “epistemically”, while the other two rules can be justified epistemically: geometric aggregation is justified when the experts' beliefs are based on the same information, and multiplicative aggregation is justified when the experts' beliefs are based on private information.

Properties of belief aggregation rules

A belief aggregation rule should arguably satisfy some desirable properties, or axioms:

  • Zero preservation[3] means that, if all experts agree that an event has zero probability, then the same should hold in the aggregated distribution. An equivalent axiom is consensus preservation[10] or certainty preservation[1], which means that, if all experts agree that an event has probability 1, then the same should hold in the aggregated distribution. This is a basic axiom that is satisfied by linear, geometric and multiplicative aggregation, as well as many others.
  • Plausibility preservation means that, if all experts agree that an event has a positive probability, then the same should hold in the aggregated distribution. This axiom is satisfied by linear aggregation.
  • Proportionality means that, if each expert assigns probability 1 to a single outcome, then the aggregated distribution is the average (or the weighted average) of the expert beliefs. This axiom is satisfied by linear aggregation.
  • Diversity is stronger than proportionality. It means that the support of the aggregated distribution contains the supports of all experts' beliefs. In other words, if any event has a positive probability according to at least one expert, then it has a positive probability in the aggregated distribution. This axiom is satisfied by linear aggregation.
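The diversity axiom can be illustrated with a hypothetical numeric example (not from the source): linear aggregation keeps an outcome in the support whenever any single expert considers it possible, whereas multiplicative aggregation drops it:

```python
import numpy as np

beliefs = np.array([
    [0.6, 0.4, 0.0],   # expert 1: outcome 3 is impossible
    [0.5, 0.3, 0.2],   # expert 2: outcome 3 is possible
])

linear = beliefs.mean(axis=0)                 # [0.55, 0.35, 0.10]
product = np.prod(beliefs, axis=0)
multiplicative = product / product.sum()      # [~0.714, ~0.286, 0.0]

assert linear[2] > 0            # diversity: outcome 3 stays in the support
assert multiplicative[2] == 0   # multiplicative aggregation is not diverse
```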

Truthful aggregation rules with money

Most literature on belief aggregation assumes that the experts report their beliefs honestly, as their main goal is to help the decision-maker get to the truth. In practice, experts may have strategic incentives to misreport their beliefs. In such settings, a truthful mechanism for belief aggregation could be useful.

In some settings, it is possible to pay the experts a certain sum of money, depending both on their expressed belief and on the realized outcome. Careful design of the payment function (often called a "scoring rule") can lead to a truthful mechanism. Various truthful scoring rules exist.[12][13][14][15]
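For instance, the quadratic (Brier-type) scoring rule is a well-known strictly proper scoring rule: an expert maximizes her expected payment by reporting her true belief. A minimal sketch (the belief values are hypothetical):

```python
import numpy as np

def quadratic_score(report, outcome):
    """Quadratic (Brier-type) scoring rule: pay 2*q[k] - sum(q**2) when outcome k occurs."""
    report = np.asarray(report, dtype=float)
    return 2 * report[outcome] - np.dot(report, report)

# An expert with true belief p maximizes her expected payment by reporting p.
true_belief = np.array([0.7, 0.3])

def expected_payment(report):
    return sum(p * quadratic_score(report, k) for k, p in enumerate(true_belief))

assert expected_payment(true_belief) > expected_payment([0.9, 0.1])
assert expected_payment(true_belief) > expected_payment([0.5, 0.5])
```

The expected payment for reporting q under true belief p is 2·p·q − |q|², which is uniquely maximized at q = p; this is what makes the rule strictly proper.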

Truthful aggregation rules without money

In some settings, monetary transfers are not possible. For example, the realized outcome may happen in the far future, or a wrong decision may be catastrophic.

To develop truthful mechanisms, one must make assumptions about the experts' preferences over the set of accepted probability-distributions. If the space of possible preferences is too rich, then strong impossibility results imply that the only truthful mechanism is the dictatorship mechanism (see Gibbard–Satterthwaite theorem).

Single-peaked preferences

A useful domain restriction is that the experts have single-peaked preferences. An aggregation rule is called one-dimensional strategyproof (1D-SP) if whenever all experts have single-peaked preferences, and submit their peaks to the aggregation rule, no expert can impose a strictly better aggregated distribution by reporting a false peak. An equivalent property is called uncompromisingness:[16] it says that, if the belief of expert i is smaller than the aggregate distribution, and i changes his report, then the aggregate distribution will be weakly larger; and vice-versa.

Moulin[17] proved a characterization of all 1D-SP rules, as well as the following two characterizations:

  • A rule is anonymous and 1D-SP for all single-peaked preferences iff it is equivalent to a median voting rule with at most n+1 "phantoms".
  • A rule is anonymous, 1D-SP and Pareto-efficient for all single-peaked preferences iff it is equivalent to a median voting rule with at most n-1 phantoms.
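Moulin's phantom-median rules are easy to state in code. The sketch below (peak and phantom positions are hypothetical) shows a median rule with n−1 phantoms for n = 3 experts, and checks uncompromisingness for one expert:

```python
import statistics

def phantom_median(peaks, phantoms):
    """Median voting rule with fixed 'phantom' votes (Moulin's characterization)."""
    return statistics.median(list(peaks) + list(phantoms))

# Three experts' peaks on [0, 1], plus n-1 = 2 fixed phantoms (hypothetical values).
peaks = [0.2, 0.6, 0.9]
phantoms = [0.4, 0.8]
outcome = phantom_median(peaks, phantoms)  # median of {0.2, 0.4, 0.6, 0.8, 0.9} = 0.6

# Uncompromisingness: the expert with peak 0.2 (below the outcome) cannot pull
# the outcome below 0.6 by misreporting; every deviation moves it weakly upward.
for misreport in [0.0, 0.3, 0.59, 0.7, 1.0]:
    assert phantom_median([misreport, 0.6, 0.9], phantoms) >= outcome
```

Since the expert's preferences are single-peaked around 0.2, an outcome weakly above 0.6 is never strictly better for her than 0.6 itself, so truthful reporting is optimal.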

Jennings, Laraki, Puppe and Varloot[18] present new characterizations of strategyproof mechanisms with single-peaked preferences.

Single-peaked preferences of the pdf

A further restriction of the single-peaked domain is that agents have single-peaked preferences with the L1 metric on the probability density function. That is: for each agent i, there is an "ideal" probability distribution pi, and his utility from a selected probability distribution p* is minus the L1 distance between pi and p*. An aggregation rule is called L1-metric-strategyproof (L1-metric-SP) if, whenever all experts have single-peaked preferences with the L1 metric and submit their peaks to the aggregation rule, no expert can impose a strictly better aggregated distribution by reporting a false peak. Several L1-metric-SP aggregation rules have been suggested in the context of budget-proposal aggregation:

  • Goel, Krishnaswamy and Sakshuwong[19] proved the existence of a Pareto optimal aggregation rule that is L1-metric-SP;
  • Freeman, Pennock, Peters and Vaughan[20] presented a rule called moving phantoms, which is L1-metric-SP and satisfies a fairness property (but it is not Pareto-optimal). They also presented a family of L1-metric-SP rules based on the median rule.

However, such preferences may not be a good fit for belief aggregation, as they are neutral - they do not distinguish between different outcomes. For example, suppose there are three outcomes, and the expert's belief pi assigns 100% to outcome 1. Then the L1 metric between pi and "100% outcome 2" is 2, and the L1 metric between pi and "100% outcome 3" is 2 too. The same is true for any neutral metric. This makes sense when 1, 2, 3 are budget items. However, if these outcomes describe the potential strength of an earthquake on the Richter scale, then the distance from pi to "100% outcome 2" should be much smaller than the distance to "100% outcome 3".
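The neutrality point can be checked directly: under the L1 metric, "100% outcome 2" and "100% outcome 3" are exactly equally far from "100% outcome 1":

```python
import numpy as np

def l1(p, q):
    """L1 distance between two probability vectors."""
    return np.abs(np.asarray(p) - np.asarray(q)).sum()

p_i = [1.0, 0.0, 0.0]                    # "100% outcome 1"
assert l1(p_i, [0.0, 1.0, 0.0]) == 2.0   # distance to "100% outcome 2"
assert l1(p_i, [0.0, 0.0, 1.0]) == 2.0   # distance to "100% outcome 3": identical
```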

Single-peaked preferences on the cdf

Varloot and Laraki[1] study a different preference domain, in which the outcomes are linearly ordered, and the preferences are single-peaked in the space of cumulative distribution function (cdf). That is: each agent i has an ideal cumulative distribution function ci, and his utility depends negatively on the distance between ci and the accepted distribution c*. They define a new concept called level-strategyproofness (Level-SP), which is relevant when society's decision is based on the question of whether the probability of some event is above or below a given threshold. Level-SP provably implies strategyproofness for a rich class of cdf-single-peaked preferences. They characterize two new aggregation rules:

  • The order-cumulative rules are the only aggregation rules that satisfy Level-SP, anonymity, certainty-preservation and plausibility-preservation. A special case of this family is the middlemost cumulative, which is an order-cumulative rule based on the median.
    • However, these rules are not diverse. For example, if three experts report "99% outcome 1" and one expert reports "99% outcome 2", then every order-cumulative rule will choose either "99% outcome 1" or "99% outcome 2"; however, an outcome such as "75% outcome 1 and 25% outcome 2" is more reasonable.
  • The proportional-cumulative rule is the only aggregation rule that satisfies Level-SP and proportionality. It also handles profiles with dominations (where the cdf of each agent i is either entirely above or entirely below the cdf of any other agent j) in a natural way. However, it violates plausibility-preservation.
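Assuming the middlemost cumulative takes the pointwise median of the experts' cdfs (consistent with the description of order-cumulative rules above), a small sketch with hypothetical reports reproduces the non-diversity behaviour noted above:

```python
import numpy as np

# Three hypothetical expert beliefs over four linearly ordered outcomes.
beliefs = np.array([
    [0.99, 0.01, 0.00, 0.00],
    [0.99, 0.01, 0.00, 0.00],
    [0.00, 0.99, 0.01, 0.00],
])

cdfs = np.cumsum(beliefs, axis=1)          # each row is a cdf ending at 1
middlemost_cdf = np.median(cdfs, axis=0)   # pointwise median is itself a valid cdf
aggregated = np.diff(middlemost_cdf, prepend=0.0)
# aggregated -> [0.99, 0.01, 0.0, 0.0]: the majority report wins outright
```

The pointwise median of monotone step functions is itself monotone and ends at 1, so the result is a valid cdf; here the majority belief is selected unchanged, and the dissenting expert's mass on outcome 2 leaves no trace.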

Other results include:

  • There is no aggregation rule that satisfies diversity, Level-SP and unanimity.
  • When there are at least 4 outcomes, the only rules that satisfy Level-SP, L1-metric-SP and certainty-preservation are dictatorships (there are rules that satisfy Level-SP and L1-metric-SP, but not certainty-preservation; with 3 outcomes, every level-SP rule is also L1-metric-SP).
  • Most results can be extended to assign different weights to different experts (representing their level of expertise).
  • A new voting method: majority judgement with uncertainty (MJU). It is a variant of majority judgement which allows voters to express uncertainty about the qualities of each candidate.

Software

ANDURIL[21] is a MATLAB toolbox for belief aggregation.

See also

  • Ensemble forecasting - instead of making a single forecast of the most likely weather, a set (or ensemble) of forecasts is produced, aiming to give an indication of the range of possible future states of the atmosphere.
  • Aggregative Contingent Estimation Program - a program of the Office of Incisive Analysis that ran between 2010 and 2015.
  • Data assimilation - a mathematical discipline that seeks to optimally combine theory (usually in the form of a numerical model) with observations.
  • Scoring rule - can be used to incentivize truthful belief aggregation.
  • Sensor fusion - combining sensor data from disparate sources.
  • Budget-proposal aggregation - a similar problem in which each expert reports his ideal budget-allocation, and the goal is to aggregate the reports to a common budget-allocation.
  • Belief merging - similar to belief aggregation, except that the beliefs are given by logical formulae rather than by probability distributions.

Further reading

Several books on related topics are available.[22][23][3]

References