Newcomb's paradox
| Actual choice | Predicted choice: A + B (B has $0) | Predicted choice: B (B has $1,000,000) |
|---|---|---|
| A + B | $1,000 | $1,001,000 |
| B | $0 | $1,000,000 |
In philosophy and mathematics, Newcomb's paradox, also known as Newcomb's problem, is a thought experiment involving a game between two players, one of whom is able to predict the future.
Newcomb's paradox was created by William Newcomb of the University of California's Lawrence Livermore Laboratory, and was first analyzed in a philosophy paper by Robert Nozick in 1969.[1]
The problem
There is a reliable predictor, another player, and two boxes designated A and B. The player is given a choice between taking only box B or taking both boxes A and B. The player knows the following:[4]
- Box A is transparent and always contains a visible $1,000.
- Box B is opaque, and its content has already been set by the predictor:
- If the predictor has predicted that the player will take both boxes A and B, then box B contains nothing.
- If the predictor has predicted that the player will take only box B, then box B contains $1,000,000.
The player does not know what the predictor predicted or what box B contains while making the choice.
Game-theory strategies
In his 1969 article, Nozick noted that "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly."[4] The problem continues to divide philosophers today.[5][6] In a 2020 survey, a modest plurality of professional philosophers chose to take both boxes (39.0% versus 31.2%).[7]
Game theory offers two strategies for this game that rely on different principles: the expected utility principle and the strategic dominance principle. The problem is called a paradox because two analyses that both sound intuitively logical give conflicting answers to the question of what choice maximizes the player's payout.
- Under the expected utility principle, when the predictor is certain or nearly certain to be correct, the player should take only box B. This choice statistically maximizes the player's winnings, setting them at about $1,000,000 per game.
- Under the dominance principle, the player should choose the strategy that is always better; taking both boxes A and B always yields $1,000 more than taking only B. However, the expected utility of "always $1,000 more than B" depends on the statistical payout of the game; when the predictor's prediction is almost certain or certain, taking both boxes sets the player's winnings at about $1,000 per game.
David Wolpert and Gregory Benford point out that paradoxes arise when not all relevant details of a problem are specified, and there is more than one "intuitively obvious" way to fill in those missing details. They suggest that in the case of Newcomb's paradox, the conflict over which of the two strategies is "obviously correct" reflects the fact that filling in the details in Newcomb's problem can result in two different noncooperative games, and each of the strategies is reasonable for one game but not the other. They then derive the optimal strategies for both of the games, which turn out to be independent of the predictor's infallibility, questions of causality, determinism, and free will.[4]
Causality and free will
| Actual choice | Predicted choice: A + B | Predicted choice: B |
|---|---|---|
| A + B | $1,000 | Impossible |
| B | Impossible | $1,000,000 |
Causality issues arise when the predictor is posited as infallible and incapable of error: if the prediction cannot be wrong, then the outcomes in which the prediction and the player's actual choice disagree become impossible (as in the adjacent table), and the player's choice seemingly determines, or is determined by, a prediction made in the past.
Gary Drescher argues in his book Good and Real that the correct decision is to take only box B, by appealing to a situation he argues is analogous – a rational agent in a deterministic universe deciding whether or not to cross a potentially busy street.[11]
Andrew Irvine argues that the problem is structurally isomorphic to Braess's paradox, a non-intuitive but ultimately non-paradoxical result concerning equilibrium points in physical systems of various kinds.[12]
Simon Burgess has argued that the problem can be divided into two stages: the stage before the predictor has gained all the information on which the prediction will be based and the stage after it. While the player is still in the first stage, they are presumably able to influence the predictor's prediction, for example, by committing to taking only one box. So players who are still in the first stage should simply commit themselves to one-boxing.
Burgess readily acknowledges that those who are in the second stage should take both boxes. As he emphasises, however, for all practical purposes that is beside the point; the decisions "that determine what happens to the vast bulk of the money on offer all occur in the first [stage]".[13] So players who find themselves in the second stage without having already committed to one-boxing will invariably end up without the riches and without anyone else to blame. In Burgess's words: "you've been a bad boy scout"; "the riches are reserved for those who are prepared".[14]
Burgess has stressed that – pace certain critics (e.g., Peter Slezak) – he does not recommend that players try to trick the predictor. Nor does he assume that the predictor is unable to predict the player's thought process in the second stage.[15] Quite to the contrary, Burgess analyses Newcomb's paradox as a common cause problem, and he pays special attention to the importance of adopting a set of unconditional probability values – whether implicitly or explicitly – that are entirely consistent at all times. To treat the paradox as a common cause problem is simply to assume that the player's decision and the predictor's prediction have a common cause. (That common cause may be, for example, the player's brain state at some particular time before the second stage begins.)
It is also notable that Burgess highlights a similarity between Newcomb's paradox and Kavka's toxin puzzle. In both problems one can have a reason to intend to do something without having a reason to actually do it. Recognition of that similarity, however, is something that Burgess actually credits to Andy Egan.[16]
Consciousness and simulation
Newcomb's paradox can also be related to the question of machine consciousness, specifically whether a perfect simulation of a person's brain would generate the consciousness of that person. If the predictor works by simulating the player's brain, and that simulation is conscious, then the player cannot tell whether they are standing before the boxes in reality or in the simulation, which has been offered as a reason to take only box B.
Fatalism
Newcomb's paradox is related to logical fatalism in that they both suppose absolute certainty of the future. In logical fatalism, this assumption of certainty creates circular reasoning ("a future event is certain to happen, therefore it is certain to happen"), while Newcomb's paradox considers whether the participants of its game are able to affect a predestined outcome.[18]
Extensions to Newcomb's problem
Many thought experiments similar to or based on Newcomb's problem have been discussed in the literature.[1] For example, a quantum-theoretical version of Newcomb's problem in which box B is entangled with box A has been proposed.[19]
The meta-Newcomb problem
Another related problem is the meta-Newcomb problem.[20] The setup of this problem is similar to the original Newcomb problem. However, the twist here is that the predictor may elect to decide whether to fill box B after the player has made a choice, and the player does not know whether box B has already been filled. There is also another predictor: a "meta-predictor" who has reliably predicted both the player and the predictor in the past, and who predicts the following: "Either you will choose both boxes, and the predictor will make its decision after you, or you will choose only box B, and the predictor will already have made its decision."
In this situation, a proponent of choosing both boxes is faced with the following dilemma: if the player chooses both boxes, the predictor will not yet have made its decision, and therefore a more rational choice would be for the player to choose box B only. But if the player so chooses, the predictor will already have made its decision, making it impossible for the player's decision to affect the predictor's decision.
See also
- Decision theory
- Causal decision theory
- Evidential decision theory
- Alternatives to causal and evidential decision theory
- Roko's basilisk
Notes
- ^ a b Robert Nozick (1969). "Newcomb's Problem and Two Principles of Choice" (PDF). In Rescher, Nicholas (ed.). Essays in Honor of Carl G. Hempel. Springer. Archived from the original (PDF) on 2019-03-31.
- ISBN 0-393-02023-1).
- ^ "Causal Decision Theory". Stanford Encyclopedia of Philosophy. The Metaphysics Research Lab, Stanford University. Retrieved 3 February 2016.
- ^ S2CID 113227.
- ^ Bellos, Alex (28 November 2016). "Newcomb's problem divides philosophers. Which side are you on?". The Guardian. Retrieved 13 April 2018.
- ^ Bourget, D., Chalmers, D. J. (2014). "What do philosophers believe?" Philosophical Studies, 170(3), 465–500.
- ^ "PhilPapers Survey 2020".
- ^ Christopher Langan. "The Resolution of Newcomb's Paradox". Noesis (44).
- S2CID 143485859.
- JSTOR 2027068.
- ISBN 978-0262042338.
- S2CID 28725419.
- S2CID 33405473.
- arXiv:math.ST/0608592.
- ^ Dummett, Michael (1996), The Seas of Language, Clarendon Press Oxford, pp. 352–358.
- S2CID 20417502.
References
- Bar-Hillel, Maya; Margalit, Avishai (1972). "Newcomb's paradox revisited". British Journal for the Philosophy of Science. 23 (4): 295–304. JSTOR 686730.
- Campbell, Richmond and Sowden, Lanning, ed. (1985), Paradoxes of Rationality and Cooperation: Prisoners' Dilemma and Newcomb's Problem, Vancouver: University of British Columbia Press. (an anthology discussing Newcomb's Problem, with an extensive bibliography).
- Collins, John. "Newcomb's Problem", International Encyclopedia of the Social and Behavioral Sciences, Neil Smelser and Paul Baltes (eds.), Elsevier Science (2001).
- Gardner, Martin (1986). Knotted Doughnuts and Other Mathematical Entertainments. W. H. Freeman and Company. pp. 155–175. ISBN 0-7167-1794-8.
- Levi, Isaac (1982). "A Note on Newcombmania". Journal of Philosophy. 79 (6): 337–342. JSTOR 2026081. (An article discussing the popularity of Newcomb's problem.)