Wisdom of the crowd
The wisdom of the crowd is the collective opinion of a diverse and independent group of individuals rather than that of a single expert. This process, while not new to the Information Age, has been pushed into the mainstream spotlight by social information sites such as Quora, Reddit, Stack Exchange, Wikipedia, Yahoo! Answers, and other web resources which rely on collective human knowledge.[1] An explanation for this phenomenon is that there is idiosyncratic noise associated with each individual judgment, and taking the average over a large number of responses will go some way toward canceling the effect of this noise.[2]
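This noise-cancellation account can be illustrated with a small simulation (the true value, crowd size, and noise level below are arbitrary assumptions, not data from any study):

```python
import random
import statistics

random.seed(0)
true_value = 1198          # e.g., a weight in pounds
crowd_size = 800

# Each individual's guess is the true value plus idiosyncratic noise.
guesses = [true_value + random.gauss(0, 150) for _ in range(crowd_size)]

crowd_estimate = statistics.mean(guesses)
individual_error = statistics.mean(abs(g - true_value) for g in guesses)
crowd_error = abs(crowd_estimate - true_value)

print(f"typical individual error: {individual_error:.1f}")
print(f"crowd (mean) error:       {crowd_error:.1f}")
```

Because the noise terms are independent, they largely cancel in the average, so the crowd's error is far smaller than the typical individual's.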
A large group's aggregated answers to questions involving quantity estimation, general world knowledge, and spatial reasoning have generally been found to be at least as good as, and often superior to, the answers given by any of the individuals within the group.
Jury theorems from social choice theory provide formal arguments for wisdom of the crowd given a variety of more or less plausible assumptions. Both the assumptions and the conclusions remain controversial, even though the theorems themselves are not. The oldest and simplest is Condorcet's jury theorem (1785).
Examples
Aristotle is credited as the first person to write about the "wisdom of the crowd" in his work Politics.[3][4] According to Aristotle, "it is possible that the many, though not individually good men, yet when they come together may be better, not individually but collectively, than those who are so, just as public dinners to which many contribute are better than those supplied at one man's cost".[5]
The classic wisdom-of-the-crowds finding involves point estimation of a continuous quantity. At a 1906 country fair in Plymouth, 800 people participated in a contest to estimate the weight of a slaughtered and dressed ox. Statistician Francis Galton observed that the median guess, 1207 pounds, was accurate within 1% of the true weight of 1198 pounds.[6] This has contributed to the insight in cognitive science that a crowd's individual judgments can be modeled as a probability distribution of responses with the median centered near the true value of the quantity to be estimated.[7]
In recent years, the "wisdom of the crowd" phenomenon has been leveraged in business strategy, advertising spaces, and also political research. Marketing firms aggregate consumer feedback and brand impressions for clients. Meanwhile, companies such as Trada invoke crowds to design advertisements based on clients' requirements.[8] Lastly, political preferences are aggregated to predict or nowcast political elections.[9][10]
Non-human examples are prevalent.
Higher-dimensional problems and modeling
Although classic wisdom-of-the-crowds findings center on point estimates of single continuous quantities, the phenomenon also scales up to higher-dimensional problems that do not lend themselves to aggregation methods such as taking the mean. More complex models have been developed for these purposes. A few examples of higher-dimensional problems that exhibit wisdom-of-the-crowds effects include:
- Combinatorial problems such as the traveling salesman problem, in which participants must find the shortest route between an array of points. Models of these problems either break the problem into common pieces (the local decomposition method of aggregation) or find solutions that are most similar to the individual human solutions (the global similarity aggregation method).[2][12]
- Ordering problems, such as the order of the U.S. presidents or of world cities by population.
- Gaussian distributions.[17]
Surprisingly popular
In further exploring ways to improve the results, scientists at MIT's Sloan Neuroeconomics Lab, in collaboration with Princeton University, developed a technique called "surprisingly popular". For a given question, people are asked to give two responses: what they think the right answer is, and what they think popular opinion will be. The answer that receives more support than the crowd collectively predicted, i.e. the surprisingly popular one, is taken as correct. The "surprisingly popular" algorithm was found to reduce errors by 21.3 percent compared to simple majority votes, by 24.2 percent compared to basic confidence-weighted votes in which people express how confident they are in their answers, and by 22.2 percent compared to advanced confidence-weighted votes that use only the answers with the highest average confidence.[18]
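A minimal sketch of the idea for a single yes/no question (the responses below are hypothetical, and this binary formulation is a simplification of the published algorithm):

```python
# Each tuple is (respondent's own answer, respondent's prediction
# of the fraction of the crowd that will answer "yes").
responses = [
    ("no", 0.80), ("no", 0.75), ("yes", 0.60),
    ("yes", 0.70), ("no", 0.85), ("yes", 0.65),
]

actual_yes = sum(1 for a, _ in responses if a == "yes") / len(responses)
predicted_yes = sum(p for _, p in responses) / len(responses)

# "yes" is surprisingly popular if more people said yes than the crowd
# predicted; otherwise "no" is the surprisingly popular answer.
answer = "yes" if actual_yes > predicted_yes else "no"
print(answer)
```

Here a majority vote is split, and most respondents expect "yes" to dominate; since "yes" falls short of that prediction, "no" is selected.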
Definition of crowd
In the context of wisdom of the crowd, the term crowd takes on a broad meaning. One definition characterizes a crowd as a group of people amassed by an open call for participation.[19] While crowds are often leveraged in online applications, they can also be utilized in offline contexts.[19] In some cases, members of a crowd may be offered monetary incentives for participation.[20] Certain applications of "wisdom of the crowd", such as jury duty in the United States, mandate crowd participation.[21]
Analogues with individual cognition: the "crowd within"
The insight that crowd responses to an estimation task can be modeled as a sample from a probability distribution invites comparisons with individual cognition. If a single person's repeated judgments can likewise be treated as samples from an internal distribution, then averaging several estimates from the same individual should also improve accuracy; this idea has been called the "crowd within".
Vul and Pashler (2008) asked participants for point estimates of continuous quantities associated with general world knowledge, such as "What percentage of the world's airports are in the United States?" Without being alerted to the procedure in advance, half of the participants were immediately asked to make a second, different guess in response to the same question, and the other half were asked to do this three weeks later. The average of a participant's two guesses was more accurate than either individual guess. Furthermore, the averages of guesses made in the three-week delay condition were more accurate than guesses made in immediate succession. One explanation of this effect is that guesses in the immediate condition were less independent of each other (an anchoring effect), and therefore shared some of the same error, which averaging cannot remove.
Hourihan and Benjamin (2010) tested the hypothesis that the estimate improvements observed by Vul and Pashler in the delayed responding condition were the result of increased independence of the estimates. To do this, Hourihan and Benjamin capitalized on variations in memory span among their participants. In support of this hypothesis, they found that averaging the repeated estimates of those with lower memory spans showed greater estimate improvements than averaging the repeated estimates of those with larger memory spans.[24]
Rauhut and Lorenz (2011) expanded on this research by again asking participants to make estimates of continuous quantities related to real world knowledge. In this case participants were informed that they would make five consecutive estimates. This approach allowed the researchers to determine, firstly, the number of times one needs to ask oneself in order to match the accuracy of asking others and then, the rate at which estimates made by oneself improve estimates compared to asking others. The authors concluded that asking oneself an infinite number of times does not surpass the accuracy of asking just one other individual. Overall, they found little support for a so-called "mental distribution" from which individuals draw their estimates; in fact, they found that in some cases asking oneself multiple times actually reduces accuracy. Ultimately, they argue that the results of Vul and Pashler (2008) overestimate the wisdom of the "crowd within" – as their results show that asking oneself more than three times actually reduces accuracy to levels below that reported by Vul and Pashler (who only asked participants to make two estimates).[25]
Müller-Trede (2011) attempted to investigate the types of questions for which utilizing the "crowd within" is most effective. He found that while accuracy gains were smaller than would be expected from averaging one's estimates with those of another individual, repeated judgments led to increases in accuracy for both year estimation questions (e.g., when was the thermometer invented?) and questions about estimated percentages (e.g., what percentage of internet users connect from China?). General numerical questions (e.g., what is the speed of sound, in kilometers per hour?) did not improve with repeated judgments, while averaging individual judgments with those of a random other did improve accuracy. This, Müller-Trede argues, is the result of the bounds implied by year and percentage questions.[26]
Van Dolder and Van den Assem (2018) studied the "crowd within" using a large database from three estimation competitions organised by Holland Casino. For each of these competitions, they find that within-person aggregation indeed improves accuracy of estimates. Furthermore, they also confirm that this method works better if there is a time delay between subsequent judgments. Even with considerable delay between estimates, between-person aggregation is more beneficial. The average of a large number of judgements from the same person is barely better than the average of two judgements from different people.[27]
Dialectical bootstrapping: improving the estimates of the "crowd within"
Herzog and Hertwig (2009) attempted to improve on the "wisdom of many in one mind" (i.e., the "crowd within") by asking participants to use dialectical bootstrapping. Dialectical bootstrapping involves generating a second estimate that is dialectical to the first: participants assume their initial estimate is off the mark, consider reasons why it might be wrong, and produce a new estimate from that alternative perspective; the two estimates are then averaged. Herzog and Hertwig found that this procedure yielded more accurate estimates than simply averaging two unaided estimates.
Hirt and Markman (1995) found that participants need not be limited to a consider-the-opposite strategy in order to improve judgments. Researchers asked participants to consider-an-alternative – operationalized as any plausible alternative (rather than simply focusing on the "opposite" alternative) – finding that simply considering an alternative improved judgments.[29]
Not all studies have shown support for the "crowd within" improving judgments. Ariely and colleagues asked participants to provide responses based on their answers to true-false items and their confidence in those answers. They found that while averaging judgment estimates between individuals significantly improved estimates, averaging repeated judgment estimates made by the same individuals did not significantly improve estimates.[30]
Challenges and solution approaches
Wisdom-of-the-crowds research routinely attributes the superiority of crowd averages over individual judgments to the elimination of individual noise,[31] an explanation that assumes independence of the individual judgments from each other.[7][22] Thus the crowd tends to make its best decisions if it is made up of diverse opinions and ideologies.
Averaging can eliminate random errors that affect each person's answer in a different way, but not systematic errors that affect the opinions of the entire crowd in the same way; a wisdom-of-the-crowd technique would not be expected to compensate for cognitive biases shared by the whole group.
Scott E. Page introduced the diversity prediction theorem: "The squared error of the collective prediction equals the average squared error minus the predictive diversity". Therefore, when the diversity in a group is large, the error of the crowd is small.[34]
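The theorem is an algebraic identity, which a few lines of Python can verify on hypothetical predictions (the numbers below are invented for illustration):

```python
predictions = [45.0, 60.0, 52.0, 38.0, 55.0]  # hypothetical individual predictions
truth = 50.0                                   # hypothetical true value

mean_pred = sum(predictions) / len(predictions)

collective_error = (mean_pred - truth) ** 2
avg_individual_error = sum((p - truth) ** 2 for p in predictions) / len(predictions)
diversity = sum((p - mean_pred) ** 2 for p in predictions) / len(predictions)

# Diversity prediction theorem:
#   collective error = average individual error - predictive diversity
print(collective_error, avg_individual_error - diversity)
```

Because diversity is subtracted from the average individual error, the crowd's collective prediction is never worse than the average individual, and it improves as the spread of opinions grows.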
Miller and Steyvers reduced the independence of individual responses in a wisdom-of-the-crowds experiment by allowing limited communication between participants. Participants were asked to answer ordering questions about general knowledge, such as the order of U.S. presidents. For half of the questions, each participant started with the ordering submitted by another participant (and was alerted to this fact), and for the other half they started with a random ordering; in both cases they were asked to rearrange the items (if necessary) into the correct order. Answers where participants started with another participant's ranking were on average more accurate than those from the random starting condition. Miller and Steyvers conclude that different item-level knowledge among participants is responsible for this phenomenon, and that participants integrated and augmented previous participants' knowledge with their own knowledge.[35]
Crowds tend to work best when there is a correct answer to the question being posed, such as a question about geography or mathematics.[36] When there is no precise answer, crowds can come to arbitrary conclusions.[37] Wisdom-of-the-crowd algorithms thrive when individual responses are close to and symmetrically distributed around the correct, albeit unknown, answer, since this symmetry allows errors in responses to cancel each other out during averaging. Conversely, these algorithms may falter when only a small subset of responses is correct, because averaging then fails to counteract biases. This challenge is particularly pronounced in online settings, where individuals with varying levels of expertise respond anonymously. Some wisdom-of-the-crowd algorithms tackle this issue using expectation–maximization voting techniques. The wisdom-in-the-crowd (WICRO) algorithm offers a one-pass classification solution: it gauges the expertise level of individuals by assessing the relative "distance" between them, identifying experts on the assumption that their responses will lie relatively close to each other when they address questions within their field of expertise. This approach enhances the algorithm's ability to discern expertise levels in scenarios where only a small subset of participants possesses proficiency in a given domain, mitigating the impact of biases that may arise during anonymous online interactions.[33][38]
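As a minimal sketch of this distance-based intuition (not the actual WICRO algorithm; the data and the scoring rule below are invented for illustration), one can score each respondent by their average distance to all others and treat the most closely clustered respondents as likely experts:

```python
import statistics

# Hypothetical answers of six respondents to four numeric questions.
# Respondents 0-2 play the role of experts: their answers cluster together.
answers = [
    [10, 20, 30, 40],
    [11, 19, 31, 41],
    [ 9, 21, 29, 39],
    [25,  5, 50, 10],
    [ 2, 40, 12, 60],
    [30, 30, 30, 30],
]

def distance(a, b):
    """Euclidean distance between two respondents' answer vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Score each respondent by mean distance to everyone else; the working
# assumption is that experts sit closer to each other than novices do.
scores = [
    statistics.mean(distance(a, b) for j, b in enumerate(answers) if j != i)
    for i, a in enumerate(answers)
]
likely_experts = sorted(range(len(answers)), key=lambda i: scores[i])[:3]
print(sorted(likely_experts))
```

On this toy data the three clustered respondents receive the lowest scores and are flagged as the likely experts, without any ground truth being known.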
The wisdom of the crowd effect is easily undermined. Social influence can make the arithmetic mean of the crowd's answers wildly inaccurate, while the geometric mean and the median are more robust.[39] Accuracy also depends on weighting: the average answer of individuals who are knowledgeable about a topic will differ from the average of individuals who know nothing about it, so a simple average of knowledgeable and inexperienced opinions will be less accurate than a weighted average that takes each individual's uncertainty and confidence in their estimate into account.
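The robustness of the median and geometric mean to a few wildly wrong answers can be seen with hypothetical data (the guesses and true value below are invented for illustration):

```python
import statistics

# Hypothetical guesses clustered near a true value of 100, plus one
# wildly inflated guess (e.g., from a respondent swayed by social influence).
guesses = [95, 98, 100, 102, 104, 99, 101, 5000]

print(statistics.mean(guesses))            # dragged far above 100 by the outlier
print(statistics.median(guesses))          # stays near 100
print(statistics.geometric_mean(guesses))  # pulled up, but far less than the mean
```

A single extreme response moves the arithmetic mean by hundreds of units, while the median barely shifts, which is why robust aggregators are preferred when social influence may have contaminated the responses.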
Experiments run by the Swiss Federal Institute of Technology found that when a group of people were asked to answer a question together they would attempt to come to a consensus which would frequently cause the accuracy of the answer to decrease. One suggestion to counter this effect is to ensure that the group contains a population with diverse backgrounds.[37]
Research from the
See also
- Argumentum ad populum
- Bandwagon effect
- Collaborative software
- Collective intelligence
- Collective wisdom
- Conventional wisdom
- Crowdfunding
- Crowdsourcing
- Delphi method
- Dispersed knowledge
- Dollar voting
- Dunning–Kruger effect
- Emergence
- Ensemble forecasting
- The Good Judgment Project (forecasting project)
- Groupthink
- Human reliability
- Intrade
- Law of large numbers
- Linus's law
- Networked expertise
- Open source
- Pilot error
- Tyranny of the majority
- Vox populi
- The Wisdom of Crowds
References
- ISBN 0-13-600848-8.
- ^ PMID 22268680.
- ^ Ober, Josiah (September 2009). "An Aristotelian middle way between deliberation and independent-guess aggregation" (PDF). Princeton/Stanford Working Papers in Classics. Stanford, California: Stanford University.
- OCLC 752249923.
- ASIN B00JD13IJW.
- doi:10.1038/075450a0.
- ^ ISBN 978-0-385-50386-0.
- ISSN 0362-4331. Retrieved April 3, 2017.
- S2CID 153631270.
- ^ Yong, Ed (January 31, 2013). "The Real Wisdom of the Crowds". Phenomena. Archived from the original on February 3, 2013. Retrieved April 2, 2017.
- ^ Yi, S.K.M., Steyvers, M., Lee, M.D., and Dry, M. (2010). Wisdom of Crowds in Minimum Spanning Tree Problems. Proceedings of the 32nd Annual Conference of the Cognitive Science Society. Mahwah, NJ: Lawrence Erlbaum.
- PMID 22253187.
- ^ Lee, Michael D.; Steyvers, Mark; de Young, Mindy; Miller, Brent J. Carlson, L.; Hölscher, C.; Shipley, T. F. (eds.). "A model-based approach to measuring expertise in ranking tasks". Proceedings of the 33rd Annual Conference of the Cognitive Science Society. Austin, Texas: Cognitive Science Society.
- ^ Steyvers, Mark; Lee, Michael D.; Miller, Brent J.; Hemmer, Pernille (December 2009). "The Wisdom of Crowds in the Recollection of Order Information". Advances in Neural Information Processing Systems (22). Cambridge, Massachusetts: MIT Press: 1785–1793.
- ^ Miller, Brent J.; Hemmer, Pernille; Steyvers, Michael D.; Lee, Michael D. (July 2009). "The Wisdom of Crowds in Ordering Problems". Proceedings of the Ninth International Conference on Cognitive Modeling. Manchester, England: International Conference on Cognitive Modeling.
- ^ Zhang, S., and Lee, M.D., (2010). "Cognitive models and the wisdom of crowds: A case study using the bandit problem". In R. Catrambone, and S. Ohlsson (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society, pp. 1118–1123. Austin, TX: Cognitive Science Society.
- S2CID 4452604.
- ^ S2CID 10374568.
- PMID 16292279.
- ^ O'Donnell, Michael H. "Judge extols wisdom of juries". Idaho State Journal. Retrieved 2017-04-03.
- ^ S2CID 44718192.
- PMID 25120505.
- PMID 20565223.
- S2CID 18966323.
- SSRN 3099179.
- S2CID 23695566.
- S2CID 145016943.
- PMID 10937317.
- SSRN 1616519.
- ^ Marcus Buckingham; Ashley Goodall. "The Feedback Fallacy". Harvard Business Review. No. March-April 2019.
- ^ a b c Ratner, N., Kagan, E., Kumar, P., & Ben-Gal, I. (2023). "Unsupervised classification for uncertain varying responses: The wisdom-in-the-crowd (WICRO) algorithm" (PDF). Knowledge-Based Systems, 272: 110551.
- ISBN 978-0-691-13854-1.
- ^ Miller, B., and Steyvers, M. (in press). "The Wisdom of Crowds with Communication". In L. Carlson, C. Hölscher, & T.F. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society.
- ^ "The Wisdom of Crowds". randomhouse.com.
- ^ a b Ball, Philip. "'Wisdom of the crowd': The myths and realities". Retrieved 2017-04-02.
- ^ Ghanaiem, A., Kagan, E., Kumar, P., Raviv, T., Glynn, P., & Ben-Gal, I. (2023). "Unsupervised Classification under Uncertainty: The Distance-Based Algorithm" (PDF). Mathematics, 11(23), 4784.
- ^ "How Social Influence can Undermine the Wisdom of Crowd Effect". Proc. Natl. Acad. Sci., 2011.
- ISSN 0025-1909.
External links
The wisdom of the crowd (with Professor Marcus du Sautoy) on