Stochastic parrot

Source: Wikipedia, the free encyclopedia.

In machine learning, the term stochastic parrot is a metaphor to describe the theory that large language models, though able to generate plausible language, do not understand the meaning of the language they process.[1][2] The term was coined by Emily M. Bender[2][3] in the 2021 artificial intelligence research paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" by Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell.[4]

Origin and definition

The term was first used in the paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" by Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell (using the pseudonym "Shmargaret Shmitchell").[4] They argued that very large language models (LLMs) present dangers such as environmental and financial costs, inscrutability leading to unknown dangerous biases, and potential for deception, and that they cannot understand the concepts underlying what they learn.[5] Gebru and Mitchell lost their jobs at Google after publishing their criticisms, along with other contributing events.[6][7] Their firing sparked a protest by Google employees.[6][7]

The word "stochastic" derives from the ancient Greek word "stokhastikos", meaning "based on guesswork" or "randomly determined".[8] The word "parrot" refers to the idea that LLMs merely repeat words without understanding their meaning.[8]

In their paper, Bender et al. argue that LLMs probabilistically link words and sentences together without considering meaning, and are therefore mere "stochastic parrots".[4]
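The "probabilistic linking" that the metaphor describes can be illustrated with a toy sketch. The following is not an actual LLM but a minimal bigram model: it "parrots" its training text by sampling each next word purely from observed co-occurrence counts, with no representation of meaning at all (the corpus and function names are illustrative inventions).

```python
import random
from collections import defaultdict

# Toy corpus; a real LLM trains on billions of tokens.
corpus = "the parrot repeats words the parrot heard before".split()

# Record which words follow which in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Emit words by stochastically repeating observed continuations."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # stochastic choice, no semantics
    return " ".join(words)

print(generate("the", 6))
```

Every output word is drawn from the training data's surface statistics; the model has no access to what any word refers to, which is the limitation the metaphor targets.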

According to the machine learning professionals Lindholm, Wahlström, Lindsten, and Schön, the analogy highlights two vital limitations:[1][9]

  • LLMs are limited by the data they are trained on, and simply stochastically repeat the contents of datasets.
  • Because they merely assemble outputs from patterns in their training data, LLMs do not recognize when they are saying something incorrect or inappropriate.

Lindholm et al. noted that, with poor quality datasets and other limitations, a learning machine might produce results that are "dangerously wrong".[1]

Subsequent usage

In July 2021, the Alan Turing Institute hosted a keynote and panel discussion on the paper.[10] As of May 2023, the paper had been cited in 1,529 publications.[11] The term has been used in publications in the fields of law,[12] grammar,[13] narrative,[14] and humanities.[15] The authors continue to maintain their concerns about the dangers of chatbots based on large language models, such as GPT-4.[16]

Stochastic parrot is now a neologism used by AI skeptics to refer to machines' lack of understanding of the meaning of their outputs, and is sometimes interpreted as a "slur against AI".[8] Its use expanded further when Sam Altman, CEO of OpenAI, used the term ironically in the tweet "i am a stochastic parrot and so r u".[8] The term was then designated the 2023 AI-related Word of the Year by the American Dialect Society, beating out the words "ChatGPT" and "LLM".[8][17]

The phrase is often used by researchers to characterize LLMs as pattern matchers that can generate plausible human-like text by merely parroting, in a stochastic fashion, the vast amount of data they were trained on. However, other researchers argue that LLMs are in fact able to understand language.[18]

Debate

Some LLMs, such as ChatGPT, have become capable of interacting with users in convincingly human-like conversations.[18] The development of these new systems has deepened the discussion of the extent to which LLMs are simply “parroting.”

In the mind of a human being, words and language correspond to things one has experienced.[19] For LLMs, words correspond only to other words and patterns of usage fed into their training data.[20][21][4] Proponents of the idea of stochastic parrots thus conclude that LLMs are incapable of actually understanding language.[20][4]

The tendency of LLMs to pass off fake information as fact is cited as support.[19] In these so-called hallucinations, LLMs occasionally synthesize information that matches some pattern, but not reality.[20][21][19] That LLMs cannot distinguish fact from fiction leads to the claim that they cannot connect words to a comprehension of the world, as language should.[20][19] Further, LLMs often fail to decipher complex or ambiguous grammar that relies on understanding the meaning of language.[20][21] One example, borrowed from Saba et al., is the prompt:[20]

The wet newspaper that fell down off the table is my favorite newspaper. But now that my favorite newspaper fired the editor I might not like reading it anymore. Can I replace ‘my favorite newspaper’ by ‘the wet newspaper that fell down off the table’ in the second sentence?

LLMs respond to this in the affirmative, not understanding that the meaning of "newspaper" differs in the two contexts: it is first an object and second an institution.[20] Based on these failures, some AI professionals conclude that LLMs are no more than stochastic parrots.[20][19][4]

However, there is support for the claim that LLMs are more than that. LLMs perform well on many tests of understanding, such as the Super General Language Understanding Evaluation (SuperGLUE).[21][22] Tests such as these, and the smoothness of many LLM responses, led as many as 51% of AI professionals to believe that LLMs can truly understand language given enough data, according to a 2022 survey.[21]

Another technique used to investigate whether LLMs are more than stochastic parrots is termed "mechanistic interpretability". The idea is to reverse-engineer a large language model by discovering symbolic algorithms that approximate the inference it performs. One example is Othello-GPT, in which a small transformer is trained to predict legal Othello moves. The model was found to have a linear internal representation of the Othello board, and modifying this representation changes the predicted legal moves in the correct way.[23][24]
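The core tool in such analyses is a linear probe: a linear map fitted from a model's hidden activations to the state it is hypothesized to represent. The following is a minimal sketch of that idea using synthetic activations rather than a real trained transformer; the dimensions, the linear "encoding" matrix, and all variable names are illustrative assumptions, not Othello-GPT itself.

```python
import numpy as np

# Pretend hidden activations encode a board-like feature vector linearly:
# acts = board @ W.T + noise, where W is unknown to the analyst.
rng = np.random.default_rng(0)
d_hidden, d_board, n = 32, 8, 500

W = rng.normal(size=(d_hidden, d_board))       # hidden "encoding" of the board
board = rng.normal(size=(n, d_board))          # board-state features per position
acts = board @ W.T + 0.01 * rng.normal(size=(n, d_hidden))

# Fit the linear probe P so that acts @ P approximates the board state.
P, *_ = np.linalg.lstsq(acts, board, rcond=None)
recovered = acts @ P

# If the representation really is linear, the probe decodes it accurately.
print(np.abs(recovered - board).max())
```

In the Othello-GPT work, the analogous finding was that such a probe decodes the board from the transformer's activations, and that editing the activations along the probe's directions changes the predicted legal moves accordingly.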

In another example, a small transformer was trained on Karel programs. As with Othello-GPT, the model developed a linear representation of Karel program semantics, and modifying this representation changes its output in the correct way. The model also generates correct programs that are, on average, shorter than those in the training set.[25]

However, when tests designed to assess human language comprehension are applied to LLMs, they sometimes produce false positives caused by spurious correlations within the text data.[26] Models have shown examples of shortcut learning, in which a system draws unrelated correlations from data instead of using human-like understanding.[27] One such experiment tested Google's BERT LLM on the argument reasoning comprehension task, asking it to choose which of two statements was more consistent with an argument. Below is an example of one of these prompts:[21][28]

Argument: Felons should be allowed to vote. A person who stole a car at 17 should not be barred from being a full citizen for life.
Statement A: Grand theft auto is a felony.
Statement B: Grand theft auto is not a felony.

Researchers found that specific words such as "not" hinted the model towards the correct answer, allowing near-perfect scores when such words were included but random selection when they were removed.[21][28] This problem, together with the known difficulties of defining intelligence, leads some to argue that all benchmarks that find understanding in LLMs are flawed, in that they allow shortcuts that fake understanding.
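The shortcut described in that experiment can be made concrete with a deliberately naive sketch. The "classifier" below is not BERT; it decides solely from the presence of the surface cue "not", which suffices on data where the cue correlates with the label and is uninformative otherwise (the example sentences are adapted from the prompt above, and the function name is an invention).

```python
# Data where the cue word "not" happens to correlate with the label.
biased_data = [
    ("grand theft auto is a felony", "A"),
    ("grand theft auto is not a felony", "B"),
]

def cue_classifier(statement):
    # Shortcut: key on the surface cue alone, ignoring meaning entirely.
    return "B" if "not" in statement.split() else "A"

accuracy = sum(cue_classifier(s) == y for s, y in biased_data) / len(biased_data)
print(accuracy)  # perfect on cue-bearing data
```

A benchmark containing such correlations would score this cue-matcher as if it understood arguments; removing the cue words collapses it to guessing, which mirrors the drop to random selection that the BERT experiment observed.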

Without a reliable benchmark, researchers have found it difficult to differentiate models that are stochastic parrots from those capable of understanding. Experimenting with ChatGPT-3, one scientist argued that the model was somewhere between true human-like understanding and a stochastic parrot.[18] He found that the model was coherent and informative when predicting future events from the information in a prompt, and was frequently able to parse subtextual information from text prompts as well. However, the model frequently failed at tasks involving logic and reasoning, especially when the prompts involved spatial awareness.[18] The varying quality of the model's responses indicates that LLMs may have a form of "understanding" in certain categories of tasks while acting as stochastic parrots in others.[18]

See also

References

  1. ^ a b c Lindholm et al. 2022, pp. 322–3.
  2. ^ a b Uddin, Muhammad Saad (April 20, 2023). "Stochastic Parrots: A Novel Look at Large Language Models and Their Limitations". Towards AI. Retrieved 2023-05-12.
  3. ^ Weil, Elizabeth (March 1, 2023). "You Are Not a Parrot". New York. Retrieved 2023-05-12.
  4. ^
    S2CID 232040593
    .
  5. ^ Hao, Karen (4 December 2020). "We read the paper that forced Timnit Gebru out of Google. Here's what it says". MIT Technology Review. Archived from the original on 6 October 2021. Retrieved 19 January 2022.
  6. ^ a b Lyons, Kim (5 December 2020). "Timnit Gebru's actual paper may explain why Google ejected her". The Verge.
  7. ^ a b Taylor, Paul (2021-02-12). "Stochastic Parrots". London Review of Books. Retrieved 2023-05-09.
  8. ^ a b c d e Zimmer, Ben. "'Stochastic Parrot': A Name for AI That Sounds a Bit Less Intelligent". WSJ. Retrieved 2024-04-01.
  9. ^ Uddin, Muhammad Saad (April 20, 2023). "Stochastic Parrots: A Novel Look at Large Language Models and Their Limitations". Towards AI. Retrieved 2023-05-12.
  10. ^ Weller (2021).
  11. ^ "Bender: On the Dangers of Stochastic Parrots". Google Scholar. Retrieved 2023-05-12.
  12. ^ S2CID 258636427.
  13. ^ Bleackley, Pete; BLOOM (2023). "In the Cage with the Stochastic Parrot". Speculative Grammarian. CXCII (3). Retrieved 2023-05-13.
  14. ^ S2CID 257207529.
  15. .
  16. ^ Goldman, Sharon (March 20, 2023). "With GPT-4, dangers of 'Stochastic Parrots' remain, say researchers. No wonder OpenAI CEO is a 'bit scared'". VentureBeat. Retrieved 2023-05-09.
  17. ^ ISSN 0362-4331. Retrieved 2024-04-01.
  18. ^ .
  19. ^ .
  20. ^ .
  21. ^ .
  22. ^ .
  23. ^ Li, Kenneth (2023-01-21). "Large Language Model: world models or surface statistics?". The Gradient. Retrieved 2024-04-04.
  24. ^ .
  25. ^

Works cited

Further reading

External links