Corpus linguistics

Source: Wikipedia, the free encyclopedia.

Corpus linguistics is the study of a language as that language is expressed in its text corpus (plural corpora), its body of "real world" text. Corpus linguistics proposes that a reliable analysis of a language is more feasible with corpora collected in the field—the natural context ("realia") of that language—with minimal experimental interference. The large collections of text allow linguists to run quantitative analyses of linguistic concepts that are otherwise hard to quantify.[1]
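The kind of quantitative analysis mentioned above can be illustrated with a minimal sketch in Python; the two-sentence "corpus" here is invented for illustration, whereas real corpora run to millions of words:

```python
from collections import Counter
import re

# A toy "corpus": in practice this would be a large collection of
# naturally occurring texts (these two sentences are invented).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
]

# Tokenize each text into lowercase word tokens.
tokens = [tok for text in corpus for tok in re.findall(r"[a-z]+", text.lower())]

# A word-frequency list is one of the simplest quantitative analyses
# a corpus supports; rarer constructions can be counted the same way.
freq = Counter(tokens)
print(freq.most_common(3))  # → [('the', 4), ('sat', 2), ('on', 2)]
```

On a real corpus the same counting step would typically be preceded by more careful tokenization and, often, part-of-speech tagging.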

The text-corpus method uses the body of texts written in any natural language to derive the set of abstract rules which govern that language. Those results can be used to explore the relationships between that subject language and other languages which have undergone a similar analysis. The first such corpora were manually derived from source texts, but now that work is automated.

Corpora have not only been used for linguistics research, they have also been used to compile dictionaries and reference grammars, such as A Comprehensive Grammar of the English Language, published in 1985.

Experts in the field have differing views about the annotation of a corpus. These views range from John Sinclair, who advocates minimal annotation so that texts can speak for themselves,[2] to the Survey of English Usage team (University College, London), who advocate annotation as allowing greater linguistic understanding through rigorous recording.[3]

History

Some of the earliest efforts at grammatical description were based at least in part on corpora of particular religious or cultural significance. For example, the Prātiśākhya literature described the sound patterns of Sanskrit as found in the Vedas, and Pāṇini's grammar of classical Sanskrit was based at least in part on analysis of that same corpus. Similarly, the early Arabic grammarians paid particular attention to the language of the Quran. In the Western European tradition, scholars prepared concordances to allow detailed study of the language of the Bible and other canonical texts.

English corpora

A landmark in modern corpus linguistics was the publication of Computational Analysis of Present-Day American English in 1967. Written by Henry Kučera and W. Nelson Francis, the work was based on an analysis of the Brown Corpus, which was a contemporary compilation of about a million American English words, carefully selected from a wide variety of sources.[4] Brown's corpus was the first computerized corpus designed for linguistic research.[5] Kučera and Francis subjected the Brown Corpus to a variety of computational analyses and then combined elements of linguistics, language teaching, psychology, statistics, and sociology to create a rich and variegated opus. A further key publication was Randolph Quirk's "Towards a description of English Usage" in 1960[6] in which he introduced the Survey of English Usage. Quirk's corpus was the first modern corpus to be built with the purpose of representing the whole language.[7]

Shortly thereafter, Boston publisher Houghton-Mifflin approached Kučera to supply a million-word, three-line citation base for its new American Heritage Dictionary, the first dictionary compiled using corpus linguistics. The AHD took the innovative step of combining prescriptive elements (how language should be used) with descriptive information (how it actually is used).

Other publishers followed suit. The British publisher Collins' COBUILD monolingual learner's dictionary, designed for users learning English as a foreign language, was compiled using the Bank of English. The Survey of English Usage Corpus was used in the development of one of the most important corpus-based grammars, which was written by Quirk et al. and published in 1985 as A Comprehensive Grammar of the English Language.[8]

The British National Corpus, a 100-million-word collection of samples of written and spoken English, was created by a consortium of publishers, universities (Oxford and Lancaster) and the British Library. For contemporary American English, work has stalled on the American National Corpus, but the 400+ million word Corpus of Contemporary American English (1990–present) is now available through a web interface.

The first computerized corpus of transcribed spoken language was constructed in 1971 by the Montreal French Project,[9] containing one million words, which inspired Shana Poplack's much larger corpus of spoken French in the Ottawa-Hull area.[10]

Multilingual corpora

In the 1990s, many of the notable early successes of statistical methods in natural language processing (NLP) occurred in the field of machine translation, due especially to work at IBM Research. These systems were able to take advantage of existing multilingual textual corpora that had been produced by the Parliament of Canada and the European Union as a result of laws calling for the translation of all governmental proceedings into all official languages of the corresponding systems of government.

There are corpora in non-European languages as well. For example, the National Institute for Japanese Language and Linguistics in Japan has built a number of corpora of spoken and written Japanese. Sign language corpora have also been created using video data.[11]

Ancient languages corpora

Besides these corpora of living languages, computerized corpora have also been made of collections of texts in ancient languages. An example is the Andersen-Forbes database of the Hebrew Bible, developed since the 1970s, in which every clause is parsed using graphs representing up to seven levels of syntax, and every segment tagged with seven fields of information.[12][13] The Quranic Arabic Corpus is an annotated corpus for the Classical Arabic language of the Quran. This is a recent project with multiple layers of annotation including morphological segmentation, part-of-speech tagging, and syntactic analysis using dependency grammar.[14] The Digital Corpus of Sanskrit (DCS) is a "Sandhi-split corpus of Sanskrit texts with full morphological and lexical analysis... designed for text-historical research in Sanskrit linguistics and philology."[15]

Corpora from specific fields

Besides pure linguistic inquiry, researchers have begun to apply corpus linguistics to other academic and professional fields, such as the emerging sub-discipline of

ACL Anthology and Google Scholar metadata.[17] Corpora can also aid in translation efforts[18] or in teaching foreign languages.[19]

Methods

Corpus linguistics has generated a number of research methods, which attempt to trace a path from data to theory. Wallis and Nelson (2001)[20] first introduced what they called the 3A perspective: Annotation, Abstraction and Analysis.

Most lexical corpora today are part-of-speech-tagged (POS-tagged). However, even corpus linguists who work with 'unannotated plain text' inevitably apply some method to isolate salient terms. In such situations, annotation and abstraction are combined in a lexical search.
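A lexical search of this kind is classically presented as a keyword-in-context (KWIC) concordance, the same tool scholars once compiled by hand for canonical texts. A minimal sketch in Python (the sample sentence and the `kwic` helper are invented for illustration, not taken from any real corpus manager):

```python
import re

def kwic(text, keyword, window=3):
    """Return keyword-in-context hits: `window` tokens either side of each match."""
    tokens = re.findall(r"\w+", text.lower())
    hits = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            hits.append((left, tok, right))
    return hits

sample = "Time flies like an arrow; fruit flies like a banana."
for left, kw, right in kwic(sample, "flies", window=2):
    # Aligning the keyword in a fixed column is the traditional KWIC layout.
    print(f"{left:>15} | {kw} | {right}")
```

Even this crude version shows how abstraction (tokenizing) and analysis (inspecting the keyword's contexts) are combined in a single search.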

The advantage of publishing an annotated corpus is that other users can then perform experiments on the corpus (through corpus managers). Linguists with other interests and differing perspectives than the originators' can exploit this work. By sharing data, corpus linguists are able to treat the corpus as a locus of linguistic debate and further study.[21]

Notes and references

  1. , retrieved 31 October 2023
  2. ^ Sinclair, J. 'The automatic analysis of corpora', in Svartvik, J. (ed.) Directions in Corpus Linguistics (Proceedings of Nobel Symposium 82). Berlin: Mouton de Gruyter. 1992.
  3. ^ Wallis, S. 'Annotation, Retrieval and Experimentation', in Meurman-Solin, A. & Nurmi, A.A. (ed.) Annotating Variation and Change. Helsinki: Varieng, [University of Helsinki]. 2007. e-Published
  4. .
  5. , retrieved 31 October 2023
  6. .
  7. , retrieved 31 October 2023
  8. .
  9. ^ Sankoff, David; Sankoff, Gillian (1973). Darnell, R. (ed.). "Sample survey methods and computer-assisted analysis in the study of grammatical variation". Canadian Languages in Their Social Context. Edmonton: Linguistic Research Incorporated: 7–63.
  10. .
  11. ^ "National Center for Sign Language and Gesture Resources at B.U." www.bu.edu. Retrieved 31 October 2023.
  12. ^ Andersen, Francis I.; Forbes, A. Dean (2003), "Hebrew Grammar Visualized: I. Syntax", Ancient Near Eastern Studies, vol. 40, pp. 43–61 [45]
  13. ^ Dukes, K., Atwell, E. and Habash, N. 'Supervised Collaboration for Syntactic Annotation of Quranic Arabic'. Language Resources and Evaluation Journal. 2011.
  14. ^ "Digital Corpus of Sanskrit (DCS)". Retrieved 28 June 2022.
  15. .
  16. .
  17. , retrieved 31 October 2023
  18. ^ Mainz, Johannes Gutenberg-Universität. "Corpus Linguistics | ENGLISH LINGUISTICS". Johannes Gutenberg-Universität Mainz (in German). Retrieved 31 October 2023.
  19. ^ Wallis, S. and Nelson G. Knowledge discovery in grammatically analysed corpora. Data Mining and Knowledge Discovery, 5: 307–340. 2001.
  20. ^ Baker, Paul; Egbert, Jesse, eds. (2016). Triangulating Methodological Approaches in Corpus-Linguistic Research. New York: Routledge.
