Phonology is the branch of linguistics that studies the systematic organization of the units of language that do not have any meaning in and of themselves. For spoken languages, such units are phones, tones, features, or larger units such as syllables and other prosodic domains.[1] For sign languages, phonology investigates the constituent parts of signs. These are specifications for movement, location, and handshape.[2][3]
What phonologists study and the position of phonology within linguistics
Etymology and definition
The word phonology comes from Ancient Greek φωνή (phōnḗ, 'voice, sound') and the suffix -logy (from Greek λόγος, lógos, 'word, speech, subject of discussion').
Phonology is usually distinguished from phonetics, which concerns the physical production, acoustic transmission, and perception of the sounds of speech.
Definitions of the field of phonology vary.
History
The earliest evidence for a systematic study of the sounds in a language appears in the Ashtadhyayi, a Sanskrit grammar composed by Pāṇini in the 4th century BCE.
The study of phonology as it exists today is defined by the formative studies of the 19th-century Polish scholar Jan Baudouin de Courtenay, who, together with his students, shaped the modern concept of the phoneme.
An influential school of phonology in the interwar period was the Prague school, whose leading members included Nikolai Trubetzkoy and Roman Jakobson.
In 1968, Noam Chomsky and Morris Halle published The Sound Pattern of English, the foundational work of generative phonology.
In 1976, John Goldsmith introduced autosegmental phonology.[16] Based on the phonological behaviour of tones, Goldsmith proposed that tones are not bound to the internal structure of segments, but rather that they are autonomous from segments (hence the theory's name) and exist on a separate tier: the tonal tier. In the representation, tones are then connected to segments via association lines. In this way, it is easy to represent one-to-many and many-to-one associations between tones on the tonal tier and segments on the segmental tier. This is useful for representing, among other things, tonal spreading and (derived) contour tones. The concept of autonomous tiers was subsequently also applied to features. Until then, sequences of segments had been conceptualised as existing on a single linear string, but with autosegmental phonology, features could be represented on multiple tiers that were separate from the positional slots that they can associate to. A significant consequence of this is that certain processes can easily be represented as local. An example is vowel harmony: from a strictly linear perspective, the vowels in a CVCV sequence are not adjacent, but they are adjacent in a representation where the vowel features exist on a tier that is separate from the consonant features. Eventually, autosegmental phonology led to feature geometries.[17] In feature geometries, features are organised in geometrical structures that have major nodes such as "Place of Articulation" and "Laryngeal" that each contain several features in their branches. Grouping features together into nodes crucially allows nodes to spread in their entirety rather than feature-by-feature. Additionally, nodes can dominate each other, allowing for complex geometries that make specific predictions about what phonological processes are possible.
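The tier-based representation just described can be thought of as a simple data structure: a segmental tier, a tonal tier, and a set of association lines linking positions across the two tiers. The following Python sketch is purely illustrative (the toy form and tone pattern are hypothetical, not taken from any published analysis); it shows how one tone can associate to several vowels, as in spreading, and how two tones can associate to one vowel, as in a derived contour tone.

```python
# Illustrative sketch of an autosegmental representation (hypothetical toy data).
# Segmental and tonal tiers are separate sequences; association lines are
# pairs (tone_index, segment_index) linking positions across the tiers.

segments = ["b", "a", "l", "a", "m", "a"]   # segmental tier (CVCVCV)
tones = ["H", "L"]                           # tonal tier

# One-to-many: H spreads over the vowels at indices 1 and 3;
# many-to-one: H and L both link to the final vowel (index 5), a falling contour.
associations = [(0, 1), (0, 3), (0, 5), (1, 5)]

def tones_of(segment_index):
    """Return the tones associated with a given segmental position, in tier order."""
    return [tones[t] for (t, s) in associations if s == segment_index]

for i, seg in enumerate(segments):
    linked = tones_of(i)
    print(f"{seg}: {'-'.join(linked) if linked else '(toneless)'}")
# The final vowel prints "a: H-L", i.e. a falling contour tone.
```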
During the 1980s, two frameworks of phonology developed independently that have much in common: Dependency Phonology and Government Phonology. The impetus behind Dependency Phonology (DP), mostly developed by John Anderson, was the fundamental idea that a relation between linguistic units is asymmetrical, with a dominating component (the head) and a dominated component (the dependent).[18] Government Phonology (GP), of which prominent figures include Jonathan Kaye, Jean Lowenstamm, Jean-Roger Vergnaud, Monik Charette, and John Harris, developed out of research into the internal structure of segments in the autosegmental era, and took inspiration from Government and Binding Syntax.[19] Both DP and GP attempt to bridge the gap between syntax and phonology. For DP, this stems from Anderson's Structural Analogy Assumption, which proposes that it is desirable as a null hypothesis that the same mechanics operate in different parts of the grammar.[20] As such, DP uses notions such as complements and adjuncts. GP, on the other hand, being directly based on syntax, naturally employs several mechanisms that are analogous to syntactic operations, such as Proper Government and the Minimality Condition. A second similarity is that both frameworks reject syllabic constituents, DP through Head-Dependency Relations and the rejection of contentless nodes, and GP through assuming lateral relations between segments in a string.[21] The result is that both frameworks reject e.g. an Onset node that can branch, so that analyses of syllable structure are alike. A third important similarity between DP and GP is the type of phonological primes that they use as subsegmental building blocks, pioneered by DP. These are most commonly known as "elements" (though they are called "components" in DP). The use of elements to represent subsegmental structure, as opposed to the more widely used distinctive features, is technically also possible outside of DP and GP and, in fact, Element Theory has developed into a self-contained theory.[22] Elements are different from features in a number of ways. First, they are monovalent/unary, i.e. either present or fully absent in a representation, such that phonological processes cannot make reference to the lack of an element. Second, elements have multiple phonetic realisations that depend on their headedness status (which is similar but not identical between DP and GP). Third, formal definitions of elements are based on acoustics rather than on articulation/perception. Fourth, consonants and vowels are always represented by the same set of elements. Despite the common ground in terms of structural analogies between syntax and phonology, syllable structure, and subsegmental structure, there is one major difference that sets DP and GP apart: the importance of phonetic grounding. DP is substance-based, meaning that any phonological entity must bear some relation to its phonetic realisation and cannot be "empty". In stark opposition to this, GP assumes that only phonological behaviour can be used as evidence to support hypotheses about phonological structure. Phonology and phonetics are seen as separate modules, which entails that phonological structure needs to be transduced or "translated" into phonetic structure. As such, the relationship between phonology and phonetics can, in principle, be as arbitrary and language-specific as the relationship between a phonological form and its lexical meaning. This means that phonological units such as elements have a very liberal phonetic implementation.
In 1987, a small conference was held at the Ohio State University that would result in the launch of a new approach to doing phonological research: Laboratory Phonology. Laboratory Phonology is essentially the enterprise of addressing phonological questions through experimental work.[23] Throughout most of the 20th century, phonetics and phonology diverged as branches of linguistics. Phonetic mechanisms were assumed to be universal and gradient, whereas phonology was assumed to be language-specific (hence acquired) and categorical. Furthermore, phonology within generative linguistics was also predominantly substance-free. However, increasingly advanced technology facilitated phonetic research, and linguists came to understand that phonetics is also language-specific and that there is at least some gradience within phonology (e.g. incomplete neutralisation). Laboratory Phonology was thus intended to bring phonetics and phonology under one roof again. The fundamental questions it poses are how cognitive representations are mapped onto physical motoric functions, what the division of labour is between phonetics and phonology, and which methods are appropriate to study them. An example of research within Laboratory Phonology would be using electroencephalography in a perception experiment to make inferences about the featural specification or lack thereof in segments.
Concurrent with Laboratory Phonology as an approach to phonology came the inception of Articulatory Phonology,[24] which developed in the same historical context. But whereas Laboratory Phonology is a theory-neutral approach, Articulatory Phonology, developed by Catherine Browman and Louis Goldstein, was a novel theory about the internal structure of segments.[25] Instead of segments having primes (features or elements), there are only articulatory "gestures" such as "closed velum" or "protruded lips". Gestures are coordinated in a certain way so that they overlap or are crucially sequential, thereby creating the illusion of segments, which have no status in Articulatory Phonology. In visualisations called "gestural scores", the phonological specification of gestures is shown throughout time as bars along a horizontal axis. Importantly, a gestural specification denotes an abstract articulatory goal, and is not itself a motoric event, nor does the goal need to be attained at all times. A gestural representation of speech leads to unique analyses in which e.g. assimilation can be directly modelled as gestural overlap, and can straightforwardly explain certain alternations that make no sense from the perspective of segment-internal phonological primes. This reduces the complexity of the (morpho)phonology. Furthermore, given the importance accorded to gestures, Articulatory Phonology has focussed extensively on the temporal organisation and coordination of the movements of the articulators. Results in this line of research have interesting implications for syllable structure in particular.
In a course at the LSA summer institute in 1991, Alan Prince and Paul Smolensky developed optimality theory (OT), an architecture for the computation of phonology that is couched within a more general theory of the relationship between brain and mind.[26] In stark contrast to traditional generative phonology and its ordered rules, which was the dominant view of phonology up until the 1990s, OT proposed that phonology changes an underlying form (the "input") by selecting an optimal pronunciation for it (the "output"). Which pronunciation is optimal is determined by evaluating how badly each of the theoretically infinite possible pronunciations (the "candidates") violates a set of constraints. Crucially, each of the constraints is in principle violable, but any given constraint is more important than the combination of all lower-ranked constraints, so that the optimal candidate is the one that fares best on the highest-ranked constraint on which the candidates differ, regardless of its violations of lower-ranked constraints. Classic OT executes this evaluation process in parallel, meaning that the computation cannot evaluate intermediate steps in the derivation. This is diametrically opposed to the step-by-step derivations of generative phonology, where the output of one rule can be the input of a rule that is ordered after it. However, there exist versions of OT that involve serial processing, i.e. multiple consecutive evaluations.[27][28] Constraints were originally asserted to be universal, so that the only difference between languages was the ranking of those constraints. This was compatible with universal grammar. The OT approach was soon extended to morphology by John McCarthy and Alan Prince and has become a dominant trend in phonology.
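The evaluation procedure described above can be illustrated with a small tableau-style computation. The constraints, candidates, and violation counts in the Python sketch below are invented for illustration; the point is only the ranking logic: candidates are compared constraint by constraint from highest- to lowest-ranked, and a candidate is eliminated as soon as a surviving competitor incurs fewer violations on the current constraint.

```python
# Minimal sketch of OT-style candidate evaluation (hypothetical constraints and candidates).
# Constraints are ordered from highest- to lowest-ranked; each maps a candidate
# to a number of violation marks. Evaluation filters candidates constraint by constraint.

def evaluate(candidates, constraints):
    """Return the candidate(s) that survive ranked, violable constraint evaluation."""
    survivors = list(candidates)
    for constraint in constraints:
        best = min(constraint(c) for c in survivors)
        survivors = [c for c in survivors if constraint(c) == best]
        if len(survivors) == 1:
            break
    return survivors

# Toy example: input /tab/ in a language that bans voiced codas (final devoicing).
candidates = ["tab", "tap", "ta"]

no_voiced_coda = lambda c: 1 if c.endswith("b") else 0  # markedness constraint (hypothetical)
max_io = lambda c: 3 - len(c)                           # faithfulness: penalise deletion
ident_voice = lambda c: 1 if c == "tap" else 0          # faithfulness: penalise devoicing

# Ranking NoVoicedCoda >> Max-IO >> Ident(voice) selects the devoiced candidate.
print(evaluate(candidates, [no_voiced_coda, max_io, ident_voice]))  # ['tap']
```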
Computational Phonology
An integrated approach to phonological theory that combines synchronic and diachronic accounts of sound patterns was initiated with Evolutionary Phonology in recent years.
Topics
Phonemes
One of the core tasks of the phonologist is to create an analysis of the phonemic inventory of a language. This is sometimes the first step in a phonological analysis, because phonemes are the building blocks of syllables, have features as their own building blocks, undergo phonological processes, and are the carriers of all suprasegmental properties of speech such as stress. A simple diagnostic for proving phonemehood is to find words that differ in meaning and phonetically differ in only one speech sound, i.e. a minimal pair.
To find minimal pairs or establish the lack thereof, a phonologist needs a data set of accurately transcribed words, or they could attempt to elicit minimal pairs from a native speaker. How straightforward it is to establish minimal pairs will depend on several language-specific factors. If a language has many phonological processes, relationships between the underlying form and surface form will be obscured, so that a thorough examination of these processes needs to precede a definitive phonemic analysis. Otherwise, what appears to be a minimal pair on the phonological surface can be mistaken for an underlying contrast.[30]
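For a small and accurately transcribed word list, the search for minimal pairs can be mechanised. The Python sketch below is a minimal illustration under simplifying assumptions: the lexicon is invented, each transcription symbol is taken to correspond to exactly one segment, and only equal-length transcriptions are compared position by position.

```python
# Minimal sketch of a minimal-pair search over a transcribed word list.
# Assumes one symbol per segment; the data are invented for illustration.

from itertools import combinations

lexicon = {"pin": "pɪn", "bin": "bɪn", "pit": "pɪt", "ship": "ʃɪp"}

def minimal_pairs(lexicon):
    """Yield pairs of words whose transcriptions differ in exactly one segment."""
    for (w1, t1), (w2, t2) in combinations(lexicon.items(), 2):
        if len(t1) == len(t2) and sum(a != b for a, b in zip(t1, t2)) == 1:
            yield w1, w2

for w1, w2 in minimal_pairs(lexicon):
    print(w1, "~", w2)
# Prints "pin ~ bin" (evidence for /p/ vs /b/) and "pin ~ pit" (evidence for /n/ vs /t/).
```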
There are several additional analytical complications that may arise. First, the absence of a minimal pair does not always prove that two speech sounds must belong to the same phoneme. If speech sounds are in complementary distribution and thus have no minimal pairs, then they are usually still considered different phonemes if they are phonetically very different. This is the case, for example, for /h/ and /ŋ/ in German.[31] But there are no criteria for how phonetically distinct two sounds have to be in order to determine that they are different phonemes, so there are controversial cases such as Standard Mandarin /i/, which is in complementary distribution with sounds that could be transcribed as [ɹ̺] and [ɻ].[32] It can also happen that two sounds are not in complementary distribution, do not have any minimal pairs to distinguish them, and yet are not interchangeable. This is the case for Dutch /ɣ/ (for speakers who have this sound). It does not contrast with its voiceless counterpart /x/, but both sounds occur word-initially and intervocalically without being predictable.[SOURCE] A second problem for phonemic analysis is that it is not always clear which allophone is conditioned and which one must be taken as underlying. [EXAMPLE]. Thirdly, loanwords may introduce new speech sounds or new phonotactic structures to the language, and there are no truly objective means to decide when loans must be accepted as being part of the sound system.
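Checking for complementary distribution can likewise be illustrated with a small sketch. In the hypothetical example below, an environment is reduced to just the immediately preceding and following symbol (with "#" marking a word boundary), and two sounds are flagged as complementary when their sets of environments do not overlap; the toy forms loosely echo the German /h/–/ŋ/ case mentioned above.

```python
# Minimal sketch of a complementary-distribution check (hypothetical toy data).
# An environment is simplified to (preceding symbol, following symbol),
# with "#" standing for a word boundary.

def environments(words, sound):
    """Collect the (preceding, following) environments of a sound in a word list."""
    envs = set()
    for word in words:
        padded = "#" + word + "#"
        for i, symbol in enumerate(padded):
            if symbol == sound:
                envs.add((padded[i - 1], padded[i + 1]))
    return envs

words = ["hant", "haʊs", "laŋ", "dɪŋ"]   # invented transcriptions standing in for German words

env_h = environments(words, "h")    # in this toy data: only word-initially before a vowel
env_ng = environments(words, "ŋ")   # in this toy data: only after a vowel, word-finally

print("complementary:", env_h.isdisjoint(env_ng))  # True for this toy data set
```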
Despite the crucial role that phonemes have played since their conceptualisation, there is no complete consensus on whether phonemes are a convenient descriptive tool for linguists, or whether they are actual cognitive units that should have a place in formal theory. A framework in which phonemes are considered to be epiphenomenal is Articulatory Phonology,[33] where phonemes/segments are created through the implementation of gestures that are not contained within or associated to phonemes in the way that features are. It has also been claimed that, when the logic of Autosegmental Phonology is taken to its endpoint, segments are only anchoring points for features on a timing tier, so that there are no phonemes in the classical sense.[SOURCE] Neurocognitive research has likewise produced mixed results, with some studies[SOURCE] supporting the existence of phonemes whereas others find no evidence.[SOURCE]
Phonological and morphophonological processes
Features
Tone
Stress
Intonation
Syllable structure and phonotactics
Diachronic/historical phonology
Main approaches to representation
Phonology in sign languages
The principles of phonological analysis can be applied independently of modality because they are designed to serve as general analytical tools, not speech-specific ones. The same principles have been applied to the analysis of sign languages (see Phonemes in sign languages), even though the sublexical units are not instantiated as speech sounds.
Theoretical frameworks in phonology
- Autosegmental Phonology
- Element Theory: an approach to subsegmental phonology that assumes that the building blocks of speech sounds and tones are acoustic elements.[34]
- Exemplar Theory
- Generative Phonology
See also
- Accent (sociolinguistics)
- Absolute neutralisation
- Cherology
- English phonology
- List of phonologists (also Category: Phonologists)
- Morphophonology
- Phoneme
- Phonological development
- Phonological hierarchy
- Prosody (linguistics)
- Phonotactics
- Second language phonology
- Phonological rule
- Neogrammarian
Notes
- ISBN 978-0-521-19579-9.
- S2CID 60752232.
- Stokoe, William C.(1978) [1960]. Sign Language Structure: An outline of the visual communication systems of the American deaf. Department of Anthropology and Linguistics, University at Buffalo. Studies in linguistics, Occasional papers. Vol. 8 (2nd ed.). Silver Spring, MD: Linstok Press.
- ^ "Definition of PHONOLOGY". www.merriam-webster.com. Retrieved 3 January 2022.
- ^ ISBN 978-0-521-23728-4. Retrieved 8 January 2011 (Paperback ISBN 0-521-28183-0).
- ISBN 978-0-631-19775-1. Retrieved 8 January 2011 (Paperback ISBN 0-631-19776-1).
- ^ a b Trubetzkoy N., Grundzüge der Phonologie (published 1939), translated by C. Baltaxe as Principles of Phonology, University of California Press, 1969.
- ISBN 978-1-4051-3083-7. Retrieved 8 January 2011 (Alternative ISBN 1-4051-3083-0).
- ^ Bernards, Monique, "Ibn Jinnī", in: Encyclopaedia of Islam, THREE, edited by Kate Fleet, Gudrun Krämer, Denis Matringe, John Nawas, Everett Rowson. Consulted online on 27 May 2021. First published online 2021; first print edition ISBN 9789004435964, 2021.
- ^ ISSN 2629-172X. Retrieved 28 December 2021.
- ^ Anon (probably Louis Havet). (1873) "Sur la nature des consonnes nasales". Revue critique d'histoire et de littérature 13, No. 23, p. 368.
- ^ Roman Jakobson, Selected Writings: Word and Language, Volume 2, Walter de Gruyter, 1971, p. 396.
- ^ E. F. K. Koerner, Ferdinand de Saussure: Origin and Development of His Linguistic Thought in Western Studies of Language. A contribution to the history and theory of linguistics, Braunschweig: Friedrich Vieweg & Sohn [Oxford & Elmsford, N.Y.: Pergamon Press], 1973.
- ^ Chomsky, Noam; Halle, Morris (1968). The Sound Pattern of English. New York: Harper & Row.
- ^ Jakobson, Roman; Fant, Gunnar; Halle, Morris (1952). Preliminaries to Speech Analysis. Cambridge, MA: MIT Press.
- ^ Goldsmith, John A. (1976). Autosegmental Phonology (PhD thesis). MIT.
- ^ Clements, George N. (1985). "The geometry of phonological features". Phonology Yearbook. 2: 225–252.
- ISBN 978-1-315-67542-8.
- ISBN 978-1-315-67542-8.
- ^ Anderson, John M. (1987). "The tradition of structural analogy". In Steele, R.; Threadgold, T. (eds.). Language topics: Essays in honour of Michael Halliday. Amsterdam: John Benjamins. pp. 33–43.
- ISBN 978-1-315-67542-8.
- ISBN 0748637427.
- ISBN 978-1-315-67542-8.
- ^ Browman, Catherine P.; Goldstein, Louis M. (1986). "Towards an articulatory phonology". Phonology. 3 (1): 219–252.
- ISBN 978-1-315-67542-8.
- ISBN 978-0-262-19526-3.
- ^ McCarthy, John J. (2010). "An Introduction to Harmonic Serialism". Language and Linguistics Compass. 4 (10): 1001–1018.
- ISBN 113802581X.
- ^ Blevins, Juliette. 2004. Evolutionary phonology: The emergence of sound patterns. Cambridge University Press.
- ^ Snider, Keith (2014). "On Establishing Underlying Tonal Contrast". Language Documentation & Conservation. 8: 707–737.
- ISBN 0521192773.
- ISBN 978-0-19-921578-2.
- ^ Browman, Catherine P.; Goldstein, Louis M. (1986). "Towards an articulatory phonology". Phonology. 3 (1): 219–252.
- ISBN 0748637435.
Bibliography
- Anderson, John M.; and Ewen, Colin J. (1987). Principles of dependency phonology. Cambridge: Cambridge University Press.
- Bloch, Bernard (1941). "Phonemic overlapping". American Speech. 16 (4): 278–284. JSTOR 486567.
- Bloomfield, Leonard. (1933). Language. New York: H. Holt and Company. (Revised version of Bloomfield's 1914 An introduction to the study of language).
- Brentari, Diane (1998). A prosodic model of sign language phonology. Cambridge, MA: MIT Press.
- Chomsky, Noam. (1964). Current issues in linguistic theory. In J. A. Fodor and J. J. Katz (Eds.), The structure of language: Readings in the philosophy of language (pp. 91–112). Englewood Cliffs, NJ: Prentice-Hall.
- Chomsky, Noam; and Halle, Morris. (1968). The sound pattern of English. New York: Harper & Row.
- S2CID 62237665.
- Clements, George N.; and Samuel J. Keyser. (1983). CV phonology: A generative theory of the syllable. Linguistic inquiry monographs (No. 9). Cambridge, MA: MIT Press. ISBN 0-262-03098-5 (hbk).
- de Lacy, Paul, ed. (2007). The Cambridge Handbook of Phonology. Cambridge University Press. ISBN 978-0-521-84879-4. Retrieved 8 January 2011.
- Donegan, Patricia. (1985). On the Natural Phonology of Vowels. New York: Garland. ISBN 0-8240-5424-5.
- Gilbers, Dicky; de Hoop, Helen (1998). "Conflicting constraints: An introduction to optimality theory". Lingua. 104 (1–2): 1–12.
- Goldsmith, John A. (1979). The aims of autosegmental phonology. In D. A. Dinnsen (Ed.), Current approaches to phonological theory (pp. 202–222). Bloomington: Indiana University Press.
- Goldsmith, John A. (1989). Autosegmental and metrical phonology: A new synthesis. Oxford: Basil Blackwell.
- ISBN 978-1-4051-5768-1.
- Gussenhoven, Carlos & Jacobs, Haike. "Understanding Phonology", Hodder & Arnold, 1998. 2nd edition 2005.
- Hale, Mark; Reiss, Charles (2008). The Phonological Enterprise. Oxford, UK: Oxford University Press. ISBN 978-0-19-953397-8.
- Halle, Morris (1954). "The strategy of phonemics". Word. 10 (2–3): 197–209.
- Halle, Morris. (1959). The sound pattern of Russian. The Hague: Mouton.
- Harris, Zellig. (1951). Methods in structural linguistics. Chicago: Chicago University Press.
- Hockett, Charles F. (1955). A manual of phonology. Indiana University publications in anthropology and linguistics, memoirs II. Baltimore: Waverley Press.
- ISBN 9780123547507.
- Jakobson, Roman (1949). "On the identification of phonemic entities". Travaux du Cercle Linguistique de Copenhague. 5: 205–213.
- Jakobson, Roman; Fant, Gunnar; and Halle, Morris. (1952). Preliminaries to speech analysis: The distinctive features and their correlates. Cambridge, MA: MIT Press.
- Kaisse, Ellen M.; and Shaw, Patricia A. (1985). On the theory of lexical phonology. In E. Colin and J. Anderson (Eds.), Phonology Yearbook 2 (pp. 1–30).
- Kenstowicz, Michael. (1994). Phonology in generative grammar. Oxford: Basil Blackwell.
- Ladefoged, Peter. (1982). A course in phonetics (2nd ed.). London: Harcourt Brace Jovanovich.
- Martinet, André (1949). Phonology as functional phonetics. Oxford: Blackwell.
- Martinet, André (1955). Économie des changements phonétiques: Traité de phonologie diachronique. Berne: A. Francke S.A.
- Napoli, Donna Jo (1996). Linguistics: An Introduction. New York: Oxford University Press.
- Pike, Kenneth Lee (1947). Phonemics: A technique for reducing languages to writing. Ann Arbor: University of Michigan Press.
- Sandler, Wendy and Lillo-Martin, Diane. (2006). Sign language and linguistic universals. Cambridge: Cambridge University Press.
- JSTOR 409004.
- Sapir, Edward (1933). "La réalité psychologique des phonémes". Journal de Psychologie Normale et Pathologique. 30: 247–265.
- de Saussure, Ferdinand. (1916). Cours de linguistique générale. Paris: Payot.
- Stampe, David. (1979). A dissertation on natural phonology. New York: Garland.
- JSTOR 409603.
- Trager, George L.; Bloch, Bernard (1941). "The syllabic phonemes of English". Language. 17 (3): 223–246. JSTOR 409203.
- Trubetzkoy, Nikolai. (1939). Grundzüge der Phonologie. Travaux du Cercle Linguistique de Prague 7.
- Twaddell, William F. (1935). On defining the phoneme. Language monograph no. 16. Language.
External links
- Media related to Phonology at Wikimedia Commons
- Phonetics and phonology at Curlie