Auditory phonetics

Source: Wikipedia, the free encyclopedia.

Auditory phonetics is the branch of phonetics concerned with the hearing of speech sounds and with speech perception. It thus entails the study of the relationships between speech stimuli and a listener's responses to such stimuli, as mediated by mechanisms of the peripheral and central auditory systems, including certain areas of the brain. It is said to constitute one of the three main branches of phonetics, along with acoustic and articulatory phonetics,[1][2] though the three overlap in their methods and questions.[3]

Physical scales and auditory sensations

There is no direct connection between auditory sensations and the physical properties of sound that give rise to them. While the physical (acoustic) properties are objectively measurable, auditory sensations are subjective and can only be studied by asking listeners to report on their perceptions.[4] The table below shows some correspondences between physical properties and auditory sensations.

Physical property        Auditory perception
amplitude or intensity   loudness
fundamental frequency    pitch
spectral structure       sound quality
duration                 length
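The nonlinear character of these correspondences can be illustrated with the mel scale, one widely used (though not uncontested) psychoacoustic approximation of perceived pitch as a function of frequency. The sketch below is an illustration added here, not part of the source article; it uses the common O'Shaughnessy variant of the formula, in which 1000 Hz is defined as 1000 mels:

```python
import math

def hz_to_mel(f_hz: float) -> float:
    """Approximate perceived pitch (mels) for a frequency in Hz
    (O'Shaughnessy formula, one common variant of the mel scale)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

# Equal steps on the physical scale (Hz) are not equal steps
# on the perceptual scale (mels): the mapping compresses at
# higher frequencies.
for f in (100, 200, 1000, 2000, 4000, 8000):
    print(f"{f:5d} Hz -> {hz_to_mel(f):7.1f} mel")
```

The point of the sketch is simply that doubling frequency does not double perceived pitch, which is why perceptual attributes such as pitch and loudness must be measured with listening experiments rather than read directly off acoustic measurements.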

Segmental and suprasegmental

Auditory phonetics is concerned with both segmental (chiefly vowels and consonants) and prosodic (such as stress, tone, rhythm and intonation) aspects of speech. While it is possible to study the auditory perception of these phenomena without context, in continuous speech all these variables are processed in parallel with significant variability and complex interactions between them.[5][6] For example, it has been observed that vowels, which are usually described as different from each other in the frequencies of their formants, also have intrinsic values of fundamental frequency (and presumably therefore of pitch) that are different according to the height of the vowel. Thus open vowels typically have lower fundamental frequency than close vowels in a given context,[7] and vowel recognition is likely to interact with the perception of prosody.

In speech research

If there is a distinction to be made between auditory phonetics and speech perception, it is that the former is more closely associated with traditional, non-instrumental approaches to the analysis of speech, while the latter is more closely associated with experimental, laboratory-based study.

Kenneth L. Pike stated "Auditory analysis is essential to phonetic study since the ear can register all those features of sound waves, and only those features, which are above the threshold of audibility ... whereas analysis by instruments must always be checked against auditory reaction".[9] Herbert Pilch attempted to define auditory phonetics in such a way as to avoid any reference to acoustic parameters.[10] In the auditory analysis of phonetic data such as recordings of speech, it is clearly an advantage to have been trained in analytical listening. Practical phonetic training has since the 19th century been seen as an essential foundation for phonetic analysis and for the teaching of pronunciation, and it remains a significant part of modern phonetics. The best-known type of auditory training has been in the system of cardinal vowels; there is disagreement about the relative importance of the auditory and articulatory factors underlying the system, but the importance of auditory training for those who are to use it is indisputable.[11]

Training in the auditory analysis of prosodic factors such as pitch and rhythm is also important. Not all research on prosody has been based on auditory techniques: some pioneering work on prosodic features using laboratory instruments was carried out in the 20th century (e.g. Elizabeth Uldall's work using synthesized intonation contours,[12] Dennis Fry's work on stress perception,[13] or Daniel Jones's early work analysing pitch contours by manually operating the pickup arm of a gramophone to listen repeatedly to individual syllables, checking where necessary against a tuning fork).[14] However, the great majority of work on prosody was based on auditory analysis until the recent arrival of approaches explicitly based on computer analysis of the acoustic signal, such as ToBI, INTSINT or the IPO system.[15]

References

  1. .
  2. Ello. "Auditory Phonetics". ello.uos.de. Retrieved 11 November 2020.
  3. Mack, M. (2004). "Auditory phonetics" in Malmkjaer, K. (ed.) The Linguistics Encyclopedia. Routledge, p. 51.
  4. .
  5. .
  6. Elman, J. and McClelland, J. (1982). "Exploiting lawful variability in the speech wave" in J. S. Perkell and D. Klatt (eds) Invariance and Variability in Speech Processes. Erlbaum, pp. 360–380.
  7. Turner, Paul; Verhoeven, Jo (2011). "Intrinsic vowel pitch: a gradient feature of vowel systems?" (PDF). Proceedings of the International Congress of Phonetic Sciences: 2038–2041. Retrieved 13 November 2020.
  8. Labov, William (1966). The Social Stratification of English in New York City. Washington, D.C.: Center for Applied Linguistics.
  9. Pike, Kenneth (1943). Phonetics. University of Michigan. p. 31.
  10. .
  11. Ladefoged, Peter (1967). Three Areas of Experimental Phonetics. Oxford. pp. 74–75.
  12. Uldall, Elizabeth (1964). "Dimensions of meaning in intonation" in Abercrombie, D. et al. (eds) In Honour of Daniel Jones. Longman.
  13. .
  14. Jones, Daniel (1909). Intonation Curves. Leipzig: Teubner.
  15. 't Hart, J.; Collier, R.; Cohen, A. (1990). A Perceptual Study of Intonation. Cambridge.