Speech processing

Source: Wikipedia, the free encyclopedia.

Speech processing is the study of speech signals and of the methods used to process them. Application areas include speaker diarization, speech enhancement, speaker recognition, etc.[1]

History

Early attempts at speech processing and recognition were primarily focused on understanding a handful of simple phonetic elements such as vowels. In 1952, three researchers at Bell Labs, Stephen Balashek, R. Biddulph, and K. H. Davis, developed a system that could recognize digits spoken by a single speaker.[2] Pioneering works in the field of speech recognition based on spectrum analysis had been reported as early as the 1940s.[3]

Linear predictive coding (LPC) later became the basis for speech synthesizer chips, such as the Texas Instruments LPC Speech Chips used in the Speak & Spell toys from 1978.[5]

One of the first commercially available speech recognition products was Dragon Dictate, released in 1990. In 1992, technology developed by Lawrence Rabiner and others at Bell Labs was used by AT&T in their Voice Recognition Call Processing service to route calls without a human operator. By this point, the vocabulary of these systems was larger than the average human vocabulary.[6]

By the early 2000s, the dominant speech processing strategy started to shift away from hidden Markov models towards more modern neural networks and deep learning.[citation needed]

Techniques

Dynamic time warping

Dynamic time warping (DTW) is an algorithm for measuring similarity between two temporal sequences, which may vary in speed. In general, DTW is a method that calculates an optimal match between two given sequences (e.g. time series) subject to certain restrictions and rules. The optimal match is the one that satisfies all the restrictions and rules and has the minimal cost, where the cost is computed as the sum of absolute differences, for each matched pair of indices, between their values.[citation needed]
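The cost recursion described above can be sketched in a few lines. This is a minimal illustration on 1-D sequences using the classic match/insert/delete step pattern, not a production implementation:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping cost between two 1-D sequences.

    The cost of a matched pair of indices is the absolute difference
    of their values; the result is the minimal total cost over all
    monotonic, continuous alignments of the two sequences.
    """
    n, m = len(a), len(b)
    # D[i, j] = minimal cost of aligning a[:i] with b[:j]
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # step pattern: diagonal match, insertion, deletion
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

# Two renditions of the same shape, spoken at different "speeds":
slow = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]
fast = [0.0, 2.0, 3.0, 1.0, 0.0]
print(dtw_distance(slow, fast))  # → 2.0
```

Note that, unlike a pointwise Euclidean distance, DTW remains small here even though the sequences have different lengths, because the warping path absorbs the difference in speed.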

Hidden Markov models

A hidden Markov model can be represented as the simplest dynamic Bayesian network. The goal of the algorithm is to estimate a hidden variable x(t) given a list of observations y(t). By applying the Markov property, the conditional probability distribution of the hidden variable x(t) at time t, given the values of the hidden variable x at all times, depends only on the value of the hidden variable x(t − 1). Similarly, the value of the observed variable y(t) only depends on the value of the hidden variable x(t) (both at time t).[citation needed]
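The filtering recursion implied by these two conditional-independence assumptions can be sketched with the forward algorithm. The two-state "silence/speech" model and all probabilities below are made-up illustrative values, not taken from the article:

```python
import numpy as np

# Toy HMM: hidden states 0 = "silence", 1 = "speech";
# observations 0 = "low energy", 1 = "high energy".
T = np.array([[0.7, 0.3],    # P(x(t) | x(t-1)): rows index the previous state
              [0.4, 0.6]])
E = np.array([[0.9, 0.1],    # P(y(t) | x(t)): rows index the hidden state
              [0.2, 0.8]])
prior = np.array([0.5, 0.5])

def forward(observations):
    """Filtering: P(x(t) | y(1..t)) via the forward algorithm.

    The Markov property lets us fold in one observation at a time:
    belief(t) ∝ E[:, y(t)] * (T^T @ belief(t-1)).
    """
    belief = prior * E[:, observations[0]]
    belief /= belief.sum()
    for y in observations[1:]:
        belief = E[:, y] * (T.T @ belief)
        belief /= belief.sum()
    return belief

print(forward([1, 1, 0]))  # posterior over {silence, speech} after 3 frames
```

After two high-energy frames the posterior mass concentrates on the "speech" state; a single low-energy frame then pulls it back toward "silence", which is exactly the smoothing behaviour that makes HMMs useful for frame-level speech decisions.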

Artificial neural networks

An artificial neural network (ANN) is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it.[citation needed]

Phase-aware processing

Phase is usually assumed to be a uniformly distributed random variable and thus useless. This is due to the wrapping of phase: the arctangent function is not continuous because of its periodic jumps of 2π. After phase unwrapping (see [8], Chapter 2.3, "Instantaneous phase and frequency"), the phase can be expressed as the sum of a linear phase term, determined by the temporal shift of each analysis frame, and the phase contribution of the vocal tract and the phase source.[7][9] The obtained phase estimates can be used for noise reduction: temporal smoothing of the instantaneous phase[10] and of its derivatives with respect to time (instantaneous frequency) and frequency (group delay),[11] and smoothing of phase across frequency.[11] Joint amplitude and phase estimators can recover speech more accurately based on the assumption of a von Mises distribution of phase.[9]
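The unwrapping step can be illustrated on a synthetic tone with NumPy's `np.unwrap`. This sketch only shows the removal of the 2π jumps and the recovery of instantaneous frequency as the time derivative of the unwrapped phase; it is not an implementation of the estimators from the cited papers:

```python
import numpy as np

fs = 1000.0                           # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
f0 = 50.0                             # frequency of the synthetic tone
x = np.exp(1j * 2 * np.pi * f0 * t)   # analytic signal of a 50 Hz tone

wrapped = np.angle(x)                 # arctangent phase, jumps at ±π
unwrapped = np.unwrap(wrapped)        # remove the 2π discontinuities

# Instantaneous frequency = derivative of unwrapped phase / (2π)
inst_freq = np.diff(unwrapped) * fs / (2 * np.pi)
print(inst_freq.mean())               # ≈ 50 Hz
```

Differentiating the wrapped phase directly would produce large spikes at every 2π jump; after unwrapping, the instantaneous frequency is recovered cleanly.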


References

  1. ].
  2. Myasnikov, L. L.; Myasnikova, Ye. N. (1970). Automatic recognition of sound pattern (in Russian). Leningrad: Energiya.
  3. ISSN 1932-8346.
  4. "VC&G Interview: 30 Years Later, Richard Wiggins Talks Speak & Spell Development".
  5. S2CID 6175701.
  6. Retrieved 2017-12-03.
  7. .
  8. Kulmer, Josef; Mowlaee, Pejman (April 2015). "Harmonic phase estimation in single-channel speech enhancement using von Mises distribution and prior SNR". Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on. IEEE. pp. 5063–5067.
  9. S2CID 15503015. Retrieved 2017-12-03.
  10. Retrieved 2017-12-03.