Hidden Markov model
A hidden Markov model (HMM) is a statistical model in which the system being modeled is assumed to be a Markov process with unobservable ("hidden") states, and the observations depend on the hidden state at each step.
Hidden Markov models are known for their applications to thermodynamics, statistical mechanics, physics, chemistry, economics, finance, signal processing, information theory, pattern recognition—such as speech,[1] handwriting, gesture recognition,[2] part-of-speech tagging, musical score following,[3] partial discharges[4] and bioinformatics.[5][6]
Definition
Let $X_n$ and $Y_n$ be discrete-time stochastic processes and $n \geq 1$. The pair $(X_n, Y_n)$ is a hidden Markov model if
- $X_n$ is a Markov process whose behavior is not directly observable ("hidden");
- $\operatorname{P}(Y_n \in A \mid X_1 = x_1, \ldots, X_n = x_n) = \operatorname{P}(Y_n \in A \mid X_n = x_n)$ for every $n \geq 1$, every Borel set $A$, and every sequence of values $x_1, \ldots, x_n$.
Let $X_t$ and $Y_t$ be continuous-time stochastic processes. The pair $(X_t, Y_t)$ is a hidden Markov model if
- $X_t$ is a Markov process whose behavior is not directly observable ("hidden");
- $\operatorname{P}(Y_{t_0} \in A \mid \{X_t \in B_t\}_{t \leq t_0}) = \operatorname{P}(Y_{t_0} \in A \mid X_{t_0} \in B_{t_0})$ for every $t_0$, every Borel set $A$, and every family of Borel sets $\{B_t\}_{t \leq t_0}$.
Terminology
The states of the process $X_n$ (resp. $X_t$) are called hidden states, and $\operatorname{P}(Y_n \in A \mid X_n = x_n)$ (resp. $\operatorname{P}(Y_{t_0} \in A \mid X_{t_0} \in B_{t_0})$) is called the emission probability or output probability.
Examples
In its discrete form, a hidden Markov process can be visualized as a generalization of the urn problem with replacement (where each item from the urn is returned to the original urn before the next step). Consider this example: in a room that is not visible to an observer there is a genie. The room contains urns X1, X2, X3, ..., each of which contains a known mix of balls, with each ball labeled y1, y2, y3, ... . The genie chooses an urn in that room and randomly draws a ball from it, then puts the ball onto a conveyor belt, where the observer can observe the sequence of balls but not the sequence of urns from which they were drawn. The genie has some procedure for choosing urns: the choice of the urn for the n-th ball depends only upon a random number and the choice of the urn for the (n − 1)-th ball. Because the choice of urn does not directly depend on any earlier urns, this is a Markov process. It can be described by the upper part of the diagram in Figure 1.
The Markov process itself cannot be observed, only the sequence of labeled balls, thus this arrangement is called a "hidden Markov process". This is illustrated by the lower part of the diagram shown in Figure 1, where one can see that balls y1, y2, y3, y4 can be drawn at each state. Even if the observer knows the composition of the urns and has just observed a sequence of three balls, e.g. y1, y2 and y3 on the conveyor belt, the observer still cannot be sure which urn (i.e., at which state) the genie has drawn the third ball from. However, the observer can work out other information, such as the likelihood that the third ball came from each of the urns.
Weather guessing game
Consider two friends, Alice and Bob, who live far apart from each other and who talk together daily over the telephone about what they did that day. Bob is only interested in three activities: walking in the park, shopping, and cleaning his apartment. The choice of what to do is determined exclusively by the weather on a given day. Alice has no definite information about the weather, but she knows general trends. Based on what Bob tells her he did each day, Alice tries to guess what the weather must have been like.
Alice believes that the weather operates as a discrete Markov chain. There are two states, "Rainy" and "Sunny", but she cannot observe them directly, that is, they are hidden from her. On each day, there is a certain chance that Bob will perform one of the following activities, depending on the weather: "walk", "shop", or "clean". Since Bob tells Alice about his activities, those are the observations. The entire system is that of a hidden Markov model (HMM).
Alice knows the general weather trends in the area, and what Bob likes to do on average. In other words, the parameters of the HMM are known. They can be represented as follows in the Python programming language:
states = ("Rainy", "Sunny")
observations = ("walk", "shop", "clean")
start_probability = {"Rainy": 0.6, "Sunny": 0.4}
transition_probability = {
"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
"Sunny": {"Rainy": 0.4, "Sunny": 0.6},
}
emission_probability = {
"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
"Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1},
}
In this piece of code, start_probability represents Alice's belief about which state the HMM is in when Bob first calls her (all she knows is that it tends to be rainy on average). The particular probability distribution used here is not the equilibrium one, which is (given the transition probabilities) approximately {'Rainy': 0.57, 'Sunny': 0.43}. The transition_probability represents the change of the weather in the underlying Markov chain. In this example, there is only a 30% chance that tomorrow will be sunny if today is rainy. The emission_probability represents how likely Bob is to perform a certain activity on each day. If it is rainy, there is a 50% chance that he is cleaning his apartment; if it is sunny, there is a 60% chance that he is outside for a walk.
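The equilibrium distribution quoted above can be checked numerically. A minimal sketch in plain Python (no libraries assumed), iterating the transition matrix to its fixed point:

```python
# Iterate the two-state weather chain until the state distribution converges
# to its stationary (equilibrium) distribution.
transition_probability = {
    "Rainy": {"Rainy": 0.7, "Sunny": 0.3},
    "Sunny": {"Rainy": 0.4, "Sunny": 0.6},
}

dist = {"Rainy": 0.6, "Sunny": 0.4}  # start from Alice's prior
for _ in range(100):
    dist = {
        s: sum(dist[r] * transition_probability[r][s] for r in dist)
        for s in ("Rainy", "Sunny")
    }

# converges to {'Rainy': 4/7 ≈ 0.571, 'Sunny': 3/7 ≈ 0.429}
print(dist)
```

The exact fixed point solves $\pi = \pi P$, giving $\pi = (4/7,\, 3/7)$, which rounds to the {'Rainy': 0.57, 'Sunny': 0.43} quoted above.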
A similar example is further elaborated in the Viterbi algorithm page.
Structural architecture
The diagram below shows the general architecture of an instantiated HMM. Each oval shape represents a random variable that can adopt any of a number of values. The random variable x(t) is the hidden state at time t (with the model from the above diagram, x(t) ∈ { x1, x2, x3 }). The random variable y(t) is the observation at time t (with y(t) ∈ { y1, y2, y3, y4 }). The arrows in the diagram (often called a trellis diagram) denote conditional dependencies.
From the diagram, it is clear that the conditional probability distribution of the hidden variable x(t) at time t, given the values of the hidden variable x at all times, depends only on the value of the hidden variable x(t − 1); the values at time t − 2 and before have no influence. This is called the Markov property. Similarly, the value of the observed variable y(t) only depends on the value of the hidden variable x(t) (both at time t).
In the standard type of hidden Markov model considered here, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from a categorical distribution) or continuous (typically from a Gaussian distribution). The parameters of a hidden Markov model are of two types: transition probabilities and emission probabilities (also known as output probabilities).
The hidden state space is assumed to consist of one of N possible values, modelled as a categorical distribution. (See the section below on extensions for other possibilities.) This means that for each of the N possible states that a hidden variable at time t can be in, there is a transition probability from this state to each of the N possible states of the hidden variable at time $t+1$, for a total of $N^2$ transition probabilities. Note that the set of transition probabilities for transitions from any given state must sum to 1. Thus, the $N \times N$ matrix of transition probabilities is a Markov matrix. Because any one transition probability can be determined once the others are known, there are a total of $N(N-1)$ transition parameters.
In addition, for each of the N possible states, there is a set of emission probabilities governing the distribution of the observed variable at a particular time given the state of the hidden variable at that time. The size of this set depends on the nature of the observed variable. For example, if the observed variable is discrete with M possible values, governed by a categorical distribution, there will be $M-1$ separate parameters, for a total of $N(M-1)$ emission parameters over all hidden states. On the other hand, if the observed variable is an M-dimensional vector distributed according to an arbitrary multivariate Gaussian distribution, there will be $M$ parameters controlling the means and $M(M+1)/2$ parameters controlling the covariance matrix, for a total of $N\left(M + \frac{M(M+1)}{2}\right) = \frac{NM(M+3)}{2}$ emission parameters.
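The parameter counts for the discrete case can be made concrete with a small helper (a sketch; the function name is illustrative):

```python
def hmm_free_parameters(n_states, n_symbols):
    """Free parameters of a discrete HMM with n_states hidden states and
    n_symbols output symbols: each row of the transition matrix and each
    row of the emission matrix sums to 1, so one entry per row is
    determined by the others."""
    transition = n_states * (n_states - 1)
    emission = n_states * (n_symbols - 1)
    return transition, emission

# The two-state, three-symbol weather model above:
print(hmm_free_parameters(2, 3))  # (2, 4)
```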
Inference
Several inference problems are associated with hidden Markov models, as outlined below.
Probability of an observed sequence
The task is to compute, given the parameters of the model, the probability of a particular output sequence. This requires summation over all possible state sequences:
The probability of observing a sequence $Y = y(0), y(1), \ldots, y(L-1)$ of length $L$ is given by

$$P(Y) = \sum_{X} P(Y \mid X)\, P(X),$$

where the sum runs over all possible hidden-node sequences $X = x(0), x(1), \ldots, x(L-1)$.
Applying the principle of dynamic programming, this problem, too, can be handled efficiently using the forward algorithm.
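Naively, the sum above ranges over exponentially many state sequences; the forward algorithm computes the same quantity in $O(N^2 L)$ time. A minimal sketch in plain Python, using the weather model's parameters from the example above (the dictionary layout is an assumption carried over from that example):

```python
def forward(observations, states, start_p, trans_p, emit_p):
    """Forward algorithm: alpha[s] = P(y_1..y_t, x_t = s).
    Returns the total probability of the observation sequence."""
    alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {
            s: sum(alpha[r] * trans_p[r][s] for r in states) * emit_p[s][obs]
            for s in states
        }
    return sum(alpha.values())

states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

# Total probability of Bob reporting walk, shop, clean on three days.
print(forward(["walk", "shop", "clean"], states, start_p, trans_p, emit_p))
```

On long sequences, real implementations rescale alpha at each step (or work in log space) to avoid numerical underflow.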
Probability of the latent variables
A number of related tasks ask about the probability of one or more of the latent variables, given the model's parameters and a sequence of observations
Filtering
The task is to compute, given the model's parameters and a sequence of observations, the distribution over hidden states of the last latent variable at the end of the sequence, i.e. to compute $P(x(t) \mid y(1), \ldots, y(t))$. This task is used when the sequence of latent variables is thought of as the underlying states that a process moves through at a sequence of points in time, with corresponding observations at each point. Then, it is natural to ask about the state of the process at the end.
This problem can be handled efficiently using the forward algorithm. An example is when the algorithm is applied to a hidden Markov network to determine $\mathrm{P}(h_t \mid v_{1:t})$.
Smoothing
This is similar to filtering but asks about the distribution of a latent variable somewhere in the middle of a sequence, i.e. to compute $P(x(k) \mid y(1), \ldots, y(t))$ for some $k < t$. From the perspective described above, this can be thought of as the probability distribution over hidden states for a point in time k in the past, relative to time t.
The forward-backward algorithm is a good method for computing the smoothed values for all hidden state variables.
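The two recursions can be sketched in plain Python, reusing the weather model's parameters from the example above (the dictionary layout is an assumption carried over from that example):

```python
def forward_backward(observations, states, start_p, trans_p, emit_p):
    """Smoothing: posterior P(x_t | y_1..y_T) for every position t,
    via the forward-backward algorithm (unnormalized messages)."""
    T = len(observations)
    # Forward pass: alpha[t][s] = P(y_1..y_t, x_t = s)
    alpha = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    for t in range(1, T):
        alpha.append({
            s: sum(alpha[t - 1][r] * trans_p[r][s] for r in states)
               * emit_p[s][observations[t]]
            for s in states
        })
    # Backward pass: beta[t][s] = P(y_{t+1}..y_T | x_t = s)
    beta = [dict.fromkeys(states, 1.0) for _ in range(T)]
    for t in range(T - 2, -1, -1):
        beta[t] = {
            s: sum(trans_p[s][r] * emit_p[r][observations[t + 1]]
                   * beta[t + 1][r] for r in states)
            for s in states
        }
    # Combine and normalize: P(x_t = s | y_1..y_T)
    posteriors = []
    for t in range(T):
        z = sum(alpha[t][s] * beta[t][s] for s in states)
        posteriors.append({s: alpha[t][s] * beta[t][s] / z for s in states})
    return posteriors

states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

posteriors = forward_backward(["walk", "shop", "clean"],
                              states, start_p, trans_p, emit_p)
# posteriors[t] is the smoothed distribution over states at day t.
```

As with the forward algorithm alone, practical implementations rescale the messages at each step to avoid underflow on long sequences.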
Most likely explanation
The task, unlike the previous two, asks about the joint probability of the entire sequence of hidden states that generated a particular sequence of observations, i.e. finding the state sequence that maximizes $P(x(0), \ldots, x(L-1) \mid y(0), \ldots, y(L-1))$.
This task requires finding a maximum over all possible state sequences, and can be solved efficiently by the Viterbi algorithm.
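A minimal sketch of the Viterbi recursion in plain Python, applied to the weather model above (the dictionary layout is an assumption carried over from that example):

```python
def viterbi(observations, states, start_p, trans_p, emit_p):
    """Viterbi algorithm: most likely hidden-state sequence and its
    (joint) probability. best[s] holds the probability of the best path
    ending in state s and that path itself."""
    best = {s: (start_p[s] * emit_p[s][observations[0]], [s]) for s in states}
    for obs in observations[1:]:
        best = {
            s: max(
                (best[r][0] * trans_p[r][s] * emit_p[s][obs], best[r][1] + [s])
                for r in states
            )
            for s in states
        }
    prob, path = max(best.values())
    return path, prob

states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

# Most likely weather for walk, shop, clean: Sunny, Rainy, Rainy.
print(viterbi(["walk", "shop", "clean"], states, start_p, trans_p, emit_p))
```

For the three observations walk, shop, clean this yields the path Sunny → Rainy → Rainy with joint probability 0.01344, the same answer worked out on the Viterbi algorithm page.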
Statistical significance
For some of the above problems, it may also be interesting to ask about statistical significance. What is the probability that a sequence drawn from some null distribution will have an HMM probability (in the case of the forward algorithm) or a maximum state sequence probability (in the case of the Viterbi algorithm) at least as large as that of a particular output sequence?[8] When an HMM is used to evaluate the relevance of a hypothesis for a particular output sequence, the statistical significance indicates the false positive rate associated with failing to reject the hypothesis for the output sequence.
Learning
The parameter learning task in HMMs is to find, given an output sequence or a set of such sequences, the best set of state transition and emission probabilities. The task is usually to derive the maximum likelihood estimate of the parameters of the HMM given the set of output sequences. No tractable algorithm is known for solving this problem exactly, but a local maximum likelihood can be derived efficiently using the Baum–Welch algorithm, a special case of the expectation-maximization (EM) algorithm.
If the HMMs are used for time series prediction, more sophisticated Bayesian inference methods, like Markov chain Monte Carlo (MCMC) sampling, have proven favorable over finding a single maximum likelihood model, both in terms of accuracy and stability.[9] Since MCMC imposes a significant computational burden, in cases where computational scalability is also of interest one may resort instead to variational approximations to Bayesian inference, e.g.[10] Approximate variational inference offers computational efficiency comparable to expectation-maximization, while yielding an accuracy profile only slightly inferior to exact MCMC-type Bayesian inference.
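A single Baum–Welch re-estimation step can be sketched in plain Python (a sketch only: a real implementation iterates to convergence and rescales the messages, or works in log space, to avoid underflow; the dictionary layout follows the weather example above):

```python
def baum_welch_step(obs, states, symbols, start_p, trans_p, emit_p):
    """One EM (Baum-Welch) update for a discrete HMM. Returns the
    re-estimated (start_p, trans_p, emit_p) and the likelihood of obs
    under the *current* parameters."""
    T = len(obs)
    # E-step: forward and backward messages.
    alpha = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    for t in range(1, T):
        alpha.append({s: sum(alpha[t - 1][r] * trans_p[r][s] for r in states)
                         * emit_p[s][obs[t]] for s in states})
    beta = [dict.fromkeys(states, 1.0) for _ in range(T)]
    for t in range(T - 2, -1, -1):
        beta[t] = {s: sum(trans_p[s][r] * emit_p[r][obs[t + 1]]
                          * beta[t + 1][r] for r in states) for s in states}
    lik = sum(alpha[-1].values())
    # Expected state occupancies and expected transitions.
    gamma = [{s: alpha[t][s] * beta[t][s] / lik for s in states}
             for t in range(T)]
    xi = [{(r, s): alpha[t][r] * trans_p[r][s] * emit_p[s][obs[t + 1]]
                   * beta[t + 1][s] / lik
           for r in states for s in states} for t in range(T - 1)]
    # M-step: normalize the expected counts.
    new_start = dict(gamma[0])
    new_trans = {r: {s: sum(x[(r, s)] for x in xi)
                        / sum(g[r] for g in gamma[:-1]) for s in states}
                 for r in states}
    new_emit = {s: {k: sum(g[s] for g, o in zip(gamma, obs) if o == k)
                       / sum(g[s] for g in gamma) for k in symbols}
                for s in states}
    return new_start, new_trans, new_emit, lik

states = ("Rainy", "Sunny")
symbols = ("walk", "shop", "clean")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

obs = ["walk", "shop", "clean", "walk", "clean"]
new_start, new_trans, new_emit, lik = baum_welch_step(
    obs, states, symbols, start_p, trans_p, emit_p)
```

Repeating the step with the re-estimated parameters never decreases the likelihood of the training sequence, which is the EM monotonicity property the Baum–Welch algorithm relies on.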
Applications
HMMs can be applied in many fields where the goal is to recover a data sequence that is not immediately observable (but other data that depend on the sequence are). Applications include:
- Computational finance[11][12]
- Single-molecule kinetic analysis[13]
- Neuroscience[14][15]
- Cryptanalysis
- Speech recognition, including Siri[16]
- Speech synthesis
- Part-of-speech tagging
- Document separation in scanning solutions
- Machine translation
- Partial discharge
- Gene prediction
- Handwriting recognition[17]
- Alignment of bio-sequences
- Time series analysis
- Activity recognition
- Protein folding[18]
- Sequence classification[19]
- Metamorphic virus detection[20]
- Sequence motif discovery (DNA and proteins)[21]
- DNA hybridization kinetics[22][23]
- Chromatin state discovery[24]
- Transportation forecasting[25]
- Solar irradiance variability[26][27][28]
History
Hidden Markov models were described in a series of statistical papers by Leonard E. Baum and other authors in the second half of the 1960s.[29][30][31][32][33] One of the first applications of HMMs was speech recognition, starting in the mid-1970s.[34][35][36][37]
In the second half of the 1980s, HMMs began to be applied to the analysis of biological sequences,[38] in particular DNA. Since then, they have become ubiquitous in the field of bioinformatics.[39]
Extensions
General state spaces
In the hidden Markov models considered above, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from a categorical distribution) or continuous (typically from a Gaussian distribution). Hidden Markov models can also be generalized to allow continuous state spaces. An example of such a model is one where the Markov process over hidden variables is a linear dynamical system, with a linear relationship among related variables and where all hidden and observed variables follow a Gaussian distribution. In simple cases, such as the linear dynamical system just mentioned, exact inference is tractable (in this case, using the Kalman filter); however, in general, exact inference in HMMs with continuous latent variables is infeasible, and approximate methods must be used, such as the extended Kalman filter or the particle filter.
More recently, inference in hidden Markov models has also been studied in nonparametric settings, where the dependency structure enables identifiability of the model[40] while the limits of learnability are still under exploration.[41]
Bayesian modeling of the transition probabilities
Hidden Markov models are generative models, in which the joint distribution of observations and hidden states, or equivalently both the prior distribution of hidden states (the transition probabilities) and the conditional distribution of observations given states (the emission probabilities), is modeled. The above algorithms implicitly assume a uniform prior over the transition probabilities, but it is also possible to create hidden Markov models with other types of prior distributions. An obvious candidate, given the categorical distribution of the transition probabilities, is the Dirichlet distribution, which is the conjugate prior of the categorical distribution. Typically, a symmetric Dirichlet distribution is chosen, reflecting ignorance about which states are inherently more likely than others.
An extension of the previously described hidden Markov models with Dirichlet priors uses a Dirichlet process in place of a Dirichlet distribution. This type of model allows for an unknown and potentially infinite number of states. It is common to use a two-level Dirichlet process, similar to the previously described model with two levels of Dirichlet distributions. Such a model is called a hierarchical Dirichlet process hidden Markov model, or HDP-HMM for short. It was originally described under the name "Infinite Hidden Markov Model"[42] and was further formalized in "Hierarchical Dirichlet Processes".[43]
Discriminative approach
A different type of extension uses a discriminative model in place of the generative model of standard HMMs. This type of model directly models the conditional distribution of the hidden states given the observations, rather than the joint distribution. An example is the maximum entropy Markov model (MEMM), which models the conditional distribution of the states using logistic regression (also known as a "maximum entropy model"). The advantage of this type of model is that arbitrary features (i.e. functions) of the observations can be modeled, allowing domain-specific knowledge of the problem at hand to be injected into the model.
A variant of the previously described discriminative model is the linear-chain conditional random field. This uses an undirected graphical model (aka Markov random field) rather than the directed graphical models of MEMM's and similar models. The advantage of this type of model is that it does not suffer from the so-called label bias problem of MEMM's, and thus may make more accurate predictions. The disadvantage is that training can be slower than for MEMM's.
Other extensions
Yet another variant is the factorial hidden Markov model, which allows for a single observation to be conditioned on the corresponding hidden variables of a set of $K$ independent Markov chains, rather than a single Markov chain. It is equivalent to a single HMM with $N^K$ states (assuming there are $N$ states for each chain), and therefore learning in such a model is difficult: for a sequence of length $T$, a straightforward Viterbi algorithm has complexity $O(N^{2K}\, T)$. To find an exact solution, a junction tree algorithm could be used, but it results in an $O(N^{K+1}\, K\, T)$ complexity. In practice, approximate techniques, such as variational approaches, could be used.[44]
All of the above models can be extended to allow for more distant dependencies among hidden states, e.g. allowing for a given state to be dependent on the previous two or three states rather than a single previous state; i.e. the transition probabilities are extended to encompass sets of three or four adjacent states (or in general $K$ adjacent states). The disadvantage of such models is that dynamic-programming algorithms for training them have an $O(N^K\, T)$ running time, for $K$ adjacent states and $T$ total observations (i.e. a length-$T$ Markov chain). This extension has been widely used in bioinformatics, in the modeling of DNA sequences.
Another recent extension is the triplet Markov model,[45] in which an auxiliary underlying process is added to model some data specificities. Many variants of this model have been proposed. A link has also been established between the theory of evidence and triplet Markov models,[46] which makes it possible to fuse data in a Markovian context[47] and to model nonstationary data.[48][49] Alternative multi-stream data fusion strategies have also been proposed in the recent literature, e.g.[50]
Finally, a different rationale for addressing the problem of modeling nonstationary data by means of hidden Markov models was suggested in 2012.[51] It consists of employing a small recurrent neural network (RNN), specifically a reservoir network,[52] to capture the evolution of the temporal dynamics in the observed data. This information, encoded in the form of a high-dimensional vector, is used as a conditioning variable of the HMM state transition probabilities. Under such a setup, one eventually obtains a nonstationary HMM whose transition probabilities evolve over time in a manner inferred from the data itself, rather than from an ad hoc model of temporal evolution.
In 2023, two new algorithms were introduced for the hidden Markov model. They enable the computation of the posterior distribution of the HMM without explicitly modeling the joint distribution, using only the conditional distributions.[53][54] Unlike traditional methods such as the forward-backward and Viterbi algorithms, which require knowledge of the joint law of the HMM and can be computationally intensive to learn, the discriminative forward-backward and discriminative Viterbi algorithms circumvent the need for a model of the observations' law.[55][56] This allows the HMM to be applied as a discriminative model, offering a more efficient and versatile approach in various applications.
A model suitable in the context of longitudinal data is the latent Markov model.[57] The basic version of this model has been extended to include individual covariates and random effects, and to model more complex data structures such as multilevel data. A complete overview of latent Markov models, with special attention to the model assumptions and their practical use, is provided in [58].
Measure theory
Given a Markov transition matrix and an invariant distribution on the states, a probability measure can be imposed on the set of subshifts. For example, consider a Markov chain on the states $A, B_1, B_2$ with some invariant distribution. If we "forget" the distinction between $B_1$ and $B_2$, we project the space of subshifts on $\{A, B_1, B_2\}$ onto the space of subshifts on $\{A, B\}$, and this projection also pushes the probability measure forward to a probability measure on the subshifts on $\{A, B\}$.
The curious thing is that the resulting probability measure on the subshifts on $\{A, B\}$ is not induced by a Markov chain on $\{A, B\}$, not even one of higher order. Intuitively, this is because if one observes a long run of $B$'s, the conditional distribution of the next symbol keeps changing, meaning that the observable part of the system can be affected by events infinitely far in the past.[59][60]
Conversely, there exists a space of subshifts on 6 symbols, projected to subshifts on 2 symbols, such that any Markov measure on the smaller subshift has a preimage measure that is not Markov of any order (Example 2.6 in [60]).
See also
- Andrey Markov
- Baum–Welch algorithm
- Bayesian inference
- Bayesian programming
- Richard James Boys
- Conditional random field
- Estimation theory
- HHpred / HHsearch, free server and software for protein sequence searching
- HMMER, a free hidden Markov model program for protein sequence analysis
- Hidden Bernoulli model
- Hidden semi-Markov model
- Hierarchical hidden Markov model
- Layered hidden Markov model
- Sequential dynamical system
- Stochastic context-free grammar
- Time series analysis
- Variable-order Markov model
- Viterbi algorithm
References
- ^ "Google Scholar".
- ^ Thad Starner, Alex Pentland. Real-Time American Sign Language Visual Recognition From Video Using Hidden Markov Models. Master's Thesis, MIT, Feb 1995, Program in Media Arts
- ^ B. Pardo and W. Birmingham. Modeling Form for On-line Following of Musical Performances Archived 2012-02-06 at the Wayback Machine. AAAI-05 Proc., July 2005.
- ^ Satish L, Gururaj BI (April 2003). "Use of hidden Markov models for partial discharge pattern classification". IEEE Transactions on Dielectrics and Electrical Insulation.
- PMID 14704198.
- PMID 22373907.
- PMID 19589158.
- ^ Sipos, I. Róbert. Parallel stratified MCMC sampling of AR-HMMs for stochastic time series prediction. In: Proceedings, 4th Stochastic Modeling Techniques and Data Analysis International Conference with Demographics Workshop (SMTDA2016), pp. 295-306. Valletta, 2016. PDF
- doi:10.1016/j.patcog.2010.09.001. Archived from the original(PDF) on 2011-04-01. Retrieved 2018-03-11.
- S2CID 61882456.
- .
- .
- PMID 35302683.
- S2CID 235703641.
- ISBN 9780465061921.
- ^ Kundu, Amlan, Yang He, and Paramvir Bahl. "Recognition of handwritten word: first and second order hidden Markov model based approach." Pattern Recognition 22.3 (1989): 283-297.
- S2CID 5502662.
- ^ Blasiak, S.; Rangwala, H. (2011). "A Hidden Markov Model Variant for Sequence Classification". IJCAI Proceedings-International Joint Conference on Artificial Intelligence. 22: 1192.
- S2CID 8116065.
- PMID 23814189.
- S2CID 96448257.
- S2CID 84841635.
- ^ "ChromHMM: Chromatin state discovery and characterization". compbio.mit.edu. Retrieved 2018-08-01.
- arXiv:1707.09133 [stat.AP].
- .
- S2CID 125867684.
- S2CID 125538244.
- .
- Zbl 0157.11101.
- .
- Zbl 0188.49603.
- ^ Baum, L.E. (1972). "An Inequality and Associated Maximization Technique in Statistical Estimation of Probabilistic Functions of a Markov Process". Inequalities. 3: 1–8.
- .
- .
- ISBN 978-0-7486-0162-2.
- ISBN 978-0-13-022616-7.
- OCLC 593254083.
- ISSN 1573-1375.
- ISSN 0018-9448.
- ^ Beal, Matthew J., Zoubin Ghahramani, and Carl Edward Rasmussen. "The infinite hidden Markov model." Advances in neural information processing systems 14 (2002): 577-584.
- ^ Teh, Yee Whye, et al. "Hierarchical dirichlet processes." Journal of the American Statistical Association 101.476 (2006).
- .
- .
- .
- ^ Boudaren et al. Archived 2014-03-11 at the Wayback Machine, M. Y. Boudaren, E. Monfrini, W. Pieczynski, and A. Aissani, Dempster-Shafer fusion of multisensor signals in nonstationary Markovian context, EURASIP Journal on Advances in Signal Processing, No. 134, 2012.
- ^ Lanchantin et al., P. Lanchantin and W. Pieczynski, Unsupervised restoration of hidden non stationary Markov chain using evidential priors, IEEE Transactions on Signal Processing, Vol. 53, No. 8, pp. 3091-3098, 2005.
- ^ Boudaren et al., M. Y. Boudaren, E. Monfrini, and W. Pieczynski, Unsupervised segmentation of random discrete data hidden with switching noise distributions, IEEE Signal Processing Letters, Vol. 19, No. 10, pp. 619-622, October 2012.
- ^ Sotirios P. Chatzis, Dimitrios Kosmopoulos, "Visual Workflow Recognition Using a Variational Bayesian Treatment of Multistream Fused Hidden Markov Models," IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 7, pp. 1076-1086, July 2012.
- hdl:10044/1/12611.
- ^ M. Lukosevicius, H. Jaeger (2009) Reservoir computing approaches to recurrent neural network training, Computer Science Review 3: 127–149.
- ^ Azeraf, E., Monfrini, E., & Pieczynski, W. (2023). Equivalence between LC-CRF and HMM, and Discriminative Computing of HMM-Based MPM and MAP. Algorithms, 16(3), 173.
- ^ Azeraf, E., Monfrini, E., Vignon, E., & Pieczynski, W. (2020). Hidden markov chains, entropic forward-backward, and part-of-speech tagging. arXiv preprint arXiv:2005.10629.
- ^ Azeraf, E., Monfrini, E., & Pieczynski, W. (2022). Deriving discriminative classifiers from generative models. arXiv preprint arXiv:2201.00844.
- ^ Ng, A., & Jordan, M. (2001). On discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes. Advances in neural information processing systems, 14.
- ^ Wiggins, L. M. (1973). Panel Analysis: Latent Probability Models for Attitude and Behaviour Processes. Amsterdam: Elsevier.
- ISBN 978-14-3981-708-7.
- ^ Sofic Measures: Characterizations of Hidden Markov Chains by Linear Algebra, Formal Languages, and Symbolic Dynamics - Karl Petersen, Mathematics 210, Spring 2006, University of North Carolina at Chapel Hill
- ^ arXiv:0907.1858
External links
Concepts
- Teif, V. B.; Rippe, K. (2010). "Statistical–mechanical lattice models for protein–DNA binding in chromatin". J. Phys.: Condens. Matter. 22 (41): 414105. S2CID 103345.
- A Revealing Introduction to Hidden Markov Models by Mark Stamp, San Jose State University.
- Fitting HMM's with expectation-maximization – complete derivation
- A step-by-step tutorial on HMMs Archived 2017-08-13 at the Wayback Machine (University of Leeds)
- Hidden Markov Models (an exposition using basic mathematics)
- Hidden Markov Models (by Narada Warakagoda)
- Hidden Markov Models: Fundamentals and Applications Part 1, Part 2 (by V. Petrushin)
- Lecture on a Spreadsheet by Jason Eisner, Video and interactive spreadsheet