
Hidden Markov model

A hidden Markov model (HMM) is a statistical model in which the system being modelled is assumed to be a Markov process with unknown parameters, and the challenge is to determine the hidden parameters from the observable outputs, based on this assumption. The extracted model parameters can then be used to perform further analysis, for example in pattern recognition applications.

The notions of observable and hidden are similar to Plato's notions of shadows and forms in the allegory of the cave. The allegory claims that perceived reality is but the shadow cast into the world of experience by a true reality which is inaccessible to direct sensory experience. 'Forms' in the true reality contain the essence of a class of object, which can be experienced only incompletely in perceived reality. This analogy is particularly strong when modelling parts of speech and sentences, and other entities which have a strongly defined semantic meaning independent of the myriad possible representations in the observable sequence.

In a regular Markov model, the state is directly visible to the observer, and therefore the state transition probabilities are the only parameters. A hidden Markov model adds outputs: each state has a probability distribution over the possible output tokens. Therefore, looking at a sequence of tokens generated by an HMM does not directly indicate the sequence of states.
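This generative view can be sketched in a few lines of Python. The two-state "weather" model below, its state and token names, and all of its probabilities are illustrative assumptions, not part of this article:

```python
import random

# Hypothetical two-state HMM: the hidden state is the weather, and the
# observed token is an activity influenced by the weather.
start_prob = {"Rainy": 0.6, "Sunny": 0.4}
trans_prob = {                 # state transition probabilities
    "Rainy": {"Rainy": 0.7, "Sunny": 0.3},
    "Sunny": {"Rainy": 0.4, "Sunny": 0.6},
}
emit_prob = {                  # output (emission) probabilities per state
    "Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
    "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1},
}

def sample(dist):
    """Draw one outcome from a {outcome: probability} dict."""
    r = random.random()
    total = 0.0
    for outcome, p in dist.items():
        total += p
        if r < total:
            return outcome
    return outcome  # guard against floating-point round-off

def generate(n):
    """Emit n output tokens; the state sequence itself stays hidden."""
    state = sample(start_prob)
    tokens = []
    for _ in range(n):
        tokens.append(sample(emit_prob[state]))
        state = sample(trans_prob[state])
    return tokens
```

An observer of `generate(n)` sees only the tokens, never the sequence of states that produced them, which is exactly the situation the model describes.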

Table of contents
1 Example (H)MM
2 Using Markov Models
3 Applications of hidden Markov models
4 See also
5 External links

Example (H)MM

x - States of the Markov model (hidden in an HMM)
a - Transition probabilities
b - Output probabilities
y - Output tokens
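The legend above can be made concrete with a small instance. All names and numbers here are illustrative assumptions; the only requirement is that each row of a and b is a probability distribution:

```python
# A hypothetical instance of the quantities x, a, b, y from the legend.
x = ["state1", "state2"]            # x: hidden states
a = [[0.7, 0.3],                    # a: transition probabilities,
     [0.4, 0.6]]                    #    a[i][j] = P(next state j | state i)
y = ["tokenA", "tokenB", "tokenC"]  # y: output tokens
b = [[0.1, 0.4, 0.5],               # b: output probabilities,
     [0.6, 0.3, 0.1]]               #    b[i][k] = P(token k | state i)

# Every row of a and b must sum to 1.
for row in a + b:
    assert abs(sum(row) - 1.0) < 1e-9
```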

Using Markov Models

There are three canonical problems to solve with HMMs:

1. Given the parameters of the model, compute the probability of a particular output sequence. This problem is solved by the forward algorithm.
2. Given the parameters of the model, find the most likely sequence of hidden states that could have generated a given output sequence. This problem is solved by the Viterbi algorithm.
3. Given an output sequence or a set of such sequences, find the most likely set of state transition and output probabilities. This problem is solved by the Baum-Welch algorithm.
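One of these problems, decoding, asks for the most likely sequence of hidden states behind an observed token sequence; the standard solution is the Viterbi algorithm. Below is a minimal Python sketch. The two-state "weather" parameters are a common textbook illustration, not taken from this article:

```python
def viterbi(tokens, states, start_p, trans_p, emit_p):
    """Return the most likely hidden state sequence for observed tokens."""
    # best[t][s]: probability of the best path ending in state s at time t
    best = [{s: start_p[s] * emit_p[s][tokens[0]] for s in states}]
    back = [{}]  # back[t][s]: predecessor of s on that best path
    for t in range(1, len(tokens)):
        best.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (best[t - 1][r] * trans_p[r][s] * emit_p[s][tokens[t]], r)
                for r in states)
            best[t][s] = prob
            back[t][s] = prev
    # Trace back from the most probable final state.
    state = max(states, key=lambda s: best[-1][s])
    path = [state]
    for t in range(len(tokens) - 1, 0, -1):
        state = back[t][state]
        path.append(state)
    return list(reversed(path))

# Hypothetical parameters for a two-state weather model.
states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
```

For the observation sequence `["walk", "shop", "clean"]` this model decodes to `["Sunny", "Rainy", "Rainy"]`: a walk is far more likely on a sunny day, while shopping and cleaning point towards rain.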

Applications of hidden Markov models

Hidden Markov models are applied in speech recognition, part-of-speech tagging, optical character recognition, and bioinformatics, for example in gene prediction and protein sequence analysis.

See also

External links