[Algorithm learning] Hidden Markov Model Learning (translated from wiki)


Author: gnuhpc
Source: http://www.cnblogs.com/gnuhpc/

Start with the simplest case, a discrete Markov process. A Markov process has the following property: at any time, the probability of moving from the current state to the next state is independent of the states visited before the current one. This lets us describe the process with a state transition probability matrix. Suppose there are N discrete states S1, S2, ..., SN; we can construct an N×N matrix A whose element a_ij is the probability of moving from state Si at the current time to state Sj at the next time.
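As a minimal sketch of this idea, the transition matrix of a hypothetical 3-state chain (the states and probabilities below are made up for illustration) can be stored as a row-stochastic matrix, and a state distribution propagated forward by repeated matrix-vector products:

```python
import numpy as np

# Hypothetical 3-state chain: 0 = sunny, 1 = cloudy, 2 = rainy.
# A[i][j] = probability of moving from state i to state j; each row sums to 1.
A = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.5, 0.3],
])

def step_distribution(p, A, n=1):
    """Propagate a state distribution p forward n steps: p @ A^n."""
    for _ in range(n):
        p = p @ A
    return p

p0 = np.array([1.0, 0.0, 0.0])   # start in state 0 with certainty
p1 = step_distribution(p0, A)    # distribution after one step: row 0 of A
p2 = step_distribution(p0, A, 2) # distribution after two steps
```

Because the next-state distribution depends only on the current state, one matrix multiplication per step is all the model needs.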
In many cases, however, the states of the Markov model cannot be observed directly. Consider the urn-and-ball model: there are several urns, each containing coloured balls in known proportions (so once an urn is chosen, the probability of drawing each colour is known). In the experiment, the experimenter repeatedly selects an urn, draws a ball from it, and shows only the ball's colour to the observer. The colours drawn form an observable sequence O1, O2, ..., while the urn chosen at each step is hidden from the observer; the urn choices form a hidden state sequence S1, S2, ..., Sn. This is a classic experiment that an HMM describes.
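A small generative sketch of this experiment, using a hypothetical 2-urn, 2-colour model (the probabilities pi, A, and B below are invented for illustration), shows how the hidden urn sequence and the visible colour sequence are produced together:

```python
import random

random.seed(0)  # deterministic for the example

# Hypothetical model. Hidden states = urns; observations = ball colours.
pi = [0.6, 0.4]                   # initial urn probabilities
A  = [[0.7, 0.3], [0.4, 0.6]]     # A[i][j]: P(next urn j | current urn i)
B  = [[0.9, 0.1], [0.2, 0.8]]     # B[s][o]: P(colour o | urn s)

def sample_hmm(T):
    """Generate T (hidden urn, observed colour) pairs from the model."""
    states, obs = [], []
    s = random.choices([0, 1], weights=pi)[0]
    for _ in range(T):
        states.append(s)                                  # hidden from observer
        obs.append(random.choices([0, 1], weights=B[s])[0])  # shown to observer
        s = random.choices([0, 1], weights=A[s])[0]
    return states, obs

states, obs = sample_hmm(5)
```

The observer sees only `obs`; recovering something about `states` from it is exactly the inference problem the next paragraph describes.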
An HMM has several canonical tasks. One is to infer the most likely hidden state sequence from an observation sequence, under some chosen optimality criterion; in the example above, this means finding the urn sequence most likely to have produced the observed colours. The Viterbi algorithm solves this problem. The other two tasks are: a) given an HMM, compute the likelihood of an observation sequence; b) given an observation sequence but unknown HMM parameters, adjust the parameters to maximize the probability of that sequence. Task (a) is more complex than the corresponding computation for a plain Markov chain because the states are hidden; it is solved with the forward algorithm, whose structure is very similar to Viterbi's. It is also how one chooses between two candidate HMMs: prefer the model that assigns the observation sequence the higher likelihood. Task (b) is solved by iterative approximation with the Baum-Welch (EM) algorithm.
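The structural similarity between the forward algorithm and Viterbi can be seen side by side: both fill in a per-state table left to right, the forward pass summing over predecessor states and Viterbi taking the max. A sketch using the same hypothetical 2-urn, 2-colour parameters as above (all values invented for illustration; Baum-Welch is omitted for brevity):

```python
# Hypothetical urn/ball model: pi = initial, A = transition, B = emission.
pi = [0.6, 0.4]
A  = [[0.7, 0.3], [0.4, 0.6]]
B  = [[0.9, 0.1], [0.2, 0.8]]
N  = 2  # number of hidden states

def forward(obs):
    """Task (a): P(obs | model), summing over all hidden paths."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(N)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(N)) * B[j][o]
                 for j in range(N)]
    return sum(alpha)

def viterbi(obs):
    """Most probable hidden state sequence: same recursion, max instead of sum."""
    delta = [pi[s] * B[s][obs[0]] for s in range(N)]
    back = []
    for o in obs[1:]:
        new_delta, ptr = [], []
        for j in range(N):
            best_i = max(range(N), key=lambda i: delta[i] * A[i][j])
            ptr.append(best_i)
            new_delta.append(delta[best_i] * A[best_i][j] * B[j][o])
        delta = new_delta
        back.append(ptr)
    # Backtrack from the best final state.
    path = [max(range(N), key=lambda s: delta[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

Comparing `forward` across two models on the same observation sequence is how one would pick the better of two HMMs; replacing the `sum` with a `max` (plus backpointers) turns the likelihood computation into decoding.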

