Hidden Markov model (HMM)

Source: Internet
Author: User

About HMM

An HMM is used to model a stochastic generative process. It is a time-homogeneous model: the hidden state evolves as a Markov chain whose transition probabilities do not change over time. An n-th-order HMM is one in which the next state depends only on the previous n states; in practice, first-order HMMs are used almost exclusively. The hidden parameters of the process are inferred from the observable outputs, and the results are then used for further analysis, such as pattern recognition.

The following example illustrates the HMM model.

Suppose there are three kinds of dice: a cube with faces 1–6, a tetrahedron with faces 1–4, and an octahedron with faces 1–8, denoted D6, D4, and D8. Assume we pick one of the three dice at random, each with probability 1/3, and then roll it; repeating this process, suppose one run of rolls produces 1 3 6 5 2 7 4 8 1. This sequence of numbers is called the visible state chain, while the sequence of dice used is the hidden state chain; the "hidden" in HMM refers to this hidden chain. Between hidden states there are transition probabilities. Here the hidden chain might be D4 D6 D6 D8 D4 D8 D6 D8 D4, or any of many other possible chains. There are no transition probabilities between the visible states themselves; instead, there are emission probabilities from each hidden state to the visible states — for example, D4 emits each of 1–4 with probability 1/4. Both the transition and emission probabilities can, of course, be customized.
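The generative process above can be sketched in a few lines. This is a Python sketch rather than MATLAB (which the sample program below uses); the names `DICE` and `roll_sequence` are illustrative, and the transition rule is the uniform 1/3 pick described above.

```python
import random

# Three hidden states (the dice) and the number of faces each emits uniformly.
DICE = {"D4": 4, "D6": 6, "D8": 8}

def roll_sequence(n, rng=None):
    """Sample a hidden state chain (which die) and a visible chain (the rolls)."""
    rng = rng or random.Random(0)  # fixed seed so the sketch is reproducible
    hidden, visible = [], []
    for _ in range(n):
        die = rng.choice(list(DICE))               # pick a die with probability 1/3
        hidden.append(die)
        visible.append(rng.randint(1, DICE[die]))  # emit one face uniformly
    return hidden, visible

hidden, visible = roll_sequence(9)
```

Each visible number is always consistent with the die that emitted it (a D4 can never show a 6), which is exactly the constraint that makes the hidden chain partially recoverable from the rolls.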

When applying an HMM, part of the information is usually missing: you may know how many dice there are and what each die is, but not the sequence of dice used; or you may only see the results of many rolls and know nothing else. The algorithms associated with HMMs fall into three categories, solving three kinds of problems:

Knowing how many dice there are (the number of hidden states), what each die is (the emission probabilities), and the rolled results (the visible state chain), find which die was used for each roll (the hidden state chain). This is the decoding problem, solved with the Viterbi algorithm.
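A minimal Viterbi sketch for this first problem, in Python with toy two-state parameters (all values here are illustrative, not taken from the dice example):

```python
import math

def viterbi(obs, pi, A, B):
    """Most likely hidden state chain for observations obs, given initial
    probabilities pi, transition matrix A, and emission matrix B."""
    n_states = len(pi)
    # delta[s] = log-probability of the best path ending in state s.
    delta = [math.log(pi[s]) + math.log(B[s][obs[0]]) for s in range(n_states)]
    back = []
    for o in obs[1:]:
        prev, delta, ptr = delta, [], []
        for s in range(n_states):
            best = max(range(n_states), key=lambda r: prev[r] + math.log(A[r][s]))
            delta.append(prev[best] + math.log(A[best][s]) + math.log(B[s][o]))
            ptr.append(best)
        back.append(ptr)
    # Backtrack from the best final state.
    path = [max(range(n_states), key=lambda s: delta[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

With sticky transitions (`A = [[0.9, 0.1], [0.1, 0.9]]`) and strongly indicative emissions, observing `[0, 0, 1, 1]` recovers the hidden chain `[0, 0, 1, 1]`: one cheap switch beats two unlikely emissions.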

Knowing how many dice there are (the number of hidden states), what each die is (the emission probabilities), and the rolled results (the visible state chain), find the probability of observing exactly that result. This is the evaluation problem, solved with the forward algorithm.
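The observation probability is a sum over all possible hidden chains, which the forward algorithm computes by dynamic programming instead of brute-force enumeration. A Python sketch with illustrative parameter names:

```python
def forward_likelihood(obs, pi, A, B):
    """P(obs) under an HMM with initial probabilities pi, transition
    matrix A, and emission matrix B."""
    n = len(pi)
    # alpha[s] = P(observations so far, current hidden state = s)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[r] * A[r][s] for r in range(n)) * B[s][o]
                 for s in range(n)]
    return sum(alpha)
```

As a sanity check, a single hidden state emitting two symbols with probability 1/2 each assigns any length-3 sequence probability (1/2)^3 = 0.125.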

Knowing how many dice there are (the number of hidden states) but not what each die is, and observing the results of many rolls (the visible state chain), infer the transition and emission probabilities of each die. This is the learning problem, solved with the Baum-Welch algorithm.
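For the learning problem, one Baum-Welch (EM) re-estimation step can be sketched as follows — a forward pass, a backward pass, then re-normalized expected counts. This is a compact Python sketch (the initial probabilities are kept fixed for brevity; a full implementation would also re-estimate them and iterate to convergence):

```python
def baum_welch_step(obs, pi, A, B):
    """One EM re-estimation step; returns updated (A, B)."""
    N, T = len(pi), len(obs)
    # Forward pass: alpha[t][s] = P(obs[0..t], state_t = s)
    alpha = [[0.0] * N for _ in range(T)]
    for s in range(N):
        alpha[0][s] = pi[s] * B[s][obs[0]]
    for t in range(1, T):
        for s in range(N):
            alpha[t][s] = sum(alpha[t-1][r] * A[r][s] for r in range(N)) * B[s][obs[t]]
    # Backward pass: beta[t][s] = P(obs[t+1..] | state_t = s)
    beta = [[0.0] * N for _ in range(T)]
    for s in range(N):
        beta[T-1][s] = 1.0
    for t in range(T - 2, -1, -1):
        for s in range(N):
            beta[t][s] = sum(A[s][r] * B[r][obs[t+1]] * beta[t+1][r] for r in range(N))
    total = sum(alpha[T-1][s] for s in range(N))
    # Expected transition counts -> re-normalized A.
    newA = [[sum(alpha[t][r] * A[r][s] * B[s][obs[t+1]] * beta[t+1][s]
                 for t in range(T - 1)) for s in range(N)] for r in range(N)]
    newA = [[x / sum(row) for x in row] for row in newA]
    # Expected emission counts -> re-normalized B.
    gamma = [[alpha[t][s] * beta[t][s] / total for s in range(N)] for t in range(T)]
    M = len(B[0])
    newB = [[sum(gamma[t][s] for t in range(T) if obs[t] == o) for o in range(M)]
            for s in range(N)]
    newB = [[x / sum(row) for x in row] for row in newB]
    return newA, newB
```

Each step is guaranteed not to decrease the likelihood of the observations; the MATLAB `hmmtrain` call in the sample program below iterates exactly this kind of update from an initial guess.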

MATLAB Sample Program
%% Hidden Markov model
%  2015-9-16, zzw
%% Problem background
%  Consider two dice and two coins: a red die, a green die, a red coin, and a green coin.
%  The red die is fair: each of the numbers 1~6 appears with probability 1/6.
%  The green die has 12 faces: 1 appears on 7 of them; the remaining 5 faces are labeled 2~6.
%  The red coin lands heads with probability 0.9 and tails with probability 0.1.
%  The green coin lands heads with probability 0.95 and tails with probability 0.05.
%% Game rule
%  Throw the red die and write down the number.
%  Toss the red coin: if heads, keep throwing the red die; if tails, switch to the green die.
%  Continue the process above.
%% Generate data
%  State transition probability matrix; row 1 is the red state, row 2 the green state.
T = [0.9 0.1; 0.05 0.95];
%  Emission probability of each number; row 1 is the red die, row 2 the green die.
E = [1/6 1/6 1/6 1/6 1/6 1/6; 7/12 1/12 1/12 1/12 1/12 1/12];
%  Randomly generate an observation sequence and the corresponding state sequence.
[seq, states] = hmmgenerate(1000, T, E);
%% Use the Viterbi algorithm to recover the state sequence
likelystates = hmmviterbi(seq, T, E);
%  Fraction of states recovered correctly.
rate = sum(states == likelystates) / 1000;
%% Estimate the transition and emission probabilities from the sequence and states
[T_est, E_est] = hmmestimate(seq, states);
%% If the states are unknown, T and E can be estimated from seq alone, given initial guesses
T_guess = [0.85 0.15; 0.1 0.9];
E_guess = [0.17 0.16 0.17 0.16 0.17 0.17; 0.6 0.08 0.08 0.08 0.08 0.08];
[T_est2, E_est2] = hmmtrain(seq, T_guess, E_guess);
%% Posterior state probabilities given the observation sequence
pstates = hmmdecode(seq, T, E);

