Probabilistic Graphical Models 2: Hidden Markov Models (2)

Source: Internet
Author: User

In the previous section, we introduced probability calculation for hidden Markov models; in this section we turn to the learning problem. Before introducing the learning problem itself, let us implement several of the important concepts in Python.
Note that the following code is based on the original formulas in Li Hang's Statistical Learning Methods. The formulas here do not take logarithms, so if you generate a sequence of more than about 300 observations you will see a noticeable numerical underflow problem: the forward and backward probabilities are products of many values below 1 and shrink toward zero. The code below is therefore a toy. We give this code first, and then give the log-space version.
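To see why the non-log version breaks down, note that the probability of a length-T sequence is a product of T factors, each typically well below 1, so it shrinks geometrically in T. A minimal illustration (the numbers below are ours, chosen just to show the effect):

```python
import numpy as np

# A product of many probabilities below 1 shrinks geometrically and
# eventually underflows float64 (even subnormals bottom out near 5e-324).
p = np.full(2000, 0.5)
print(np.prod(p))         # 0.5**2000 underflows to 0.0

# In log space the product becomes a sum, which stays representable.
print(np.sum(np.log(p)))  # 2000 * log(0.5), about -1386.29
```

This is exactly the trade the log-space version makes: products of probabilities become sums of log-probabilities.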

#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@author: Xiangguosun
@contact: sunxiangguodut@qq.com
@file: hmm.py
@time: 2017/3/27 12:40
@software: PyCharm
"""
import numpy as np


def simulate(model, T):
    """Generate an observation sequence of length T from model = (A, B, pi)."""
    A, B, pi = model

    def draw_from(probs):
        # draw one index according to the probability vector probs
        return np.where(np.random.multinomial(1, probs) == 1)[0][0]

    observations = np.zeros(T, dtype=int)
    states = np.zeros(T, dtype=int)
    states[0] = draw_from(pi)
    observations[0] = draw_from(B[states[0], :])
    for t in range(1, T):
        states[t] = draw_from(A[states[t - 1], :])
        observations[t] = draw_from(B[states[t], :])
    # also return the set of distinct states that occurred; its size is
    # used downstream as the number of hidden states
    return observations, np.unique(states)


def forward_prob(model, observe):
    """HMM forward algorithm."""
    A, B, pi = model
    T = observe.size
    alpha = pi * B[:, observe[0]]                  # (1) initial value alpha_1(i)
    alpha_all = np.copy(alpha).reshape((1, -1))
    for t in range(0, T - 1):                      # (2) recursion
        alpha = alpha.dot(A) * B[:, observe[t + 1]]
        alpha_all = np.append(alpha_all, alpha.reshape((1, -1)), 0)
    return alpha.sum(), alpha_all                  # (3) termination: P(O | model)


def backward_prob(model, observe, states):
    """HMM backward algorithm."""
    A, B, pi = model
    M = states.size
    T = observe.size
    beta = np.ones((M,))                           # (1) initial value beta_T(i) = 1
    beta_all = np.copy(beta).reshape((1, -1))
    for t in range(T - 2, -1, -1):                 # (2) recursion: t = T-2, ..., 0
        beta = A.dot(B[:, observe[t + 1]] * beta)
        beta_all = np.append(beta_all, beta.reshape((1, -1)), 0)
    beta_all = beta_all[::-1, :]                   # restore time order
    prob = pi.dot(beta * B[:, observe[0]])         # (3) termination
    return prob, beta_all


def getpar(model, observe, states):
    """Compute gamma, xi and the expectations needed by Baum-Welch."""
    A, B, pi = model
    T = observe.size
    prob_1, alpha_all = forward_prob(model, observe)
    prob_2, beta_all = backward_prob(model, observe, states)

    # gamma matrix: rows are times t, columns are states i
    denominator = (alpha_all * beta_all).sum(1)
    gamma = alpha_all * beta_all / denominator.reshape((-1, 1))

    # xi matrix: axis 0 is time t, axes 1 and 2 are the state pair (i, j)
    item_t = []
    for t in range(0, T - 1):
        item_t.append((alpha_all[t] * A.T).T
                      * (B[:, observe[t + 1]] * beta_all[t + 1]))
    proout_t = np.array(item_t)
    denomin = proout_t.sum(1).sum(1)
    xi = proout_t / denomin.reshape((-1, 1, 1))

    # expectations used by the re-estimation formulas
    itga = gamma.sum(0)              # sum over t of gamma_t(i)
    it_1ga = gamma[0:-1, :].sum(0)   # sum over t = 1..T-1 of gamma_t(i)
    ijt_1xi = xi.sum(0)              # sum over t = 1..T-1 of xi_t(i, j)
    return gamma, xi, itga, it_1ga, ijt_1xi


def baum_welch(observe, states, modell, epsion=0.001):
    """Baum-Welch (EM) re-estimation, iterated until the change is < epsion."""
    model = modell
    A, B, pi = model
    M = B.shape[1]                   # number of observation symbols
    done = False
    while not done:
        gamma, xi, itga, it_1ga, ijt_1xi = getpar(model, observe, states)
        # a_ij = sum_t xi_t(i, j) / sum_t gamma_t(i): normalize each row i
        new_a = ijt_1xi / it_1ga.reshape((-1, 1))
        bb = []
        for k in range(0, M):
            indicator = np.where(k == observe, 1, 0)
            gamma_temp = (gamma.T * indicator).T
            bb.append(gamma_temp.sum(0) / itga)
        new_b = np.array(bb).T
        new_pi = gamma[0]
        if np.max(abs(new_a - A)) < epsion and \
           np.max(abs(new_b - B)) < epsion and \
           np.max(abs(new_pi - pi)) < epsion:
            done = True
        A, B, pi = new_a, new_b, new_pi
        model = (A, B, pi)
    return model


if __name__ == '__main__':
    # the true model parameters:
    A = np.array([[0.5, 0.2, 0.3],
                  [0.3, 0.5, 0.2],
                  [0.2, 0.3, 0.5]])
    B = np.array([[0.5, 0.5],
                  [0.4, 0.6],
                  [0.7, 0.3]])
    pi = np.array([0.2, 0.4, 0.4])
    model = (A, B, pi)

    # generate test data from the true model
    observe, states = simulate(model, 100)
    print("observe\n", observe)
    print("states\n", states)

    # initial guess for the model
    ini_a = np.array([[0.3, 0.3, 0.3],
                      [0.3, 0.3, 0.3],
                      [0.3, 0.3, 0.3]])
    ini_b = np.array([[0.5, 0.5],
                      [0.5, 0.5],
                      [0.5, 0.5]])
    ini_pi = np.array([0.3, 0.3, 0.3])
    ini_model = (ini_a, ini_b, ini_pi)

    model = baum_welch(observe, states, ini_model, 0.001)
    print("A:")
    print(model[0])
    print("B:")
    print(model[1])
    print("PI:")
    print(model[2])
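As promised above, the cure for the underflow is to run the forward recursion in log space, using the log-sum-exp trick for the sum over previous states. The sketch below is our own illustration (the function name log_forward is ours) and assumes all entries of A, B and pi are strictly positive; it uses the same three-state parameters as the main program in this article:

```python
import numpy as np

def log_forward(model, observe):
    """Forward algorithm in log space (illustrative sketch).

    Assumes all entries of A, B and pi are strictly positive.
    """
    A, B, pi = model
    log_alpha = np.log(pi) + np.log(B[:, observe[0]])
    for t in range(1, observe.size):
        # log-sum-exp over previous states: shift by the max for stability
        m = log_alpha.max()
        log_alpha = m + np.log(np.exp(log_alpha - m).dot(A)) \
                      + np.log(B[:, observe[t]])
    m = log_alpha.max()
    return m + np.log(np.exp(log_alpha - m).sum())

# Same parameters as this article's main program; for the observation
# sequence (0, 1, 0) the sequence probability is 0.130218.
A = np.array([[0.5, 0.2, 0.3], [0.3, 0.5, 0.2], [0.2, 0.3, 0.5]])
B = np.array([[0.5, 0.5], [0.4, 0.6], [0.7, 0.3]])
pi = np.array([0.2, 0.4, 0.4])
print(np.exp(log_forward((A, B, pi), np.array([0, 1, 0]))))  # 0.130218
```

For short sequences the exponentiated result matches the plain forward algorithm exactly; for long sequences only the log-space version stays finite.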


From the experimental results, the estimate of the A matrix is the most accurate, the B matrix is slightly worse, and pi is the least accurate. For hidden Markov parameter estimation, the unsupervised Baum-Welch method is not especially good on its own; more sample data is required.
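The poor estimate of pi has a structural cause: Baum-Welch re-estimates pi as gamma_1(i), the posterior over the state at the first time step, and a single training sequence contributes exactly one such posterior, so pi is effectively fit to one data point. With several independent training sequences, a common remedy is to average gamma_1 across sequences. A minimal sketch (the gamma_1 values below are made up for illustration, not computed from the experiment):

```python
import numpy as np

# One row per training sequence: the posterior over the initial state,
# gamma_1(i), that Baum-Welch would compute for that sequence.
# These numbers are illustrative only.
gamma_1 = np.array([[0.9, 0.05, 0.05],
                    [0.1, 0.70, 0.20],
                    [0.2, 0.20, 0.60]])

# Multiple-sequence re-estimate of pi: average the per-sequence posteriors.
pi_hat = gamma_1.mean(axis=0)
print(pi_hat)  # [0.4, 0.31666667, 0.28333333], still sums to 1
```

With R sequences, pi is estimated from R initial time steps instead of one, which is why more (and more varied) sample data improves it the most.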
