The Viterbi Algorithm in Hidden Markov Models (Dynamic Programming)


This article briefly describes the Viterbi algorithm, a name I first heard about a year ago but did not get around to studying until two weeks ago; this is a short review. In one sentence: given an observation sequence O1, O2, O3, ..., we want to find the hidden state sequence S1, S2, S3, ... behind the observations. The Viterbi algorithm, named after its inventor, uses dynamic programming to find the hidden state sequence with the highest probability (called the Viterbi path).

To explain observation sequences and hidden state sequences, let us borrow the textbook account of the hidden Markov model (HMM).

Start with the simplest case, a discrete Markov process. A Markov process has the following property: at any time, the probability of transitioning from the current state to the next state is independent of all states before the current one. It can therefore be described by a state transition probability matrix: given n discrete states S1, S2, ..., Sn, we construct an n x n matrix A whose element a_ij is the probability of moving from state Si to state Sj at the next time step.
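As a minimal sketch (a hypothetical two-state chain with made-up numbers, purely for illustration), such a matrix can be stored as a nested dictionary in Python:

# Hypothetical two-state Markov chain; each row sums to 1.
A = {
    'S1': {'S1': 0.9, 'S2': 0.1},   # a_11, a_12
    'S2': {'S1': 0.5, 'S2': 0.5},   # a_21, a_22
}

# Markov property: the probability of the path S1 -> S1 -> S2,
# starting from S1, factors into single-step transitions.
p = A['S1']['S1'] * A['S1']['S2']   # 0.9 * 0.1 = 0.09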

In many cases, however, the states of the Markov model cannot be observed directly. Consider the container-and-colored-ball model: there are several containers, each holding colored balls in known proportions (so once a container is chosen, we can predict the probability of drawing each color). The experimenter draws balls from the containers: each time, a container is selected, a ball is drawn from it, and only the ball's color is shown to the observer. The colors drawn form the observable sequence O1, O2, ..., while the containers selected, hidden from the observer, form the hidden state sequence S1, S2, ..., Sn. This is a typical experiment that can be described by an HMM.
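To make the generative process concrete, here is a small sketch of sampling from such a model; the container transition probabilities and color proportions below are made up for illustration:

import random

# Hypothetical parameters for the container-and-ball experiment.
containers = ['C1', 'C2']
next_container = {'C1': {'C1': 0.8, 'C2': 0.2},    # hidden state transitions
                  'C2': {'C1': 0.3, 'C2': 0.7}}
color_p = {'C1': {'red': 0.9, 'blue': 0.1},        # ball proportions per container
           'C2': {'red': 0.2, 'blue': 0.8}}

state = 'C1'                                       # assume we start at C1
hidden, observed = [], []
for _ in range(5):
    hidden.append(state)                           # invisible to the observer
    color, = random.choices(list(color_p[state]),
                            weights=list(color_p[state].values()))
    observed.append(color)                         # the only thing the observer sees
    state, = random.choices(containers,
                            weights=[next_container[state][c] for c in containers])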

An HMM comes with several canonical problems; one is to find the most likely hidden state sequence given an observation sequence — in the example above, the sequence of containers most likely to have been selected during the experiment. Viterbi is the algorithm for this problem. The other two problems are: (a) given an HMM, compute the probability of an observed sequence; and (b) given an observed sequence but unknown HMM parameters, tune the parameters to maximize the probability of that observed sequence. The former is solved by the forward algorithm, whose structure is similar to Viterbi's (in the code below the two are in fact merged into one function); the latter can be approximated iteratively with the Baum-Welch/EM algorithm.

Let us borrow an example from Wikipedia to illustrate the Viterbi algorithm.

Suppose you have a friend abroad, and you call every day to learn what he did that day. Each day he does exactly one of three activities: walk, shop, or clean. The probability of each activity depends on the local weather that day, and we consider only two kinds of weather: rainy and sunny. The relationship between weather and activity is as follows:

          Rainy   Sunny
Walk       0.1     0.6
Shop       0.4     0.3
Clean      0.5     0.1

For example, the probability that your friend goes out for a walk on a rainy day is 0.1.
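Written as a Python nested dictionary (in the form expected by the code later in this article), this emission table is:

emit_p = {
    'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
    'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1},
}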

The weather-to-weather transition probabilities are as follows (read from row state to column state):

          Rainy   Sunny
Rainy      0.7     0.3
Sunny      0.4     0.6

For example, if today is sunny, the probability that tomorrow will be rainy is 0.4.
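The corresponding transition dictionary is:

trans_p = {
    'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
    'Sunny': {'Rainy': 0.4, 'Sunny': 0.6},
}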

To complete the model, we assume that on the first day the weather is rainy with probability 0.6 and sunny with probability 0.4. Now the problem: given that your friend's activities over three consecutive days were walk -> shop -> clean, what was the most likely weather on each of those days?
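The remaining model inputs, in the same form, are the state set, the initial distribution, and the observation sequence:

states = ('Rainy', 'Sunny')
start_p = {'Rainy': 0.6, 'Sunny': 0.4}
obs = ('walk', 'shop', 'clean')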

The Python code to solve this problem is as follows:

def forward_viterbi(obs, states, start_p, trans_p, emit_p):
    # T maps each state to a triple:
    # (total prob. of all paths, Viterbi path, prob. of the Viterbi path)
    T = {}
    for state in states:
        T[state] = (start_p[state], [state], start_p[state])
    for output in obs:
        U = {}
        for next_state in states:
            total = 0
            argmax = None
            valmax = 0
            for source_state in states:
                (prob, v_path, v_prob) = T[source_state]
                # joint prob. of emitting `output` in source_state
                # and then moving to next_state
                p = emit_p[source_state][output] * trans_p[source_state][next_state]
                prob *= p
                v_prob *= p
                total += prob
                if v_prob > valmax:
                    argmax = v_path + [next_state]
                    valmax = v_prob
            U[next_state] = (total, argmax, valmax)
        T = U
    # apply sum/max to the final states:
    total = 0
    argmax = None
    valmax = 0
    for state in states:
        (prob, v_path, v_prob) = T[state]
        total += prob
        if v_prob > valmax:
            argmax = v_path
            valmax = v_prob
    return (total, argmax, valmax)
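With the model parameters defined earlier, the function can be run as follows; the returned values match the trace shown in note 6 below (up to floating-point rounding):

total, argmax, valmax = forward_viterbi(obs, states, start_p, trans_p, emit_p)
print(total)   # ~0.033612 -- forward probability of walk -> shop -> clean
print(argmax)  # ['Sunny', 'Rainy', 'Rainy', 'Rainy']
print(valmax)  # ~0.009408 -- probability of the Viterbi path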

Notes:

1. The algorithm records a triple (prob, v_path, v_prob) for each state, where prob is the total probability of all paths from the start to the current state (not just the most likely Viterbi path); as a by-product, this yields the total probability of the observation sequence under the given HMM, i.e. the output of the forward algorithm. v_path is the Viterbi path from the start to the current state, and v_prob is the probability of that path.

2. At the beginning, the algorithm initializes T, a map from each possible state to the triple described above.

3. For each observation, the algorithm considers every possible next state next_state and recomputes the probability of moving from each current state in T to next_state. The update multiplies in the joint probability of the weather transition (trans_p[source_state][next_state]) and of the activity given the weather (emit_p[source_state][output]). After all source states have been considered, the Viterbi path with the highest probability into next_state is recorded: U[next_state] = (total, argmax, valmax).

4. At the end, the algorithm aggregates over the states remaining in T: it sums prob into total and selects the path with the highest v_prob as the optimal Viterbi path.

5. The algorithm outputs four weather states rather than three. This is because, when computing the probability for the third day, the transition to the following day's weather is also multiplied in, so the path carries one extra trailing state.

6. The program output helps illustrate the process:

observation = walk
  next_state = Sunny
    state = Sunny
    p = 0.36
    triple = (0.144, Sunny->, 0.144)
    state = Rainy
    p = 0.03
    triple = (0.018, Rainy->, 0.018)
  update U[Sunny] = (0.162, Sunny->Sunny->, 0.144)
  next_state = Rainy
    state = Sunny
    p = 0.24
    triple = (0.096, Sunny->, 0.096)
    state = Rainy
    p = 0.07
    triple = (0.042, Rainy->, 0.042)
  update U[Rainy] = (0.138, Sunny->Rainy->, 0.096)

observation = shop
  next_state = Sunny
    state = Sunny
    p = 0.18
    triple = (0.02916, Sunny->Sunny->, 0.02592)
    state = Rainy
    p = 0.12
    triple = (0.01656, Sunny->Rainy->, 0.01152)
  update U[Sunny] = (0.04572, Sunny->Sunny->Sunny->, 0.02592)
  next_state = Rainy
    state = Sunny
    p = 0.12
    triple = (0.01944, Sunny->Sunny->, 0.01728)
    state = Rainy
    p = 0.28
    triple = (0.03864, Sunny->Rainy->, 0.02688)
  update U[Rainy] = (0.05808, Sunny->Rainy->Rainy->, 0.02688)

observation = clean
  next_state = Sunny
    state = Sunny
    p = 0.06
    triple = (0.0027432, Sunny->Sunny->Sunny->, 0.0015552)
    state = Rainy
    p = 0.15
    triple = (0.008712, Sunny->Rainy->Rainy->, 0.004032)
  update U[Sunny] = (0.0114552, Sunny->Rainy->Rainy->Sunny->, 0.004032)
  next_state = Rainy
    state = Sunny
    p = 0.04
    triple = (0.0018288, Sunny->Sunny->Sunny->, 0.0010368)
    state = Rainy
    p = 0.35
    triple = (0.020328, Sunny->Rainy->Rainy->, 0.009408)
  update U[Rainy] = (0.0221568, Sunny->Rainy->Rainy->Rainy->, 0.009408)

final triple = (0.033612, Sunny->Rainy->Rainy->Rainy->, 0.009408)

Therefore, the final result is that the most likely weather over your friend's three days was Sunny -> Rainy -> Rainy (the trailing fourth state, Rainy, is the extra state explained in note 5), with probability 0.009408. An incidental conclusion of the algorithm is that the total probability of the observed activity sequence walk -> shop -> clean under our hidden Markov model is 0.033612.
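As a sanity check, the Viterbi probability can be recomputed by hand along the winning path Sunny -> Rainy -> Rainy -> Rainy, multiplying in the same emission-then-transition factors the code uses:

p = 0.4            # start_p['Sunny']
p *= 0.6 * 0.4     # emit walk in Sunny, then Sunny -> Rainy
p *= 0.4 * 0.7     # emit shop in Rainy, then Rainy -> Rainy
p *= 0.5 * 0.7     # emit clean in Rainy, then Rainy -> Rainy (the extra 4th state)
print(p)           # ~0.009408, up to floating-point rounding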

References

1. http://www.ece.ucsb.edu/Faculty/Rabiner/ece259/Reprints/tutorial%20on%20hmm%20and%20applications.pdf

2. http://en.wikipedia.org/wiki/Viterbi_algorithm

3. http://googlechinablog.com/2006/04/blog-post_17.html
