HDU 4865 Peter's Hobby (Hidden Markov Model, DP)

Peter's Hobby — Time Limit: 2000/1000 MS (Java/Others), Memory Limit: 32768/32768 K (Java/Others), Total Submissions: 292, Accepted: 132

Problem Description: Recently, Peter likes to measure the humidity of leaves. He records the leaf humidity every day. There are four kinds of leaf wetness: Dry, Dryish, Damp, and Soggy. The humidity of the leaves is affected by the weather, and there are only three kinds of weather: Sunny, Cloudy, and Rainy. For example, under Sunny conditions, the probability that the leaves are Dry is 0.6.

You are given the table of probabilities of each leaf humidity under each kind of weather.

The weather today is affected by the weather yesterday. For example, if yesterday was Sunny, the probability that today is Cloudy is 0.375.

The relationship between today's weather and yesterday's weather is given by a table (shown as an image in the original statement; its values appear as the wea array in the code below).

Now Peter has records of the leaf humidity for N days, and we know the weather distribution on the first day: the probability of Sunny is 0.63, of Cloudy is 0.17, and of Rainy is 0.2. Can you find the most likely weather sequence for these days, in order?

Input: The first line is T, the number of test cases, followed by the T cases. For each case:

The first line is an integer n (n <= 50), the number of days. Each of the next n lines is a string giving the leaf humidity (Dry, Dryish, Damp, Soggy).

Output: For each test case, print the case number on its own line, followed by the most likely weather sequence, one weather per line. (The data is guaranteed to have a unique solution.)

Sample Input

1
3
Dry
Damp
Soggy

Sample Output

Case #1:
Sunny
Cloudy
Rainy

Hint: Log is useful.

Summary: you are given a sequence of observed leaf-humidity states and must find the most likely weather sequence. The weather of the previous day affects the weather of today, and the state of the leaves is affected by the weather. (The probability tables are given as images in the original statement; for details, refer to the link recommended below.)

Idea: this is an application of the Hidden Markov Model (HMM): find the most likely hidden state sequence given a sequence of observations, i.e. the Viterbi algorithm. For an introduction to HMMs, see the linked tutorial. Let dp[i][j] be the probability of the most likely weather sequence over the first i days that has weather j on day i. Then dp[i][j] = max_k dp[i-1][k] * wea[k][j] * lea[j][num[i]], where wea is the transition table, lea is the emission table, and num[i] is the observed humidity on day i. Record the predecessor k for each (i, j), take the highest-probability weather on the last day, and backtrack to output the most likely path.
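To make the recurrence concrete, here is the DP table worked out by hand for the sample (Dry, Damp, Soggy), using the lea and wea values from the code below:

```
Day 1 (Dry):   dp[1][Sunny]  = 0.63 * 0.6  = 0.378
               dp[1][Cloudy] = 0.17 * 0.25 = 0.0425
               dp[1][Rainy]  = 0.2  * 0.05 = 0.01
Day 2 (Damp):  dp[2][Sunny]  = max(0.378*0.5, 0.0425*0.25, 0.01*0.25) * 0.15 = 0.02835    (from Sunny)
               dp[2][Cloudy] = max(0.378*0.375, ...)                  * 0.2  = 0.02835    (from Sunny)
               dp[2][Rainy]  = max(0.378*0.125, ...)                  * 0.35 = 0.0165375  (from Sunny)
Day 3 (Soggy): dp[3][Sunny]  = 0.02835 * 0.5   * 0.05 = 0.00070875                        (from Sunny)
               dp[3][Cloudy] = 0.02835 * 0.375 * 0.25 ≈ 0.00265781                        (from Sunny)
               dp[3][Rainy]  = 0.02835 * 0.625 * 0.50 ≈ 0.00885938                        (from Cloudy)
```

The maximum on the last day is dp[3][Rainy]; backtracking Rainy <- Cloudy <- Sunny gives the sample answer Sunny, Cloudy, Rainy.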

Code 1 (plain probabilities, without logs):

#include <cstdio>
#include <cstring>
using namespace std;

int n, test = 0;
int num[55], pre[55][4];
double dp[55][4];
char s[50];

// Emission probabilities: lea[weather][humidity]
double lea[3][4] = {
    {0.6,  0.2,  0.15, 0.05},   // Sunny
    {0.25, 0.3,  0.2,  0.25},   // Cloudy
    {0.05, 0.10, 0.35, 0.50}    // Rainy
};
// Transition probabilities: wea[yesterday][today]
double wea[3][3] = {
    {0.5,  0.375, 0.125},
    {0.25, 0.125, 0.625},
    {0.25, 0.375, 0.375}
};
char res[3][15] = {"Sunny", "Cloudy", "Rainy"};

void output(int x, int y)
{
    if (x == 0) return;
    output(x - 1, pre[x][y]);
    printf("%s\n", res[y]);
}

void solve()
{
    int i, j, k, id;
    double ma, tmp;
    dp[1][0] = 0.63 * lea[0][num[1]];
    dp[1][1] = 0.17 * lea[1][num[1]];
    dp[1][2] = 0.2  * lea[2][num[1]];
    for (i = 2; i <= n; i++) {
        for (j = 0; j < 3; j++) {
            ma = -1;
            for (k = 0; k < 3; k++) {
                tmp = dp[i-1][k] * wea[k][j] * lea[j][num[i]];
                if (ma < tmp) { ma = tmp; id = k; }
            }
            dp[i][j] = ma;
            pre[i][j] = id;   // remember the best predecessor for backtracking
        }
    }
    ma = -1;
    for (j = 0; j < 3; j++)
        if (dp[n][j] > ma) { ma = dp[n][j]; id = j; }
    printf("Case #%d:\n", ++test);
    output(n, id);
}

int main()
{
    int i, t;
    scanf("%d", &t);
    while (t--) {
        scanf("%d", &n);
        for (i = 1; i <= n; i++) {
            scanf("%s", s);
            if (strcmp(s, "Dry") == 0) num[i] = 0;
            else if (strcmp(s, "Dryish") == 0) num[i] = 1;
            else if (strcmp(s, "Damp") == 0) num[i] = 2;
            else num[i] = 3;
        }
        solve();
    }
    return 0;
}

Code 2 (multiplying many small probabilities loses precision and can underflow, so work with logarithms instead; products become sums, and the argmax is unchanged because log is monotonic):

#include <cstdio>
#include <cstring>
#include <cmath>
#define INF 0x3f3f3f3f
using namespace std;

int n, test = 0;
int num[55], pre[55][4];
double dp[55][4];
char s[50];

// Emission probabilities lea[weather][humidity] and transition
// probabilities wea[yesterday][today]; both are converted to logs in main().
double lea[3][4] = {
    {0.6,  0.2,  0.15, 0.05},
    {0.25, 0.3,  0.2,  0.25},
    {0.05, 0.10, 0.35, 0.50}
};
double wea[3][3] = {
    {0.5,  0.375, 0.125},
    {0.25, 0.125, 0.625},
    {0.25, 0.375, 0.375}
};
char res[3][15] = {"Sunny", "Cloudy", "Rainy"};

void output(int x, int y)
{
    if (x == 0) return;
    output(x - 1, pre[x][y]);
    printf("%s\n", res[y]);
}

void solve()
{
    int i, j, k, id;
    double ma, tmp;
    // In log space, products of probabilities become sums.
    dp[1][0] = log(0.63) + lea[0][num[1]];
    dp[1][1] = log(0.17) + lea[1][num[1]];
    dp[1][2] = log(0.2)  + lea[2][num[1]];
    for (i = 2; i <= n; i++) {
        for (j = 0; j < 3; j++) {
            ma = -INF;
            for (k = 0; k < 3; k++) {
                tmp = dp[i-1][k] + wea[k][j] + lea[j][num[i]];
                if (ma < tmp) { ma = tmp; id = k; }
            }
            dp[i][j] = ma;
            pre[i][j] = id;
        }
    }
    ma = -INF;
    for (j = 0; j < 3; j++)
        if (dp[n][j] > ma) { ma = dp[n][j]; id = j; }
    printf("Case #%d:\n", ++test);
    output(n, id);
}

int main()
{
    int i, j, t;
    // Convert the probability tables to logarithms once, up front.
    for (i = 0; i < 3; i++)
        for (j = 0; j < 4; j++) {
            lea[i][j] = log(lea[i][j]);
            if (j < 3) wea[i][j] = log(wea[i][j]);
        }
    scanf("%d", &t);
    while (t--) {
        scanf("%d", &n);
        for (i = 1; i <= n; i++) {
            scanf("%s", s);
            if (strcmp(s, "Dry") == 0) num[i] = 0;
            else if (strcmp(s, "Dryish") == 0) num[i] = 1;
            else if (strcmp(s, "Damp") == 0) num[i] = 2;
            else num[i] = 3;
        }
        solve();
    }
    return 0;
}

What is the relationship between Hidden Markov Models and action recognition? Why should we use Hidden Markov Models for action recognition?

The Hidden Markov Model can be classified as a state-space method.

The state-space method is also called the probability-network method. Because human motion is Markovian, the current state is affected only by the previous state. This method treats human motion as a Markov process that cannot be observed directly: taking the dynamics of human behavior into account, a motion sequence is regarded as a traversal between states, and the spatio-temporal motion sequence is recognized probabilistically. Many current human motion recognition methods use this approach. Its advantages are robustness to small variations of a motion in time and spatial scale, avoiding the need to model behavior intervals explicitly, and handling variable motion duration well. Its disadvantage is heavy computation: a non-linear model must be built, model training is complex with no closed-form solution, and an appropriate number of states and feature-vector dimension must be chosen.

From: Yan Taotao et al., "Overview of visual human motion analysis," Computer Systems & Applications, vol. 20, no. 2, pp. 245-253, 2010.

What is the training problem of the Hidden Markov Model (HMM)?

I have used HMMs, but only for speech recognition, so let me answer in terms of that field.

I have never read UMDHMM; I wrote my HMM-related code myself.

An HMM involves "observed values" and "hidden states". The "observation status" you mentioned presumably refers to the "observed value".

For the first question:

According to the description, 1 and 2 denote "hidden states".

Assume the optimal state sequence of a speech unit is 1 1 2 2 3 4 5 5 5 5 6 (ignoring non-emitting states). Then 1->1 is one state transition, 1->2 is another, 2->2 another, and so on. In this way, this speech unit undergoes 10 state transitions: a sequence of 11 states has 10 transitions between consecutive states.

As for the second half of the first question, I don't understand what you are trying to say.

For the second question, it seems you are not yet familiar with the basic concepts of HMMs.

Generally, an observed value corresponds to a State;