"Speech recognition with Speech synthesis models by marginalising over decision tree leaves"


2.1 Decision Tree Marginalization

  1. The basic process of decision tree marginalization is now understood.
      1. Briefly:
        1. The decision tree in question is an HMM synthesis decision tree.
        2. The given triphone label is: r-ih+z
        3. Given this triphone annotation, the current speech synthesis model is used to infer a model for speech recognition.
        4. For the given triphone, traverse the current speech synthesis decision tree from the root node downward:
          1. The root question is: is the phone on the right voiceless? The right phone is z, which is voiced,
          2. so go to the left child. The next question is: is the syllable stressed? But stress is not in the recognition context, so what now?
          3. Since the question cannot be answered, put the left child of this intermediate node into the final recognition model,
          4. then also descend into the right child, whose question is: is the phone on the right a fricative? Yes, so go to the right leaf node.
          5. Finally, the parameters of the recognition model for r-ih+z are computed by combining G1 and G3.
      2. I roughly understand how decision tree marginalization is used for cross-lingual adaptation.
        1. First, train an average voice model on one language, e.g. an English corpus, obtaining the decision tree.
        2. Then, to obtain a model file for another language, e.g. Cantonese, walk the other language's context through the English decision tree from the root node.
        3. For example, given a Cantonese context label,

          -jyu+6#sil+x$kei+4&0+0!0+0|0+ ... 0#0^0#0_0#0-0$0&0$0!0$0|

        4. traverse the English decision tree; the final model for the Cantonese syllable is then a linear combination of the parameters of several English leaf nodes.
        5. The above is only my speculation and may not be correct.
  2. What I still do not understand is the principle:
    1. Why, during the decision tree traversal for a triphone, is the child of an intermediate node, whose question is unrelated to the current triphone's context information, included in the final parameter calculation for that triphone?
  3. Now that decision tree marginalization itself is clear, how can it be used for unsupervised intra-lingual speaker adaptation? What is the process?
    1. M
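The traversal described in the walkthrough above can be sketched in a few lines. This is only an illustrative toy, not the paper's implementation: the `Node`/`Leaf` classes, question names, and the uniform mixture weights in `combine` are all my own assumptions. The key idea it demonstrates is that when a node's question cannot be answered from the (reduced) recognition context, the traversal descends into *both* children, so the final model is a combination of several leaf Gaussians (G1 and G3 in the example).

```python
from dataclasses import dataclass

@dataclass
class Leaf:
    name: str        # leaf label, e.g. "G1"
    mean: float      # Gaussian mean (scalar for simplicity)
    var: float       # Gaussian variance

@dataclass
class Node:
    question: str    # yes/no question about the phonetic context
    yes: object      # child taken when the answer is yes
    no: object       # child taken when the answer is no

def marginalised_leaves(node, context):
    """Collect all leaves reachable given the context; when a question
    is not answerable from the context, descend into BOTH children."""
    if isinstance(node, Leaf):
        return [node]
    answer = context.get(node.question)  # None if feature is missing
    if answer is None:
        return (marginalised_leaves(node.yes, context)
                + marginalised_leaves(node.no, context))
    return marginalised_leaves(node.yes if answer else node.no, context)

def combine(leaves):
    """Uniform linear combination of the reachable leaf Gaussians
    (a simple stand-in for the paper's weighted combination)."""
    w = 1.0 / len(leaves)
    mean = sum(w * l.mean for l in leaves)
    # mixture variance: second moment minus squared mixture mean
    var = sum(w * (l.var + l.mean ** 2) for l in leaves) - mean ** 2
    return mean, var

# Toy tree mirroring the r-ih+z walkthrough (structure and numbers assumed):
tree = Node("R-Unvoiced?",
            yes=Leaf("Gx", 0.0, 1.0),            # not reached: z is voiced
            no=Node("Syl-Stressed?",             # stress unknown at recognition
                    yes=Leaf("G1", 1.0, 1.0),
                    no=Node("R-Fricative?",
                            yes=Leaf("G3", 3.0, 1.0),
                            no=Leaf("G2", 2.0, 1.0))))

# Recognition context for r-ih+z: right phone z is voiced and a fricative;
# syllable stress is simply absent from the context dictionary.
ctx = {"R-Unvoiced?": False, "R-Fricative?": True}
leaves = marginalised_leaves(tree, ctx)
print([l.name for l in leaves])   # the G1 and G3 leaves from the walkthrough
mean, var = combine(leaves)
```

Cross-lingual adaptation as described in the notes is the same mechanism taken to an extreme: a Cantonese context label answers few of the English tree's questions, so the traversal reaches many leaves and the resulting model is a linear combination of many English leaf parameters.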

