Generating a language model with SRILM


The main goal of SRILM is to support the estimation and evaluation of language models. Estimation means building a model from training data (the training set), using maximum likelihood estimation together with a smoothing algorithm; evaluation means computing the model's perplexity on a test set. Its most basic and core module is the N-gram module, which was also the first to be implemented. It provides two tools, ngram-count and ngram, which are used to estimate language models and to compute a language model's perplexity, respectively.

1. Count the corpus to generate an N-gram count file
ngram-count -vocab segment_dict.txt -text train_data -order 3 -write my.count -unk

-vocab the dictionary (vocabulary) file; each line is one segmented word, in the following format:

Hello
China
people
Andy Lau

-text the corpus file; each line is one sentence, with the words in the line separated by spaces (i.e., already segmented), in the following format:

China people
Andy Lau song is nice ?

-order the maximum n-gram order; 3 means counts are collected for the 1-gram (unigram), 2-gram (bigram), and 3-gram (trigram) models

-write the output count file; each line contains one n-gram and its count, in the following format:

<s>	2
<s> China	1
<s> China people	1
<s> Andy Lau	1
<s> Andy Lau <unk>	1
China	1
China people	1
China people </s>	1
people	1
people </s>	1
</s>	2
Andy Lau	1
Andy Lau <unk>	1
Andy Lau <unk> <unk>	1
<unk>	4
<unk> <unk>	3
<unk> <unk> <unk>	2
<unk> <unk> </s>	1
<unk> </s>	1

-unk map words that are not in the dictionary to <unk> (open-vocabulary model)
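
To make the counting concrete, here is a minimal Python sketch of what this step does (an illustration, not SRILM's actual code; the file names train_data, segment_dict.txt and my.count are taken from the example command): each sentence is padded with <s> and </s>, words missing from the dictionary are mapped to <unk> as with -unk, and every 1- to 3-gram is counted and written out with its count.

from collections import Counter

def count_ngrams(corpus_path, vocab_path, order=3, out_path="my.count"):
    # Vocabulary: one word per line, as in segment_dict.txt
    vocab = {line.strip() for line in open(vocab_path, encoding="utf-8") if line.strip()}
    counts = Counter()
    for sentence in open(corpus_path, encoding="utf-8"):
        # Map out-of-vocabulary words to <unk> (the effect of -unk)
        words = [w if w in vocab else "<unk>" for w in sentence.split()]
        tokens = ["<s>"] + words + ["</s>"]
        # Collect every 1-gram up to order-gram in the padded sentence
        for n in range(1, order + 1):
            for i in range(len(tokens) - n + 1):
                counts[" ".join(tokens[i:i + n])] += 1
    with open(out_path, "w", encoding="utf-8") as out:
        for ngram, c in counts.items():
            out.write(f"{ngram}\t{c}\n")

count_ngrams("train_data", "segment_dict.txt")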

2. Generating a language model
ngram-count -vocab segment_dict.txt -read my.count -order 3 -lm my.lm -kndiscount1 -kndiscount2 -kndiscount3

-read read the n-gram count file produced in step 1

-lm the output language model file, written in the following (ARPA) format:

\data\
ngram 1=6
ngram 2=4
ngram 3=0

\1-grams:
-0.4771213	</s>
-99	<s>	-99
-0.7781513	China	-99
-0.7781513	people	-99
-0.7781512	Hello
-0.7781513	Andy Lau

\2-grams:
-0.30103	<s> China
-0.30103	<s> Andy Lau
0	China people
0	people </s>

\3-grams:

\end\

-kndiscount1 use Kneser-Ney discounting for the 1-gram model (likewise -kndiscount2 and -kndiscount3 for the 2-gram and 3-gram models); many other discounting/smoothing methods are available, such as Good-Turing
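
The numbers in the model file are base-10 log probabilities; when a second number follows a word, it is that entry's back-off weight. The sketch below (an illustration, not SRILM code; the values are hard-coded from the example my.lm above) shows how such a back-off model scores a sentence: a listed bigram is used directly, and an unlisted one backs off to backoff(w1) plus the unigram probability of w2.

# Log10 probabilities and back-off weights copied from the example my.lm above
bigram_logp = {("<s>", "China"): -0.30103, ("<s>", "Andy Lau"): -0.30103,
               ("China", "people"): 0.0, ("people", "</s>"): 0.0}
unigram_logp = {"</s>": -0.4771213, "<s>": -99.0, "China": -0.7781513,
                "people": -0.7781513, "Hello": -0.7781512, "Andy Lau": -0.7781513}
backoff = {"<s>": -99.0, "China": -99.0, "people": -99.0}

def bigram_score(w1, w2):
    """log10 P(w2 | w1): use the listed bigram, otherwise back off to the unigram."""
    if (w1, w2) in bigram_logp:
        return bigram_logp[(w1, w2)]
    return backoff.get(w1, 0.0) + unigram_logp[w2]

def sentence_logprob(words):
    """Sum of bigram log10 probabilities for <s> w1 ... wn </s>."""
    tokens = ["<s>"] + words + ["</s>"]
    return sum(bigram_score(a, b) for a, b in zip(tokens, tokens[1:]))

lp = sentence_logprob(["China", "people"])
print(lp)               # -0.30103 + 0 + 0 = -0.30103
print(10 ** (-lp / 3))  # per-token perplexity over the 3 predicted tokens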

3. Compute the perplexity of test data with the language model
ngram -ppl test.txt -order 3 -lm my.lm

The test data uses the same format: each line is one sentence, with the words separated by spaces. The command prints:

file test.txt: 2 sentences, 5 words, 0 OOVs
4 zeroprobs, logprob= -0.7781513 ppl= 1.817121 ppl1= 6.000001
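
The two perplexities can be reproduced from the other printed numbers: ppl normalizes the total log10 probability by words - OOVs - zeroprobs + sentences (each end-of-sentence token counts as one prediction), and ppl1 omits the sentences term. A quick Python check with the values above:

sentences, words, oovs, zeroprobs = 2, 5, 0, 4
logprob = -0.7781513

ppl  = 10 ** (-logprob / (words - oovs - zeroprobs + sentences))
ppl1 = 10 ** (-logprob / (words - oovs - zeroprobs))
print(ppl, ppl1)   # ~1.817121 and ~6.000001, matching the output above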

  
