[Language Processing and Python] 7.3 Developing and Evaluating Chunkers

Source: Internet
Author: User
Tags: nltk

Reading IOB Format and the CoNLL-2000 Chunking Corpus

The CoNLL-2000 corpus contains text that has already been annotated with chunks, using IOB tags.

This corpus provides three chunk types: NP, VP, and PP.

For example:

he PRP B-NP
accepted VBD B-VP
the DT B-NP
position NN I-NP
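To make the IOB scheme concrete, here is a small pure-Python sketch (independent of NLTK; all names are my own) that groups IOB-tagged tokens into chunks: a B- tag starts a chunk, I- continues it, and O marks a token outside any chunk.

```python
def iob_to_chunks(tagged):
    """Group (word, pos, iob) triples into (chunk_label, text) pairs."""
    chunks, current = [], None
    for word, pos, iob in tagged:
        if iob.startswith("B-"):
            if current:
                chunks.append(current)
            current = (iob[2:], [word])      # open a new chunk of this type
        elif iob.startswith("I-") and current:
            current[1].append(word)          # continue the open chunk
        else:                                # 'O': outside any chunk
            if current:
                chunks.append(current)
            current = None
    if current:
        chunks.append(current)
    return [(label, " ".join(words)) for label, words in chunks]

sent = [("he", "PRP", "B-NP"), ("accepted", "VBD", "B-VP"),
        ("the", "DT", "B-NP"), ("position", "NN", "I-NP")]
print(iob_to_chunks(sent))  # [('NP', 'he'), ('VP', 'accepted'), ('NP', 'the position')]
```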

The function nltk.chunk.conllstr2tree() builds a tree representation from a string in this format.

For example:

>>> text = '''
... he PRP B-NP
... accepted VBD B-VP
... the DT B-NP
... position NN I-NP
... '''
>>> nltk.chunk.conllstr2tree(text, chunk_types=['NP']).draw()

The CoNLL-2000 corpus itself can be accessed through nltk.corpus. Passing chunk_types=['NP'] keeps only the noun phrase chunks:

>>> from nltk.corpus import conll2000
>>> print(conll2000.chunked_sents('train.txt')[99])
(S
  (PP Over/IN)
  (NP a/DT cup/NN)
  (PP of/IN)
  (NP coffee/NN)
  ,/,
  (NP Mr./NNP Stone/NNP)
  (VP told/VBD)
  (NP his/PRP$ story/NN)
  ./.)
>>> print(conll2000.chunked_sents('train.txt', chunk_types=['NP'])[99])
(S
  Over/IN
  (NP a/DT cup/NN)
  of/IN
  (NP coffee/NN)
  ,/,
  (NP Mr./NNP Stone/NNP)
  told/VBD
  (NP his/PRP$ story/NN)
  ./.)

Simple Evaluation and Baselines

A naive regular-expression chunker that chunks any sequence of tags beginning with C, D, J, N, or P already does reasonably well:

>>> grammar = r"NP: {<[CDJNP].*>+}"
>>> cp = nltk.RegexpParser(grammar)
>>> test_sents = conll2000.chunked_sents('test.txt', chunk_types=['NP'])
>>> print(cp.evaluate(test_sents))
ChunkParse score:
    IOB Accuracy:  87.7%
    Precision:     70.6%
    Recall:        67.8%
    F-Measure:     69.2%
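As a side note, the F-measure reported above is the harmonic mean of precision and recall. A quick check against the scores above (the helper name is my own, not NLTK's):

```python
def f_measure(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Plugging in the precision (70.6%) and recall (67.8%) from the evaluation:
print(round(f_measure(0.706, 0.678) * 100, 1))  # 69.2
```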

Using unigram tagging, we can build a better chunker: a unigram tagger is trained to assign the most likely chunk tag to each part-of-speech tag.

Example 7-4. Noun phrase chunking with a unigram tagger:

class UnigramChunker(nltk.ChunkParserI):
    def __init__(self, train_sents):
        train_data = [[(t, c) for w, t, c in nltk.chunk.tree2conlltags(sent)]
                      for sent in train_sents]
        self.tagger = nltk.UnigramTagger(train_data)

    def parse(self, sentence):
        pos_tags = [pos for (word, pos) in sentence]
        tagged_pos_tags = self.tagger.tag(pos_tags)
        chunktags = [chunktag for (pos, chunktag) in tagged_pos_tags]
        conlltags = [(word, pos, chunktag) for ((word, pos), chunktag)
                     in zip(sentence, chunktags)]
        return nltk.chunk.conlltags2tree(conlltags)

Note that the workflow of parse() is as follows:

1. Take a part-of-speech-tagged sentence as input.

2. Extract the part-of-speech tags from that sentence.

3. Use the self.tagger trained in the constructor to assign an IOB chunk tag to each part-of-speech tag.

4. Extract the chunk tags and combine them with the original sentence.

5. Combine them into a chunk tree.
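The steps above can be sketched in pure Python, with a plain dict standing in for the trained unigram tagger (the lookup table and names here are my own, not NLTK's API):

```python
# Hypothetical stand-in for the trained unigram tagger: POS tag -> IOB chunk tag.
unigram_lookup = {"PRP": "B-NP", "VBD": "B-VP", "DT": "B-NP", "NN": "I-NP"}

def parse_steps(sentence):
    # 1. input: a POS-tagged sentence, e.g. [("he", "PRP"), ...]
    # 2. extract the POS tags
    pos_tags = [pos for (word, pos) in sentence]
    # 3. map each POS tag to an IOB chunk tag
    chunktags = [unigram_lookup.get(pos, "O") for pos in pos_tags]
    # 4. recombine the chunk tags with the original words
    conlltags = [(word, pos, chunk)
                 for ((word, pos), chunk) in zip(sentence, chunktags)]
    # 5. NLTK would now build the tree: nltk.chunk.conlltags2tree(conlltags)
    return conlltags

sent = [("he", "PRP"), ("accepted", "VBD"), ("the", "DT"), ("position", "NN")]
print(parse_steps(sent))
```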

Having defined the chunker, we can train it on the chunked corpus.

>>> test_sents = conll2000.chunked_sents('test.txt', chunk_types=['NP'])
>>> train_sents = conll2000.chunked_sents('train.txt', chunk_types=['NP'])
>>> unigram_chunker = UnigramChunker(train_sents)
>>> print(unigram_chunker.evaluate(test_sents))
ChunkParse score:
    IOB Accuracy:  92.9%
    Precision:     79.9%
    Recall:        86.8%
    F-Measure:     83.2%

We can also inspect what the unigram tagger learned for each part-of-speech tag:

>>> postags = sorted(set(pos for sent in train_sents
...                      for (word, pos) in sent.leaves()))
>>> print(unigram_chunker.tagger.tag(postags))
[('#', 'B-NP'), ('$', 'B-NP'), ("''", 'O'), ('(', 'O'), (')', 'O'),
 (',', 'O'), ('.', 'O'), (':', 'O'), ('CC', 'O'), ('CD', 'I-NP'), ...]

Once we have built a unigram chunker, it is easy to build a bigram chunker: simply change the class name to BigramChunker and replace UnigramTagger with BigramTagger.

>>> bigram_chunker = BigramChunker(train_sents)
>>> print(bigram_chunker.evaluate(test_sents))
ChunkParse score:
    IOB Accuracy:  93.3%
    Precision:     82.3%
    Recall:        86.8%
    F-Measure:     84.5%
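To make the bigram idea itself concrete, here is a pure-Python sketch (independent of NLTK; all names are my own): the chunk tag for a position is chosen from training counts conditioned on the previous chunk tag and the current POS tag, backing off to 'O' for unseen contexts.

```python
from collections import Counter, defaultdict

def train_bigram_table(train_data):
    """Build (prev_chunktag, pos) -> most frequent chunk tag from training data."""
    counts = defaultdict(Counter)
    for sent in train_data:          # sent: list of (pos, chunktag) pairs
        prev = None
        for pos, chunk in sent:
            counts[(prev, pos)][chunk] += 1
            prev = chunk
    return {ctx: c.most_common(1)[0][0] for ctx, c in counts.items()}

def tag_bigram(table, pos_tags):
    """Tag a POS sequence left to right, feeding each output back as context."""
    tags, prev = [], None
    for pos in pos_tags:
        chunk = table.get((prev, pos), "O")   # back off to 'O' if unseen
        tags.append(chunk)
        prev = chunk
    return tags

train = [[("DT", "B-NP"), ("NN", "I-NP"), ("VBD", "B-VP"),
          ("DT", "B-NP"), ("NN", "I-NP")]]
table = train_bigram_table(train)
print(tag_bigram(table, ["DT", "NN", "VBD"]))  # ['B-NP', 'I-NP', 'B-VP']
```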

Training Classifier-Based Chunkers

So far we have discussed two kinds of chunkers: regular-expression chunkers and n-gram chunkers, both of which decide what chunks to create entirely on the basis of part-of-speech tags. However, part-of-speech tags are sometimes insufficient to determine how a sentence should be chunked.

For example:

(3) a. Joey/NN sold/VBD the/DT farmer/NN rice/NN ./.
    b. Nick/NN broke/VBD my/DT computer/NN monitor/NN ./.

Although the two sentences have identical sequences of part-of-speech tags, they are chunked differently: in (a), farmer and rice are separate chunks, while in (b), computer monitor is a single chunk.
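We can verify that the tag sequences really are identical, so any chunker that sees only the tags must chunk the two sentences the same way:

```python
# The two example sentences as (word, tag) lists.
a = [("Joey", "NN"), ("sold", "VBD"), ("the", "DT"),
     ("farmer", "NN"), ("rice", "NN"), (".", ".")]
b = [("Nick", "NN"), ("broke", "VBD"), ("my", "DT"),
     ("computer", "NN"), ("monitor", "NN"), (".", ".")]

tags_a = [t for (_, t) in a]
tags_b = [t for (_, t) in b]
print(tags_a == tags_b)  # True: tag sequences are indistinguishable
```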

Therefore, we need to use information about the content of the words as a supplement to the part-of-speech tags.

One way to use word-level information is to chunk the sentence with a classifier-based tagger.

The basic code for a classifier-based NP chunker is as follows:

 

class ConsecutiveNPChunkTagger(nltk.TaggerI):
    def __init__(self, train_sents):
        train_set = []
        for tagged_sent in train_sents:
            untagged_sent = nltk.tag.untag(tagged_sent)
            history = []
            for i, (word, tag) in enumerate(tagged_sent):
                featureset = npchunk_features(untagged_sent, i, history)
                train_set.append((featureset, tag))
                history.append(tag)
        self.classifier = nltk.MaxentClassifier.train(
            train_set, algorithm='megam', trace=0)

    def tag(self, sentence):
        history = []
        for i, word in enumerate(sentence):
            featureset = npchunk_features(sentence, i, history)
            tag = self.classifier.classify(featureset)
            history.append(tag)
        return zip(sentence, history)

class ConsecutiveNPChunker(nltk.ChunkParserI):
    def __init__(self, train_sents):
        tagged_sents = [[((w, t), c) for (w, t, c) in
                         nltk.chunk.tree2conlltags(sent)]
                        for sent in train_sents]
        self.tagger = ConsecutiveNPChunkTagger(tagged_sents)

    def parse(self, sentence):
        tagged_sents = self.tagger.tag(sentence)
        conlltags = [(w, t, c) for ((w, t), c) in tagged_sents]
        return nltk.chunk.conlltags2tree(conlltags)

Then, define a feature extraction function:

 

>>> def npchunk_features(sentence, i, history):
...     word, pos = sentence[i]
...     return {"pos": pos}
>>> chunker = ConsecutiveNPChunker(train_sents)
>>> print(chunker.evaluate(test_sents))
ChunkParse score:
    IOB Accuracy:  92.9%
    Precision:     79.9%
    Recall:        86.7%
    F-Measure:     83.2%

This is very close to the unigram chunker's performance, since the classifier sees only the same information: the current part-of-speech tag.

We can improve the chunker by also adding a feature for the previous part-of-speech tag.

 

>>> def npchunk_features(sentence, i, history):
...     word, pos = sentence[i]
...     if i == 0:
...         prevword, prevpos = "<START>", "<START>"
...     else:
...         prevword, prevpos = sentence[i-1]
...     return {"pos": pos, "prevpos": prevpos}
>>> chunker = ConsecutiveNPChunker(train_sents)
>>> print(chunker.evaluate(test_sents))
ChunkParse score:
    IOB Accuracy:  93.6%
    Precision:     81.9%
    Recall:        87.1%
    F-Measure:     84.4%

In addition to the two part-of-speech tags, we can also add the content of the word itself as a feature.

>>> def npchunk_features(sentence, i, history):
...     word, pos = sentence[i]
...     if i == 0:
...         prevword, prevpos = "<START>", "<START>"
...     else:
...         prevword, prevpos = sentence[i-1]
...     return {"pos": pos, "word": word, "prevpos": prevpos}
>>> chunker = ConsecutiveNPChunker(train_sents)
>>> print(chunker.evaluate(test_sents))
ChunkParse score:
    IOB Accuracy:  94.2%
    Precision:     83.4%
    Recall:        88.6%
    F-Measure:     85.9%

We can try adding a variety of further features to improve the chunker's performance. For example, the code below adds a lookahead feature, paired features, and complex contextual features. The last of these, tags-since-dt, creates a string describing the set of all part-of-speech tags encountered since the most recent determiner.

>>> def npchunk_features(sentence, i, history):
...     word, pos = sentence[i]
...     if i == 0:
...         prevword, prevpos = "<START>", "<START>"
...     else:
...         prevword, prevpos = sentence[i-1]
...     if i == len(sentence)-1:
...         nextword, nextpos = "<END>", "<END>"
...     else:
...         nextword, nextpos = sentence[i+1]
...     return {"pos": pos,
...             "word": word,
...             "prevpos": prevpos,
...             "nextpos": nextpos,
...             "prevpos+pos": "%s+%s" % (prevpos, pos),
...             "pos+nextpos": "%s+%s" % (pos, nextpos),
...             "tags-since-dt": tags_since_dt(sentence, i)}
>>> def tags_since_dt(sentence, i):
...     tags = set()
...     for word, pos in sentence[:i]:
...         if pos == 'DT':
...             tags = set()
...         else:
...             tags.add(pos)
...     return '+'.join(sorted(tags))
>>> chunker = ConsecutiveNPChunker(train_sents)
>>> print(chunker.evaluate(test_sents))
ChunkParse score:
    IOB Accuracy:  95.9%
    Precision:     88.3%
    Recall:        90.7%
    F-Measure:     89.5%
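To see what the tags-since-dt feature actually computes, here is the same function run standalone on a small hand-made sentence (the example sentence is my own):

```python
def tags_since_dt(sentence, i):
    """Join the POS tags seen since the most recent determiner with '+'."""
    tags = set()
    for word, pos in sentence[:i]:
        if pos == 'DT':
            tags = set()    # a determiner resets the accumulated tags
        else:
            tags.add(pos)
    return '+'.join(sorted(tags))

sent = [("Over", "IN"), ("a", "DT"), ("cup", "NN"),
        ("of", "IN"), ("coffee", "NN")]
print(tags_since_dt(sent, 4))  # IN+NN: tags seen since the determiner 'a'
```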
