Text Classification (2): Feature Weight Quantizer (Converting Documents into Vectors)


The last section implemented the tokenizer, with word-segmentation classes for SimpleSpliter, StandarSpliter, CnSpliter, and IctclasSpliter. This section covers converting documents into vectors, in what I call the feature weight quantizer. For now I only implement one algorithm: TF-IDF.
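As a quick reminder, the standard TF-IDF weight of a term t in a document d is w(t, d) = tf(t, d) × log(N / df(t)), where tf is the term's frequency within the document, N is the number of documents, and df is the number of documents containing the term (the weighting actually used in the code may differ in small details). For example, a term that occurs 3 times in a 100-term document (tf = 0.03) and appears in 2 of 5 documents (idf = log(5/2) ≈ 0.92 with the natural logarithm) gets a weight of about 0.03 × 0.92 ≈ 0.028. Each document becomes a vector with one such weight per distinct term in the corpus.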

The project class diagram is as follows:

 

 

The test program is as follows:

 

Code
using System;

namespace Waemz.ChnGlobal.Test
{
    class Program
    {
        static void Main(string[] args)
        {
            string[] docs = new string[5];
            docs[0] = "SimpleSpliter splits on spaces and punctuation and removes the punctuation.";
            docs[1] = "StandarSpliter segments Chinese, splits English on spaces, and removes punctuation.";
            docs[2] = "CnSpliter segments Chinese and removes single meaningless English letters ;-_-!";
            docs[3] = "IctclasSpliter is good to use. It filters meaningless Chinese words such as \"we\" and \"is\", and the Chinese word segmentation result is also good.";
            docs[4] = "I also wrapped a word spliter for Lucene.Net myself. The class diagram is as follows:";

            // Split every document into terms with the ICTCLAS-based spliter.
            string[][] terms = new string[5][];
            AbsChnSpliter spliter = new IctclasSpliter();
            for (int i = 0; i < 5; i++)
            {
                string str = spliter.ChnSplit(docs[i], "|");
                terms[i] = str.Split('|');
            }

            // Compute the TF-IDF vector of every document.
            AbsMeasure mea = new TfidfMeasure(terms);
            double[][] vectors = new double[5][];
            for (int i = 0; i < 5; i++)
            {
                vectors[i] = mea.GetVectorMeasure(i);
            }

            // Print the vectors, one document per line.
            for (int i = 0; i < vectors.Length; i++)
            {
                for (int j = 0; j < vectors[i].Length; j++)
                {
                    Console.Write(vectors[i][j] + "--");
                }
                Console.WriteLine("//////\n");
            }

            Console.ReadKey();
        }
    }
}

 

 

The test output is as follows:

Improvements:

1. The vectors computed with TF-IDF have a very high dimensionality; in general the dimension equals the number of distinct words across all samples, so the next step is dimensionality reduction. My current ideas for reducing the dimension are the following (a sketch of idea (1) is given after this list):

(1) Select features in advance (feature-selection methods include information gain and the chi-square test), then use TF-IDF to weight only the selected features.

(2) Compute the TF-IDF vectors first, then apply dimensionality reduction to them.

(3) Filter stop words, punctuation marks, and other special characters effectively during the word-segmentation phase; this also reduces the dimension, and punctuation carries no useful information about a document anyway.

(4) For text classification, build a classification-specific vocabulary in advance; during feature extraction compare each word against this vocabulary and discard words that are not in it as meaningless for classification, which also reduces the dimension.
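As a concrete illustration of idea (1), here is a minimal sketch of chi-square feature selection. The input format (one term array per document plus a class label for each document), the name ChiSquareSelector, and everything inside it are assumptions made for illustration; they are not part of the project code.

Code
using System;
using System.Collections.Generic;
using System.Linq;

static class ChiSquareSelector
{
    // Returns the terms with the highest chi-square score against the class labels.
    public static List<string> SelectTopTerms(string[][] docTerms, string[] labels, int topK)
    {
        int n = docTerms.Length;
        string[] classes = labels.Distinct().ToArray();

        // Document frequency of each term, overall and per class.
        var df = new Dictionary<string, int>();
        var dfPerClass = new Dictionary<string, Dictionary<string, int>>();
        var classCount = labels.GroupBy(l => l).ToDictionary(g => g.Key, g => g.Count());

        for (int i = 0; i < n; i++)
        {
            foreach (string t in docTerms[i].Distinct())
            {
                if (!df.ContainsKey(t)) { df[t] = 0; dfPerClass[t] = new Dictionary<string, int>(); }
                df[t]++;
                dfPerClass[t].TryGetValue(labels[i], out int seen);
                dfPerClass[t][labels[i]] = seen + 1;
            }
        }

        // chi2(t, cls) from the 2x2 contingency table of "contains t" vs. "belongs to cls".
        var score = new Dictionary<string, double>();
        foreach (string t in df.Keys)
        {
            double best = 0.0;
            foreach (string cls in classes)
            {
                dfPerClass[t].TryGetValue(cls, out int a);   // docs of this class that contain t
                double b = df[t] - a;                        // docs of other classes that contain t
                double c = classCount[cls] - a;              // docs of this class without t
                double d = n - classCount[cls] - b;          // docs of other classes without t
                double denom = (a + c) * (b + d) * (a + b) * (c + d);
                double chi2 = denom == 0 ? 0 : n * Math.Pow(a * d - c * b, 2) / denom;
                best = Math.Max(best, chi2);                 // keep the maximum over all classes
            }
            score[t] = best;
        }

        return score.OrderByDescending(kv => kv.Value).Take(topK).Select(kv => kv.Key).ToList();
    }
}

The terms returned by SelectTopTerms would then be the only dimensions handed to the TF-IDF step, for example ChiSquareSelector.SelectTopTerms(terms, labels, 1000).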

The code will be posted for download later.
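Until then, here is a minimal sketch of what a TfidfMeasure with the same usage as in the test program might look like. Only the constructor argument and the GetVectorMeasure signature are taken from the test program above; the AbsMeasure base class and all internals are assumptions, not the actual project code, and use the common tf × log(N / df) weighting.

Code
using System;
using System.Collections.Generic;
using System.Linq;

// Assumed base class; the real AbsMeasure in the project may look different.
abstract class AbsMeasure
{
    public abstract double[] GetVectorMeasure(int docIndex);
}

class TfidfMeasure : AbsMeasure
{
    private readonly string[][] _docs;                   // terms of every document
    private readonly string[] _vocabulary;               // distinct terms over all documents (the vector dimensions)
    private readonly Dictionary<string, int> _docFreq;   // how many documents contain each term

    public TfidfMeasure(string[][] docs)
    {
        _docs = docs;
        _vocabulary = docs.SelectMany(d => d).Distinct().ToArray();
        _docFreq = _vocabulary.ToDictionary(
            t => t,
            t => docs.Count(d => d.Contains(t)));
    }

    public override double[] GetVectorMeasure(int docIndex)
    {
        string[] doc = _docs[docIndex];
        // Term frequency within this document, normalized by document length.
        var tf = doc.GroupBy(t => t).ToDictionary(g => g.Key, g => (double)g.Count() / doc.Length);

        var vector = new double[_vocabulary.Length];
        for (int i = 0; i < _vocabulary.Length; i++)
        {
            string term = _vocabulary[i];
            if (!tf.ContainsKey(term)) continue;          // term absent from this document -> weight 0
            double idf = Math.Log((double)_docs.Length / _docFreq[term]);
            vector[i] = tf[term] * idf;
        }
        return vector;
    }
}

Each vector has one component per distinct term over all documents, which is exactly the high dimensionality discussed in the improvements above.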
