Integrating Lucene with the Paoding (庖丁解牛) Chinese Word Segmenter



Note: after configuring the environment variable described below, you may need to restart the system (or log out and back in) before it takes effect.

The Lucene version I am testing with is lucene-2.4.0. It can handle Chinese text out of the box, but its built-in analyzer uses unigram segmentation: every Chinese character is treated as a separate word. This inflates the index and hurts query performance, so most Lucene users turn to a third-party Chinese word segmentation package. Here I introduce the most commonly used one, the Paoding Analyzer (庖丁解牛), which I can also recommend.

This article explains how to integrate Lucene with the Paoding Analyzer. Before wiring in that package, let's run an example that demonstrates the segmentation behavior of Lucene's built-in analyzer on Chinese text.

package gzu.lyq.luceneanalyzer;

import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

// Test Lucene's built-in standard analyzer on Chinese text
public class LuceneAnalyzerTest {

    public static void main(String[] args) throws Exception {
        // StandardAnalyzer: unigram (single-character) segmentation for Chinese
        Analyzer analyzer = new StandardAnalyzer();
        String indexStr = "My QQ number is 58472399";
        StringReader reader = new StringReader(indexStr);
        TokenStream ts = analyzer.tokenStream(indexStr, reader);
        Token t = ts.next();
        while (t != null) {
            System.out.print(t.termText() + " ");
            t = ts.next();
        }
    }
}

Word segmentation result: my qq number is 58472399 (with a Chinese sentence as input, every Chinese character comes out as its own token)

From this example we can see that Lucene's built-in analyzer splits Chinese text character by character. This is the most primitive segmentation method, and it is rarely used today.
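To make the character-by-character behavior concrete without needing Lucene on the classpath, here is a minimal, self-contained sketch of unigram segmentation (my own illustration, not Lucene's actual implementation): each CJK ideograph becomes its own token, while runs of ASCII letters and digits stay together.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of unigram segmentation: every CJK character is emitted as a
// separate token; contiguous ASCII letters/digits are kept as one token.
public class UnigramSketch {

    static List<String> unigram(String text) {
        List<String> tokens = new ArrayList<>();
        StringBuilder run = new StringBuilder();
        for (int i = 0; i < text.length(); ) {
            int cp = text.codePointAt(i);
            if (Character.UnicodeBlock.of(cp) == Character.UnicodeBlock.CJK_UNIFIED_IDEOGRAPHS) {
                // flush any pending ASCII run, then emit the ideograph alone
                if (run.length() > 0) { tokens.add(run.toString()); run.setLength(0); }
                tokens.add(new String(Character.toChars(cp)));
            } else if (Character.isLetterOrDigit(cp)) {
                run.appendCodePoint(cp);
            } else if (run.length() > 0) {
                tokens.add(run.toString());
                run.setLength(0);
            }
            i += Character.charCount(cp);
        }
        if (run.length() > 0) tokens.add(run.toString());
        return tokens;
    }

    public static void main(String[] args) {
        // each ideograph is a separate token; "QQ123" stays as one run
        System.out.println(unigram("中华人民共和国QQ123"));
    }
}
```

Every single-character token becomes a posting in the index, which is exactly why unigram segmentation bloats the index compared with word-level tokens.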

Now to the main topic: integrating Lucene with the Paoding Analyzer (庖丁解牛).

Step 1: add the Paoding jar (paoding-analysis.jar) to the project's classpath. Step 2: set the environment variable PAODING_DIC_HOME to the dictionary directory, e.g. E:\paoding2_0_4\dic. Step 3 (an alternative to step 2): copy the paoding-dic-home.properties file into the project's src directory and add the line paoding.dic.home=E:/paoding2_0_4/dic. With that, the integration of Lucene and Paoding is complete. Let's test it.
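For reference, the properties file placed under src would contain a single entry (a sketch, assuming the E:\paoding2_0_4 install path used throughout this article; adjust the path to your own installation):

```properties
# paoding-dic-home.properties -- placed in the project's src directory
# so it ends up on the classpath; points Paoding at its dictionary files
paoding.dic.home=E:/paoding2_0_4/dic
```

Note that forward slashes are used in the path, as is usual in Java properties files even on Windows.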

package gzu.lyq.luceneanalyzer;

import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;

import net.paoding.analysis.analyzer.PaodingAnalyzer;

// Test the segmentation of the Paoding (庖丁解牛) Chinese word divider
public class PaodingAnalyzerTest {

    public static void main(String[] args) throws Exception {
        Analyzer analyzer = new PaodingAnalyzer();
        String indexStr = "My QQ number is 3453245";
        StringReader reader = new StringReader(indexStr);
        TokenStream ts = analyzer.tokenStream(indexStr, reader);
        Token t = ts.next();
        while (t != null) {
            System.out.print(t.termText() + " ");
            t = ts.next();
        }
    }
}

Word segmentation result: My QQ number is 3453245

If indexStr is replaced with "中华人民共和国万岁" ("Long live the People's Republic of China"), Paoding segments it into word-level tokens such as 中华人民共和国 and 万岁, rather than into single characters.

Note: when using the Paoding analyzer, the path where the package and its dictionaries live must not contain Chinese characters; Chinese paths do not seem to be recognized. You must also add commons-logging.jar to the classpath, otherwise you will get a class-not-found error.
