Lucene 3.0 Word Segmentation System

Source: http://hi.baidu.com/cdefg198/blog/item/660a5d19c420e61f35fa4137.html

1. StopAnalyzer

StopAnalyzer splits text on non-letter characters, converts the resulting tokens to lowercase, and filters out a built-in list of English stop words.
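
The following is a minimal sketch against the Lucene 3.0 API; the printTokens helper and the sample text are ours, not part of Lucene. The later snippets in this article reuse this helper.

```java
import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.StopAnalyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;
import org.apache.lucene.util.Version;

public class StopAnalyzerDemo {

    // Helper (ours): prints every token the analyzer produces for the text.
    static void printTokens(Analyzer analyzer, String text) throws Exception {
        TokenStream ts = analyzer.tokenStream("content", new StringReader(text));
        TermAttribute term = ts.addAttribute(TermAttribute.class);
        while (ts.incrementToken()) {
            System.out.print("[" + term.term() + "] ");
        }
        System.out.println();
    }

    public static void main(String[] args) throws Exception {
        // Stop words ("the", "and") are dropped; everything else is lowercased.
        printTokens(new StopAnalyzer(Version.LUCENE_30),
                "The Quick Brown Fox and the lazy dog");
        // expected output: [quick] [brown] [fox] [lazy] [dog]
    }
}
```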

2. StandardAnalyzer

StandardAnalyzer tokenizes on whitespace and punctuation, and also recognizes numbers, letters, e-mail addresses, IP addresses, and Chinese characters (which it splits into single characters). It filters against a stop-word list as well, subsuming the filtering that StopAnalyzer provides.
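
A short fragment (sample text is ours), reusing the printTokens helper from section 1:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.util.Version;

// Numbers and e-mail addresses survive as single tokens; the stop word
// "to" is filtered out, and everything is lowercased.
printTokens(new StandardAnalyzer(Version.LUCENE_30),
        "Order 42 shipped to user@example.com");
// expected output: [order] [42] [shipped] [user@example.com]
```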

3. SimpleAnalyzer

SimpleAnalyzer is a basic analyzer for Western-language text. It treats every non-letter character as a delimiter, converts the resulting tokens to lowercase, and discards punctuation and other separators. It performs no stop-word filtering.

In full-text search development it is typically used for Western-language text that does not require Chinese support: since it does no word filtering and its splitting policy is trivial (non-letter characters as separators), it fits cases where no real word segmentation is needed. See the contrast sketch after the next section.

4. WhitespaceAnalyzer

WhitespaceAnalyzer splits text on whitespace characters only. It performs no stop-word filtering and no lowercase conversion.

In practice it suits Western-language text in environments where tokens must be preserved exactly: because it neither filters words nor lowercases, and its splitting policy is simply whitespace, it fits cases where no real word segmentation is needed.
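
A contrast fragment under the Lucene 3.0 API (no-argument constructors; the sample text is ours), reusing the printTokens helper from section 1:

```java
import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.analysis.WhitespaceAnalyzer;

// SimpleAnalyzer: splits on every non-letter character and lowercases.
printTokens(new SimpleAnalyzer(), "XY&Z Corp - user@example.com");
// expected output: [xy] [z] [corp] [user] [example] [com]

// WhitespaceAnalyzer: splits on whitespace only; case and symbols survive.
printTokens(new WhitespaceAnalyzer(), "XY&Z Corp - user@example.com");
// expected output: [XY&Z] [Corp] [-] [user@example.com]
```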

5. KeywordAnalyzer

KeywordAnalyzer treats the entire input as a single token, which makes it easy to index and retrieve special-purpose text. It is very convenient for building index terms from zip codes, addresses, and similar fields that must match verbatim.
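
A one-line fragment (sample value is ours), reusing the printTokens helper from section 1:

```java
import org.apache.lucene.analysis.KeywordAnalyzer;

// The whole input becomes one token, suitable for exact-match fields.
printTokens(new KeywordAnalyzer(), "100-0001 Tokyo");
// expected output: [100-0001 Tokyo]
```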

6. CJKAnalyzer

CJKAnalyzer internally calls the CJKTokenizer to segment Chinese text into overlapping two-character tokens (bigrams) and applies a StopFilter to remove stop words. It is marked deprecated in Lucene 3.0.

7. chineseanalyzer

The chineseanalyzer function is basically the same as that of standardanalyzer in processing Chinese characters. It is split into a single dual-byte Chinese character. It has been deprecated in lucene3.0.
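
A contrast fragment reusing the printTokens helper from section 1. Both analyzers live in Lucene's contrib packages, and their constructor signatures vary across 3.x releases, so treat the constructors below as assumptions:

```java
import org.apache.lucene.analysis.cjk.CJKAnalyzer;
import org.apache.lucene.analysis.cn.ChineseAnalyzer;
import org.apache.lucene.util.Version;

// CJKAnalyzer: overlapping two-character tokens (bigrams).
printTokens(new CJKAnalyzer(Version.LUCENE_30), "中华人民共和国");
// expected output: [中华] [华人] [人民] [民共] [共和] [和国]

// ChineseAnalyzer: one token per Chinese character (unigrams).
printTokens(new ChineseAnalyzer(), "中华人民共和国");
// expected output: [中] [华] [人] [民] [共] [和] [国]
```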

8. PerFieldAnalyzerWrapper

PerFieldAnalyzerWrapper is used to apply different analyzers to different fields. For example, a file-name field may need KeywordAnalyzer while the file-content field only needs StandardAnalyzer; per-field overrides are registered with addAnalyzer(), as shown in the sketch below.
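
A minimal sketch of this wiring against the Lucene 3.0 API (the field names are ours):

```java
import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.analysis.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.util.Version;

// StandardAnalyzer is the default for all fields...
PerFieldAnalyzerWrapper wrapper =
        new PerFieldAnalyzerWrapper(new StandardAnalyzer(Version.LUCENE_30));
// ...but the "filename" field is indexed verbatim as a single token.
wrapper.addAnalyzer("filename", new KeywordAnalyzer());
// Pass "wrapper" to the IndexWriter; it routes each field accordingly.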

9. IKAnalyzer

IKAnalyzer is a third-party tokenizer that extends Lucene's Analyzer class to process Chinese text. It implements dictionary-based full segmentation in both the forward and reverse directions, as well as forward and reverse maximum-match segmentation.
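
A hedged fragment: IKAnalyzer is a third-party jar, and the class name below follows the IK Analyzer 3.x releases that targeted Lucene 3.0, so treat it as an assumption. It reuses the printTokens helper from section 1:

```java
import org.apache.lucene.analysis.Analyzer;
import org.wltea.analyzer.lucene.IKAnalyzer;  // assumed third-party class

Analyzer ik = new IKAnalyzer();
printTokens(ik, "中华人民共和国");
```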

10. JE-Analysis

JE-Analysis is a Chinese word segmentation component for Lucene; it is distributed separately and must be downloaded.
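
A hedged fragment: the class name below follows the JE-Analysis 1.5.x releases and is an assumption on our part, as is its compatibility with Lucene 3.0. It reuses the printTokens helper from section 1:

```java
import org.apache.lucene.analysis.Analyzer;
import jeasy.analysis.MMAnalyzer;  // assumed third-party class

// The no-argument constructor is assumed to use the dictionary-based
// "complex" segmentation mode.
Analyzer je = new MMAnalyzer();
printTokens(je, "中华人民共和国");
```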

11. ictclas4j

The ictclas4j Chinese Word Segmentation System is a Java open-source word segmentation project completed by sinboy Based on freeictclas developed by Chinese Emy of Sciences Zhang huaping and Liu Qun, which simplifies the complexity of the original word segmentation program, it aims to provide a better learning opportunity for the majority of Chinese word segmentation enthusiasts.

12. imdict-chinese-analyzer

imdict-chinese-analyzer is the intelligent Chinese word segmentation module of the imdict intelligent dictionary. Its algorithm is based on the Hidden Markov Model (HMM), and it is a Java re-implementation of the ICTCLAS Chinese word segmentation program from the Institute of Computing Technology, Chinese Academy of Sciences. It provides simplified-Chinese word segmentation support directly to Lucene-based search engines.

13. Paoding Analysis

Paoding Analysis is a highly efficient, extensible Chinese word segmentation component. It takes its guiding metaphor from the cook Pao Ding deftly carving an ox, and adopts a completely object-oriented design with forward-looking ideas. Its throughput is high: on a PIII personal machine with 1 GB of memory it can accurately segment one million Chinese characters per second. It segments text against dictionary files with no limit on the number of entries, lets you define word categories, and parses unknown words reasonably well.
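
A hedged fragment: Paoding loads its dictionaries from a directory configured outside the code, commonly via the paoding.dic.home system property; the class name and property below follow the Paoding 2.x releases and are assumptions. It reuses the printTokens helper from section 1:

```java
import org.apache.lucene.analysis.Analyzer;
import net.paoding.analysis.analyzer.PaodingAnalyzer;  // assumed third-party class

// Assumed configuration: point Paoding at its dictionary directory first.
System.setProperty("paoding.dic.home", "/path/to/paoding/dic");
Analyzer paoding = new PaodingAnalyzer();
printTokens(paoding, "中华人民共和国");
```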

14. mmseg4j

mmseg4j implements Chih-Hao Tsai's MMSeg algorithm (http://technology.chtsai.org/mmseg/) as a Lucene Analyzer and a Solr TokenizerFactory, for easy use in both Lucene and Solr. The MMSeg algorithm has two segmentation modes, simple and complex, both based on forward maximum matching; complex mode adds four disambiguation rules. According to the author, the correct word-recognition rate reaches 98.41%. mmseg4j implements both modes.
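
A hedged fragment: the analyzer class names below follow the mmseg4j 1.x releases for Lucene 3.0 and are assumptions. It reuses the printTokens helper from section 1:

```java
import org.apache.lucene.analysis.Analyzer;
import com.chenlb.mmseg4j.analysis.ComplexAnalyzer;  // assumed third-party class
import com.chenlb.mmseg4j.analysis.SimpleAnalyzer;   // assumed third-party class

// One analyzer per MMSeg mode; both load mmseg4j's bundled dictionaries.
Analyzer simple = new SimpleAnalyzer();
Analyzer complex = new ComplexAnalyzer();
printTokens(complex, "中华人民共和国");
```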
