DotLucene Source Code Notes (1) Addendum: Writing a Simple Chinese Tokenizer, ChineseAnalyzer

For the underlying principles, see DotLucene Source Code Notes (1): Lucene.Net.Analysis. This article builds a simple Chinese analyzer on the analysis in that article.
From DotLucene Source Code Notes (1): Lucene.Net.Analysis we know that two base classes are involved in word segmentation:

Analyzer: performs lexical analysis and filtering. It is essentially a wrapper class that combines a tokenizer with one or more filters.

Tokenizer: splits the text into tokens; the segmentation may be single-character, word-based, or bigram.

The most basic concept is the Token, DotLucene's smallest unit of analysis. Under single-character segmentation every character becomes a Token; under word-based Chinese segmentation every word becomes a Token.
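To make this concrete, the sketch below builds the Tokens that the two strategies would produce for the same text, using the Token(text, startOffset, endOffset) constructor that also appears in the code later in this article; the sample string and its offsets are illustrative.

using Lucene.Net.Analysis;

// Single-character segmentation: each character of "中文分词" is a Token.
Token[] byChar = {
    new Token("中", 0, 1), new Token("文", 1, 2),
    new Token("分", 2, 3), new Token("词", 3, 4)
};

// Word segmentation of the same text: each word is a Token.
Token[] byWord = { new Token("中文", 0, 2), new Token("分词", 2, 4) };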

Note that ChineseAnalyzer is not itself a Chinese word segmentation tool; it only converts the output of a segmentation tool into a form that DotLucene's indexer can recognize.

The Tokenizer is implemented first; a third-party word segmentation component is used in the code for the experiment (a stand-in for that component is sketched below for readers who do not have it).
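If the third-party component is not available, the hypothetical helper below can stand in for the Sj110.Com.Chinese.Tokenizer.Tokenize call: it does naive single-character segmentation, returning one string per character so the offset arithmetic in the tokenizer still holds. The class and method names here are my own invention, not part of any real library.

using System.Collections.Generic;

// Hypothetical stand-in for the third-party segmentation component:
// naive single-character segmentation.
public static class NaiveSegmenter
{
    public static List<string> Tokenize(string text)
    {
        List<string> words = new List<string>(text.Length);
        foreach (char c in text)
            words.Add(c.ToString());   // every character becomes one "word"
        return words;
    }
}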

using System.Collections.Generic;
using Lucene.Net.Analysis;

/// <summary>
/// DotLucene Chinese tokenizer
/// Author: Wei Weike http://www.cnblogs.com/kwklover
/// Dated: 2006/10/24
/// </summary>
public class ChineseTokenizer : Tokenizer
{
    private List<string> ioBuffer;   // the segmented words read from the input
    private int offSet = 0;          // running offset into the original text
    private int position = -1;       // position of the current word in the buffer
    private int length = 0;          // length of the current word
    private int start = 0;           // start offset of the current word

    public ChineseTokenizer(System.IO.TextReader input)
        : base(input)
    {
        // A third-party Chinese word segmentation component is used here.
        ioBuffer = Sj110.Com.Chinese.Tokenizer.Tokenize(input.ReadToEnd());
    }

    // A DotLucene tokenizer implements Tokenizer's Next method, wrapping each
    // segmented word in a Token, since the Token is DotLucene's basic unit of analysis.
    public override Token Next()
    {
        position++;
        if (position < ioBuffer.Count)
        {
            length = ioBuffer[position].Length;
            start = offSet;
            offSet += length;
            return new Token(ioBuffer[position], start, start + length);
        }

        return null;
    }
}
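A minimal way to exercise the tokenizer on its own, assuming the Lucene.Net 1.4-era Token accessors TermText, StartOffset, and EndOffset; the sample string is illustrative:

using System;
using System.IO;
using Lucene.Net.Analysis;

class TokenizerDemo
{
    static void Main()
    {
        // Push a short Chinese string through the tokenizer and print each Token.
        Tokenizer tokenizer = new ChineseTokenizer(new StringReader("中文分词"));
        for (Token t = tokenizer.Next(); t != null; t = tokenizer.Next())
            Console.WriteLine("{0} [{1}, {2})", t.TermText(), t.StartOffset(), t.EndOffset());
    }
}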

The Analyzer code:

using System.IO;
using Lucene.Net.Analysis;

public class ChineseAnalyzer : Analyzer
{
    public ChineseAnalyzer()
    {
    }

    public override TokenStream TokenStream(TextReader reader)
    {
        TokenStream result = new ChineseTokenizer(reader);
        result = new LowerCaseFilter(result);
        // result = new StopFilter(result, stopSet); // a stop-word filter can be plugged in here.
        return result;
    }
}
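With the analyzer in place it can be handed to the indexer. A minimal sketch, assuming the Lucene.Net 1.4-era IndexWriter and Field.Text API; the index path, field name, and sample text are illustrative:

using Lucene.Net.Analysis;
using Lucene.Net.Documents;
using Lucene.Net.Index;

class IndexDemo
{
    static void Main()
    {
        // Create a new index whose text is analyzed by ChineseAnalyzer.
        IndexWriter writer = new IndexWriter("index", new ChineseAnalyzer(), true);

        Document doc = new Document();
        doc.Add(Field.Text("content", "需要索引的中文内容"));   // tokenized, indexed, stored
        writer.AddDocument(doc);

        writer.Optimize();
        writer.Close();
    }
}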

The above is a simple ChineseAnalyzer for DotLucene. The algorithm is not optimal; the point is to show how a ChineseAnalyzer can be implemented.

The Chinese word segmentation component used in this article can be downloaded from the original post. Its copyright belongs to the original author; the download is provided only for convenience. For commercial use, please contact the author.
