1. Source of the problem
After Chinese word segmentation was added, the accuracy of search results improved, but responses to queries became noticeably slow. The reason is that, in order to highlight the keywords in each matching document, Lucene re-runs word segmentation over the document text at query time, and paying that cost again for every result hurts performance.
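For illustration, this is roughly what the slow path looks like: the stored text is re-analyzed with the (Chinese) analyzer every time a fragment is built. The analyzer, highlighter, and text variables here are placeholders rather than part of the original test code.
// Slow path (sketch only): re-run the analyzer over the stored text at query time.
TokenStream tokenStream = analyzer.tokenStream(FIELD_NAME, new StringReader(text));
String fragment = highlighter.getBestFragments(tokenStream, text, 2, "...");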
2. Solution
This problem can be solved with a feature added in Lucene 1.4.3: term vectors can now store Token.getPositionIncrement(), Token.startOffset(), and Token.endOffset(). Once this token information is saved in the index, documents no longer need to be re-analyzed at highlight time. Whether the information is stored is controlled when the Field is constructed. Modify the code of HighlighterTest.java as follows:
// Store term position and offset information when adding a document.
private void addDoc(IndexWriter writer, String text) throws IOException
{
    Document d = new Document();
    // Field f = new Field(FIELD_NAME, text, true);
    Field f = new Field(FIELD_NAME, text,
        Field.Store.YES, Field.Index.TOKENIZED,
        Field.TermVector.WITH_POSITIONS_OFFSETS);
    d.add(f);
    writer.addDocument(d);
}
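For context, here is a minimal sketch of how such documents might be indexed before searching; the RAMDirectory, the analyzer variable, and the sample text are assumptions, not part of the original article.
// Sketch only: build a small index containing the term-vector-enabled field.
Directory ramDir = new RAMDirectory();
IndexWriter writer = new IndexWriter(ramDir, analyzer, true);
addDoc(writer, "...");  // add one or more sample documents
writer.optimize();
writer.close();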
// Use the stored term position information to save time when highlighting.
void doStandardHighlights() throws Exception
{
    Highlighter highlighter = new Highlighter(this, new QueryScorer(query));
    highlighter.setTextFragmenter(new SimpleFragmenter(20));
    for (int i = 0; i < hits.length(); i++)
    {
        String text = hits.doc(i).get(FIELD_NAME);
        int maxNumFragmentsRequired = 2;
        String fragmentSeparator = "...";
        TermPositionVector tpv = (TermPositionVector) reader.getTermFreqVector(hits.id(i), FIELD_NAME);
        // If no stop words are removed, you can change this to
        // TokenSources.getTokenStream(tpv, true) to increase the speed further.
        TokenStream tokenStream = TokenSources.getTokenStream(tpv);
        // analyzer.tokenStream(FIELD_NAME, new StringReader(text));
        String result =
            highlighter.getBestFragments(
                tokenStream,
                text,
                maxNumFragmentsRequired,
                fragmentSeparator);
        System.out.println("\t" + result);
    }
}
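For completeness, the hits, reader, and query used above come from an ordinary search over the same index; a rough sketch follows, with the variable names and the placeholder term being assumptions.
// Sketch only: open a reader over the index and run the query.
IndexReader reader = IndexReader.open(ramDir);
IndexSearcher searcher = new IndexSearcher(reader);
Query query = new TermQuery(new Term(FIELD_NAME, "..."));
Hits hits = searcher.search(query);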
Finally, one extra check in the highlighter package is removed. Chinese has no obvious word boundaries, so the following test does not hold for Chinese text:
tokenGroup.isDistinct(token)
With these changes, Chinese word segmentation no longer slows down queries.
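Roughly, that check sits in the fragment-building loop of the Highlighter. The lines below are only a sketch of the kind of change being described, not the exact Lucene source:
// Before (sketch): the highlighter only flushes the current run of tokens when the
// next token is "distinct", i.e. does not overlap the previous one:
//     if (tokenGroup.isDistinct(token)) { ... markup the cached tokens ... }
// After: the isDistinct(token) condition is dropped, since overlapping tokens are
// normal output from a Chinese segmenter.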
This article is from http://www.tianyablog.com/blogger/post_show.asp?blogid=114714&postid=2852189