I have recently been studying Nutch. Following an online tutorial, I used a plug-in to integrate Chinese word segmentation into Nutch 1.2 and ran the crawler. However, after building the project into a WAR with ant, searching for a word that is not in the index displays a results page normally, but searching for a word that is in the index produces a blank page with nothing on it. The Tomcat console prints the search results, and no error is reported. So I added some debug code, and its output showed that Nutch's original analyzer had indeed been replaced. Some online tutorials say you should modify NutchDocumentAnalyzer.java to swap Nutch's original analyzer for your own. The code is:
public NutchDocumentAnalyzer(Configuration conf) {
    this.conf = conf;
    CONTENT_ANALYZER = new ContentAnalyzer(conf);
    ANCHOR_ANALYZER = new AnchorAnalyzer();
    // paoding = PaodingMaker.make();  // add your own segmenter
    // PAODING_ANALYZER = new PaodingAnalyzer().queryMode(paoding);
}

public TokenStream tokenStream(String fieldName, Reader reader) {
    Analyzer analyzer;
    if ("anchor".equals(fieldName))
        analyzer = ANCHOR_ANALYZER;
    else
        analyzer = CONTENT_ANALYZER;
    // analyzer = PAODING_ANALYZER;
    return analyzer.tokenStream(fieldName, reader);
}
Here I have commented my Chinese segmenter out again.
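For comparison, with the commented-out lines enabled the method would end up roughly like this (my sketch of the modified version; note that assigning PAODING_ANALYZER after the if/else makes both branches dead code, so every field, anchors included, goes through the Chinese segmenter):

public TokenStream tokenStream(String fieldName, Reader reader) {
    Analyzer analyzer;
    if ("anchor".equals(fieldName))
        analyzer = ANCHOR_ANALYZER;
    else
        analyzer = CONTENT_ANALYZER;
    analyzer = PAODING_ANALYZER;  // overrides both branches above
    return analyzer.tokenStream(fieldName, reader);
}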
If I switch back to the original analyzer, the blank page goes away. I don't know why; I haven't studied the source code in depth. Another problem: if I only use the segmenter from the plug-in and do not replace Nutch's own analyzer, the tokenization produced while the crawler runs is strange; sometimes a whole sentence comes out as a single token.
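For reference, the plug-in route normally means registering a NutchAnalyzer implementation rather than editing NutchDocumentAnalyzer. A minimal sketch, modeled on Nutch's bundled analysis-fr plugin and assuming the paoding-analysis jar is on the classpath (the class name ChineseAnalyzer and its package are mine, not from any tutorial):

package org.apache.nutch.analysis.zh;  // hypothetical plug-in package

import java.io.Reader;

import net.paoding.analysis.analyzer.PaodingAnalyzer;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.nutch.analysis.NutchAnalyzer;

public class ChineseAnalyzer extends NutchAnalyzer {

    // one shared paoding instance, like the static analyzers in NutchDocumentAnalyzer
    private static final Analyzer ANALYZER = new PaodingAnalyzer();

    public TokenStream tokenStream(String fieldName, Reader reader) {
        // hand every field to the Chinese segmenter
        return ANALYZER.tokenStream(fieldName, reader);
    }
}

One thing to keep in mind: Nutch only routes a document through such a plug-in when the document's detected language matches the language the plug-in declares, so if language identification misses, the default analyzer still runs; that might account for the inconsistent tokenization seen during crawling.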
2010-10-4