Last time, the index was built with Lucene's StandardAnalyzer. For Chinese text this analyzer only performs a mechanical, character-by-character split, so Lucene could not index Chinese well and Chinese keyword retrieval did not work; in practice the previous demo was only usable for English.
To solve this, we can use IKAnalyzer, an open-source Chinese segmentation toolkit built on top of Lucene that combines dictionary-based word segmentation with grammar analysis. It supports both Chinese and English tokenization.
To use it to improve retrieval, first download the IKAnalyzer development package (I have uploaded it; extraction key: 7j78). This is the version I have tested myself; other versions may conflict with the latest Lucene. The package contains a jar, two configuration files and a PDF user guide. One configuration file declares the extension dictionary and the stop-word dictionary (the list of meaningless words to be filtered out); the other file is the stop-word dictionary itself, where you write the words to be filtered, one per line.
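For reference, IKAnalyzer reads its configuration from an IKAnalyzer.cfg.xml file on the classpath. A minimal sketch of what that file might look like is shown below; the dictionary file names here are only illustrative, so use the ones shipped in the package:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extension configuration</comment>
    <!-- extension dictionary: extra words IK should recognize (illustrative file name) -->
    <entry key="ext_dict">ext.dic;</entry>
    <!-- stop-word dictionary: meaningless words to filter out (illustrative file name) -->
    <entry key="ext_stopwords">stopword.dic;</entry>
</properties>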
The following code segments the same Chinese string first with StandardAnalyzer and then with IKAnalyzer:

public static void analysis() {
    Analyzer luceneAnalyzer = new StandardAnalyzer();  // Lucene's built-in analyzer
    try {
        // "明天就是国庆节了" means "Tomorrow is National Day"
        TokenStream tokenStream = luceneAnalyzer.tokenStream("", "明天就是国庆节了");
        CharTermAttribute term = tokenStream.getAttribute(CharTermAttribute.class);
        tokenStream.reset();
        // Traverse the tokens produced by the analyzer
        while (tokenStream.incrementToken()) {
            System.out.print(term.toString() + "|");
        }
        tokenStream.clearAttributes();
        tokenStream.close();
    } catch (IOException e) {
        // TODO auto-generated catch block
        e.printStackTrace();
    }
}

public static void analysisByIK() throws IOException {
    String text = "明天就是国庆节了";
    // true enables IK's smart (coarse-grained) segmentation mode
    IKSegmenter ik = new IKSegmenter(new StringReader(text), true);
    Lexeme lexeme = null;
    while ((lexeme = ik.next()) != null) {
        System.out.print(lexeme.getLexemeText() + "|");
    }
}
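To run the comparison, both methods can be called from a simple driver. A minimal sketch, assuming the two methods above live in the same class and the Lucene core jar plus the IKAnalyzer jar are on the classpath:

public static void main(String[] args) throws IOException {
    analysis();        // prints the StandardAnalyzer tokens
    System.out.println();
    analysisByIK();    // prints the IKAnalyzer tokens
}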
The results of the operation are as follows:
明|天|就|是|国|庆|节|了|
明天|就是|国庆节|了|
As you can see, IKAnalyzer's segmentation result is much closer to what we expect: it produces whole words rather than isolated characters.
The following code uses IKAnalyzer in place of the original StandardAnalyzer when building the document index.
public static void buildIndex(String idir, String ddir) throws IOException {
    File indexDir = new File(idir);   // Directory where the index is stored
    File dataDir = new File(ddir);    // Directory of the files to be indexed
    Analyzer luceneAnalyzer = new IKAnalyzer();   // Word segmentation tool
    File[] dataFiles = dataDir.listFiles();
    IndexWriterConfig indexConfig = new IndexWriterConfig(Version.LATEST, luceneAnalyzer);
    FSDirectory fsDirectory = null;
    IndexWriter indexWriter = null;
    try {
        fsDirectory = FSDirectory.open(indexDir);                 // Index directory
        indexWriter = new IndexWriter(fsDirectory, indexConfig);  // The object used to create the index
        long startTime = new Date().getTime();
        for (int i = 0; i < dataFiles.length; i++) {
            if (dataFiles[i].isFile() && dataFiles[i].getName().endsWith(".txt")) {
                Document document = new Document();   // Represents a document
                System.out.println(dataFiles[i].getPath());
                Reader txtReader = new FileReader(dataFiles[i]);
                FieldType fieldType = new FieldType();
                fieldType.setIndexed(true);
                // A Field is a property used to describe a document; here each document
                // gets two properties, a path and the contents.
                document.add(new TextField("path", dataFiles[i].getCanonicalPath(), Store.YES));
                document.add(new Field("contents", txtReader, fieldType));
                indexWriter.addDocument(document);
            }
        }
        indexWriter.commit();   // Commit so the documents are actually indexed
        long endTime = new Date().getTime();
        System.out.println("It takes " + (endTime - startTime)
                + " milliseconds to create index for the files in directory " + dataDir.getPath());
    } catch (IOException e) {
        e.printStackTrace();
        try {
            indexWriter.rollback();
        } catch (IOException e1) {
            e1.printStackTrace();
        }
    } finally {
        if (indexWriter != null) {
            indexWriter.close();
        }
    }
}
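Once the index is built with IKAnalyzer, the query string should be segmented with the same analyzer at search time; otherwise a Chinese keyword would again be split character by character and fail to match the indexed terms. Below is a minimal search sketch under that idea: the method name search, the result count of 10 and the overall query-parsing setup are my own illustration rather than part of the original demo, while the field names "path" and "contents" match the indexing code above.

public static void search(String idir, String queryText) throws Exception {
    FSDirectory fsDirectory = FSDirectory.open(new File(idir));   // Same index directory used by buildIndex
    IndexReader reader = DirectoryReader.open(fsDirectory);
    IndexSearcher searcher = new IndexSearcher(reader);
    // Use IKAnalyzer at query time as well, so the query is segmented
    // the same way the documents were when they were indexed.
    Analyzer analyzer = new IKAnalyzer();
    QueryParser parser = new QueryParser(Version.LATEST, "contents", analyzer);
    Query query = parser.parse(queryText);
    TopDocs topDocs = searcher.search(query, 10);   // Top 10 matching documents
    for (ScoreDoc scoreDoc : topDocs.scoreDocs) {
        Document doc = searcher.doc(scoreDoc.doc);
        System.out.println(doc.get("path") + "  score=" + scoreDoc.score);
    }
    reader.close();
}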