Using Lucene to Implement a Multi-Document Keyword Search Demo (II)


Last time, we built the Lucene index with StandardAnalyzer. For Chinese text, that analyzer only splits mechanically, character by character, so Lucene cannot index Chinese well with it and Chinese keyword retrieval does not work. In practice, the previous demo only worked for English.
To solve this problem, we can use IKAnalyzer, an open-source Chinese analyzer built on Lucene that combines dictionary-based word segmentation with grammar analysis. It supports both Chinese and English words.
We can then use it to improve the retrieval function. First, download the IKAnalyzer development package (I have uploaded it; extraction key: 7j78). I have already tested this package; note that earlier versions conflict with the latest Lucene. The package contains a jar, two configuration files, and a PDF user manual. One configuration file points to an extension dictionary and to a stop-word dictionary of meaningless words that should be filtered out; the other is the stop-word file itself, in which you list those meaningless words directly.
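For reference, the pointer-style configuration file (conventionally named IKAnalyzer.cfg.xml) typically looks like the sketch below; the dictionary file names ext.dic and stopword.dic are placeholders here, so substitute the names shipped in your package:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "">
<properties>
    <comment>IK Analyzer extension configuration</comment>
    <!-- extension dictionary: one word per line, file on the classpath -->
    <entry key="ext_dict">ext.dic;</entry>
    <!-- extension stop-word dictionary: meaningless words to filter out -->
    <entry key="ext_stopwords">stopword.dic;</entry>
</properties>
```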

The following code uses StandardAnalyzer and then IKAnalyzer to segment the same Chinese string:

public static void analysis() {
    // StandardAnalyzer: splits Chinese text mechanically, character by character
    Analyzer luceneAnalyzer = new StandardAnalyzer();
    try {
        TokenStream tokenStream = luceneAnalyzer.tokenStream("", "Tomorrow is the National Day");
        CharTermAttribute term = tokenStream.getAttribute(CharTermAttribute.class);
        tokenStream.reset();
        // traverse the tokens produced by the analyzer
        while (tokenStream.incrementToken()) {
            System.out.print(term.toString() + "|");
        }
        tokenStream.clearAttributes();
        tokenStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

public static void analysisByIK() throws IOException {
    String text = "Tomorrow is the National Day";
    // the second argument enables smart (maximum-granularity) segmentation
    IKSegmenter ik = new IKSegmenter(new StringReader(text), true);
    Lexeme lexeme = null;
    while ((lexeme = != null) {
        System.out.print(lexeme.getLexemeText() + "|");
    }
}

Running both methods produces the following results:

As you can see, IKAnalyzer's segmentation results are much closer to what we expect.

The following code replaces the original StandardAnalyzer with IKAnalyzer when building the document index:

public static void buildIndex(String idir, String ddir) throws IOException {
    File indexDir = new File(idir);  // directory where the index is stored
    File dataDir = new File(ddir);   // directory of files to be indexed
    Analyzer luceneAnalyzer = new IKAnalyzer(); // analyzer
    File[] dataFiles = dataDir.listFiles();
    IndexWriterConfig indexConfig = new IndexWriterConfig(Version.LATEST, luceneAnalyzer);
    FSDirectory fsDirectory = null;
    IndexWriter indexWriter = null;
    try {
        fsDirectory =; // index directory
        indexWriter = new IndexWriter(fsDirectory, indexConfig); // object used to create the index
        long startTime = new Date().getTime();
        for (int i = 0; i < dataFiles.length; i++) {
            if (dataFiles[i].isFile() && dataFiles[i].getName().endsWith(".txt")) {
                Document document = new Document(); // represents one document
                System.out.println(dataFiles[i].getPath());
                Reader txtReader = new FileReader(dataFiles[i]);
                FieldType fieldType = new FieldType();
                fieldType.setIndexed(true);
                // a Field describes one property of a document;
                // here each document has two properties: a path and its contents
                document.add(new TextField("path", dataFiles[i].getCanonicalPath(), Store.YES));
                document.add(new Field("contents", txtReader, fieldType));
                indexWriter.addDocument(document);
            }
        }
        indexWriter.commit(); // write the index for the documents
        long endTime = new Date().getTime();
        System.out.println("It takes " + (endTime - startTime)
                + " milliseconds to create index for the files in directory " + dataDir.getPath());
    } catch (IOException e) {
        e.printStackTrace();
        try {
            indexWriter.rollback();
        } catch (IOException e1) {
            e1.printStackTrace();
        }
    } finally {
        if (indexWriter != null) {
            indexWriter.close();
        }
    }
}
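For completeness, a keyword search against this index could be sketched as follows. This is a minimal sketch, not part of the original demo: searchIndex is a hypothetical helper name, and depending on your Lucene 4.x release the QueryParser constructor may or may not take the Version argument shown. The important point is that the query must be parsed with the same IKAnalyzer used at index time.

```java
public static void searchIndex(String idir, String keyword) throws Exception {
    // open the index built by buildIndex()
    DirectoryReader reader = File(idir)));
    IndexSearcher searcher = new IndexSearcher(reader);
    // parse the keyword with the same analyzer that was used for indexing
    QueryParser parser = new QueryParser(Version.LATEST, "contents", new IKAnalyzer());
    Query query = parser.parse(keyword);
    TopDocs topDocs =, 10); // top 10 hits
    for (ScoreDoc scoreDoc : topDocs.scoreDocs) {
        Document doc = searcher.doc(scoreDoc.doc);
        System.out.println(doc.get("path") + "  score=" + scoreDoc.score);
    }
    reader.close();
}
```

Because the "path" field was stored with Store.YES, each hit can print the file it came from.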
