After comparing it with other Chinese word segmentation programs such as Paoding, we found that IKAnalyzer segments Chinese well and is simple to call, so IKAnalyzer was chosen as our Chinese word segmentation program.
The code that calls IKAnalyzer to perform Chinese word segmentation is very simple:
The code is as follows:

import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayList;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.mira.lucene.analysis.IK_CAnalyzer;

/**
 * Takes a Chinese sentence and returns a List whose elements are the
 * segmented Chinese phrases, each as a String.
 */
public static ArrayList<String> testJe(String testString) throws Exception {
    ArrayList<String> tokenList = new ArrayList<String>();
    Analyzer analyzer = new IK_CAnalyzer();
    Reader r = new StringReader(testString);
    TokenStream ts = analyzer.tokenStream("", r);
    Token t;
    while ((t = ts.next()) != null) {
        tokenList.add(t.termText());
        System.out.println(t.termText());
    }
    return tokenList;
}
If this code is run with Eclipse on Windows, it performs Chinese word segmentation well. However, when the program is moved to Linux, such as Ubuntu, segmentation suddenly stops working. This is because IKAnalyzer 1.4 stores its dictionaries in the local GBK encoding, while the default character encoding on Linux and Unix systems such as Ubuntu is UTF-8. As a result, the dictionaries are loaded as garbled text in these environments, and word segmentation fails.
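The mismatch is easy to reproduce with nothing but the JDK. The following stand-alone sketch (purely illustrative, not IKAnalyzer code) encodes a Chinese word as GBK bytes, then decodes those bytes both with UTF-8 and with GBK:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// Shows why a GBK-encoded dictionary turns into garbage when it is
// read back under a UTF-8 platform default (as on Ubuntu).
public class EncodingMismatchDemo {
    public static void main(String[] args) {
        String word = "\u4e2d\u6587"; // "中文"
        byte[] gbkBytes = word.getBytes(Charset.forName("GBK"));

        String wrong = new String(gbkBytes, StandardCharsets.UTF_8);
        String right = new String(gbkBytes, Charset.forName("GBK"));

        System.out.println(word.equals(wrong)); // false: mojibake
        System.out.println(word.equals(right)); // true: decoded correctly
    }
}
```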
There are two solutions:
1. On Windows, use the JDK tool native2ascii.exe to convert the dictionary files from GBK to UTF-8;
2. Modify all the dictionary load methods in the Dictionary class, changing the encoding passed to InputStreamReader to "UTF-8";
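The second solution can be sketched as follows. The class and method names below are illustrative stand-ins for IKAnalyzer's actual Dictionary internals; the point is simply that the charset is named explicitly instead of relying on the platform default:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class DictLoader {
    // The fix: pass "UTF-8" explicitly instead of calling
    // new InputStreamReader(in), which uses the OS default charset.
    public static List<String> loadWords(InputStream in) throws IOException {
        List<String> words = new ArrayList<String>();
        BufferedReader br = new BufferedReader(new InputStreamReader(in, "UTF-8"));
        String line;
        while ((line = br.readLine()) != null) {
            line = line.trim();
            if (line.length() > 0) {
                words.add(line);
            }
        }
        br.close();
        return words;
    }

    public static void main(String[] args) throws IOException {
        // A tiny in-memory "dictionary" stored as UTF-8 bytes.
        byte[] dict = "\u4e2d\u6587\n\u5206\u8bcd\n".getBytes("UTF-8");
        List<String> words = loadWords(new ByteArrayInputStream(dict));
        System.out.println(words.size()); // 2
    }
}
```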
The first solution is the simplest. The specific steps are as follows:
1. Right-click IKAnalyzer1.4.jar and unzip it to a local directory. In the org/mira/lucene/analysis/dict directory there are four dictionary files; open each one with a text editor and use "Save as" to re-save it in UTF-8 format, overwriting the original file;
2. Compress the extracted files back into zip format. Note that the top-level directories must be org and META-INF;
3. Change the file suffix from zip to jar.
Then replace the original IKAnalyzer1.4.jar with this new jar, and the Java Chinese word segmentation program can be used in Ubuntu.
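If you prefer not to do steps 1-3 by hand, the repackaging can also be sketched in Java itself, since a jar is just a zip archive. The dictionary directory below comes from the jar layout described above; the class and method names are my own, not part of IKAnalyzer:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

// Copies a jar entry by entry, re-encoding only the dictionary files
// under org/mira/lucene/analysis/dict/ from GBK to UTF-8.
public class DictRecoder {
    public static void recode(InputStream jarIn, OutputStream jarOut) throws IOException {
        ZipInputStream zin = new ZipInputStream(jarIn);
        ZipOutputStream zout = new ZipOutputStream(jarOut);
        ZipEntry e;
        while ((e = zin.getNextEntry()) != null) {
            byte[] data = readAll(zin);
            if (e.getName().startsWith("org/mira/lucene/analysis/dict/")) {
                String text = new String(data, Charset.forName("GBK"));
                data = text.getBytes(StandardCharsets.UTF_8);
            }
            zout.putNextEntry(new ZipEntry(e.getName()));
            zout.write(data);
            zout.closeEntry();
        }
        zout.finish();
    }

    private static byte[] readAll(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] b = new byte[4096];
        int n;
        while ((n = in.read(b)) != -1) buf.write(b, 0, n);
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Demo with an in-memory "jar" holding one GBK dictionary entry.
        ByteArrayOutputStream src = new ByteArrayOutputStream();
        ZipOutputStream z = new ZipOutputStream(src);
        z.putNextEntry(new ZipEntry("org/mira/lucene/analysis/dict/sample.dic"));
        z.write("\u4e2d\u6587".getBytes(Charset.forName("GBK")));
        z.closeEntry();
        z.finish();
        ByteArrayOutputStream dst = new ByteArrayOutputStream();
        recode(new ByteArrayInputStream(src.toByteArray()), dst);
        System.out.println(dst.size() > 0); // true
    }
}
```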
In addition, if you plan to use cron on Ubuntu to run this Java Chinese word segmentation program on a schedule, you may find that segmentation fails again. This is because the environment variables in cron's environment are not necessarily the same as those of a normally logged-in user, so the program runs with a different default character set and cannot segment Chinese. The solution is to specify the character set explicitly as UTF-8 in the script:
The code is as follows:

#!/bin/bash
. /home/wangzhongyuan/.profile
LANG=zh_CN.UTF-8
LC_ALL=zh_CN.UTF-8
export LANG LC_ALL
Then call the Java program from this script, and use cron on Ubuntu to run the Chinese word segmentation program on a schedule.
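For reference, a crontab entry for such a wrapper script might look like the line below. The home directory is the one used above; the script and log file names are placeholders:

```shell
# Run the wrapper script (which sets LANG/LC_ALL before calling java)
# every day at 02:00. Script and log paths are hypothetical.
0 2 * * * /home/wangzhongyuan/segment.sh >> /home/wangzhongyuan/segment.log 2>&1
```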