Install the Chinese word segmentation plugin IK for Elasticsearch

Elasticsearch's default analyzer splits Chinese text into individual characters rather than into meaningful words. For example:
curl -XPOST "http://localhost:9200/userinfo/_analyze?analyzer=standard&pretty=true" -d '{"text":"我是中国人"}'
We will get the following result (note that the whole request body is analyzed as plain text here, which is why the field name "text" itself appears as a token):
{
  "tokens" : [ {
    "token" : "text",
    "start_offset" : 2,
    "end_offset" : 6,
    "type" : "<ALPHANUM>",
    "position" : 1
  }, {
    "token" : "我",
    "start_offset" : 9,
    "end_offset" : 10,
    "type" : "<IDEOGRAPHIC>",
    "position" : 2
  }, {
    "token" : "是",
    "start_offset" : 10,
    "end_offset" : 11,
    "type" : "<IDEOGRAPHIC>",
    "position" : 3
  }, {
    "token" : "中",
    "start_offset" : 11,
    "end_offset" : 12,
    "type" : "<IDEOGRAPHIC>",
    "position" : 4
  }, {
    "token" : "国",
    "start_offset" : 12,
    "end_offset" : 13,
    "type" : "<IDEOGRAPHIC>",
    "position" : 5
  }, {
    "token" : "人",
    "start_offset" : 13,
    "end_offset" : 14,
    "type" : "<IDEOGRAPHIC>",
    "position" : 6
  } ]
}
Under normal circumstances, this is not the result we want. We would rather get words such as 中国人 (Chinese person), 中国 (China), and 我 (I). To get this kind of segmentation, we need to install a Chinese word segmentation plugin; IK implements this function.
elasticsearch-analysis-ik is a Chinese word segmentation plugin for Elasticsearch that supports custom dictionaries.
Installation steps:
1. Download the source code from GitHub: https://github.com/medcl/elasticsearch-analysis-ik
Click "Download ZIP" on the right side of the page to download elasticsearch-analysis-ik-master.zip.
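If you prefer the command line, the same archive can be fetched directly (a sketch; the URL follows GitHub's standard archive convention):
wget -O elasticsearch-analysis-ik-master.zip https://github.com/medcl/elasticsearch-analysis-ik/archive/master.zip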
2. Extract the elasticsearch-analysis-ik-master.zip file, enter the download directory, and run the following command:
unzip elasticsearch-analysis-ik-master.zip
3. Copy the config/ik folder from the extracted directory into the config folder of the ES installation directory, for example as shown below.
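A sketch of this step, assuming ES is installed under /usr/local/elasticsearch (a hypothetical path; adjust it to your installation):
cp -r elasticsearch-analysis-ik-master/config/ik /usr/local/elasticsearch/config/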
4. Because this is source code, it must be packaged with Maven. Enter the extracted folder and run:
mvn clean package
5. Copy the packaged JAR file from the target directory, elasticsearch-analysis-ik-1.2.8.jar (not the -sources JAR, which contains only source files and cannot serve as a plugin), to the lib directory of the ES installation.
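Again as a sketch, with the same hypothetical installation path:
cp target/elasticsearch-analysis-ik-1.2.8.jar /usr/local/elasticsearch/lib/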
6. Add the IK configuration to the ES configuration file config/elasticsearch.yml by appending the following at the end:
index:
  analysis:
    analyzer:
      ik:
        alias: [ik_analyzer]
        type: org.elasticsearch.index.analysis.IkAnalyzerProvider
      ik_max_word:
        type: ik
        use_smart: false
      ik_smart:
        type: ik
        use_smart: true
Or
index.analysis.analyzer.ik.type: "ik"
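Once the analyzer is registered, it can be referenced when creating an index. A minimal sketch, in which the index name userinfo, the type user, and the field intro are illustrative:
curl -XPUT "http://localhost:9200/userinfo" -d '
{
  "mappings": {
    "user": {
      "properties": {
        "intro": { "type": "string", "analyzer": "ik" }
      }
    }
  }
}'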
7. Restart the Elasticsearch service to complete the configuration. The verification command is as follows:
curl -XPOST "http://localhost:9200/userinfo/_analyze?analyzer=ik&pretty=true" -d '{"text":"我是中国人"}'
The test results are as follows:
{
  "tokens" : [ {
    "token" : "text",
    "start_offset" : 2,
    "end_offset" : 6,
    "type" : "ENGLISH",
    "position" : 1
  }, {
    "token" : "我",
    "start_offset" : 9,
    "end_offset" : 10,
    "type" : "CN_CHAR",
    "position" : 2
  }, {
    "token" : "中国人",
    "start_offset" : 11,
    "end_offset" : 14,
    "type" : "CN_WORD",
    "position" : 3
  }, {
    "token" : "中国",
    "start_offset" : 11,
    "end_offset" : 13,
    "type" : "CN_WORD",
    "position" : 4
  }, {
    "token" : "国人",
    "start_offset" : 12,
    "end_offset" : 14,
    "type" : "CN_WORD",
    "position" : 5
  } ]
}
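The ik analyzer produces fine-grained tokens, like ik_max_word in the configuration above. The ik_smart analyzer from step 6 should give the coarsest-grained split instead; a sketch of the comparison (the expected outcome is inferred from IK's documented behavior, not a captured result):
curl -XPOST "http://localhost:9200/userinfo/_analyze?analyzer=ik_smart&pretty=true" -d '{"text":"我是中国人"}'
With ik_smart, the overlapping tokens 中国 and 国人 should disappear, leaving essentially only 我 and 中国人.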
Note:
1. Elasticsearch plugins are normally installed with the plugin command. However, installing IK that way did not succeed on my machine, so I installed the plugin from the source package instead.
2. For how to customize the dictionary, see https://github.com/medcl/elasticsearch-analysis-ik
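As a rough sketch of what a custom dictionary involves (the file name custom/mydict.dic is illustrative; the ext_dict entry follows the IKAnalyzer.cfg.xml format described in the IK README):
# run from the ES installation directory; one word per line, UTF-8 encoded
mkdir -p config/ik/custom
echo "分词测试" > config/ik/custom/mydict.dic
# then reference the file in config/ik/IKAnalyzer.cfg.xml:
#   <entry key="ext_dict">custom/mydict.dic</entry>
# and restart Elasticsearch so the new dictionary is loaded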