The first thing to make clear is that Elasticsearch is built on Lucene: many of its basic components are provided by Apache Lucene, while ES adds a higher-level encapsulation along with distributed enhancements and extensions.
So if you want to master analysis (word segmentation) in ES, you should first start from Lucene; otherwise you will only get confused. Of course, most of us developers focus on how to use it and do not have much time to dig into what happens underneath, which is understandable, and exploring problems as they arise is also a workable approach. Still, if you have the time, I suggest studying the basics of Lucene.
In Elasticsearch, as in Solr, analysis components are configuration-based and pluggable, managed flexibly through composite configuration. In ES, an analysis section can contain multiple analyzers, and an analyzer consists of zero or more char_filters, a single tokenizer, and zero or more token_filters, applied in that order. The overall execution flow is as follows:
[Figure: the overall analysis execution flow: char_filter, then tokenizer, then token_filter]
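Since these building blocks come straight from Lucene, the same chain can be assembled programmatically. Here is a minimal sketch in plain Lucene (assuming Lucene 5.x or later on the classpath, whose CustomAnalyzer builder exposes the same char filter / tokenizer / token filter composition; the factory names and the sample text are chosen for illustration):

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.custom.CustomAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class AnalyzerChainDemo {
        public static void main(String[] args) throws Exception {
            // char_filter -> tokenizer -> token_filter, the same chain ES configures declaratively
            Analyzer analyzer = CustomAnalyzer.builder()
                    .addCharFilter("htmlstrip")   // strips HTML, like the ES html_strip char_filter
                    .withTokenizer("standard")    // like the ES standard tokenizer
                    .addTokenFilter("lowercase")
                    .addTokenFilter("stop")       // like the ES stop token filter
                    .build();
            try (TokenStream ts = analyzer.tokenStream("f", "<b>Hello Elasticsearch Analysis</b>")) {
                CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
                ts.reset();
                while (ts.incrementToken()) {
                    System.out.println(term);     // prints: hello, elasticsearch, analysis
                }
                ts.end();
            }
        }
    }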
A template in ES is configured as follows:
    index:
      analysis:
        # an analysis section can define multiple analyzer, tokenizer, filter and char_filter components
        analyzer:
          # an analyzer combines one tokenizer with zero or more filters and char_filters;
          # position_increment_gap is the position gap inserted between the values of a
          # multi-valued field, so phrase queries do not match across values (default 100)
          myAnalyzer1:
            type: custom
            tokenizer: myTokenizer1
            filter: [myTokenFilter1, myTokenFilter2]
            char_filter: [my_html]
            position_increment_gap: 256
          myAnalyzer2:
            type: custom
            tokenizer: myTokenizer1
            filter: [myTokenFilter1, myTokenFilter2]
            char_filter: [my_html]
            position_increment_gap: 256
        tokenizer:
          myTokenizer1:
            type: standard
            max_token_length: 900
          myTokenizer2:
            type: keyword
            # the keyword tokenizer reads its input in buffer_size chunks
            buffer_size: 256
        filter:
          myTokenFilter1:
            type: stop
            stopwords: [stop1, stop2, stop3, stop4]
          myTokenFilter2:
            type: length
            min: 0
            max: 2000
        char_filter:
          my_html:
            type: html_strip
            escaped_tags: [xxx, yyy]
            read_ahead: 1024
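With this in place and the node restarted, it is worth verifying what the analyzer actually emits. Below is a minimal sketch using the ES 2.x Java transport client (the index name myindex and the sample text are made up; the REST _analyze API gives the same information):

    import java.net.InetAddress;
    import org.elasticsearch.action.admin.indices.analyze.AnalyzeResponse;
    import org.elasticsearch.client.transport.TransportClient;
    import org.elasticsearch.common.transport.InetSocketTransportAddress;

    public class AnalyzeDemo {
        public static void main(String[] args) throws Exception {
            // connect to a local 2.x node (transport port 9300 by default)
            TransportClient client = TransportClient.builder().build()
                    .addTransportAddress(new InetSocketTransportAddress(
                            InetAddress.getByName("127.0.0.1"), 9300));
            // run the custom analyzer against a sample text and print each token
            AnalyzeResponse resp = client.admin().indices()
                    .prepareAnalyze("myindex", "<b>some sample text</b>")
                    .setAnalyzer("myAnalyzer1")
                    .get();
            for (AnalyzeResponse.AnalyzeToken token : resp.getTokens()) {
                System.out.println(token.getTerm() + " @" + token.getPosition());
            }
            client.close();
        }
    }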
The configuration above is a fairly complete example of analyzer configuration; it covers almost all of the components you are likely to need. In practice, all we have to do is pick the components we need, assemble them into a custom analyzer, and then use it.
The above configuration goes into the elasticsearch.yml file and takes effect globally; after that, we can reference the analyzer in a static or dynamic mapping.
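For example, referencing the globally defined analyzer from a mapping can look like the following sketch, again with the 2.x Java client (the index name myindex, the type article, and the field title are hypothetical; the Client is a connected instance like the one built in the previous sketch):

    import org.elasticsearch.client.Client;

    public class MappingDemo {
        // create an index whose "title" field is analyzed with our custom analyzer
        static void createIndexWithAnalyzer(Client client) {
            client.admin().indices().prepareCreate("myindex")
                    .addMapping("article",
                            "{\"article\":{\"properties\":{"
                          + "\"title\":{\"type\":\"string\",\"analyzer\":\"myAnalyzer1\"}}}}")
                    .get();
        }
    }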
Reference Links:
https://www.elastic.co/guide/en/elasticsearch/reference/2.1/analysis-custom-analyzer.html
This article is from the "7936494" blog; please keep this source: http://7946494.blog.51cto.com/7936494/1716107