Analysis and Analyzer
Analysis:
1. Tokenization: splitting the text into terms suitable for building an inverted index.
2. Normalization (normalizing): unifying case, expanding abbreviations, and so on, to improve searchability.
Analyzer:
An analyzer applies analysis to a document. It consists of three parts:
- Character filter: preprocesses the raw text before tokenization, e.g. stripping HTML tags or converting & to and.
- Tokenizer: splits the text into the terms that make up the inverted index.
- Token filter: transforms the terms, e.g. lowercasing, removing stop words, and adding synonyms.
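The three stages above can be sketched as a small pipeline. This is a simplified illustration in Python, not Elasticsearch's actual implementation; the specific filter rules and stop-word list are example assumptions:

```python
import re

def char_filter(text):
    # Character filter: strip HTML tags and expand "&" before tokenizing.
    text = re.sub(r"<[^>]+>", "", text)
    return text.replace("&", " and ")

def tokenizer(text):
    # Tokenizer: split the cleaned text into candidate terms.
    return re.findall(r"[A-Za-z0-9]+", text)

STOP_WORDS = {"the", "a", "an"}  # example stop-word list

def token_filters(tokens):
    # Token filters: lowercase each term, then drop stop words.
    return [t.lower() for t in tokens if t.lower() not in STOP_WORDS]

def analyze(text):
    # Full analyzer: character filter -> tokenizer -> token filters.
    return token_filters(tokenizer(char_filter(text)))

print(analyze("<b>The Quick</b> Brown & Fox"))
# → ['quick', 'brown', 'and', 'fox']
```

Note how each stage only sees the output of the previous one, which is exactly how Elasticsearch chains the three parts of an analyzer.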
Built-in Analyzers:
Standard Analyzer:
The default analyzer. It splits text on word boundaries, filters out most punctuation, and finally lowercases each term.
Simple Analyzer:
Splits the text whenever it encounters a non-letter character, and lowercases each term.
Whitespace Analyzer:
Splits on whitespace only; it does not lowercase terms.
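The differences between these three built-in analyzers can be approximated as follows. This is a rough sketch: the real standard analyzer uses Unicode text segmentation, not a simple regex:

```python
import re

def standard(text):
    # Standard: split on word boundaries, drop punctuation, lowercase.
    return [t.lower() for t in re.findall(r"\w+", text)]

def simple(text):
    # Simple: split on any non-letter character, lowercase.
    return [t.lower() for t in re.split(r"[^A-Za-z]+", text) if t]

def whitespace(text):
    # Whitespace: split on whitespace only, keep the original case.
    return text.split()

s = "It's a Test-2"
print(standard(s))    # → ['it', 's', 'a', 'test', '2']
print(simple(s))      # → ['it', 's', 'a', 'test']
print(whitespace(s))  # → ["It's", 'a', 'Test-2']
```

The same input yields different term lists, which is why the choice of analyzer directly affects what a query can match.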
N-gram Analyzer:
The N-gram tokenizer supports fuzzy-style matching: it can be used for infix search and also to implement auto-completion. This is because N-gram splits a word like "abcd" into: ab, abc, abcd, bc, bcd, cd.
Building an inverted index with the N-gram tokenizer can significantly increase index size; if you only need prefix completion, choose Edge N-gram instead.
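The expansion described above can be sketched as follows. This is a simplified illustration; the min_gram/max_gram values are example assumptions, and Elasticsearch's tokenizer may emit the grams in a different order:

```python
def ngrams(term, min_gram=2, max_gram=4):
    # Emit every substring whose length is between min_gram and max_gram.
    out = []
    for start in range(len(term)):
        for size in range(min_gram, max_gram + 1):
            if start + size <= len(term):
                out.append(term[start:start + size])
    return out

def edge_ngrams(term, min_gram=2, max_gram=4):
    # Edge n-grams: only substrings anchored at the start of the term,
    # which is enough for prefix completion and keeps the index smaller.
    return [term[:size] for size in range(min_gram, min(max_gram, len(term)) + 1)]

print(ngrams("abcd"))       # → ['ab', 'abc', 'abcd', 'bc', 'bcd', 'cd']
print(edge_ngrams("abcd"))  # → ['ab', 'abc', 'abcd']
```

Comparing the two outputs shows why Edge N-gram produces a much smaller index: it keeps only the prefix grams.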
Elasticsearch's built-in ngram tokenizer can be configured to implement an N-gram analyzer as follows:
"settings": {
  "analysis": {
    "analyzer": {
      "ngram_2_10": { "tokenizer": "ngram_2_10_tokenizer" }
    },
    "tokenizer": {
      "ngram_2_10_tokenizer": {
        "type": "ngram",
        "min_gram": "2",
        "max_gram": "10",
        "token_chars": ["letter", "digit"]
      }
    }
  }
}