At present, word-segmentation performance is poor: only 1.65 m/s. Colleagues made some optimizations without changing the main algorithm, raising it to 3.52 m/s, but the improvement is still not significant enough. I think the following problems have to be solved:
1. At search time, the query keywords are also segmented at multiple granularities; each granularity is matched with its own SloppyPhrase sub-query, and the sub-queries are combined with OR. Because fine-grained terms have low frequency, their IDF is high, so they get higher priority in ranking, which matches expectations. Segmenting at only one granularity and looking just that up in the index is not realistic. This approach is not demanding at index time and the extra overhead is small, so segmenting at three granularities is acceptable; even setting ranking aside, both precision and recall come out higher. Index-time segmentation can then focus purely on cutting words at the different granularities, without complicated logic such as merging after cutting, and can therefore pursue higher performance.
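The multi-granularity search described above can be sketched as follows. This is a toy illustration, not the actual system: the three stand-in segmenters and the phrase-query string syntax (quoted phrase with `~slop`, Lucene-style) are assumptions made for the example.

```python
def segment(text, granularity):
    # Stand-in segmenters: "coarse" keeps the whole string, "medium"
    # cuts into pairs, "fine" cuts into single characters. A real
    # system would use dictionary-based segmentation per granularity.
    if granularity == "coarse":
        return [text]
    if granularity == "medium":
        return [text[i:i + 2] for i in range(0, len(text), 2)]
    return list(text)

def build_query(keywords, slop=2):
    # One sloppy-phrase sub-query per granularity, OR-ed together.
    subqueries = []
    for g in ("coarse", "medium", "fine"):
        terms = segment(keywords, g)
        subqueries.append('"%s"~%d' % (" ".join(terms), slop))
    return " OR ".join(subqueries)

print(build_query("abcd"))
# '"abcd"~2 OR "ab cd"~2 OR "a b c d"~2'
```

The fine-grained sub-query recalls more documents, while the coarse-grained terms, being rarer, carry higher IDF and push exact matches up the ranking, which is the behavior the note above relies on.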
2. Use a trie (or FST) uniformly for dictionary lookup. The trie can be rebuilt and swapped in periodically, which avoids doing prefix queries against a hash table. A trie is effectively a state machine, so it performs better for this access pattern.
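A minimal trie sketch of the idea: prefix lookup is a single walk through the state machine, one transition per character, whereas a hash table would need a separate probe for every candidate prefix length. This dict-of-dicts structure is illustrative only; a production dictionary would use an FST or array-packed trie for memory efficiency.

```python
class Trie:
    """Minimal trie: each node is a dict of child edges."""

    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-word marker

    def has_prefix(self, prefix):
        # One state transition per character; no re-hashing per length.
        node = self.root
        for ch in prefix:
            if ch not in node:
                return False
            node = node[ch]
        return True

    def contains(self, word):
        node = self.root
        for ch in word:
            if ch not in node:
                return False
            node = node[ch]
        return "$" in node

t = Trie()
for w in ("search", "seat", "sea"):
    t.insert(w)
print(t.has_prefix("sea"))  # True
print(t.contains("seat"))   # True
print(t.contains("se"))     # False
```

For the periodic-update requirement, the simplest safe scheme is to build a fresh trie from the new dictionary off to the side and atomically swap the root reference, so readers never see a half-updated structure.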
3. Set up performance, precision, and recall metrics. The current algorithm is complex; if another algorithm integrates better on these metrics, it can be swapped in.
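A sketch of the evaluation harness this implies: score a candidate segmenter against labeled reference segmentations for precision and recall, and time it for throughput. The function names, the labeled `cases` data, and the set-based scoring are all assumptions for illustration; a real harness would compare word-boundary positions rather than bags of tokens.

```python
import time

def evaluate(segment, cases):
    """Return (precision, recall, chars_per_second) for a segmenter."""
    tp = fp = fn = 0
    chars = 0
    start = time.perf_counter()
    for text, expected in cases:
        got = set(segment(text))
        want = set(expected)
        tp += len(got & want)   # tokens the segmenter got right
        fp += len(got - want)   # spurious tokens
        fn += len(want - got)   # missed tokens
        chars += len(text)
    elapsed = time.perf_counter() - start
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall, chars / elapsed

# Toy segmenter and labeled cases, purely for demonstration:
cases = [("abcd", ["ab", "cd"]), ("ef", ["ef"])]
pair_cut = lambda s: [s[i:i + 2] for i in range(0, len(s), 2)]
p, r, _ = evaluate(pair_cut, cases)
print(p, r)  # 1.0 1.0
```

With a fixed benchmark like this in place, replacing the complex current algorithm becomes a measurable decision instead of a judgment call.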
4. Currently, the position increment gap is not taken into account, which is not rigorous: without a gap between field values, a phrase query can falsely match across a value boundary.
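The failure mode can be shown with a small sketch. For a multi-valued field, jumping token positions by a large gap between values keeps a phrase query from matching across the boundary (the same idea as Lucene's `positionIncrementGap`). The gap value of 100, the tokenizer, and the phrase matcher here are all illustrative assumptions.

```python
GAP = 100  # assumed gap between consecutive field values

def positions(values, gap=GAP):
    """Assign a position to each token, jumping by `gap` between values."""
    pos, out = 0, []
    for value in values:
        for token in value.split():
            out.append((token, pos))
            pos += 1
        pos += gap - 1  # insert the gap after each field value
    return out

def phrase_match(tokens, phrase):
    """Exact phrase check: terms must sit at consecutive positions."""
    index = {}
    for tok, p in tokens:
        index.setdefault(tok, []).append(p)
    for p in index.get(phrase[0], []):
        if all(p + i in index.get(t, []) for i, t in enumerate(phrase[1:], 1)):
            return True
    return False

toks = positions(["quick fox", "jumps high"])
print(phrase_match(toks, ["fox", "jumps"]))  # False: gap blocks it
print(phrase_match(toks, ["quick", "fox"]))  # True
```

With `gap=1` (i.e. no gap), `["fox", "jumps"]` would match even though the two words come from different field values; that is exactly the error the note above warns about.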