How Should Redis Be Used? (2): Redis Exploration



In the previous article I briefly described using KEYS as a way to search Redis. The community response showed me both its strength and the value of writing things up, so first of all, thank you for the guidance. I now understand that using Redis as a search engine is the wrong direction, and I considered not posting this follow-up at all; but even a wrong turn is worth writing down for beginners to think about.

In this article I will cover two approaches: building a key index through word segmentation, and the Redis SCAN family of commands.

Note: neither of these search methods is guaranteed to be feasible. Test and measure in your specific scenario; treat any search built on Redis with caution, or avoid it outright.

In addition, I suggest reading the comments on the previous article. The experienced commenters shared a lot of advice, and some of it may be hard to follow, so let me first summarize the key points:

1. I used StackExchange.Redis instead of ServiceStack.Redis. The latter is a good tool, but it became commercial starting with version 4.0, and the free 3.9 version is not very complete and has some shortcomings.

2. Some people suggested GetAll and similar methods. I think we should not cache with StringSet(list)/StringGet(list), that is, serializing an entire list into one value: if the data volume is large, deserialization takes too long. What do you think? Personally I believe each record should be its own key-value pair rather than being saved as one whole blob; otherwise, where is the efficiency?

3. The KEYS fuzzy matching from the previous article should never be used in practice. KEYS blocks the single-threaded Redis server while it walks the entire keyspace and drives up CPU usage, which is very bad.

Word Segmentation Index Method

After my own practice, and judging by the perspectives the commenters offered on the previous article, this is the only method that is feasible and fits Redis's characteristics. Even so, the final performance is no better than a plain in-memory search.

For the detailed idea, refer to the Redis author's blog post on autocomplete (reference 1). The example here indexes UserName in English and only builds prefixes up to three characters; extend it yourself for other scenarios.

Following the autocomplete approach of searching by leading letters, we first split every name into its prefixes:

abc => (a, ab, abc)
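This prefix expansion can be sketched as a small standalone helper. This is an illustrative sketch only; PrefixesOf is a name I made up, not something from the original code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class PrefixIndex
{
    // Generate the lower-cased letter prefixes of a name, capped at maxLen
    // characters, e.g. "Abc" -> ["a", "ab", "abc"].
    public static List<string> PrefixesOf(string name, int maxLen = 3)
    {
        var n = name.ToLowerInvariant();
        return Enumerable.Range(1, Math.Min(maxLen, n.Length))
                         .Select(len => n.Substring(0, len))
                         .ToList();
    }

    public static void Main()
    {
        Console.WriteLine(string.Join(",", PrefixesOf("Abc"))); // a,ab,abc
    }
}
```

Each prefix then becomes one Redis Set key, and the user's ids are the members.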

Each prefix forms a Set:

Then when the user types a, we read the members of set a directly; when they type ab, we read set ab. First, we segment the names in the User table:

var redis = ConnectionMultiplexer.Connect("localhost");
var db = redis.GetDatabase();
// dbCon is the OrmLite database connection, created elsewhere
for (var i = 1; i < 4; i++)
{
    var data = dbCon.Lookup<string, int>(string.Format(@"
        select words, id from (
            select Row_number() over (partition by words order by name) as rn, id, words from (
                select id, SUBSTRING(name, 1, {0}) as words, name from User
            ) as t
        ) t2 where rn <= {1} and words != '' and words is not null", i, 20));
    data.ForEach((key, item) =>
    {
        db.SetAdd("capqueen:Cache:user:" + key.ToLower(),
                  item.Select<int, RedisValue>(j => j).ToArray());
    });
}

 

Step 1: use SQL to keep only the first 20 rows for each prefix, via grouping (partition) and sorting. The OrmLite syntax is used here.

Step 2: store the Redis Sets. Note that this only creates an index; the actual User content is not saved here.

Then we can implement the following when searching:

public List<User> SearchWords(string keywords)
{
    var redis = ConnectionMultiplexer.Connect("localhost");
    var db = redis.GetDatabase();
    var result = db.SetMembers("capqueen:Cache:user:" + keywords.ToLower());
    var users = new List<User>();
    if (result.Any())
    {
        // convert to ids
        var ids = result.ToList().Select<RedisValue, RedisKey>(i => i.ToString());
        // fetch the values by key; the User JSON was stored in advance
        var values = db.StringGet(ids.ToArray());
        // build one JSON array string so the whole list is parsed in a single pass
        var portsJson = new StringBuilder("[");
        values.ToList().ForEach(item =>
        {
            if (!string.IsNullOrWhiteSpace(item))
            {
                portsJson.Append(item).Append(",");
            }
        });
        if (portsJson[portsJson.Length - 1] == ',') portsJson.Length--; // drop trailing comma
        portsJson.Append("]");
        users = JsonConvert.DeserializeObject<List<User>>(portsJson.ToString());
    }
    return users;
}
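As a side note, the "[" + items + "]" concatenation trick used when searching can be isolated into a small helper. This is an illustrative sketch, not part of the original code; BuildJsonArray is a hypothetical name, and it assumes each cached value is already a serialized JSON object:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

static class JsonListHelper
{
    // Concatenate already-serialized JSON objects into one JSON array string,
    // skipping blank entries and trimming the trailing comma.
    public static string BuildJsonArray(IEnumerable<string> jsonItems)
    {
        var sb = new StringBuilder("[");
        foreach (var item in jsonItems.Where(s => !string.IsNullOrWhiteSpace(s)))
            sb.Append(item).Append(',');
        if (sb[sb.Length - 1] == ',') sb.Length--; // drop trailing comma
        return sb.Append(']').ToString();
    }

    public static void Main()
    {
        Console.WriteLine(BuildJsonArray(new[] { "{\"id\":1}", "", "{\"id\":2}" }));
        // [{"id":1},{"id":2}]
    }
}
```

Trimming the trailing comma matters because not every JSON parser tolerates "[{...},]".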

 

In actual tests this approach is indeed better than the earlier KEYS version, but the performance is still unsatisfactory.

Scan Search Method

I found this method while reading the Redis documentation, but it too is only an experiment and cannot be used for large-scale queries in production.

SCAN comes in variants for the different data structures: SCAN, HSCAN, SSCAN, and ZSCAN. For details see the documentation. Here we use ZSCAN:

ZSCAN key cursor [MATCH pattern] [COUNT count]

Here, cursor is the iteration cursor: start with 0, pass the cursor returned by each call into the next call, and the iteration is finished when the server returns a cursor of 0. pattern is the match rule, and count is a hint for how many elements to examine per call, not an exact page size.
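To make the cursor contract concrete, here is a minimal in-memory stand-in. Note that real Redis cursors are opaque tokens, not offsets, and pages may overlap during rehashing; FakeScan is purely illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class ScanDemo
{
    // In-memory stand-in for the SCAN cursor contract (not real Redis):
    // returns one page of items plus the cursor to pass into the next call;
    // a returned cursor of 0 means the iteration is complete.
    public static (long next, List<string> page) FakeScan(
        IReadOnlyList<string> items, long cursor, int count)
    {
        var page = items.Skip((int)cursor).Take(count).ToList();
        var next = cursor + page.Count;
        return (next >= items.Count ? 0 : next, page);
    }

    public static void Main()
    {
        var data = new[] { "ann", "bob", "cara", "dan", "eve" };
        var all = new List<string>();
        long cursor = 0;
        do
        {
            var (next, page) = FakeScan(data, cursor, 2);
            all.AddRange(page);
            cursor = next;
        } while (cursor != 0); // stop when the cursor comes back as 0
        Console.WriteLine(string.Join(",", all)); // every item visited once
    }
}
```

The do/while shape is the important part: the client keeps no state other than the cursor the server handed back.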

 

Since I am using StackExchange.Redis, the ZSCAN method it provides is:

IEnumerable<SortedSetEntry> SortedSetScan(RedisKey key, RedisValue pattern = default(RedisValue), int pageSize = 10, long cursor = 0, int pageOffset = 0, CommandFlags flags = CommandFlags.None);

After using it, I found that pageSize/pageOffset seem to have no effect, so I raised this with the author on GitHub and he gave some explanation:

https://github.com/StackExchang (my English is poor, please take a look at the thread yourself).

 

 

public void CreateTerminalCache(List<User> users)
{
    if (users == null) return;
    var redis = ConnectionMultiplexer.Connect("localhost");
    var db = redis.GetDatabase();
    var sourceData = new List<KeyValuePair<RedisKey, RedisValue>>();
    // build the index entries
    var list = users.Select(item =>
    {
        var value = JsonConvert.SerializeObject(item);
        // original data: one key-value per user
        sourceData.Add(new KeyValuePair<RedisKey, RedisValue>("capqueen:users:" + item.Id, value));
        // index data: name as the member, id as the score
        return new SortedSetEntry(item.Name, item.Id);
    });
    // add the name-id pairs to a sorted set as the index
    db.SortedSetAdd("capqueen:users:index", list.ToArray());
    // save the user data key-values
    db.StringSet(sourceData.ToArray(), When.Always, CommandFlags.None);
}

 

The search is implemented as follows:

public List<User> GetUserByWord(string words)
{
    var redis = ConnectionMultiplexer.Connect("localhost");
    var db = redis.GetDatabase();
    // scan the index sorted set with a prefix pattern
    var result = db.SortedSetScan("capqueen:users:index", words + "*", 10, 1, 30, CommandFlags.None)
                   .Take(30)
                   .ToList();
    var users = new List<User>();
    if (result.Any())
    {
        // the entry's score is the user id; map it back to the data key
        var ids = result.Select<SortedSetEntry, RedisKey>(i => "capqueen:users:" + (int)i.Score);
        // fetch the values stored under those keys
        var values = db.StringGet(ids.ToArray());
        // build one JSON array string so the whole list is parsed in a single pass
        var portsJson = new StringBuilder("[");
        values.ToList().ForEach(item =>
        {
            if (!string.IsNullOrWhiteSpace(item))
            {
                portsJson.Append(item).Append(",");
            }
        });
        if (portsJson[portsJson.Length - 1] == ',') portsJson.Length--; // drop trailing comma
        portsJson.Append("]");
        users = JsonConvert.DeserializeObject<List<User>>(portsJson.ToString());
    }
    return users;
}

 

Summary

In general, this research and the commenters' guidance have taught me quite a bit about Redis. The autocomplete scenario is really not a good fit for Redis; it may be that using Redis for search is simply premature, and I look forward to related features in the future. The previous article received some great comments, and I hope you can learn from them too.

References
