Related knowledge points for Redis


1. Why Use Redis

Using Redis mainly comes down to two considerations: performance and concurrency. Of course, Redis also offers other capabilities, such as distributed locks, but if distributed locking is all you need, there is other middleware (ZooKeeper, for example) that can take its place, and Redis is not strictly necessary. So this question is mainly answered from the two angles of performance and concurrency.

Answer: as shown below, in two points.
(i) Performance
When we encounter SQL that takes a long time to execute and whose results do not change often, it is a particularly good fit to put the results into the cache. Subsequent requests then read from the cache, so they respond quickly.

Off topic: a sudden urge to talk about this standard of rapid response. In truth, the response time has no fixed standard; it depends on the interaction. But someone once told me: "Ideally, a page jump should complete in an instant, and an in-page operation should resolve within a moment." Beyond that, any operation that takes longer than an instant should show a progress indicator and be stoppable or cancelable at any time, to give the user the best experience.

(ii) Concurrency
Under heavy concurrency, if every request hits the database directly, the database connections will fail. At this point Redis is needed as a buffer, so that requests access Redis first instead of hitting the database directly.
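A minimal cache-aside sketch with the redis-py client; the query_db helper, key name, and TTL are hypothetical, for illustration only:

    import json
    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    def get_user(user_id):
        # Try the cache first; only fall back to the database on a miss.
        key = f"user:{user_id}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)
        user = query_db(user_id)                 # hypothetical slow SQL query
        r.set(key, json.dumps(user), ex=600)     # cache the result with a TTL
        return user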

2. What are the drawbacks of using Redis?

Analysis: Anyone who has used Redis for a while needs to understand this question; basically everyone who uses Redis runs into some of these problems, and the common ones come down to a few.
Answer: four main issues
(i) Cache and database double-write consistency
(ii) Cache avalanche
(iii) Cache penetration
(iv) Concurrent competition for cache keys

3. Why is single-threaded Redis so fast?

Analysis: This question is really a review of Redis's internal mechanics. Many people do not realize that Redis uses a single-threaded work model.
Answer: mainly the following three points
(i) Pure memory operation
(ii) Single-threaded operation avoids frequent context switching
(iii) Use of non-blocking I/O multiplexing mechanisms

In simple terms: when a redis-client operates, it produces socket events of different types. On the server side, an I/O multiplexing program listens to these sockets and places the events in a queue. A file event dispatcher then takes events from the queue in turn and forwards each one to the matching event handler.

Note that for this I/O multiplexing mechanism, Redis ships with several multiplexing libraries (select, epoll, evport, kqueue, and so on); they are worth reading up on.
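To make the multiplexing idea concrete, here is a minimal single-threaded echo loop using Python's standard selectors module. It only illustrates the pattern (one thread watching many sockets); it is not Redis's actual implementation:

    import selectors
    import socket

    sel = selectors.DefaultSelector()    # picks epoll/kqueue/select per OS

    server = socket.socket()
    server.bind(("localhost", 7000))
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ)

    while True:
        # The single thread blocks here until some socket is ready.
        for key, _ in sel.select():
            sock = key.fileobj
            if sock is server:
                conn, _ = server.accept()        # connection event
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
            else:
                data = sock.recv(1024)           # request event
                if data:
                    sock.sendall(data)           # echo stands in for a reply
                else:
                    sel.unregister(sock)
                    sock.close()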

4. Redis data types, and usage scenarios for each data type

Answer: five in total (a combined code example follows the list).
(i) String
There is really not much to say here: the most common set/get operations, where the value can be either a string or a number. Generally used for ordinary caching and for complex counting features.
(ii) Hash
Here the value is a structured object, and the convenient part is that you can manipulate a single field of it. For single sign-on, the blogger used this data structure to store user information, with the cookie ID as the key and a 30-minute cache expiration time, which simulates a session-like effect nicely.
(iii) List
With the list data structure, you can build a simple message queue. You can also use the lrange command to build Redis-backed pagination, with excellent performance and user experience.
(iv) Set
Because a set is a collection of distinct values, it can provide global deduplication. Why not use the Set that comes with the JVM? Because our systems are generally deployed as clusters, deduplicating globally with the JVM's own Set is troublesome, and standing up a public service just for that is too much trouble.
In addition, with intersection, union, and difference operations you can compute things such as common preferences, combined preferences, and each user's own unique preferences.
(v) Sorted set
A sorted set gives each element a weight parameter, score, and orders the elements by score. It can power leaderboard applications and top-N operations. A sorted set can also be used for delayed tasks. The last application is range lookups.
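A short redis-py sketch touching each of the five types; the key names and values are made up for illustration:

    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    # String: plain cache entries and counters
    r.set("page:home", "<html>...</html>", ex=600)
    r.incr("counter:visits")

    # Hash: one object with individually addressable fields (session-like)
    r.hset("user:cookie123", mapping={"name": "alice", "role": "admin"})
    r.expire("user:cookie123", 1800)         # 30-minute "session"

    # List: simple message queue, and pagination via lrange
    r.lpush("queue:jobs", "job1", "job2")
    page = r.lrange("queue:jobs", 0, 9)      # first page of 10 items

    # Set: global deduplication plus intersection/union/difference
    r.sadd("likes:alice", "redis", "mysql")
    r.sadd("likes:bob", "redis", "mongodb")
    common = r.sinter("likes:alice", "likes:bob")

    # Sorted set: leaderboard, top-N by score
    r.zadd("leaderboard", {"alice": 95, "bob": 87})
    top3 = r.zrevrange("leaderboard", 0, 2, withscores=True)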

5. Redis expiration policy and memory eviction mechanism

Analysis: This question is actually very important; it shows whether you have really put Redis to full use. For example, your Redis can only hold 5G of data, but you write 10G, so 5G of data has to be removed. How is it removed? Have you thought about that? Also, your data has expiration times set, but when the time is up, memory usage is still relatively high. Have you thought about the reason?
Answer:
Redis uses a periodic delete + lazy delete strategy.
Why not a timed delete strategy?
Timed deletion uses a timer to watch each key and deletes it automatically on expiry. Although this releases memory promptly, it consumes CPU resources. Under heavy concurrent requests the CPU should spend its time processing requests rather than deleting keys, so this strategy is not adopted.
How do periodic delete + lazy delete work?
Periodic deletion: by default Redis checks every 100ms for expired keys and deletes any that have expired. Note that Redis does not check every key on each 100ms pass; it checks a random sample (if it scanned all keys every 100ms, Redis would grind to a halt). As a result, a periodic delete policy alone would leave many expired keys undeleted.
That is where lazy deletion comes in handy: when you fetch a key that has an expiration time set, Redis checks whether it has expired; if so, it is deleted at that moment.
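A tiny redis-py demonstration of expiry as the client observes it (from the client's point of view an expired key is simply gone, whichever deletion path removed it):

    import time
    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    r.set("token", "abc123", ex=2)   # expire after 2 seconds
    print(r.ttl("token"))            # remaining lifetime in seconds, e.g. 2
    time.sleep(3)
    print(r.get("token"))            # None: expired and deleted on access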
Are there still problems with periodic delete + lazy delete?
Yes. If periodic deletion misses a key, and you never request that key afterwards, lazy deletion never takes effect either. In this way Redis's memory usage climbs and climbs. That is when a memory eviction mechanism must be adopted.
One line of configuration in redis.conf:

maxmemory-policy volatile-lru

This configuration sets the memory eviction policy (what, you haven't configured it? Go take a good look at your setup).
1) noeviction: when memory is insufficient to hold newly written data, new write operations return an error. Nobody should really use this.
2) allkeys-lru: when memory is insufficient to hold newly written data, remove the least recently used key from the whole key space. Recommended, and what this project currently uses.
3) allkeys-random: when memory is insufficient to hold newly written data, remove a random key from the whole key space. It should not be used: why delete a random key rather than the least recently used one?
4) volatile-lru: when memory is insufficient to hold newly written data, remove the least recently used key from among the keys with an expiration time set. This generally applies when Redis serves as both a cache and a persistent store. Not recommended.
5) volatile-random: when memory is insufficient to hold newly written data, remove a random key from among the keys with an expiration time set. Still not recommended.
6) volatile-ttl: when memory is insufficient to hold newly written data, remove the key with the nearest expiration time from among the keys with an expiration time set. Not recommended.
PS: if no key has an expiration set, the precondition for volatile-lru, volatile-random, and volatile-ttl is never met, and those policies behave essentially the same as noeviction (no deletion).

6. Redis and database double-write consistency

Analysis: Consistency problems come up frequently in distributed systems and can be divided into eventual consistency and strong consistency. When the database and the cache are both written, inconsistency is inevitable. To answer this question, first accept one premise: if the data has strong consistency requirements, it cannot be cached. Everything we do can only guarantee eventual consistency. Fundamentally, our solutions can only reduce the probability of inconsistency, never eliminate it entirely. So once more: data with strong consistency requirements must not be put in the cache.
First, adopt the correct update strategy: update the database first, then delete the cache. Second, because deleting the cache may fail, provide a compensation measure, for example retrying the deletion through a message queue.
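A sketch of that write path; db_update and enqueue_cache_delete are hypothetical placeholders for the database write and the message-queue producer:

    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    def update_user(user_id, fields):
        db_update(user_id, fields)           # hypothetical: database first
        try:
            r.delete(f"user:{user_id}")      # then invalidate the cache
        except redis.RedisError:
            # Compensation: hand the failed invalidation to a message queue
            # so a consumer can retry the delete until it succeeds.
            enqueue_cache_delete(f"user:{user_id}")   # hypothetical helper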

7. How to deal with cache penetration and cache avalanche issues

Analysis: To tell the truth, small and medium-sized traditional software companies rarely run into these two problems. But for projects with heavy concurrency, with traffic on the order of millions, these two problems must be thought through carefully.
Answer: as shown below
Cache penetration: an attacker deliberately requests data that does not exist in the cache, so every request lands on the database, causing database connection failures.
Solution:
(i) Use a mutex: on a cache miss, acquire a lock first; whoever gets the lock then queries the database. Requests that fail to get the lock sleep for a while and retry (see the sketch after this list).
(ii) Adopt an asynchronous update strategy: return directly whether or not the key yields a value. A cache expiration time is maintained inside the value; if it has passed, a thread is started asynchronously to read the database and refresh the cache. This requires cache warming (loading the cache before the project starts).
(iii) Provide an interception mechanism that quickly judges whether a request is valid, for example by using a Bloom filter to maintain the set of legal keys internally; an illegal key is rejected immediately.
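A minimal sketch of solution (i), using Redis's own SET with NX and EX as the mutex; query_db, the key names, and the timings are illustrative assumptions:

    import json
    import time
    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    def get_with_mutex(key):
        while True:
            cached = r.get(key)
            if cached is not None:
                return json.loads(cached)
            # Only one caller wins the lock and rebuilds the cache entry.
            if r.set(f"lock:{key}", 1, nx=True, ex=3):
                try:
                    value = query_db(key)            # hypothetical DB read
                    r.set(key, json.dumps(value), ex=600)
                    return value
                finally:
                    r.delete(f"lock:{key}")
            time.sleep(0.05)                         # lost the race: retry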
Cache avalanche: a large portion of the cache expires at the same moment, just as another wave of requests arrives, so every request lands on the database, causing database connection failures.
Solution:
(i) Add a random value to each cache entry's expiration time to avoid collective expiry.
(ii) Use a mutex, as above, though that solution noticeably reduces throughput.
(iii) Double caching: keep two caches, cache A and cache B. Cache A has an expiration time of 20 minutes; cache B has no expiration time. Do the cache warm-up operation yourself. The read path then breaks down as follows (see the sketch after this list):

    • I. Read from cache A; if the data is there, return it directly.
    • II. If A has no data, read the data from B, return it directly, and asynchronously start an update thread.
    • III. The update thread updates both cache A and cache B.
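A sketch of that dual-cache read path, with a plain thread standing in for the async update; fetch_from_db is a hypothetical helper, and the jittered TTL from solution (i) is applied to cache A:

    import json
    import random
    import threading
    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    def rebuild(key):
        payload = json.dumps(fetch_from_db(key))   # hypothetical DB read
        # Jittered TTL on cache A avoids collective expiry.
        r.set(f"A:{key}", payload, ex=20 * 60 + random.randint(0, 300))
        r.set(f"B:{key}", payload)                 # cache B never expires

    def get_dual(key):
        value = r.get(f"A:{key}")
        if value is not None:                      # I. hit on A
            return json.loads(value)
        backup = r.get(f"B:{key}")                 # II. fall back to B...
        threading.Thread(target=rebuild, args=(key,)).start()  # ...update async
        return json.loads(backup) if backup is not None else None
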
8. How to solve the concurrent key competition problem in Redis

Analysis: The problem is roughly this: multiple subsystems set the same key. What should we pay attention to here? Have you thought about it? The blogger searched Baidu in advance and found that the recommended answer is basically to use Redis's transaction mechanism. The blogger does not recommend it: our production environment is basically a Redis cluster environment in which the data has been sharded, and when a transaction involves operations on multiple keys, those keys are not necessarily stored on the same redis-server, so Redis's transaction mechanism is of little practical use.
Answer: as shown below
(1) If the operations on this key do not require an order
In this case, prepare a distributed lock, have everyone contend for it, and whoever grabs the lock does the set operation. Relatively simple.
(2) If the operations on this key require an order
Suppose there is a key1: system A needs to set key1 to valueA, system B needs to set key1 to valueB, and system C needs to set key1 to valueC.
The value of key1 is expected to change in the order valueA --> valueB --> valueC. In that case, save a timestamp when writing the data to the database. Suppose the timestamps are as follows:

System A: key1 {valueA 3:00}
System B: key1 {valueB 3:05}
System C: key1 {valueC 3:10}

Now suppose system B grabs the lock first and sets key1 to {valueB 3:05}. When system A grabs the lock next, it finds that valueA's timestamp is earlier than the timestamp already in the cache, so it does not perform the set operation. And so on.
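A sketch of that timestamp check performed under the distributed lock; the lock helper is simplified (production code also needs an owner token so one client cannot release another's lock), and storing {value, ts} as JSON is an assumption. Real code would compare numeric epoch timestamps rather than the clock strings from the example above:

    import json
    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    def set_if_newer(key, value, ts):
        if not r.set(f"lock:{key}", 1, nx=True, ex=3):
            return False                           # did not grab the lock
        try:
            current = r.get(key)
            if current is not None and json.loads(current)["ts"] >= ts:
                return False                       # cached value is newer: skip
            r.set(key, json.dumps({"value": value, "ts": ts}))
            return True
        finally:
            r.delete(f"lock:{key}")

    # System B writes first; system A's older write is then rejected.
    set_if_newer("key1", "valueB", "3:05")
    set_if_newer("key1", "valueA", "3:00")         # returns False: too old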
