Recently, whenever a new project is introduced, the word Redis keeps coming up. Many people search for it, conclude it is just a cache, and then stuff all of the project's data into it, sometimes with odd usages such as splitting an entity's attributes into a Redis hash. Why does this happen? Because Redis is hot, and a project that uses Redis looks impressive, so it gets bolted on whether or not it fits. What I want to say is: understand why Redis is so popular and how it should actually be used. Below is a simple analysis based on my own limited experience:
I. Redis Performance
Redis is an in-memory, hash-structured, cache-oriented database. Compared with a traditional database such as MySQL, its read performance for values around 1 KB is roughly 10 to 100 times higher, and the gap for writes is even larger. Some test data follows.
Testing Redis's SET and GET commands, a single-threaded Redis instance over a single connection handles roughly 20,000 operations per second. Testing MySQL's SELECT, INSERT, and DELETE statements, read performance is about 5,000 per second and write performance about 3,000 per second, so MySQL reads are roughly twice as fast as its writes. Since the Redis figure above is for a single instance over a single connection, scaling the number of instances with the CPU core count and using pipelining with multiple connections can easily push it up another order of magnitude. Other people have also published Redis performance tests online, for example: www.sohu.com/a/29865580_219700.
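Redis ships with a redis-benchmark tool for exactly this kind of measurement. As a rough illustration of the SET/GET test described above, here is a minimal Python sketch using the redis-py client; it assumes a local Redis on the default port, and the key names, value size, and request count are all illustrative:

```python
# Rough single-connection throughput check; numbers will vary with hardware.
import time
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

N = 100_000
start = time.perf_counter()
for i in range(N):
    r.set(f"bench:{i}", "x" * 1024)   # ~1 KB value, as in the test above
set_elapsed = time.perf_counter() - start

start = time.perf_counter()
for i in range(N):
    r.get(f"bench:{i}")
get_elapsed = time.perf_counter() - start

print(f"SET: {N / set_elapsed:,.0f} ops/s, GET: {N / get_elapsed:,.0f} ops/s")
```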
II. Redis as a Cache
The analysis above shows that Redis performance is very high; on comparable hardware it leaves traditional SQL databases, and even NoSQL stores such as MongoDB, far behind. Still, Redis is only a distributed cache, or at best a distributed cache database: it cannot hold several terabytes of data the way a traditional database can, so it normally holds hot or small-volume data, while the rest is pushed into a SQL database through a queue by a background service. Based on Redis's characteristics, it is also recommended to size deployments against the CPU core count and to cap each instance's memory: on a 24-core, 120 GB server, that works out to roughly 18 Redis instances with 5 GB of memory each, keeping actual data under 3 GB per instance. As a cache, Redis inherits the strengths and weaknesses of caching: high performance is the strength; cache penetration and cache avalanche are the weaknesses.
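As a rough way to keep an eye on the "actual data under 3 GB per instance" guideline, the sketch below reads each instance's used memory from INFO memory via redis-py; the host/port layout is an assumption and should be adapted to your own deployment:

```python
# Compare each instance's used memory with its configured maxmemory.
import redis

for port in range(6379, 6379 + 3):          # adjust to your instance layout
    r = redis.Redis(host="localhost", port=port)
    info = r.info("memory")
    used_gb = info["used_memory"] / 2**30
    max_gb = info.get("maxmemory", 0) / 2**30
    print(f"port {port}: used {used_gb:.2f} GB / maxmemory {max_gb:.2f} GB")
```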
1. Cache penetration: Cache penetration means querying data that definitely does not exist. Because the cache is only written passively on a miss, and because, for fault-tolerance reasons, a value the storage layer cannot find is never written to the cache, every request for this non-existent data goes all the way to the storage layer, defeating the purpose of the cache. Under heavy traffic, the DB may go down.
The solution is to cache whatever the DB returns, setting the expiration according to the actual situation; if the DB returns an empty result, cache that empty result too, but only for a few minutes.
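A minimal sketch of that approach with redis-py, assuming a local Redis; query_db, the key names, and the TTL values are hypothetical placeholders:

```python
# Read-through cache that also caches "not found" results for a short time,
# so repeated lookups of non-existent keys do not hammer the DB.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

EMPTY = "__EMPTY__"          # sentinel meaning "the DB has no such row"

def query_db(key):
    # hypothetical placeholder; replace with a real SQL query
    return None

def get_with_penetration_guard(key, ttl=600, empty_ttl=120):
    cached = r.get(key)
    if cached is not None:
        return None if cached == EMPTY.encode() else cached

    value = query_db(key)
    if value is None:
        r.setex(key, empty_ttl, EMPTY)   # cache the miss for a few minutes
        return None
    r.setex(key, ttl, value)
    return value
```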
2. Cache avalanche: When we set cache entries with the same expiration time, or the cache server becomes unavailable for some reason, a large part of the cache expires at the same moment. All requests are then forwarded to the DB, and the instantaneous pressure overwhelms it, like an avalanche.
Workarounds: add a random offset to expiration times so keys do not all expire together; use Redis Sentinel or Redis Cluster for high availability; and add a short-lived local (in-process) cache to bridge the gap while the Redis distributed cache recovers.
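The random-offset part of the workaround can be as simple as the following sketch (redis-py assumed; the base TTL and jitter window are illustrative):

```python
# Spread expirations with a random offset so keys warmed at the same time
# do not all expire together.
import random
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def set_with_jitter(key, value, base_ttl=3600, jitter=300):
    # each key lives base_ttl plus up to `jitter` extra seconds
    ttl = base_ttl + random.randint(0, jitter)
    r.setex(key, ttl, value)
```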
If cache penetration or a cache avalanche does occur, it is recommended to queue requests, reject the excess, and use a distributed mutex so that the back-end data services are not overwhelmed and existing data is not put at risk.
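One common way to build that distributed mutex is Redis's own SET command with the NX and EX options, so that only one request rebuilds a missing key while the others back off. The sketch below assumes redis-py; rebuild_from_db, the key names, and the timeouts are hypothetical:

```python
# Simple distributed mutex: only one caller rebuilds a missing key.
import time
import uuid
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def rebuild_from_db(key):
    return "fresh value"     # hypothetical placeholder for the real query

def get_with_mutex(key, ttl=600, lock_ttl=10):
    value = r.get(key)
    if value is not None:
        return value

    token = str(uuid.uuid4())
    # nx=True: acquire only if nobody else holds the lock; ex: auto-expire
    if r.set(f"lock:{key}", token, nx=True, ex=lock_ttl):
        try:
            value = rebuild_from_db(key)
            r.setex(key, ttl, value)
            return value
        finally:
            # release only our own lock (a Lua script would make this atomic)
            if r.get(f"lock:{key}") == token.encode():
                r.delete(f"lock:{key}")
    else:
        time.sleep(0.1)      # brief wait, then retry the cache once
        return r.get(key)
```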
III. Summary
Redis is powerful, whether deployed with Sentinel or as a Redis Cluster, but be sure to understand it well in order to use it well.