Redis's performance fantasies and brutal realities



In 2011 we chose Redis as our primary in-memory data store. Its main attraction for me was the variety of basic data structures it provides, which make it very convenient to implement business requirements. On the other hand, I was concerned about whether its performance would be sufficient to support us; after all, Redis was a relatively new open-source product at the time. The Redis website claimed it was a high-performance store offering multiple data structures, but at that point we could only imagine.

Fantasy

To get a sense of Redis's performance, let's first look at the official benchmark test data, so we have a baseline in mind.

- Redis version: 2.4.2
- Using the TCP loopback
- Payload size: 256 bytes
- Test results: SET 198,412.69 requests/s, GET 198,019.80 requests/s

These numbers were a bit better than I expected, but note the test premise: it sidesteps network overhead, with client and server on the same machine. Real-world usage inevitably goes over the network, and the client library differs as well. Still, the official reference data at the time gave us a lot of hope for Redis's performance.

The official documentation also mentions that in a LAN environment, as long as a packet does not exceed one MTU (about 1500 bytes on Ethernet), the actual throughput is roughly the same for 10-, 100-, and 1000-byte payloads. The relationship between throughput and data size can be seen in the chart on the official website.

Verification

Based on our real-world usage scenario, we built a performance verification environment and ran the following validation test (the data comes from a colleague @kusix's test report from that year; thanks).

- Redis version: 2.4.1
- JMeter version: 2.4
- Network: 1000 Mb
- Payload size: 100 bytes
- Test results: SET 32,643.4 requests/s, GET 32,478.8 requests/s

The numbers from this test environment felt far worse than the official ones, but because both the network and the client library come into play, there is no real apples-to-apples comparison; the measured data is only indicative for our actual production environment. In the test environment, the single Redis instance ran stably, with single-core CPU utilization fluctuating between 70% and 80%. Besides the 100-byte payload, we also measured 1k, 10k, and 100k payloads, as shown below:

Admittedly, 1k is basically the turning point for Redis performance, which is consistent with the trend in the official chart.
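Throughput figures like those above are just operations divided by elapsed time. Here is a minimal sketch of such a measurement harness, using a plain dict as a stand-in for a Redis client (so the printed numbers only exercise the harness, not real Redis throughput, which requires a real server and network):

```python
import time

def measure_ops(op, n: int = 100_000) -> float:
    """Run `op` n times and return operations per second."""
    start = time.perf_counter()
    for i in range(n):
        op(i)
    return n / (time.perf_counter() - start)

# A plain dict stands in for a Redis client in this sketch;
# the 100-byte value mirrors the payload size used in the test above.
store = {}
set_ops = measure_ops(lambda i: store.__setitem__(f"key:{i}", "x" * 100))
get_ops = measure_ops(lambda i: store.get(f"key:{i}"))
print(f"SET: {set_ops:.0f}/s  GET: {get_ops:.0f}/s")
```

A real test would replace the dict calls with a client's `set`/`get` over the network, which is exactly where the gap between the loopback numbers and our measured numbers comes from.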

Reality

Based on the lab test data and actual business volume, we use Redis shards in production to carry the larger throughput. A single Redis shard's daily ops fluctuate between 20k and 30k, with single-core CPU utilization between 40% and 80%, as shown below.

This is close to the results from the original lab environment, though the Redis version in production has since been upgraded to 2.8. If traffic spikes keep growing, a single Redis shard has roughly 20% headroom before hitting the single-instance limit. The likely remedy is to keep adding shards to spread the pressure, provided shards can be added easily without impacting the business system. This is the real, brutal reality test of using Redis.

Cruelty

Redis is a good thing: it provides many useful features, and most of the implementations are both reliable and efficient (master-slave replication being an exception). So at first we made a naive usage mistake: putting all the different kinds of data into one set of Redis clusters:

    • User state data with long life cycles
    • Temporary cache data
    • Pipeline data for background statistics

The problem shows up when you want to add shards: the client-side hash mapping changes, which means migrating data. With all the data mixed into one Redis cluster, separating it out is troublesome, and each Redis instance holds on the order of tens of millions of keys.
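The migration pain comes from how client-side hashing routes keys. A minimal sketch (not from the article; the key names and hash choice are illustrative) of why naive modulo sharding remaps most keys when a shard is added:

```python
# Why adding a shard forces mass data migration under naive modulo hashing.
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Map a key to a shard index with a stable hash (md5 here)."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

keys = [f"user:{i}" for i in range(10_000)]

# Route the same keys with 4 shards, then with 5.
before = {k: shard_for(k, 4) for k in keys}
after = {k: shard_for(k, 5) for k in keys}

moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved / len(keys):.0%} of keys change shards")  # roughly 80%
```

Going from 4 to 5 shards relocates about four out of five keys; consistent hashing or Redis Cluster's fixed hash slots exist precisely to shrink that migration set.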

Another problem is the bottleneck from a single Redis instance's performance ceiling. Single-core CPU frequency has stopped scaling, so development has moved to multicore; a PC server typically has 24 or 32 cores. But Redis's single-threaded design can only use one core, so the maximum processing power of a single core is the ceiling of a single Redis instance.

To give a specific case: we were a little worried about a new feature going live, so we put a switch in Redis that all applications could conveniently share. On every request, each application read the switch key from Redis to decide whether the feature was enabled. What's the problem here? That key can live on only one instance, so all traffic hits that one Redis with GETs, putting the shard under pressure. And its limit in our environment was merely 40,000 ops; that ceiling is not high.

Summary

Recognizing the cruelty of reality and understanding Redis's real performance in your own environment lets you distinguish fantasy from reality. Only then can you consider how to make reasonable use of Redis's many features while avoiding its weaknesses. Here are some recommendations for using Redis:

- Classify Redis clusters according to the nature of the data. My experience is to divide them into three categories: cache, buffer, and DB.
- Cache: temporarily cached data; adding shards for expansion is easy; generally no need for persistence.
- Buffer: used as a buffer to smooth write operations to the back-end database; may need persistence depending on the importance of the data.
- DB: used in place of a database, with persistence requirements.

    • Avoid placing hotspot keys on a single instance.
    • Keep the Redis instances used by different sub-applications or services under the same system separate as well.

In addition, there is a view that Memcache is more appropriate as a buffer, and its pros and cons deserve their own analysis. Memcache is designed to be multithreaded, so a single instance uses the CPU of a multicore machine more efficiently and therefore has a higher performance ceiling. To achieve the same effect on a 32-core machine, you might need to deploy 32 Redis instances, which is also a burden on operations.

Beyond that, Redis also has a 10k problem: when cached values are larger than 10k (possible when caching static pages), latency increases noticeably, a consequence of the single-threaded mechanism. If your application's volume is far from Redis's performance ceiling and you have no 10k-sized data, using Redis as a cache is also reasonable, since it lets the application depend on one less external technology. Finally, figure out what your application needs at this stage, whether it's a variety of data structures and features, better scalability, or more demanding performance, and then choose the right tool. Don't just look at the benchmark numbers and cheer.

One more thing: the Redis author @antirez is quite confident about his product and technology. Whenever someone criticizes Redis, he jumps out on his blog or in the comments to respond. For example, when someone said Redis's features are easy to use but also easy to misuse, the author came out to explain that each design targets a different scenario, and if you use it wrong, don't blame him. When someone said Memcache is more appropriate than Redis for caching, the author wrote a dedicated article to argue, roughly, that whatever Memcache has, Redis has too, plus things Memcache lacks. Of course, he finally admitted the lack of multithreading, but said he was considering adding threads for Redis I/O, one thread per client, just like Memcache, and couldn't wait to develop and test it to silence the critics.

Over the years, Redis has kept adding new features and optimizations, making it more flexible and adaptable, while also requiring us to think more carefully when using it: not about what it has, but about what you need and what you choose.

That's all for now; next time I'll write about the topic of Redis extensions.

Reference

[1] antirez. Redis documentation.
[2] antirez. Clarifications about Redis and Memcached.
[3] antirez. Lazy Redis is better Redis.
[4] antirez. On Redis, Memcached, speed, benchmarks and the toilet.
[5] antirez. An update on the Memcached/Redis benchmark.
[6] Dormando. Redis VS Memcached (slightly better bench).
[7] Mike Perham. Storing Data with Redis.
[8] a gentle knife. Common performance issues and workarounds for Redis.

Original article: http://www.cnblogs.com/mindwind/p/5067905.html

