1. Redis is faster than Memcached
Redis offers transactions and a persistence mechanism, yet it still achieves high performance, for the following reasons:
- Libevent. Unlike memcached, Redis did not choose libevent. To stay general-purpose, libevent carries a huge codebase (the whole of Redis is currently less than 1/3 the size of libevent) and sacrifices a lot of performance on specific platforms. Redis implemented its own epoll event loop (4) from a couple of modified libevent files; a minimal sketch of the pattern follows this list. Many developers have also suggested switching to libev, a high-performance alternative to libevent, but the author insists that Redis stay small and free of dependencies. An impressive detail: Redis does not need to run ./configure before it is compiled.
- The CAS issue. CAS is a convenient memcached feature that prevents concurrent writers from racing on the same resource. Implementing it requires keeping a hidden CAS token, effectively a version number, for every cached key and incrementing that token on every set, which costs both CPU and memory; a sketch of this token bookkeeping also follows the list. Each cost is small on its own, but on a single node with 10 GB+ of cache and tens of thousands of QPS they add up to a noticeable performance difference.
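To make the event-loop idea concrete, here is a minimal single-threaded loop built on Python's selectors module (which uses epoll on Linux). It is only a sketch of the pattern, not Redis's actual implementation; the port and the canned PONG reply are made up for illustration.

```python
# A minimal single-threaded event loop in the spirit described above.
# Illustrative only: Python's selectors module picks epoll on Linux.
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock):
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(b"+PONG\r\n")   # pretend every request is a PING
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 7379))     # arbitrary demo port
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:                          # the event loop: wait, then dispatch
    for key, _ in sel.select():
        key.data(key.fileobj)
```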
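The CAS bookkeeping mentioned above can be sketched in a few lines. This is not memcached's real code, just an illustration of why every key needs a stored, compared, and incremented version token.

```python
# Illustrative sketch of per-key CAS tokens: every key carries a hidden
# version number that costs memory to store and CPU to bump on each write.
class CasCache:
    def __init__(self):
        self._data = {}    # key -> (value, token)

    def gets(self, key):
        """Return (value, token) so the caller can attempt a CAS write later."""
        return self._data.get(key, (None, 0))

    def set(self, key, value):
        _, token = self._data.get(key, (None, 0))
        self._data[key] = (value, token + 1)    # token bump on every set

    def cas(self, key, value, token):
        """Write only if nobody changed the key since gets()."""
        _, current = self._data.get(key, (None, 0))
        if current != token:
            return False                        # lost the race
        self._data[key] = (value, current + 1)
        return True

cache = CasCache()
cache.set("counter", 1)
value, token = cache.gets("counter")
cache.cas("counter", value + 1, token)          # succeeds unless raced
```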
2. The VM feature
Starting with version 2.0, Redis provides virtual memory support.
Redis keeps all of its data in memory for speed, but that also has an unreasonable side. For example, suppose a medium-sized website has 1 million registered users. If their data is stored in Redis, memory has to be large enough to hold all 1 million of them. In practice, though, only 50,000 of those users are active and only 150,000 visit even once a week, so keeping all 1 million users' data in memory is unreasonable: RAM ends up paying for cold data.
This is very similar to what an operating system does. All the data an application accesses appears to be in memory, but when physical memory cannot hold new data, the OS intelligently swaps out data that has not been accessed for a long time, making room for new applications. Modern operating systems give applications virtual memory, not physical memory.
Redis's VM follows the same do-it-yourself philosophy as the epoll implementation above. However, as the operating-system discussion suggests, the OS could also separate hot and cold data automatically: Redis would only need to request a large block of memory, and the OS would keep the hot data in physical memory and swap the cold data out to disk. Varnish, another well-known project praised for "understanding the modern operating system" (3), is implemented exactly this way, with very successful results.
The author, antirez, gives several reasons for implementing the VM himself (6). The main one is that OS swapping works at page granularity: with 4 KB pages, a page is never swapped out as long as even a single byte on it is still being accessed, and by the same logic, reading one byte may swap in 4 KB of useless memory, as the toy calculation below illustrates. Redis's own implementation lets it control the swap granularity. In addition, accessing OS-swapped memory blocks the process, which is another reason Redis implements its own VM.
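A toy calculation of the granularity argument; the 4 KB page size matches the example in the text, while the 64-byte object size is an assumption made for illustration.

```python
# Toy illustration of the page-granularity argument above.
PAGE_SIZE = 4 * 1024      # the OS swaps whole pages
OBJECT_SIZE = 64          # a small Redis object that happens to be hot (assumed)

# OS-level swapping: one hot byte pins the whole page in physical memory.
resident_per_hot_object_os = PAGE_SIZE
# Object-level swapping (what Redis's own VM controls): only the object stays.
resident_per_hot_object_redis = OBJECT_SIZE

print("bytes pinned per hot object, OS paging:", resident_per_hot_object_os)
print("bytes pinned per hot object, Redis VM: ", resident_per_hot_object_redis)
print("useless bytes swapped in when reading 1 byte:", PAGE_SIZE - 1)
```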
However, because it still involves disk I/O and has performance problems, the VM is no longer recommended after the 2.6 release.
3. Use the hash type more
Because Redis is a key-value store, many developers naturally use it through set/get, but that is not the most efficient usage. In particular, when the VM is not enabled, all of Redis's data has to fit in memory, so saving memory matters a great deal.
Suppose each key-value entry has a minimum overhead of 512 bytes; then storing even a single byte still costs 512 bytes. A design pattern helps here: reuse the key by packing several key-value pairs into one key, with the values stored together in a single hash, so the same 512 bytes holds 10-100 times as much data. The sketch below shows the pattern.
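A minimal sketch of this packing pattern with the redis-py client; the key names, fields, and the 512-byte figure are illustrative assumptions, not prescriptions.

```python
# Sketch of the key-packing idea above, using the redis-py client.
import redis

r = redis.Redis(host="localhost", port=6379)

# Naive usage: one top-level key per field, each paying the per-key overhead.
r.set("user:1001:name", "alice")
r.set("user:1001:city", "beijing")
r.set("user:1001:last_login", "2010-01-01")

# Packed usage: a single key per user, with the fields kept in one hash.
r.hset("user:1001", "name", "alice")
r.hset("user:1001", "city", "beijing")
r.hset("user:1001", "last_login", "2010-01-01")
print(r.hget("user:1001", "city"))    # b'beijing'
```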
4. Using AOF instead of snapshots
Redis has two persistence methods. The default is snapshotting, which periodically persists a snapshot of memory to disk; the drawback is that data written after the last snapshot is lost on a crash. Pushed by perfectionists, the author added the AOF mode. AOF (append-only file) writes each operation command to a log file as the data is written to memory. In a system handling tens of thousands of writes, the command log becomes enormous, management and maintenance costs are high, and rebuilding from the log on recovery takes a very long time, which defeats the high-availability intent of AOF. Moreover, Redis is an in-memory data-structure store whose strengths all come from efficient atomic operations on complex in-memory structures, so AOF sits awkwardly alongside them. The sketch below shows how the two modes are switched.
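A minimal sketch of toggling the two persistence modes at runtime with redis-py, assuming a server that accepts these settings via CONFIG SET; the usual place for them is redis.conf, and the intervals shown are illustrative only.

```python
# Sketch: switch between snapshot (RDB) and AOF persistence at runtime.
import redis

r = redis.Redis(host="localhost", port=6379)

# Snapshot mode: dump memory to disk every 900 s if >= 1 key changed,
# or every 300 s if >= 10 keys changed (illustrative schedule).
r.config_set("save", "900 1 300 10")
r.config_set("appendonly", "no")

# AOF mode: append every write command to the log instead.
r.config_set("appendonly", "yes")
print(r.config_get("appendonly"))
```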
In fact, the main purpose of AOF is data reliability and high availability, and Redis has another way to achieve the same goal: replication. Because Redis is so fast, replication has virtually no lag. It prevents single points of failure and provides high availability; a minimal sketch follows.
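A sketch of pointing a replica at a master with redis-py; the hosts and ports are placeholder assumptions for illustration.

```python
# Sketch: attach a replica to a master and check replication status.
import redis

replica = redis.Redis(host="localhost", port=6380)
replica.slaveof("localhost", 6379)      # start replicating from the master

info = replica.info("replication")
print(info.get("role"), info.get("master_link_status"))
```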
Some accumulated notes on Redis