If you simply compare Redis and memcached, most people will come up with the following points:
1. Redis supports not only simple key/value data but also data structures such as lists, sets, and hashes (a short sketch follows this list).
2. Redis supports data replication, i.e. master-slave replication.
3. Redis supports data persistence: in-memory data can be kept on disk and reloaded after a restart.
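A minimal sketch of the first point, using the redis-py client and assuming a Redis server on localhost:6379 (the key names are made up):

```python
# Minimal sketch with redis-py; assumes a Redis server on localhost:6379.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Plain key/value, the only model memcached offers natively.
r.set("page:home", "<html>...</html>")

# List: push recent visitor IDs and read back the latest five.
r.lpush("recent:visitors", "u1", "u2", "u3")
print(r.lrange("recent:visitors", 0, 4))

# Set: track unique tags without worrying about duplicates.
r.sadd("article:42:tags", "redis", "cache", "nosql")
print(r.smembers("article:42:tags"))

# Hash: store an object's fields under a single key.
r.hset("user:1001", mapping={"name": "alice", "score": "97"})
print(r.hgetall("user:1001"))
```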
In Redis, not all data is kept in memory at all times. This, in my view, is one of the biggest differences from memcached.
Redis always keeps all key information cached in memory; when it finds that memory usage exceeds a certain threshold, it triggers a swap operation. Using the formula swappability = age * log(size_in_memory), Redis works out which keys' values should be swapped to disk; those values are then persisted to disk and purged from memory. This feature lets Redis hold more data than the machine's physical memory. Of course, the memory must still be large enough to hold all the keys, since keys themselves are never swapped out.
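To make the formula concrete, here is a toy scoring function; the keys, ages, and sizes are invented, and this is only an illustration of the idea, not Redis's actual swap code:

```python
# Toy illustration of the swappability idea described above; not Redis's real code.
# swappability = age * log(size_in_memory): old, large values are swapped first.
import math

def swappability(age_seconds: float, size_in_memory: int) -> float:
    return age_seconds * math.log(size_in_memory)

# Hypothetical keys with (seconds since last access, value size in bytes).
candidates = {
    "session:abc": (5, 512),         # recently used, small
    "report:2010": (86400, 1 << 20), # untouched for a day, 1 MB
}

# Sort so the most "swappable" values would be moved to disk first.
for key, (age, size) in sorted(candidates.items(),
                               key=lambda kv: swappability(*kv[1]),
                               reverse=True):
    print(f"{key}: swappability={swappability(age, size):.1f}")
```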
Also, because Redis swaps in-memory data to disk, the main thread that serves requests and the sub-thread performing the swap share this memory. If you update a value that is being swapped, Redis blocks the operation until the sub-thread completes the swap, and only then can the value be modified.
For reference, here is a comparison with the Redis virtual memory (VM) feature off and on:
VM off: 300k keys, 4096-byte values: 1.3 GB used
VM on: 300k keys, 4096-byte values: 73 MB used
VM off: 1 million keys, small values: 430.12 MB used
VM on: 1 million keys, small values: 160.09 MB used
VM on: 1 million keys, values as large as you want, still: 160.09 MB used
When reading data from Redis, if the value of the requested key is not in memory, Redis must first load it from the swap file before returning it to the client. This raises the question of the I/O thread pool. By default Redis blocks, i.e. it does not respond until all the needed values have been loaded from the swap file. This strategy suits a small number of clients doing batch operations, but if you use Redis in a large web application it clearly cannot cope with high concurrency. So Redis lets us set the size of the I/O thread pool, so that read requests which need to load values from the swap file can be served concurrently, reducing the blocking time.
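For reference, the virtual-memory behaviour described above was driven by a handful of legacy directives in redis.conf during the 2.x era (the feature was later deprecated and removed); the values below are purely illustrative:

```
# Legacy Redis 2.x virtual memory settings (illustrative values only).
vm-enabled yes
vm-swap-file /tmp/redis.swap   # where swapped values are stored
vm-max-memory 1gb              # memory threshold before values start being swapped
vm-page-size 32                # page size in bytes
vm-pages 134217728             # total pages in the swap file
vm-max-threads 4               # size of the I/O thread pool discussed above
```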
Comparison of Redis, Memcache, and MongoDB
The comparison below looks at Redis, Memcache, and MongoDB along the following dimensions; comments and corrections are welcome.
1. Performance
All three perform relatively well; performance is usually not the bottleneck.
In general, Redis and Memcache have roughly the same TPS, and both are higher than MongoDB.
2. Convenience of data operations
Memcache has a single, simple data structure (plain key/value).
Redis is richer; for data manipulation Redis is better and needs fewer network round trips (see the sketch after this list).
MongoDB supports rich data expression and indexes; it is the most similar to a relational database, and its query language is very rich.
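As a rough sketch of the "fewer network round trips" point, assuming redis-py and made-up keys: reading one object stored as separate plain keys costs several round trips, while a hash returns it in one command:

```python
# Sketch of why rich structures can mean fewer network round trips (redis-py, hypothetical keys).
import redis

r = redis.Redis()

# With only plain k/v (the memcached model), reading a user takes several round trips:
name = r.get("user:1001:name")
email = r.get("user:1001:email")
score = r.get("user:1001:score")

# With a Redis hash, the same object comes back in a single command / round trip:
r.hset("user:1001", mapping={"name": "alice", "email": "a@example.com", "score": "97"})
profile = r.hgetall("user:1001")
print(profile)
```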
3. Size of memory space and amount of data
Redis added its own VM feature after the 2.0 release, breaking the limit of physical memory; it can also set an expiration time per key (similar to memcache), as in the small example after this list.
Memcache lets you configure the maximum available memory and evicts with an LRU algorithm.
MongoDB is suited to storing large amounts of data; it relies on the operating system's virtual memory for memory management and is also quite memory-hungry, so it should not be deployed together with other services.
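A small sketch of per-key expiration with redis-py (memcached exposes a similar expire argument on set); key names and lifetimes are arbitrary:

```python
# Sketch of per-key expiration (redis-py).
import redis

r = redis.Redis()

# Keep a session token for 30 seconds only.
r.set("session:abc", "user-1001", ex=30)
print(r.ttl("session:abc"))   # remaining lifetime in seconds

# EXPIRE can also be applied to an existing key.
r.set("cache:homepage", "<html>...</html>")
r.expire("cache:homepage", 60)
```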
4. Availability (single point of failure)
Regarding the single point of failure problem:
Redis relies on the client for distributed reads and writes, and master-slave replication re-sends the entire snapshot every time a slave reconnects to the master; there is no incremental replication, which hurts performance and efficiency.
So the single-point problem is fairly complicated: automatic sharding is not supported, and the application has to implement its own consistent hashing (see the sketch after this section).
An alternative is to do your own active replication (storing multiple copies) instead of using Redis's replication mechanism, or to switch to incremental replication (which you would have to implement yourself), trading consistency against performance.
Memcache itself has no data redundancy mechanism, nor does it need one; for fault tolerance it relies on mature hashing or ring algorithms to limit the jitter caused by a single node failing.
MongoDB supports master-slave, replica sets (internally using a Paxos-style election algorithm with automatic failure recovery), and auto-sharding, hiding failover and partitioning from the client.
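Since Redis itself did not shard automatically, the client side typically spreads keys across instances with consistent hashing, as mentioned above. A minimal hash-ring sketch, with made-up node addresses:

```python
# Minimal consistent-hash ring for client-side sharding; node addresses are made up.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, replicas=100):
        self.replicas = replicas   # virtual nodes per physical node
        self.ring = {}             # hash -> node
        self.sorted_hashes = []
        for node in nodes:
            self.add_node(node)

    def _hash(self, key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node: str) -> None:
        for i in range(self.replicas):
            h = self._hash(f"{node}#{i}")
            self.ring[h] = node
            bisect.insort(self.sorted_hashes, h)

    def get_node(self, key: str) -> str:
        h = self._hash(key)
        idx = bisect.bisect(self.sorted_hashes, h) % len(self.sorted_hashes)
        return self.ring[self.sorted_hashes[idx]]

ring = HashRing(["10.0.0.1:6379", "10.0.0.2:6379", "10.0.0.3:6379"])
print(ring.get_node("user:1001"))   # the Redis instance this key maps to
```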
5. Reliability (persistence)
For data persistence and data recovery:
Redis supports persistence (RDB snapshots and AOF): it relies on snapshots for persistence, while AOF improves reliability at some cost to performance (a sample configuration follows this list).
Memcache does not support persistence; it is normally used purely as a cache to improve performance.
MongoDB supports reliable persistence from the 1.8 release onward, using a binlog-style journal.
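For reference, a typical Redis persistence setup combines RDB snapshot points with AOF in redis.conf; the thresholds below are only illustrative:

```
# Illustrative redis.conf persistence settings.
# RDB snapshots: save if at least N changes occurred within the time window.
save 900 1
save 300 10
save 60 10000

# AOF: log every write command for better durability, fsync once per second.
appendonly yes
appendfsync everysec
```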
6. Data consistency (transactional support)
Memcache ensures consistency in concurrent scenarios with CAS (check-and-set).
Redis's transaction support is weak: it only guarantees that the operations in a transaction are executed consecutively (see the sketches after this list).
MongoDB does not support transactions.
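Two small sketches of the consistency mechanisms above, assuming the redis-py and pymemcache clients (keys and values are made up):

```python
# Sketches only: optimistic locking on Redis (WATCH/MULTI/EXEC) and
# check-and-set on memcached (gets/cas); keys and values are made up.
import redis
from pymemcache.client.base import Client

r = redis.Redis()
r.set("balance", 100)

# Redis: re-run the read-modify-write if the watched key changes underneath us.
with r.pipeline() as pipe:
    while True:
        try:
            pipe.watch("balance")
            current = int(pipe.get("balance"))  # immediate-mode read while watching
            pipe.multi()                        # start queuing the transaction
            pipe.set("balance", current - 10)
            pipe.execute()                      # raises WatchError if "balance" changed
            break
        except redis.WatchError:
            continue                            # lost the race; retry

# Memcached: cas() only stores the value if the token from gets() is still current.
mc = Client(("localhost", 11211))
mc.set("counter", "0")
value, cas_token = mc.gets("counter")
mc.cas("counter", str(int(value) + 1), cas_token)
```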
7. Data analysis
MongoDB has built-in data analysis capabilities (MapReduce); the others do not (a small sketch follows).
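A minimal sketch of MongoDB's MapReduce through the legacy pymongo map_reduce helper (present in older pymongo releases matching the MongoDB versions discussed here, removed in pymongo 4.x); collection and field names are hypothetical:

```python
# Sketch of MongoDB's built-in MapReduce via the legacy pymongo helper.
from pymongo import MongoClient
from bson.code import Code

db = MongoClient("mongodb://localhost:27017")["shop"]

# Hypothetical orders collection: {"customer": ..., "amount": ...}
mapper = Code("function () { emit(this.customer, this.amount); }")
reducer = Code("function (key, values) { return Array.sum(values); }")

# Sum order amounts per customer into an output collection.
result = db.orders.map_reduce(mapper, reducer, out="total_per_customer")
for doc in result.find():
    print(doc)
```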
8. Application Scenarios
Redis: performance-sensitive operations and computations on smaller data volumes.
Memcache: used in dynamic systems to reduce database load and improve performance; good as a cache for read-heavy, write-light workloads (for larger data volumes it can be sharded), as in the sketch after this list.
MongoDB: mainly addresses the problem of access efficiency for massive data sets.
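A common cache-aside sketch for the memcached scenario, using pymemcache; load_user_from_db is a hypothetical stand-in for a real database query:

```python
# Cache-aside sketch with pymemcache; load_user_from_db is a hypothetical
# stand-in for an expensive database query.
from pymemcache.client.base import Client

mc = Client(("localhost", 11211))

def load_user_from_db(user_id: int) -> str:
    # Placeholder for a real SQL query.
    return f"user-{user_id}-profile"

def get_user(user_id: int) -> str:
    key = f"user:{user_id}"
    cached = mc.get(key)
    if cached is not None:
        return cached.decode()          # cache hit: skip the database
    value = load_user_from_db(user_id)  # cache miss: hit the database once...
    mc.set(key, value, expire=300)      # ...then cache the result for 5 minutes
    return value

print(get_user(1001))
```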