The difference between Redis and memcached
At first glance the difference between Redis and memcached seems simple, and most comparisons boil down to the following points:
1 Redis supports not only simple k/v types of data, but also data structures such as lists, sets, and hashes (see the sketch after this list).
2 Redis supports data replication, that is, backing up data in master-slave mode.
3 Redis supports data persistence: it can keep in-memory data on disk and reload it after a restart.
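To make the first point concrete, here is a minimal Python sketch using the third-party redis-py client. The connection parameters, key names, and values are made up for this example; it assumes a Redis server listening on localhost:6379 and `pip install redis`.

```python
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Plain key/value, the only model memcached offers.
r.set("page:home", "<html>...</html>")

# List: keep recent visitor ids in insertion order.
r.rpush("recent:visitors", "u1", "u2", "u3")
print(r.lrange("recent:visitors", 0, -1))

# Set: unique tags, duplicates are dropped automatically.
r.sadd("article:42:tags", "redis", "cache", "redis")
print(r.smembers("article:42:tags"))

# Hash: store an object's fields without serializing the whole value.
r.hset("user:1000", mapping={"name": "alice", "visits": 7})
print(r.hgetall("user:1000"))
```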
Beyond these points, you can delve into Redis's internal structure to see more fundamental distinctions and to understand its design.
In Redis, not all data stays in memory at all times, and this is one of the biggest differences from memcached. Redis keeps only the key information for the whole data set cached in memory; if it finds that memory usage exceeds a certain threshold, a swap operation is triggered (this is Redis's virtual memory, or VM, mechanism). Using the heuristic swappability = age * log(size_in_memory), Redis works out which keys' values should be swapped to disk, then persists those values to the swap file and purges them from memory. This feature lets Redis hold a data set larger than the machine's own memory. Of course, memory must still be large enough to hold all the keys, since keys themselves are never swapped out.

At the same time, because Redis swaps in-memory data to disk, the service's main thread and the child thread that carries out the swap share that region of memory. So if you update a value that is being swapped, Redis blocks the operation until the child thread completes the swap before the modification can proceed.
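To make the heuristic concrete, here is a small Python sketch of the swappability score quoted above. The function name, the example ages, and the sizes are invented for illustration; the exact constants and log base used inside Redis's historical VM code may differ.

```python
import math

def swappability(age_seconds: float, size_in_memory_bytes: float) -> float:
    # Older (higher age) and larger values score higher, so they are
    # preferred as swap candidates: swappability = age * log(size_in_memory).
    return age_seconds * math.log(size_in_memory_bytes)

# A small value touched 5 seconds ago vs. a larger value idle for an hour:
print(swappability(age_seconds=5, size_in_memory_bytes=256))      # low score, stays in RAM
print(swappability(age_seconds=3600, size_in_memory_bytes=4096))  # high score, swapped first
```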
Comparing memory usage with the VM feature off and on:
VM off: 300k keys, 4096-byte values: 1.3G used
VM on: 300k keys, 4096-byte values: 73M used
VM off: 1 million keys, 256-byte values: 430.12M used
VM on: 1 million keys, 256-byte values: 160.09M used
VM on: 1 million keys, values as large as you want, still: 160.09M used
When reading data from Redis, if the value for the requested key is not in memory, Redis has to load the corresponding data from the swap file before returning it to the requester, and this is where the I/O thread pool comes in. By default the load is blocking: Redis fully reads the value from the swap file before responding. This strategy is appropriate when the number of clients is small and operations are done in batches, but it clearly does not fit a large website with heavy concurrency. For that scenario, Redis lets you set the size of the I/O thread pool so that read requests that need data from the swap file are handled concurrently, which reduces the blocking time.
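For reference, a hedged sketch of the relevant settings as they appeared in the redis.conf of the old VM-enabled releases (roughly Redis 2.0 to 2.2; the feature was later deprecated and removed). The values below are only illustrative; check the redis.conf shipped with your version.

```conf
vm-enabled yes                # turn the virtual memory (swap) mechanism on
vm-swap-file /tmp/redis.swap  # file where swapped-out values are stored
vm-max-memory 1073741824      # memory threshold (in bytes, ~1 GB) above which values are swapped
vm-max-threads 4              # I/O thread pool size; 0 makes swap-file loads fully blocking
```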
If you want to use Redis well in an environment with massive amounts of data, I believe understanding its memory design and its blocking behavior is essential.
Note: Redis does not currently support distributed deployment; cluster support may arrive in Redis 3.0.