Against the background of high-performance Nginx servers, this paper applies caching to reduce system load and shorten response time, thereby increasing the throughput of the Web server.
Redis is a distributed in-memory database, and memcached is an in-memory caching system; both access data by key-value. The difference is that Redis can persist data to disk, so a restart does not lose data, whereas memcached is purely in-memory and loses all data on restart.
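The disk-backup behavior mentioned above corresponds to Redis's persistence options, which are enabled in `redis.conf`. A minimal fragment (directive values here are the common defaults, chosen for illustration):

```
# redis.conf -- RDB snapshotting: write a dump to disk if at least
# 1 key changed in 900 s, 10 keys in 300 s, or 10000 keys in 60 s
save 900 1
save 300 10
save 60 10000

# AOF persistence: additionally log every write command, so a
# restart loses at most about one second of data
appendonly yes
appendfsync everysec
```

memcached has no equivalent of either mechanism, which is exactly why a restart empties it.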
The approach adopted is as follows:
When the Nginx entry module receives a data request, it filters out content-independent fields, extracts the key fields, and compresses them with MD5 to form the cache key. It then looks the key up in the cache. On a miss, it processes the request, returns the data, and saves the result to the cache; on a hit, it returns the cached data directly. Each cached entry is given a 10-minute expiration time.
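The workflow above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the "content-independent" field names are assumptions, and a plain dict stands in for the Redis/memcached client so the sketch is self-contained.

```python
import hashlib
import time

# Stand-in for a Redis/memcached client, so the sketch runs on its own.
_cache = {}
TTL_SECONDS = 600  # each entry expires after 10 minutes

def cache_key(request_fields: dict) -> str:
    # Drop content-independent fields (names assumed for illustration),
    # keep the fields that determine the response, compress with MD5.
    ignored = {"timestamp", "session_id"}
    key_fields = sorted((k, v) for k, v in request_fields.items()
                        if k not in ignored)
    raw = "&".join(f"{k}={v}" for k, v in key_fields)
    return hashlib.md5(raw.encode("utf-8")).hexdigest()

def handle_request(request_fields: dict, backend) -> str:
    key = cache_key(request_fields)
    entry = _cache.get(key)
    if entry is not None:
        value, expires_at = entry
        if time.time() < expires_at:
            return value              # cache hit: return directly
        del _cache[key]               # expired: treat as a miss
    value = backend(request_fields)   # cache miss: process the request
    _cache[key] = (value, time.time() + TTL_SECONDS)
    return value
```

Because content-independent fields are stripped before hashing, two requests that differ only in, say, their timestamp map to the same key and the second one is served from the cache.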
The test results are:
Below 100 KB, memcached clearly dominates, roughly 20%~30% faster; the smaller the packet, the more pronounced the advantage.
Between 100 KB and 300 KB, the two are evenly matched.
Above 300 KB, Redis has a slight advantage.
However, the maximum page size memcached can store is fixed, so it cannot handle objects that exceed this limit.
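The fixed page size referred to is memcached's slab page, 1 MB by default, which caps the size of a single item. Since memcached 1.4.2 this limit can be raised at startup (shown here with illustrative values):

```shell
# -m: cache memory in MB; -I: raise the max item size from 1 MB to 2 MB
memcached -m 64 -I 2m
```

Raising the item size trades memory efficiency for the ability to cache larger pages, so it is worth tuning only when responses regularly exceed the default.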
In addition, Redis requires extra logic to handle expired data.
In-memory database: A comparative test of memcached and Redis technology