Redis handles all requests in a single thread, which runs counter to the usual textbook advice. If processing is strictly serial in one thread, why is it still so fast? The answer below argues that thread switching and locking are not the main factors affecting performance, which differs from the common answer.

Author: Yang Haipo
Link: https://www.zhihu.com/question/19764056/answer/20241839
Source: Zhihu
Copyright belongs to the author. For commercial reprints, please contact the author for authorization; for non-commercial reprints, please credit the source.
Redis is a pure in-memory database. For simple key-value operations, memory is not a bottleneck: in general, hash lookups can reach millions of operations per second. The bottleneck is network IO.
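The "millions of hash lookups per second" claim can be sanity-checked even from Python, using the built-in dict as a rough stand-in for Redis's hash table (Redis's C implementation is considerably faster):

```python
import time

# Populate a dict with 100k entries, mimicking a key-value store.
data = {f"key:{i}": f"value:{i}" for i in range(100_000)}

n = 1_000_000
start = time.perf_counter()
for i in range(n):
    _ = data[f"key:{i % 100_000}"]  # hash lookup per iteration
elapsed = time.perf_counter() - start

# Even with Python's interpreter overhead, this typically lands in
# the millions of lookups per second.
print(f"{n / elapsed:,.0f} lookups/sec")
```

The exact number depends on the machine, but the point stands: pure in-memory lookups are far faster than the network can deliver requests.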
Judging from the 10,000 requests/s you measured, the client and Redis are probably deployed on two different machines, with requests issued synchronously. Each request must be sent over the network to the machine where Redis resides, and the client then waits for Redis to return the data, so most of the time is spent in network transmission.

If Redis and the client run on the same machine, network latency is much lower; throughput typically reaches 60,000 requests/s or even higher, depending on the performance of the machine.
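A back-of-the-envelope model shows why the measured figures are plausible: a synchronous client issues at most one request per network round trip, so throughput is bounded by 1/RTT. The RTT values below are illustrative assumptions, not measurements:

```python
def max_sync_throughput(rtt_seconds: float) -> float:
    """Upper bound on requests/sec for a blocking, one-at-a-time client:
    each request costs one full network round trip."""
    return 1.0 / rtt_seconds

# ~100 microseconds RTT between two machines on the same LAN:
# roughly 10,000 req/s, matching the measurement above.
print(max_sync_throughput(100e-6))

# ~15 microseconds over the loopback interface (assumed figure):
# roughly 66,000 req/s, in line with the same-machine numbers.
print(max_sync_throughput(15e-6))
```

This also explains why pipelining (sending many requests before reading replies) raises throughput dramatically: it amortizes the round trip over many commands.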
Locks are not the main factor affecting performance. A mutex (mutex_lock) only degrades performance under contention, and under normal circumstances the probability of contention is low. Simply acquiring and releasing an uncontended lock is very fast; millions of lock/unlock operations per second are no problem. Memcached uses locks extensively internally, with no visible performance degradation.
Threads are not an important factor for throughput either. As noted in point 1, a program generally processes in-memory data much faster than the network card can deliver it. The advantage of threading is that multiple connections can be processed concurrently, which in extreme cases can improve responsiveness.
Handling many connections in one thread requires asynchronous non-blocking IO programming with epoll or libevent. The counterpart is synchronous blocking IO programming, which uses multiple processes or multiple threads to handle multiple connections, as Apache does. In general, the asynchronous non-blocking IO model performs much better than the synchronous blocking IO model; for reference, compare the performance of Nginx versus Apache.
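A minimal sketch of the readiness-based model: Python's `selectors` module (which uses epoll on Linux) lets a single thread multiplex many connections, handling whichever sockets are ready instead of blocking on any one of them. Two in-process socket pairs stand in for two client connections:

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # epoll on Linux, kqueue on BSD/macOS

# Two socket pairs simulate two client connections to one server thread.
pairs = [socket.socketpair() for _ in range(2)]
for server_side, _ in pairs:
    server_side.setblocking(False)          # never block on a single socket
    sel.register(server_side, selectors.EVENT_READ)

# The "clients" send their requests.
pairs[0][1].sendall(b"PING")
pairs[1][1].sendall(b"GET key")

# One event-loop iteration: service every socket that is ready.
replies = []
for key, _ in sel.select(timeout=1):
    replies.append(key.fileobj.recv(1024))

print(replies)  # both requests observed by a single thread

for a, b in pairs:
    a.close()
    b.close()
sel.close()
```

Redis's own event loop (ae_event) follows the same pattern in C: register file descriptors, wait for readiness, and process events serially in one thread.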
libevent is not slower than Redis's own ae_event implementation. The difference is that ae_event implements only the functionality Redis needs, while libevent offers more features, such as a faster timer, a buffered event model, and even built-in DNS and HTTP protocol handling. libevent is also more portable, whereas Redis focuses on the Linux platform.
Finally, to answer the main question: why is single-threaded Redis so fast?
1. Pure in-memory operations
2. Asynchronous non-blocking IO