Why is Redis single-threaded?

Source: Internet
Author: User
Tags: redis


There is a common misconception that a high-performance server must be multi-threaded.



The reason is simple: it comes from a second misconception, namely that multi-threading must be more efficient than a single thread. In fact, that is not true.



Before going further, it helps to have a feel for the relative speeds of the CPU, memory, and the hard disk; with that in mind the argument is much easier to follow. The key point: the CPU is far faster than memory, and memory is far faster than disk.
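For reference, some rough, commonly cited latency figures (orders of magnitude only):

Main-memory reference: ~100 ns
Read 1 MB sequentially from memory: ~250 µs
Disk seek: ~10 ms
Read 1 MB sequentially from disk: ~20 ms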



The core of Redis is this: if all of my data is in memory, then operating on it with a single thread is the most efficient approach. Why? Because the essence of multi-threading is the CPU simulating multiple threads of execution, and that simulation has a cost: context switching. For a memory-bound system, the highest efficiency comes from having no context switches at all. Redis binds a single CPU to a piece of in-memory data, and all reads and writes of that data are done on that one CPU, so the work is handled by a single thread. When the data lives in memory, this is the best solution. -- Ali Shen Yu






A single CPU context switch takes roughly 1500 ns.



Reading 1 MB of contiguous data from memory takes about 250 µs. Now suppose that same 1 MB is read by multiple threads in 1000 pieces; that means 1000 context switches.



That is 1500 ns × 1000 = 1500 µs of switching. A single thread reads the whole 1 MB in 250 µs, while the multi-threaded version spends 1500 µs on context switches alone, before counting the time each thread spends actually reading its little piece of data.
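Put side by side:

Single thread, one sequential 1 MB read: ~250 µs
1000 threads, context switches alone: 1000 × 1.5 µs ≈ 1500 µs (before any data has been read)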






So when should a multi-threaded solution be used?



The answer: when the bottleneck is slow, lower-level storage, such as disk.






Memory is a system with very high IOPS: if I want to allocate a block of memory I simply allocate it, and if I want to free one I simply free it. Allocating and freeing memory is cheap, and the size of an allocation can be decided dynamically.






A disk has the opposite characteristics: its IOPS is very low, but its throughput is very high. This means that a large number of read and write operations should be collected together and then submitted to the disk at once; that gives the best performance. Why?






Suppose I have a group of transactions (several separate requests, say write, read, write, read, write: five operations in a row). In memory, because IOPS is so high, I can simply execute them one by one. But if the same requests go to disk:






My first write goes like this: first I seek on the hard drive, which costs about 10 ms; then I read the data, which costs about 1 ms; the modification itself is negligible; then I write back to the hard drive, another 10 ms. In total, about 21 ms.






The second operation, a read, costs 10 ms; the third, a write, costs 21 ms; then another read costs 10 ms and another write costs 21 ms. The five requests take 83 ms in total, and that is the ideal case. In memory, the same five operations would take roughly 1 ms.
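Adding it up with the ideal-case figures above:

Write: 10 ms seek + 1 ms read + 10 ms write-back ≈ 21 ms
Read: ~10 ms
Sequence write, read, write, read, write: 21 + 10 + 21 + 10 + 21 = 83 ms on disk, versus roughly 1 ms in memory.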






So for a disk, whose throughput is large, the best solution is clearly to put n requests into a buffer and then submit them together.



The method is asynchronous: the requesting thread and the processing thread are not bound together. The requesting thread puts its request into a buffer, and once the buffer is full the processing thread takes over and writes (or reads) the whole buffer against the disk in one go. That is the most efficient approach. Isn't this exactly what buffered I/O in Java does?
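A minimal sketch of that buffering idea in Java (the class name BatchingWriter, the use of plain strings as requests, and the stubbed-out disk write are all assumptions made for illustration; this is not Redis or Netty code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical illustration of "collect requests in a buffer, flush them in one batch".
public class BatchingWriter {

    private final BlockingQueue<String> buffer = new ArrayBlockingQueue<>(1024);

    // Request threads only enqueue and return; they never touch the disk themselves.
    public void submit(String request) throws InterruptedException {
        buffer.put(request);
    }

    // One background thread drains the buffer and turns many small operations
    // into a single large write. A real implementation would typically wait
    // until the buffer is full or a timeout expires before flushing.
    public void runFlushLoop() throws InterruptedException {
        List<String> batch = new ArrayList<>();
        while (true) {
            batch.add(buffer.take());        // block until at least one request arrives
            buffer.drainTo(batch, 1023);     // grab whatever else is already queued
            writeBatchToDisk(batch);         // one sequential write instead of many seeks
            batch.clear();
        }
    }

    private void writeBatchToDisk(List<String> batch) {
        // Stub: a real system would issue one sequential write (and fsync) here.
        System.out.println("flushed " + batch.size() + " requests in one go");
    }

    public static void main(String[] args) throws InterruptedException {
        BatchingWriter writer = new BatchingWriter();

        Thread flusher = new Thread(() -> {
            try {
                writer.runFlushLoop();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        flusher.setDaemon(true);
        flusher.start();

        for (int i = 0; i < 5; i++) {
            writer.submit("request-" + i);
        }
        Thread.sleep(100);                   // give the flush thread time to drain the buffer
    }
}
```

The design point is that the request threads never wait on the disk: a single flush thread converts many small operations into one large sequential write, which is what the disk handles best.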






For slow devices this is the best approach, and slow devices include disks, the network, SSDs, and so on.



Handling these problems with multi-threading and asynchrony is very common; the famous Netty works exactly this way.






That finally explains clearly why Redis is single-threaded, and when to use a single thread versus multiple threads. These are actually quite simple things, but with a weak foundation they can really trip you up....



To add the expert's own words on why binding a single CPU core to a piece of memory is the most efficient:



"We can't have the operating system load balanced because we know our own programs better, so we can manually allocate CPU cores to them without too much CPU", by default the CPU kernel is randomly used by a single thread while making system calls, in order to optimize Redis, We can use tools to bind a fixed CPU kernel to a single thread, reducing unnecessary performance losses.



Because Redis uses a single-process model, multiple instances are often started on one server to take full advantage of a multi-core CPU. To reduce switching overhead, it is worth specifying, for each instance, which CPU it runs on.

On Linux, taskset can bind a process to a specific CPU. You know your program better than the operating system does, and pinning it avoids both the scheduler making poor placement decisions and the cache-invalidation overhead that moving between cores can cause in multi-threaded programs.
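For example (the core number, config path, and PID below are placeholders, not recommended values):

```
# start a Redis instance pinned to CPU core 0
taskset -c 0 redis-server /path/to/redis.conf

# or pin an already running process by its PID
taskset -cp 0 <pid>
```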
 





By the way: Redis's bottleneck is the network....







