Single-threaded and multithreaded programming, explained with real-life examples

Source: Internet
Author: User

1. Objectives of the program design

It seems to me that, from a purely technical standpoint, the goal of a good program is to balance performance with user experience. Whether a program actually meets users' needs is a business-level question; here we only discuss the program itself, and we expand on two points: performance and user experience.

Performance: a high-performance program is one that keeps the CPU busy. The higher the CPU utilization (always working, never idle), the higher the performance of the program.
Experience: by experience I do not mean how beautiful or handy the interface is, but the response speed of the program. The faster the response, the better the user experience.

Let's discuss these two points in a variety of models.

2. Single-threaded multi-task non-blocking

Take a school cafeteria as the metaphor. Suppose students A, B and C are queuing at a serving window. The aunt behind the window takes 1 second to serve one dish. A wants 2 dishes, B wants 3 dishes, and C wants 2 dishes. In summary:

Aunt (CPU): 1 second per dish
A: 2 dishes
B: 3 dishes
C: 2 dishes

In this model, serving everyone takes the aunt 2 + 3 + 2 = 7 seconds.
Aunt = CPU
A, B, C = tasks (a "task" here just means something that needs to be done)
Under this model the CPU runs at full load with no interruption and no idle time, and the user experience is good. A program in which every task is this short and quick is the ideal case, and it rarely exists in practice.
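A minimal Python sketch of this model (the names make_dish and serve and the scaled-down timings are mine, not from the original): every dish is a fixed piece of CPU work, each student is served in turn, and the total time is simply the sum of all the dish times.

    import time

    def make_dish():
        # Stand-in for one "second" of pure CPU work (a busy computation, not a wait).
        sum(i * i for i in range(200_000))

    def serve(name, dishes):
        for _ in range(dishes):
            make_dish()
        print(f"{name} served after {time.time() - start:.2f}s")

    start = time.time()
    for name, dishes in [("A", 2), ("B", 3), ("C", 2)]:
        serve(name, dishes)
    # Total time is just the sum of the three orders; the CPU (the aunt) never idles.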

3. Single-threaded multi-task IO blocking

Make a slight change to the scene above:
Aunt: 1 second per dish
A: 2 dishes, but he forgot to bring money and has to wait for a classmate to deliver it, which is expected to take 5 minutes (this can be understood as a disk IO wait)
B: 3 dishes
C: 2 dishes

In this case A blocks the queue. A himself takes 5 minutes (300 seconds) plus the time for 2 dishes, i.e. 302 seconds, during which the CPU is idle for 300 seconds and actually works for only 2 seconds.
Serving everyone takes 302 + 3 + 2 = 307 seconds: the CPU actually works for 7 seconds and waits for 300 seconds, so its clock cycles are badly wasted. The user experience is also poor, because while A is blocked everyone behind him just stands there waiting while the CPU sits idle. Therefore blocking must not be allowed in a single-threaded program.
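Here is a rough Python sketch of the same situation (again with my own names, and the 300-second wait scaled down to 3 seconds so it actually runs): A's wait for the money is a blocking call, and because there is only one thread, B and C cannot be served until it returns.

    import time

    def make_dish():
        sum(i * i for i in range(200_000))   # CPU work for one dish

    def wait_for_money(seconds):
        time.sleep(seconds)                  # blocking IO: the CPU just sits idle

    start = time.time()
    wait_for_money(3)                        # A forgot his money (scaled from 300 s)
    for _ in range(2):                       # A's 2 dishes
        make_dish()
    for _ in range(3):                       # B can only start after A is fully done
        make_dish()
    for _ in range(2):                       # C waits even longer
        make_dish()
    print(f"everyone served after {time.time() - start:.2f}s; "
          "most of that was the CPU idling in wait_for_money")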

4. Single-threaded multi-task asynchronous IO

Take the same model and add a new role, a duty attendant. He first asks each person whether they have brought money; if they have, they are allowed to order, otherwise they are told to get the money ready first.

<1> The attendant asks A whether he is ready to order. A says he forgot his money, so the attendant tells him to get it ready first; A starts preparing (this takes 300 seconds, counted from this moment).
<2> The attendant asks B whether he is ready to order. B says yes, so the aunt serves B, which takes 3 seconds.
<3> The attendant asks C whether he is ready to order. C says yes, so the aunt serves C, which takes 2 seconds.
<4> The attendant asks A whether he is ready yet. A says he still needs to wait a while; since nobody else needs service, the aunt is idle.
<5> 300 seconds later A is ready, and the aunt serves A, which takes 2 seconds.
The whole process takes 300 + 2 = 302 seconds; the CPU works for 7 seconds and is idle for 295 seconds.

The duty attendant plays the role of the select function in the select model: he polls the tasks to see whether they can work; if a task is ready it gets served immediately, otherwise he keeps polling. During the 300 seconds that A is blocked, the aunt (CPU) does not wait stupidly; she serves the people behind him, namely B and C. This is what makes this model different from model 3: here the CPU spends 5 of those 300 seconds actually working. The more people there are to serve, the higher the CPU utilization under this model; if students D, E, F and so on were also in the queue, the CPU could keep serving them during A's 300-second wait. In reality the attendant's polling also costs some time. It is usually negligible, but with a very large number of tasks the polling itself starts to hurt performance. The epoll model avoids polling altogether: it is as if A, B and C report to the attendant on their own initiative, saying "I am ready, you can serve me now."
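The same idea can be sketched with Python's asyncio (this is only an illustration of the principle, not the article's original code): the event loop plays the part of the duty attendant, and while A's awaited "money delivery" is pending, the aunt is free to serve B and C.

    import asyncio, time

    def make_dish():
        sum(i * i for i in range(200_000))          # CPU work for one dish

    async def student(name, dishes, wait_for_money=0):
        if wait_for_money:
            # Non-blocking wait: control returns to the event loop (the attendant),
            # so the aunt can serve the other students in the meantime.
            await asyncio.sleep(wait_for_money)
        for _ in range(dishes):
            make_dish()
        print(f"{name} done at t={time.time() - start:.2f}s")

    async def main():
        await asyncio.gather(
            student("A", 2, wait_for_money=3),       # 300 s scaled down to 3 s
            student("B", 3),
            student("C", 2),
        )

    start = time.time()
    asyncio.run(main())
    # B and C finish during A's wait, so the total is about 3 s plus A's 2 dishes,
    # not 3 s plus everything.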

Under this model the user experience is good and CPU utilization is high (the more tasks there are, the higher the utilization).

5. Single-threaded multi-task non-blocking, time-consuming calculation

Go back to the first model and change the numbers:
Aunt: 1 second per dish
A: 200 dishes
B: 3 dishes
C: 2 dishes

Completing all the tasks takes 200 + 3 + 2 = 205 seconds. The CPU is never idle, but the user experience is still poor, because B and C obviously have to wait 200 seconds for A. There is no IO blocking here; task A itself is simply too CPU-intensive. If a single thread performs this kind of time-consuming computation, the experience will certainly suffer (both IO operations and heavy computations are time-consuming and lead to blocking, but the nature of the two kinds of blocking is different). Blocking is not acceptable in any single-threaded model; if it occurs, the user experience is terrible. For example, in UI programming (Qt, C# WinForms) you must not do time-consuming work on the UI thread, otherwise the interface becomes unresponsive. Likewise, when we write a Node.js program, the code we write runs on a single thread, so it must not contain blocking operations (of course, the Node.js framework as a whole is asynchronous and internally uses more than one thread).
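A small sketch of why this matters for the single-threaded (UI or event-loop) case, under my own assumptions: a "heartbeat" task that should tick regularly stands in for a responsive UI, and it stops ticking while a CPU-heavy coroutine hogs the one and only thread.

    import asyncio, time

    async def heartbeat():
        # Stands in for the UI / event loop staying responsive.
        for _ in range(8):
            print(f"tick at t={time.time() - start:.2f}s")
            await asyncio.sleep(0.2)

    async def heavy_task():
        # CPU-bound work on the event-loop thread: nothing else runs meanwhile.
        sum(i * i for i in range(20_000_000))
        print(f"heavy task done at t={time.time() - start:.2f}s")

    async def main():
        await asyncio.gather(heartbeat(), heavy_task())

    start = time.time()
    asyncio.run(main())
    # The ticks pause while heavy_task computes: the CPU is 100% busy, yet the
    # program is unresponsive, which is exactly the bad experience described above.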

There are generally two kinds of blocking. One is IO blocking, such as a typical disk operation; this kind of blocking leaves the CPU idly waiting (on a modern operating system, the OS will actually suspend the thread that caused the IO blocking). It can be avoided with asynchronous IO, which keeps the program's only thread from being suspended by the operating system. The other case is genuinely heavy computation, such as a complex cryptographic algorithm, which really does consume a lot of CPU time; here the CPU is not idle at all, it is running at full load. This kind of CPU-intensive work is not suited to a single thread: CPU utilization is high, but the user experience is poor. In this case it is better to use multiple threads, for example one thread per task: task A runs in thread A, task B in thread B, task C in thread C. Then even if task A involves a huge amount of computation, the threads running B and C do not have to wait for A to finish; they still get a chance to be scheduled, which is handled by the operating system. One computation-heavy task no longer ruins the experience by blocking the others, as the sketch below illustrates.
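A minimal threading sketch of that last point (illustrative only; note that CPython's GIL prevents pure-Python threads from computing in true parallel, but the scheduling argument made here, that B and C no longer have to wait for A to finish, still holds):

    import threading, time

    def make_dish():
        sum(i * i for i in range(200_000))           # CPU work for one dish

    def task(name, dishes):
        for _ in range(dishes):
            make_dish()
        print(f"{name} finished at t={time.time() - start:.2f}s")

    start = time.time()
    threads = [threading.Thread(target=task, args=(n, d))
               for n, d in [("A", 200), ("B", 3), ("C", 2)]]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # B and C finish almost immediately even though A still has most of his 200
    # dishes left: the scheduler interleaves the threads instead of running A to
    # completion first. For real parallelism on CPU-bound work in Python, use
    # processes or native extensions instead.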

6. Multithreading

Now change the model to a multi-threaded one. On top of model 5 we add one more role, an administrator uncle (playing the part of the operating system):
Aunt: 1 second per dish
A: 200 dishes
B: 3 dishes
C: 2 dishes

With the administrator in place, things change. After A has been served two dishes the uncle says: you want 200 dishes, but you cannot keep the other students from getting any food just because your order is huge; take two dishes for now and step aside, so the students behind you also get a chance.
The uncle then lets B take two dishes, then C take two dishes (C is now done), then lets A take another two dishes (four in total so far), then B takes one more dish (B now has all 3 and is done), and finally A takes his remaining 196 dishes.

CPU utilization: very high; the aunt is working constantly.
User experience: good. Even though A wants 200 dishes, B and C still get their turns. Of course, if A says he is fetching dishes for the headmaster and it is urgent (a high-priority thread), then the aunt can only finish serving A first.
Total time: 200 + 3 + 2 seconds plus the cost of the uncle's scheduling. For example, when switching from C back to A, the uncle has to remember which two dishes A got last time and which dishes should come next; this corresponds to the overhead of thread context switching and of saving and restoring each thread's state. So more threads is not automatically better: with too many threads, the uncle's time is burned just remembering states and switching back and forth.

What this model really does is split A's time-consuming task into several pieces that are executed in turn rather than all at once. As a result, A may need more total time to finish his own task (he also has to wait for the others; the aunt no longer serves him exclusively, although the amount of time she spends actually serving him is unchanged). In other words, a little extra time is traded for user experience (B's and C's experience improves; A's may be slightly worse, but his task is long anyway, so waiting a little more is acceptable).
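The scheduling cost the uncle pays is real and can be made visible with a crude measurement (my own sketch; the exact numbers depend entirely on the machine): the same total amount of work is done once in a single thread and once spread over many short-lived threads.

    import threading, time

    def tiny_job():
        sum(range(1_000))                     # a very small piece of work

    N = 1_000

    t0 = time.time()
    for _ in range(N):
        tiny_job()                            # all work in one thread, no switching
    single = time.time() - t0

    t0 = time.time()
    threads = [threading.Thread(target=tiny_job) for _ in range(N)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    many = time.time() - t0

    print(f"one thread: {single:.3f}s, {N} threads: {many:.3f}s")
    # The gap is the cost of creating, scheduling and switching between threads:
    # more threads is not automatically faster, because time is burned on the
    # bookkeeping of remembering and restoring where everyone left off.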

So what is the difference between IO blocking and CPU-bound "blocking"? The difference is that IO blocking does not consume CPU, while CPU-bound blocking does. In the example above, A forgot his money and a classmate had to bring it over; while A waits, the aunt provides no service to him. What serves A during that period is his classmate (delivering the money). The classmate is the equivalent of DMA (direct memory access) in a modern computer: the process of delivering the money corresponds to DMA copying data from disk into memory, which requires no CPU intervention.

Of course, before DMA existed, reading a file from disk also required the CPU to issue the read instructions itself, i.e. it consumed CPU cycles. In our scene, that would be the aunt personally running off to fetch the money for A.

7. Multi-CPU

Multiple CPUs make the problem more complex: how do you schedule across them? What happens when A takes two dishes at the first window and then runs over to the second window for two more? Or when A at the first window and B at the second window both want the same dish, but there is only enough of it for one person; how do the two aunts decide who gets it? (In reality it is the operating system, i.e. the administrator uncle, who decides how to allocate it; this is multi-core thread synchronization and mutual exclusion.)

With a multi-core CPU, thread scheduling, mutual exclusion, locking and synchronization all become more complicated. Multiple cores give real parallelism: several threads really are running at the same instant. How their competition is resolved, how the CPUs stay synchronized with each other (keeping the caches of multiple CPUs coherent), and a whole series of related problems all have to be dealt with.
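The "only enough of this dish for one person" situation maps directly onto a lock. A minimal sketch (names and numbers are mine): without the lock, both threads could observe that one portion is left and both try to take it; with the lock, the check-and-take step is atomic.

    import threading

    portions_left = 1                 # only enough of this dish for one person
    lock = threading.Lock()
    winners = []

    def take_dish(name):
        global portions_left
        with lock:                    # mutual exclusion: check and take atomically
            if portions_left > 0:
                portions_left -= 1
                winners.append(name)

    threads = [threading.Thread(target=take_dish, args=(n,)) for n in ("A", "B")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("got the dish:", winners)   # exactly one of A or B, never both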

8. Multi-threading and multi-process

The multithreading discussed above was really about scheduling; here we talk about multithreading versus multiprocessing and about resource allocation. What does that mean? A group of people (multiple threads) eating at one table (a process) run into certain problems. Several people may go for the same dish (competition): A and B both spot a piece of meat on the plate and reach out with their chopsticks at the same time; A grabs it first, and by the time B's chopsticks reach the plate the meat is gone (a critical resource, mutual exclusion). Or there is a dish that has to be eaten as meat wrapped in a bun: A has grabbed the meat, B has grabbed the bun, A needs B's bun and B needs A's meat, neither will give way, and they are stuck (deadlock).
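The meat-and-bun standoff is the classic two-lock deadlock. A small sketch (illustrative; the timeouts are only there so the example terminates instead of hanging):

    import threading, time

    meat = threading.Lock()
    bun = threading.Lock()

    def student_a():
        with meat:                                   # A grabs the meat first...
            time.sleep(0.1)
            if bun.acquire(timeout=1):               # ...then wants B's bun
                bun.release()
            else:
                print("A gave up waiting for the bun (deadlock)")

    def student_b():
        with bun:                                    # B grabs the bun first...
            time.sleep(0.1)
            if meat.acquire(timeout=1):              # ...then wants A's meat
                meat.release()
            else:
                print("B gave up waiting for the meat (deadlock)")

    ta = threading.Thread(target=student_a)
    tb = threading.Thread(target=student_b)
    ta.start(); tb.start(); ta.join(); tb.join()
    # Without the timeouts both threads would wait on each other forever.
    # The usual fix is to always acquire the locks in the same order (e.g. meat
    # before bun), so the circular wait can never form.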

Sharing resources between threads is very convenient because they share the process's resource space (they sit at the same table), but a series of problems then needs attention: competition, deadlock, synchronization and so on. You can also open another table next to it (another process). Talking and passing things between tables is less convenient (inter-process communication), and opening a new table costs more than adding one more person to an existing table. The number of people at one table also cannot grow without limit, because the table only seats so many (the number of thread handles in a process is limited). If one table collapses, the people eating at the other table are not affected (processes are independent: one process crashing does not affect another), but if one person at a table has to be rushed to hospital, the whole meal at that table probably breaks up (one thread crashing can bring down the whole process). So multithreading and multiprocessing each have advantages and disadvantages; neither is universally better.
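Process isolation (one broken table not affecting the other) can be sketched with Python's multiprocessing (my own example): a worker process that dies abruptly leaves the parent and the other worker untouched, whereas the same kind of failure inside a thread, say a segfault in a native extension, could take the whole process down.

    import multiprocessing as mp
    import os, time

    def crashing_worker():
        os._exit(1)                   # simulate an abrupt crash of this process

    def healthy_worker():
        time.sleep(0.2)
        print("the other table keeps eating happily")

    if __name__ == "__main__":
        bad = mp.Process(target=crashing_worker)
        good = mp.Process(target=healthy_worker)
        bad.start(); good.start()
        bad.join(); good.join()
        print(f"crashed process exit code: {bad.exitcode}, "
              f"healthy process exit code: {good.exitcode}")
        # The crash stays inside its own process (its own "table"); the parent
        # merely observes a non-zero exit code and carries on.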

Note: the table metaphor for multithreading was inspired by the user [Pansz], although the metaphor does not seem to cover thread synchronization.

9. Summary

Single-threaded: suited to asynchronous IO; it must not block and must not do large amounts of CPU-heavy computation. Typical examples are Node.js and some network programs.
Multithreaded: suited to CPU-intensive programs.
