Transferred from: https://laike9m.com/blog/huan-zai-yi-huo-bing-fa-he-bing-xing,61/
Still wondering about concurrency and parallelism?
OK, if you're still troubled by the difference between the words concurrency and parallelism, then this article is written for you. What's the point of this kind of word-splitting? Not much on its own, but far too many people use the two words incorrectly (for example, a certain teacher in a certain course). In both the Chinese-speaking and English-speaking worlds, countless articles discuss parallelism vs. concurrency, yet very few explain the difference clearly. Someone who only half understands it trying to explain it to others is scarier than no explanation at all. For example, here is an explanation I found online at random:
The former happens simultaneously in a logical sense, while the latter happens simultaneously in a physical sense.
Concurrency, also called co-occurrence, refers to the ability to handle multiple activities at once; concurrent events do not necessarily happen at the same instant.
Parallelism means two concurrent events actually occur at the same time; parallelism implies concurrency, but concurrency is not necessarily parallelism.
A metaphor: concurrency vs. parallelism is the difference between one person eating three steamed buns and three people eating three steamed buns at the same time.
After reading that, do you understand? No, you're only more confused. The people who write this kind of explanation only half understand it themselves, yet they take the blurry picture in their heads and turn it into an article, leaving readers more puzzled than before. Of course, it's possible the author does understand and simply wrote carelessly, which is irresponsible. As for this article, please trust that it is accurate, and I will do my best to explain things clearly.
OK, let's cut to the chase: Concurrency vs. Parallelism.
Let's read the following phrase aloud:
"Concurrency" refers to the structure of the program, "parallel" refers to the state of the program runtime
Even if you don't read the detailed explanation below, please remember this sentence. Now for the details.
Parallelism
This concept is easy to understand. Parallel simply means executing at the same time; there is no need to over-interpret it. To judge whether a program is in a parallel state, just check whether more than one "unit of work" is running at the same instant. Therefore, a single thread can never be in a parallel state.
To reach a parallel state, the simplest approach is multithreading or multiprocessing. But because of the famous GIL, Python's multithreading cannot let two threads truly "run at the same time", so it cannot actually reach a parallel state.
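As a rough sketch of this point (the function and the workload are made up purely for illustration), the same CPU-bound job run on two threads gains nothing because of the GIL, while two processes can genuinely run in parallel on a multi-core machine:

```python
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def count_down(n):
    # Pure CPU work: no IO, so the GIL is never released for long.
    while n > 0:
        n -= 1

def timed(executor_cls, label):
    start = time.time()
    with executor_cls(max_workers=2) as ex:
        # Two identical CPU-bound tasks handed to two workers.
        list(ex.map(count_down, [20_000_000, 20_000_000]))
    print(f"{label}: {time.time() - start:.2f}s")

if __name__ == "__main__":
    timed(ThreadPoolExecutor, "threads (GIL, no real parallelism)")
    timed(ProcessPoolExecutor, "processes (can be truly parallel)")
```

On a multi-core machine the process version should finish in roughly half the time of the thread version, because only it reaches a parallel state.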
Concurrency
To understand concurrency, you must be clear that concurrency refers to the "structure" of a program. When we say a program is concurrent, what we really mean is "this program uses a design that supports concurrency". Since concurrency refers to a structure designed by humans, what kind of program structure counts as a concurrent design?
The criterion for a correct concurrent design is that it allows multiple operations to take place in overlapping time periods (two tasks can start, run, and complete in overlapping time periods).
There are two key points in this sentence. First, look at "operations in overlapping time periods". Isn't that just the parallelism we talked about earlier? Yes and no. Parallelism is of course execution in overlapping time periods, but another mode of execution also falls under overlapping time periods: coroutines.
When coroutines are used, the execution of the program often looks like this:
Task1 and Task2 are two different pieces of code, for example two functions, and each black block represents a piece of code being executed. Note that from start to finish there is only one piece of code executing at any instant, but because Task1 and Task2 execute in overlapping time periods, this is a design that supports concurrency. Unlike parallelism, a single core running a single thread can support concurrency.
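The same picture can be sketched with Python coroutines (a minimal example added here for illustration, not taken from the original figure): a single thread, only one piece of code running at any instant, yet the two tasks overlap in time.

```python
import asyncio

async def task(name, steps):
    for i in range(steps):
        print(f"{name} step {i}")   # a "black block": a piece of code executing
        await asyncio.sleep(0)      # hand control over so the other task can run

async def main():
    # Both coroutines execute in overlapping time periods on one thread:
    # concurrency without any parallelism.
    await asyncio.gather(task("Task1", 3), task("Task2", 3))

asyncio.run(main())
```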
We often see the phrase "executing concurrently". Now we can interpret it correctly. It has two possible meanings:
- The speaker really meant "executing in parallel" but used the wrong word.
- Multiple operations can execute in overlapping time periods, that is, true parallelism, or an execution pattern like the coroutine one above.
My advice is to avoid this phrase as much as possible, because it easily causes misunderstanding, especially among people who don't distinguish concurrency from parallelism. But readers who have come this far can clearly tell them apart, so for simplicity the word "concurrent" will still be used below.
The second key point is the word "can" in "can take place in overlapping time periods". "Can" means that a correct concurrent design makes concurrent execution possible, but the program will not necessarily have multiple tasks whose execution periods overlap when it actually runs. For example, suppose our program opens a thread or a process for each task: when there is only one task, there is obviously no overlap between the execution periods of multiple tasks; when there are several tasks, overlap appears. From this we see that concurrency does not describe the state of a program's execution; it describes a design, the structure of the program, such as the "one thread per task" design in the example above. A concurrent design is not directly tied to how the program actually executes, but a correct concurrent design makes concurrent execution possible. Conversely, if a program is designed to finish one task completely before starting the next, it is not a concurrent design, because it cannot be executed concurrently.
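Here is a small sketch of that "one thread per task" design (the names are invented for illustration): the design is concurrent in both calls below; whether execution periods actually overlap depends only on how many tasks arrive.

```python
import threading
import time

def handle(task_id):
    time.sleep(0.1)  # stand-in for real work
    print(f"task {task_id} done")

def serve(tasks):
    # The concurrent design: open one thread per task.
    threads = [threading.Thread(target=handle, args=(t,)) for t in tasks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

serve([1])        # one task: no overlap at run time, yet the design is concurrent
serve([1, 2, 3])  # several tasks: their execution periods now overlap
```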
So how do you achieve a design that supports concurrency? In one word: split.
A concurrent design usually requires splitting the work, because without splitting it is impossible to perform multiple operations in overlapping time periods. The split can be parallel in style, such as abstracting the work into several similar tasks, or non-parallel, such as dividing one task into multiple steps.
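A toy illustration of the two kinds of split (everything here is made up for the example):

```python
data = list(range(1_000))

# Parallel-style split: cut the work into similar, independent sub-tasks.
# Each chunk could later be handed to a separate thread, process, or machine.
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]
print(sum(sum(chunk) for chunk in chunks))

# Non-parallel split: cut one task into sequential steps. The steps cannot run
# at the same time, but each `yield` is a point where another task could run.
def task(name):
    yield f"{name}: read input"
    yield f"{name}: transform"
    yield f"{name}: write output"

for step in task("Task1"):
    print(step)
```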
The relationship between concurrency and parallelism
Different concurrent designs enable different ways to parallelize.
This sentence comes from the famous talk Concurrency is not Parallelism. It is concise enough that it needs little extra explanation. Still, it's bad form to only quote someone else, so let me restate it using the summary from earlier in this article: a concurrent design makes concurrent execution possible, and parallelism is one pattern of concurrent execution.
Finally, a few more words about the Concurrency is not Parallelism talk. Ever since it came out, articles discussing concurrency vs. parallelism have sprung up everywhere, and without exception they cite it; some articles even use images taken directly from its slides. Like this one:
Do you think I'm going to explain this picture? No. The only reason to include it is the adorable gopher.
One more while we're at it:
I once saw a question asking why Go is popular, and one of the answers was "the logo is cute", which made me laugh.
This seems to be drifting off topic, so back to the talk. Like many people, I watched it before I started thinking about the concurrency vs. parallelism question, and I spent quite a lot of time studying that pile of cart-pushing gophers. In the end I worked out the problem mainly through scattered material online (such as Stack Overflow answers) and my own thinking; the talk didn't help much. Looking back at it after fully understanding the issue, it really is quite good: Andrew Gerrand clearly understands the problem deeply, but it is not for beginners. Its biggest flaw is that the gopher examples are not good; they are too complicated. A lot of time is spent discussing different concurrent designs, but for someone encountering the topic for the first time, who hasn't yet sorted out the difference between concurrency and parallelism, studying cart-pushing gophers is simply too hard. "Different concurrent designs enable different ways to parallelize" is an excellent summary, but only for people who already understand the issue thoroughly, such as me and the readers who have made it this far; to a beginner it is as impenetrable as scripture. In one sentence: don't start by watching this video, and don't spend time studying the cart-pushing gophers. The gopher is cute, but confusing.
2015.8.14 Update
Actually, my earlier understanding still contained a mistake; I mentioned it in the post "Several recent interviews". I recently bought the book Seven Concurrency Models in Seven Weeks and found that it covers exactly this point. An excerpt (English edition, pp. 3-4):
Although there's a tendency to think that parallelism means multiple cores, modern computers are parallel on many different levels. The reason why individual cores have been able to get faster every year, until recently, is that they've been using all those extra transistors predicted by Moore's law in parallel, both at the bit and at the instruction level.
Bit-level Parallelism
Why is a 32-bit computer faster than an 8-bit one? Parallelism. If an 8-bit computer wants to add two 32-bit numbers, it has to do it as a sequence of 8-bit operations. By contrast, a 32-bit computer can do it in one step, handling each of the 4 bytes within the 32-bit numbers in parallel. That's why the history of computing has seen us move from 8- to 16-, 32-, and now 64-bit architectures. The total amount of benefit we'll see from this kind of parallelism has its limits, though, which is why we're unlikely to see 128-bit computers soon.
Instruction-level Parallelism
Modern CPUs are highly parallel, using techniques like pipelining, out-of-order execution, and speculative execution.
As programmers, we've mostly been able to ignore this because, despite the fact that the processor has been doing things in parallel under our feet, it's carefully maintained the illusion that everything is happening sequentially. This illusion is breaking down, however. Processor designers are no longer able to find ways to increase the speed of an individual core. As we move into a multicore world, we need to start worrying about the fact that instructions aren't handled sequentially. We'll talk about this more in Memory Visibility, on page ?.
Data Parallelism
Data-parallel (sometimes called SIMD, for "single instruction, multiple data") architectures are capable of performing the same operations on a large quantity of data in parallel. They're not suitable for every type of problem, but they can be extremely effective in the right circumstances. One of the applications that's most amenable to data parallelism is image processing. To increase the brightness of an image, for example, we increase the brightness of each pixel. For this reason, modern GPUs (graphics processing units) have evolved into extremely powerful data-parallel processors.
Task-level Parallelism
Finally, we reach what most people think of as parallelism: multiple processors. From a programmer's point of view, the most important distinguishing feature of a multiprocessor architecture is the memory model, specifically whether it's shared or distributed.
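As a small aside on the data-parallel case in the excerpt, here is a sketch using NumPy (my own example, not from the book): brightening an image is one vectorised operation over all pixels rather than a per-pixel loop, which is exactly the shape of work that SIMD-style hardware handles well.

```python
import numpy as np

# A tiny 2x3 grayscale "image"; a real image is just a much bigger array.
image = np.array([[10, 200, 250],
                  [0, 128, 255]], dtype=np.uint8)

# One operation brightens every pixel; the per-element work is data-parallel.
brighter = np.clip(image.astype(np.int16) + 40, 0, 255).astype(np.uint8)
print(brighter)
```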
The most critical point is that computers use parallel techniques at many different levels. What I discussed earlier was in fact confined to the task level, and at that level parallelism is indeed a subset of concurrency. But parallelism as a whole is not a subset of concurrency, because bit-level and instruction-level parallelism do not belong to concurrency. For example, in the 32-bit addition quoted above, handling the 4 bytes at once is clearly parallel, yet it is all part of a single 32-bit addition: there are no multiple tasks, and therefore no concurrency at all.
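To make the bit-level example concrete, here is a toy sketch (illustrative only) of what an 8-bit machine has to do as four sequential operations, where a 32-bit ALU does it in a single parallel step, all within one addition and without any notion of multiple tasks:

```python
def add_32bit_on_8bit(a, b):
    """Add two 32-bit numbers one byte at a time, as an 8-bit ALU would."""
    result, carry = 0, 0
    for i in range(4):                  # four sequential 8-bit additions
        byte_a = (a >> (8 * i)) & 0xFF
        byte_b = (b >> (8 * i)) & 0xFF
        s = byte_a + byte_b + carry
        carry = s >> 8                  # carry into the next byte
        result |= (s & 0xFF) << (8 * i)
    return result & 0xFFFFFFFF

a, b = 0x12345678, 0x0FEDCBA9
assert add_32bit_on_8bit(a, b) == (a + b) & 0xFFFFFFFF
print(hex(add_32bit_on_8bit(a, b)))     # a 32-bit ALU does this in one step
```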
So the correct statement is this:
Parallelism refers to physically simultaneous execution; concurrency refers to a program design that allows multiple tasks to be logically interleaved in execution.
As I understand it now, concurrency applies to the task level and above, while parallelism is not restricted to any particular level. That is the difference between them.
Comments:
Chicken Wings · 3 months ago
Actually, I think it is more natural and easier to understand this from the perspective of how computer systems developed.
Start from instruction-level parallelism, which is in fact closely related to task-level parallelism. From the viewpoint of early computers, any task-level parallelism in the strict sense necessarily corresponds to instruction-level parallelism, and instruction-level parallelism must rely on hardware. This view is easy to accept because the computer was essentially a batch system with no external intervention; there were no programming languages, only instructions.
On any computer of the von Neumann architecture, a CPU (or core) necessarily executes instructions serially. Therefore, on any single-CPU machine there is no parallelism in the strict, or narrow, sense. Strict parallelism at the instruction level means that, within a sufficiently small time window, more than one instruction can be in execution.
Therefore, on early computers, strict parallelism could only be achieved by increasing the number of computers. In that case, task-level parallelism and instruction-level parallelism were almost indistinguishable.
To run multiple programs on one computer at the same time, the technique used historically is multitasking.
(A digression: if you have seen how early computers received instructions, how input was entered, and how output was produced, basically by moving tapes around, then the gopher diagrams about parallelism and concurrency become much easier to understand. Those pictures are practically a tribute to the manual labourers, ah no, the computer pioneers of that era.)
Here we need to explain why multiple tasks arose in the first place. As we know, programs can be broadly divided into two extremes, compute-intensive and IO-intensive, and commercial programs are mostly IO-intensive. So on early computers with very limited computing resources, there was a mismatch between IO and computing power: the time spent on IO far exceeded the computation time. At first this problem did not lead to concurrency; instead it led to hardware parallelism, such as the appearance of dedicated data-processing machines like the IBM 1401. Of course, after spooling appeared in third-generation computers, such machines were no longer needed.
Back to multitasking. An IO operation takes a long time and does not need the CPU, but even when the CPU isn't executing instructions it must wait for the input to complete, which leaves the CPU idle. In those days most computing resources were sold by time, so this kind of idleness was a huge waste. So why not let the CPU execute instructions while waiting for IO? This naturally leads to the concept of multitasking. The idea is easy to understand: when a program performs IO or another time-consuming but non-CPU-intensive operation, it is switched out and the instructions of another program are executed; when the previous program's IO finishes, it is switched back in and continues, so that computing resources are not wasted. This switch is a context switch.
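A toy model of that idea (entirely invented for illustration): each "program" announces how long its IO will take, and the scheduler switches to another program instead of letting the CPU idle while waiting.

```python
import collections

def program(name, io_ticks):
    print(f"{name}: start, issue IO request")
    yield io_ticks                   # block on IO for this many clock ticks
    print(f"{name}: IO done, compute result")

def scheduler(programs):
    # When a program blocks on IO, switch it out and run another one.
    waiting = []                     # [ticks_left, program] pairs
    ready = collections.deque(programs)
    while ready or waiting:
        if ready:
            prog = ready.popleft()
            try:
                ticks = next(prog)   # run until the program blocks on IO
                waiting.append([ticks, prog])
            except StopIteration:
                pass                 # the program has finished
        # One clock tick passes; programs whose IO completed become ready again.
        for entry in waiting:
            entry[0] -= 1
        ready.extend(p for t, p in waiting if t <= 0)
        waiting = [e for e in waiting if e[0] > 0]

scheduler([program("job A", 2), program("job B", 1)])
```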
In fact, this is already concurrency. On computers of that era, the "feeling of parallelism" created by concurrency was not very strong, because limited computing resources could easily make a single task take noticeably longer to complete than expected. But on a modern computer it is easy to create the illusion that two tasks are executing simultaneously.
So we can easily see that concurrency here is essentially multiple operations sharing computing resources, with the goal of reducing the waste caused by idle resources. Naturally, this involves allocating computing resources, which gives rise to the concepts of scheduling and locking. In fact, if we drop the word "computing", this description extends to any shared use of resources in a computer, at every level of the hierarchy: files, memory pages, processes and threads, and so on.
Of course, the concurrency above is described in terms of a common implementation. As the blogger says, we can implement concurrency with real parallelism, for example by running a separate process on each core of a dual-core CPU; we can also implement the concurrency of those two processes on a single-core CPU through scheduling.
As for parallelism in the strict sense at the instruction level, it necessarily means that more than one instruction is in the execution stage at the same moment, which is why we had to wait until "we move into a multicore world". Even the pipelining mentioned in the excerpt is not parallel on a single-core CPU; it merely ensures that at every moment the circuit units for the five stages IF, ID, EX, MA, and WB are not idle. In fact this idea is itself concurrency, just as the blogger said: "A concurrent design usually requires splitting the work, because without splitting it is impossible to perform multiple operations in overlapping time periods. The split can be parallel in style, such as abstracting the work into several similar tasks, or non-parallel, such as dividing one task into multiple steps." So concurrency is not limited to the task level of programming; it is an abstraction of a solution to the problem of sharing and using resources.
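The pipelining point can be sketched with a tiny cycle-by-cycle table (purely illustrative): every instruction is split into the five steps, and in each cycle the stage units are occupied by different instructions instead of sitting idle.

```python
STAGES = ["IF", "ID", "EX", "MA", "WB"]
instructions = ["i1", "i2", "i3", "i4"]

# Print which instruction occupies each pipeline stage in every clock cycle.
n_cycles = len(instructions) + len(STAGES) - 1
print("cycle  " + "  ".join(STAGES))
for cycle in range(n_cycles):
    row = []
    for s in range(len(STAGES)):
        idx = cycle - s                      # instruction currently in stage s
        row.append(instructions[idx] if 0 <= idx < len(instructions) else "--")
    print(f"{cycle + 1:>5}  " + "  ".join(row))
```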
A good article on the concepts of concurrency and parallelism, with an excellent comment from an expert.