Talking about multi-core CPUs, multi-threading, and multi-process programming


1. CPU Development Trend

The number of cores will keep increasing in line with Moore's Law, because improving single-core performance has hit a serious bottleneck. The average desktop PC was expected to reach 24 cores (or 16 cores / 32 threads) around late 2017 or early 2018. How do we face this sudden rise in the number of cores? Programming also needs to evolve with the times. I venture to predict that the on-chip bus between cores will move to a grouped 4-way interconnect :), because a fully connected design is too complex and a single shared bus is not powerful enough. It will probably also be an asymmetric multi-core processor, possibly mixing in a few DSPs or stream processors.

2. The difference between multi-threading and parallel computing

(1) Multithreading is not only used for parallel computing; it also has many other useful roles.

Even back in the single-core era, multithreading had a wide range of applications. At that time it was mostly used to reduce blocking, in the sense of code like this:

while (1)
{
    if (flag == 1)
        break;      /* exit once another thread sets the flag */
    Sleep(1);       /* yield the CPU for ~1 ms instead of spinning */
}

This code leaves the CPU idle while waiting; note that the CPU is not actually being wasted here. Remove the Sleep(1), however, and the loop becomes a pure waste of CPU time.

When does blocking occur? Generally while waiting for I/O operations (disk, database, network, and so on). With a single thread, the CPU ends up doing nothing useful for this program (executing other programs does not count, because their progress is meaningless to this one) and efficiency suffers. For example, if an I/O operation takes 10 milliseconds, the program is blocked for nearly 10 milliseconds; what a waste, when the CPU works on the scale of nanoseconds.

So the time-consuming I/O operation is handed off to a separate thread, and the function (code) that created the thread is no longer blocked by the I/O; it continues with the rest of the program instead of waiting (or yielding to other programs).
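The original gives no code for this, but as a rough C++11 sketch (with a made-up slow_io function standing in for any blocking disk, database, or network call), handing the slow work to a worker thread might look like this:

#include <chrono>
#include <iostream>
#include <thread>

// Hypothetical stand-in for a blocking call (disk, database, network, ...).
void slow_io()
{
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    std::cout << "I/O finished\n";
}

int main()
{
    std::thread io_thread(slow_io);   // the blocking work runs on its own thread

    // Meanwhile the main thread keeps doing other useful work.
    for (int i = 0; i < 5; ++i)
        std::cout << "main thread still responsive: " << i << "\n";

    io_thread.join();                 // wait for the I/O thread before exiting
    return 0;
}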

Still in the single-core era, using multithreading to eliminate blocking can also be called "concurrency", which is fundamentally different from parallelism. Concurrency is "pseudo-parallelism": it looks parallel, but in reality one CPU is doing everything, just switching between tasks so quickly that we cannot perceive it. Take a UI-based program (in plain words, one with a graphical interface): if clicking a button triggers an event handler that takes 10 seconds to run, the program appears to hang, because it is busy executing and has no time to respond to the user's other actions. If instead that button's handler is assigned to a thread and the thread is started, the program does not hang and keeps responding to the user's other operations. What follows, however, are problems of mutual exclusion, synchronization, deadlock, and so on between threads; see the relevant literature for details.

Now we are in the multi-core era, and the mutual-exclusion and synchronization problems between threads become more serious. The single-core era was mostly concurrency; the multi-core era is genuinely different. Why? See the relevant literature for the details, but let me explain briefly. Previously, a volatile variable could solve most problems, for example several threads all accessing a shared flag bit. With single-core concurrency this works (p.s. under what circumstances does it break down? When there are multiple flags, or an array of them, and the problem can only be solved by logic in the code; a few extra idle loops do not matter as long as nothing fatal happens), because there is only one CPU and only one thread can be accessing the flag at any instant. With multiple cores the situation is different, so volatile alone is not enough; you need the language- and platform-specific primitives: semaphores, mutexes, monitors, locks, and so on. These classes rely on hardware support (such as disabling interrupts) to achieve the effect of "primitives", so that access to the critical section cannot be interrupted. For details, readers can consult "Modern Operating Systems".
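To illustrate the point above with a hedged sketch (not from the original text): two threads increment a shared counter, and it is the std::mutex, not volatile, that makes the combined read-modify-write safe on a multi-core machine. The counter and function names are invented for the example:

#include <iostream>
#include <mutex>
#include <thread>

long counter = 0;           // shared data; volatile alone would not protect it
std::mutex counter_mutex;   // mutual-exclusion primitive guarding the counter

void work()
{
    for (int i = 0; i < 100000; ++i)
    {
        std::lock_guard<std::mutex> lock(counter_mutex);  // enter the critical section
        ++counter;          // the read-modify-write cannot be interleaved with the other thread's
    }
}

int main()
{
    std::thread t1(work);
    std::thread t2(work);
    t1.join();
    t2.join();
    std::cout << counter << "\n";   // always 200000 with the mutex; unpredictable without it
    return 0;
}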

(2) Parallel computing can also be achieved by other means; multithreading is only one of them.

Other means include multi-processing (which itself covers shared-memory, distributed multi-machine, and hybrid variants) and instruction-level parallelism.

ILP (instruction-level parallelism): on x86 this shows up as SMT (simultaneous multithreading, i.e. Hyper-Threading), and on architectures such as MIPS as superscalar issue and out-of-order execution. There are differences between these techniques, but their common point is that they achieve parallelism at the instruction level, which the user cannot control and which is not part of programming proper. Only limited optimization is possible, and even that limited optimization mostly falls within the compiler's jurisdiction; there is very little the user can do.

(3) Typical languages suited to parallel computing

Erlang and MPI: the former is a language, the latter an extension library for C/C++ and Fortran, but the effect is the same: both use multiple processes to achieve parallel computing. Erlang's processes share nothing and communicate by message passing, while MPI can run on a single shared-memory machine, distributed across many machines, or as a hybrid of the two.
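As a rough sketch of the MPI side (not from the original), here is a minimal program in which every process reports its rank; it would typically be compiled with an MPI wrapper compiler such as mpicxx and launched with mpirun:

#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);                  // start the MPI runtime

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    // this process's id
    MPI_Comm_size(MPI_COMM_WORLD, &size);    // total number of processes

    std::printf("process %d of %d\n", rank, size);

    MPI_Finalize();                          // shut down the MPI runtime
    return 0;
}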

C#/.NET 4.0: the new version 4.0 can implement a parallel for loop with very little code, something that previously required cumbersome code to achieve. This is parallel computing implemented with multithreading. Java and C# 3.5 both offer a thread pool (ThreadPool), a convenient management class that lets you use multiple threads easily and efficiently.
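The original shows no code here; as a hedged C++11 analogue of the parallel-for idea (rather than C#'s Parallel.For), the following sketch splits a loop's iterations across as many threads as the hardware reports:

#include <iostream>
#include <thread>
#include <vector>

int main()
{
    const int n = 1000000;
    std::vector<double> data(n, 1.0);

    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 4;           // fall back if the core count is unknown

    std::vector<std::thread> threads;
    int chunk = n / static_cast<int>(workers);

    for (unsigned w = 0; w < workers; ++w)
    {
        int begin = static_cast<int>(w) * chunk;
        int end   = (w + 1 == workers) ? n : begin + chunk;
        // Each thread processes its own disjoint slice, so no locking is needed.
        threads.emplace_back([&data, begin, end]() {
            for (int i = begin; i < end; ++i)
                data[i] = data[i] * 2.0;
        });
    }

    for (auto& t : threads)
        t.join();

    std::cout << "done, data[0] = " << data[0] << "\n";
    return 0;
}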

CUDA is still a newborn calf with great development potential, but for now its application domain is limited. At present it can only be programmed in C (and not even C99), at a fairly low level, and cannot use function pointers. My personal feeling is that this comes from inherent limitations of the hardware (the memory available per core is small, and communication with system memory takes a long time), so it is only suited to scientific computing, still-image processing, video encoding and decoding, and similar domains; in other areas it is not as capable as a high-end CPU. Once the GPU has an operating system that can fully schedule GPU resources, the GPU could become a great power. As for physics acceleration in games, a multi-core CPU can actually do that very well too.

Other languages... well, let's leave them for future discussion.

3. The more threads the better? When is it necessary to use multithreading?

Threads are not necessarily better the more you have; thread switching also has a cost. Adding a thread pays off only when the extra overhead it introduces is smaller than the blocking time it eliminates; that is when it is value for money.

Linux has been able to dispatch different threads to different cores since the 2.6 kernel; Windows has supported this since NT 4.0.

When should we use multithreading? There are four scenarios to discuss:

A. Multi-core CPU, compute-intensive tasks. Here multithreading should be used to the fullest, since it improves the efficiency of task execution, for example encryption/decryption or data compression/decompression (video, audio, ordinary data); otherwise one core runs at full load while the other cores sit idle.

B. Single-core CPU, compute-intensive tasks. The task already consumes 100% of the CPU, so there is no need for, and no gain from, using multithreading to improve computational efficiency. Conversely, if human-computer interaction is involved, it is still best to use multithreading, so that the user does not lose the ability to operate the computer.

C. Single-core CPU, I/O-intensive tasks. Multithreading is still used, mainly for the convenience of human-computer interaction and to avoid blocking on I/O.

D. Multi-core CPU, I/O-intensive tasks. Same as the single-core case, for the same reasons; it needs no further explanation.

4. Skills/Techniques that programmers need to master

(1) Reduce serialized (sequential) code to improve efficiency. This goes without saying.

(2) Distribution of singly shared data: make several copies of a piece of data so that different threads can each access their own copy at the same time (see the sketch after this list).

(3) Load balancing, which comes in static and dynamic forms. See the relevant literature for details.
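As a hedged sketch of point (2), not taken from the original: each thread below accumulates into its own private partial sum instead of all threads contending for one shared accumulator, and the copies are combined only once at the end. The variable names are invented for the example:

#include <iostream>
#include <thread>
#include <vector>

int main()
{
    const int n = 1000000;
    std::vector<int> values(n, 1);

    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 4;

    // One private accumulator per thread: no locking, no contention on shared data.
    std::vector<long> partial(workers, 0);
    std::vector<std::thread> threads;
    int chunk = n / static_cast<int>(workers);

    for (unsigned w = 0; w < workers; ++w)
    {
        int begin = static_cast<int>(w) * chunk;
        int end   = (w + 1 == workers) ? n : begin + chunk;
        threads.emplace_back([&values, &partial, w, begin, end]() {
            for (int i = begin; i < end; ++i)
                partial[w] += values[i];     // each thread writes only its own copy
        });
    }

    for (auto& t : threads)
        t.join();

    long total = 0;
    for (long p : partial)
        total += p;                          // combine the per-thread copies once
    std::cout << "total = " << total << "\n";
    return 0;
}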
