Ramble on concurrency and parallelism


0x00 Preface

I worry that, in the end, I will get lost in using all kinds of tools and neglect learning the fundamentals. So I have started organizing some basic knowledge into a series.

This article is about concurrency and parallelism. Although it is a ramble, the basic theory is gathered from the various articles I have read; I am mostly a porter, adding a little understanding of my own.

Article structure

    • An overview roughly describing the differences between concurrency and parallelism
    • Two excerpts on the difference between parallelism and concurrency; the English one is particularly well written
    • The four levels of parallel architecture
    • A simple C++ multithreading example

0x01 Overview

Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once. -- Rob Pike

So what exactly is concurrency, and what is parallelism?

    • Concurrency: a concurrent program contains multiple logically independent execution blocks, which may run independently or cooperate with each other. Pay attention to the word independent; it matters for understanding these concepts.
    • Parallelism: a parallel program often solves a problem much faster than a serial one, because it can carry out multiple parts of the overall task at the same time. A parallel program may have multiple independent execution blocks, or it may have only one.

Here is another way to look at the difference: concurrency is a concept from the problem domain, where the program needs to handle multiple simultaneous events; parallelism is a concept from the solution domain, where a problem is sped up by executing multiple parts of it at once.

0x02 Excerpts

These are excerpts gathered from other sources.

Excerpt 1 (from a Zhihu user)

    • Concurrency is about dealing with lots of things at once; parallelism is about doing lots of things at once.
    • The two are related, but they are not the same concept. Concurrency can be thought of as a property of a program's logical structure: you can design a model concurrently and run it on a single-core system, where the scheduler creates the illusion of parallelism by rapidly switching between the logical execution blocks. At that point your program is concurrent but not parallel. Run the same model, unmodified, on a multicore system and it can be considered parallel. In this sense, parallelism is more concerned with the program's execution.
    • In computing, we usually introduce independent running entities to model concurrency, for example (see the channel sketch after this list):
      • operating-system-level processes and threads;
      • concurrent entities built into the programming language:
        • goroutines in Go (the CSP model);
        • processes in Erlang (the Actor model).
    • The real world is parallel, and the human brain is parallel.
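To make the "independent running entities" idea concrete, here is a minimal CSP-style channel sketched in C++ (the language of the example later in this article). The Channel class and its send/receive interface are my own illustration, not a standard API, and real goroutines and Erlang processes are far more capable.

#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

// A tiny CSP-style channel: independent entities communicate by passing
// messages instead of sharing state, loosely mimicking Go's channels or an
// Erlang mailbox. (Illustrative sketch only.)
template <typename T>
class Channel
{
public:
    void send(T value)
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(value));
        }
        ready_.notify_one();
    }
    T receive()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        ready_.wait(lock, [this] { return !queue_.empty(); });
        T value = std::move(queue_.front());
        queue_.pop();
        return value;
    }
private:
    std::mutex mutex_;
    std::condition_variable ready_;
    std::queue<T> queue_;
};

int main()
{
    Channel<int> ch;
    std::thread producer([&ch] {
        for (int i = 0; i < 3; ++i) ch.send(i);   // independent entity #1
    });
    for (int i = 0; i < 3; ++i)
        printf("got %d\n", ch.receive());          // independent entity #2
    producer.join();
    return 0;
}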

Excerpt 2 (from a foreign friend)

This answer is very good and worth reading carefully.

The terms concurrency and parallelism are often used in relation to multithreaded programs. But what exactly do concurrency and parallelism mean, and are they the same terms or not?

The short answer is "no". They are not the same terms, although they appear quite similar on the surface. It also took me some time to finally find and understand the difference between concurrency and parallelism, so I decided to add a text about concurrency vs. parallelism to this Java concurrency tutorial.

Concurrency

Concurrency means that an application is making progress on more than one task at the same time (concurrently). If the computer has only one CPU, the application may not make progress on more than one task at exactly the same time, but more than one task is being processed at a time inside the application: it does not completely finish one task before it begins the next.
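A minimal sketch of that idea in C++: a single thread makes progress on two tasks by interleaving their steps, never finishing one before starting the other. The hand-written round-robin loop is a deliberate stand-in for what an OS scheduler does with time slices.

#include <cstdio>

int main()
{
    int a = 0, b = 0;
    // Two "tasks", each advanced one small step at a time.
    auto task_a = [&] { printf("task A, step %d\n", ++a); };
    auto task_b = [&] { printf("task B, step %d\n", ++b); };

    // A toy round-robin "scheduler": neither task finishes before the
    // other starts, so both are in progress at once -- concurrency on a
    // single CPU, with no parallelism at all.
    for (int step = 0; step < 3; ++step)
    {
        task_a();
        task_b();
    }
    return 0;
}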

Parallelism

Parallelism means that an application splits its tasks up into smaller subtasks which can be processed in parallel, for instance on multiple CPUs at the exact same time.
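A minimal C++ sketch of that splitting, assuming the task is summing a large vector; std::async is just one convenient way to hand a subtask to another CPU, and any threading mechanism would do.

#include <cstdio>
#include <future>
#include <numeric>
#include <vector>

int main()
{
    std::vector<long long> data(1000000, 1);
    auto mid = data.begin() + data.size() / 2;

    // Split the one task (sum everything) into two subtasks; with
    // std::launch::async the lower half may run on another core at the
    // exact same time as the upper half below.
    auto lower = std::async(std::launch::async, [&] {
        return std::accumulate(data.begin(), mid, 0LL);
    });
    long long upper = std::accumulate(mid, data.end(), 0LL);

    printf("sum = %lld\n", lower.get() + upper);   // prints 1000000
    return 0;
}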

Concurrency vs. Parallelism in Detail

As you can see, concurrency is related to how an application handles the multiple tasks it works on. An application can process one task at a time (sequentially) or work on multiple tasks at the same time (concurrently).

Parallelism, on the other hand, is related to how an application handles each individual task. An application can process a task serially from start to end, or split the task up into subtasks which can be completed in parallel.

As you can see, an application can be concurrent but not parallel. This means that it processes more than one task at the same time, but the tasks are not broken down into subtasks.

An application can also be parallel but not concurrent. This means that the application only works on one task at a time, and this task is broken down into subtasks which can be processed in parallel.

Additionally, an application can be neither concurrent nor parallel. This means that it works on only one task at a time, and the task is never broken down into subtasks for parallel execution.

Finally, an application can also be both concurrent and parallel, in that it both works on multiple tasks at the same time and also breaks each task down into subtasks for parallel execution. However, some of the benefits of concurrency and parallelism may be lost in this scenario, as the CPUs in the computer are already kept reasonably busy with either concurrency or parallelism alone. Combining the two may lead to only a small performance gain, or even a performance loss. Make sure to analyze and measure before you blindly adopt a concurrent parallel model.

0x03 Parallel architecture

1. Bit-level parallelism

Why is a 32-bit computer faster than an 8-bit one? Parallelism. To add two 32-bit numbers, an 8-bit computer must perform a sequence of 8-bit operations, while a 32-bit computer can complete the addition in one step, handling all 4 bytes of each 32-bit number in parallel. Computers have moved through the 8-, 16-, and 32-bit eras and are now in the 64-bit era. However, the performance gains from widening the word size have hit a bottleneck, which is why we are unlikely to enter a 128-bit era any time soon.
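To make the addition example concrete, here is a C++ sketch that adds two 32-bit numbers the way an 8-bit machine must: one byte at a time, chaining the carry through four steps. The function name add32_via_8bit is made up for illustration; a 32-bit ALU handles all four bytes in a single step.

#include <cstdint>
#include <cstdio>

// Add two 32-bit numbers using only byte-wide operations, the way an
// 8-bit CPU would: four 8-bit additions chained through a carry.
uint32_t add32_via_8bit(uint32_t a, uint32_t b)
{
    uint32_t result = 0;
    unsigned carry = 0;
    for (int i = 0; i < 4; ++i)                    // one pass per byte
    {
        unsigned byte_a = (a >> (8 * i)) & 0xFF;
        unsigned byte_b = (b >> (8 * i)) & 0xFF;
        unsigned sum = byte_a + byte_b + carry;    // 8-bit add with carry-in
        carry = sum >> 8;                          // carry-out to next byte
        result |= (sum & 0xFF) << (8 * i);
    }
    return result;
}

int main()
{
    uint32_t a = 123456789, b = 987654321;
    printf("%u\n", add32_via_8bit(a, b));   // same value, but four steps
    printf("%u\n", a + b);                  // a 32-bit ALU does this in one
    return 0;
}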

2. Instruction-level parallelism

Modern CPUs have a high degree of internal parallelism, using techniques such as pipelining, out-of-order execution, and speculative execution. Programmers usually do not have to care about these details because, despite all the parallelism inside, the processor is carefully designed so that from the outside everything appears to execute serially. But this "looks serial" design is gradually reaching its limits: processor designers are finding it harder and harder to increase the speed of a single core.

In the multicore era, we have to face the fact that instructions are no longer executed serially, whether superficially or in substance.
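One place where ordinary code can expose instruction-level parallelism is a reduction with a broken dependency chain. In the sketch below (the function names are mine), sum_serial forces each addition to wait for the previous one, while sum_ilp uses four independent accumulators the pipeline can keep in flight at once; whether that translates into a measured speedup depends on the compiler and CPU.

#include <cstddef>
#include <cstdio>
#include <vector>

// A data-dependent chain: each addition must wait for the previous one,
// so the out-of-order core cannot overlap them.
double sum_serial(const double* a, std::size_t n)
{
    double s = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        s += a[i];                          // next iteration depends on s
    return s;
}

// Four independent accumulators break the dependency chain, letting the
// pipeline keep several additions in flight at once -- instruction-level
// parallelism, even though the code is still single-threaded.
double sum_ilp(const double* a, std::size_t n)
{
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4)
    {
        s0 += a[i];     s1 += a[i + 1];     // independent, can overlap
        s2 += a[i + 2]; s3 += a[i + 3];
    }
    for (; i < n; ++i) s0 += a[i];          // leftover elements
    return (s0 + s1) + (s2 + s3);
}

int main()
{
    std::vector<double> v(1000, 0.5);
    printf("%f %f\n", sum_serial(v.data(), v.size()), sum_ilp(v.data(), v.size()));
    return 0;
}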

3. Data-level parallelism

Data-level parallelism applies the same operation to large amounts of data in parallel. It is not a good fit for every problem, but in the right setting it is very effective. Image processing is one such setting: to increase the brightness of a picture, for example, you increase the brightness of every pixel. Thanks to the nature of image processing, the modern GPU (graphics processing unit) has evolved into an extremely powerful data-parallel processor.
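Here is a minimal hand-rolled sketch of the brightness example in C++: the same operation is applied to every pixel, with the buffer split evenly across threads. A real implementation would more likely use SIMD instructions or a GPU; the brighten function, the flat uint8_t buffer, and the assumption that delta is non-negative are all choices made for illustration.

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

// Data-level parallelism: the same operation (+delta, clamped to 255,
// assuming delta >= 0) is applied to every pixel, with the buffer split
// evenly across worker threads.
void brighten(std::vector<uint8_t>& pixels, int delta)
{
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (pixels.size() + workers - 1) / workers;
    std::vector<std::thread> threads;
    for (unsigned w = 0; w < workers; ++w)
    {
        std::size_t begin = w * chunk;
        std::size_t end = std::min(begin + chunk, pixels.size());
        if (begin >= end) break;
        threads.emplace_back([&pixels, begin, end, delta] {
            for (std::size_t i = begin; i < end; ++i)
                pixels[i] = (uint8_t)std::min(255, pixels[i] + delta);
        });
    }
    for (auto& t : threads) t.join();   // wait for every chunk to finish
}

int main()
{
    std::vector<uint8_t> image(1920 * 1080, 100);  // a fake grey image
    brighten(image, 40);
    printf("first pixel is now %d\n", image[0]);   // prints 140
    return 0;
}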

4. Task-level parallelism

From the programmer's point of view, the most obvious classifying feature of a multiprocessor architecture is its memory model: shared memory or distributed memory.

    • In a shared-memory multiprocessor system, each processor can access the entire memory, and processors communicate primarily through memory.
    • In a distributed-memory multiprocessor system, each processor has its own memory, and processors communicate primarily over the network.

Communicating through memory is easier and faster than communicating over a network, so programming with shared memory is often simpler. However, as the number of processors grows, shared memory runs into performance bottlenecks, at which point you have to move to distributed memory. Distributed memory is also unavoidable if you want a fault-tolerant system that uses multiple machines to ride out hardware failures.
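A tiny illustration of communicating through shared memory: the worker threads below all see the same address space, so they coordinate through a single atomic counter. On a distributed-memory system the equivalent coordination would have to travel over the network as explicit messages.

#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main()
{
    // One counter in shared memory, visible to every worker.
    std::atomic<long> counter{0};
    std::vector<std::thread> workers;
    for (int w = 0; w < 4; ++w)
        workers.emplace_back([&counter] {
            for (int i = 0; i < 100000; ++i)
                counter.fetch_add(1, std::memory_order_relaxed);
        });
    for (auto& t : workers) t.join();
    printf("counter = %ld\n", counter.load());   // always 400000
    return 0;
}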

0x04 An example

Everything up to this point has been theory, so here is a very simple C++ multithreading example. The program is simple enough that it needs no further commentary.

#include <pthread.h>
#include <cstdio>

#define NUM_THREADS 3

// Thread entry function.
void* say_hello(void* args)
{
    printf("Hello Dante! You're Great!\n");
    return NULL;
}

int main()
{
    // Thread id variables; use an array since we create several threads.
    pthread_t tids[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; ++i)
    {
        // Arguments, in order: thread id to fill in, thread attributes,
        // entry function, argument passed to the entry function.
        int ret = pthread_create(&tids[i], NULL, say_hello, NULL);
        printf("Hello Dante! You're Gorgeous!\n");
        if (ret != 0)
        {
            printf("pthread_create error: error_code=%d\n", ret);
        }
    }
    // Calling pthread_exit in main lets the process wait for the threads
    // to finish; otherwise the process would exit and could kill threads
    // that have not yet had a chance to run.
    pthread_exit(NULL);
}

The output of one run is shown below. Look carefully at the printed lines: the "Great!" and "Gorgeous!" lines interleave, and the order can change from run to run. Why they differ I will leave for you to think about.

dante@DESKTOP-AE2RHL0:/mnt/d/workspace/c++$ ./a.out
Hello Dante! You're Great!
Hello Dante! You're Gorgeous!
Hello Dante! You're Gorgeous!
Hello Dante! You're Great!
Hello Dante! You're Gorgeous!
Hello Dante! You're Great!

0x05 Summary

There is still a lot to learn about concurrent programming, and different languages support it in different ways. In later posts I will summarize and organize the various concurrency models, illustrating them with examples in different languages.

0xFF Reference

    • GitHub (example source): https://github.com/dantezhao/concurrency_and_parallelism/blob/master/simple_cpp_example/hello_world.cc
    • Zhihu answer: https://www.zhihu.com/question/33515481/answer/135306366
    • Jakob Jenkov, Concurrency vs. Parallelism: http://tutorials.jenkov.com/java-concurrency/concurrency-vs-parallelism.html
    • Seven Concurrency Models in Seven Weeks

Author: Dantezhao | CSDN | GitHub

Article address: http://www.jianshu.com/p/930903b35588
Personal homepage: http://www.jianshu.com/u/2453cf172ab4
This article may be reproduced, but any reproduction must indicate the original source and author via a hyperlink.
