1.1 What is concurrency

Source: Internet
Author: User

At the simplest and most basic level, concurrency means that two or more activities happen at the same time. Concurrency is a natural part of life: we can walk and talk at the same time, or perform different actions with each hand, and of course each of us lives independently of the others, so you can watch a football match while I go swimming, and so on.

1.1.1 Concurrency in computers

When we talk about concurrency in computers, we mean a single system performing several independent activities in parallel, rather than sequentially, one after the other. This is not a new phenomenon: multitasking operating systems, which let one computer run several programs at the same time through task switching, have been around for many years, and high-end server machines with multiple processors that provide genuine concurrency have been available even longer. What is new is that machines which can genuinely run multiple tasks in parallel, rather than merely giving the impression of doing so, are now commonplace.

Historically, most computers have had one processor with a single core, and many desktop machines like this are still in use today. Such a machine can really perform only one task at a time, but it can switch between tasks many times per second. By doing a bit of one task and then a bit of another, it appears that the tasks are happening at the same time; this is called task switching. We still talk about concurrency on such systems: because the switches happen so quickly, you cannot tell at which point a task may be suspended, and task switching provides the illusion of concurrency both to the user and to the programs themselves. Because there is only the illusion of concurrency, a program may behave subtly differently when run in a single-processor task-switching environment than in a genuinely concurrent environment. In particular, certain classes of bugs relating to the memory model may not show up in such an environment; these are discussed in chapter 10.

Computers containing multiple processors have been used in servers and for high-end computing tasks for many years, but computers with multiple cores on a single chip have now become common in desktop machines too. Whether they have multiple processors or multiple cores within one processor, these computers can genuinely run more than one task in parallel. We call this hardware concurrency.

Consider an idealized scenario in which a computer performs exactly two tasks, each divided into 10 equal chunks. On a dual-core machine, each task can run on its own core; on a single-core machine doing task switching, the chunks of the two tasks are interleaved, with a gap between them. To interleave the execution, the system must perform a context switch every time it changes from one task to another, and this takes time: the OS has to save the CPU state and running-task information for the current task, work out which task to run next, and reload the CPU state for that task. The CPU may then have to load the new task's instructions and data from memory into cache, which can cause delays.

Although hardware concurrency is most readily available in multiprocessor or multi-core systems, some processors can also run multiple threads on a single core. The important factor is really the number of hardware threads: the measure of how many independent tasks the hardware can genuinely run in parallel. Even on a system with genuine hardware concurrency, it is easy to have more tasks than the hardware can run in parallel, so task switching is still needed. For example, a typical desktop computer may have hundreds of tasks running, performing background operations, even when the computer is nominally idle. It is task switching that allows these background tasks to run, and that lets you use your word processor, compiler, editor, and web browser all at once. Figure 1.2 shows the switching of four tasks on a dual-core machine; this is again an idealized scenario with the tasks divided into equal chunks, whereas in practice many issues make the divisions unequal and the scheduling irregular. These issues, which affect the efficiency and performance of concurrent code, are discussed in chapter 8.

Whether your program runs on a single-core machine or a multi-core machine, all of the techniques, functions, and classes covered in this book can be used, regardless of whether the concurrency is achieved through task switching or genuine hardware parallelism. But as you can imagine, how you make use of concurrency in your program may well depend on the hardware concurrency available, which is discussed in chapter 8.

1.1.2 Approaches to concurrency

Imagine two programmers working together on a software project. If your developers are in two separate offices, they can go about their work peacefully without disturbing each other, and each has his own set of reference manuals. However, communication is not straightforward: rather than simply turning around and talking, they have to use the phone or email, or walk to the other office. You also have the overhead of two offices and multiple copies of the manuals.

Now imagine moving your developers into the same office. They can talk to each other and easily share their ideas, and there is now only one office and one set of manuals. On the downside, they may find it hard to concentrate, and there may be conflicts over sharing resources.

These two ways of organizing your developers illustrate the two basic approaches to concurrency: each developer represents a thread, and each office represents a process. The first approach is to have multiple single-threaded processes, which is similar to each developer having his own office; the second is to have multiple threads in a single process, like two developers sharing one office.

You can combine these styles in any way you like, mixing multithreaded and single-threaded processes, but the principles are the same. Let's take a brief look at these two approaches to concurrency in a program.

Multi-process concurrency:

The first approach is to split an application into multiple separate single-threaded processes that run at the same time, just as you can run your browser and word processor simultaneously. The separate processes can pass messages to each other through interprocess communication channels (signals, sockets, files, and so on). The downside is that such communication is often complicated to set up or slow, because operating systems provide a lot of protection between processes to prevent one process from modifying data belonging to another. There is also an inherent overhead in running multiple processes: it takes time to start a process, and the operating system must devote internal resources to managing each one.

Of course, it is not all downside: the interprocess protection and higher-level communication mechanisms added by the operating system mean that it can be easier to write safe concurrent code with processes than with threads. Indeed, environments such as the one provided for the Erlang programming language use processes as the fundamental building block of concurrency, to great effect.

Using separate processes for concurrency also has an additional benefit: the separate processes can run on different machines connected over a network. Although this increases the communication overhead, on a well-designed system it can improve efficiency and performance.

Multithreaded concurrency:

The alternative approach is to run multiple threads in a single process. Threads are much like lightweight processes: each thread runs independently of the others, and each may run a different sequence of instructions, but all threads share the process's address space, and most of the data can be accessed directly from all threads. Global variables remain global, and pointers or references to objects can be passed between threads. Although it is often possible to share memory between processes, this is complicated to set up and hard to manage, because memory addresses of the same data are not necessarily the same in different processes.

The shared address space and the lack of protection of data between threads make the overhead much smaller than with multiple processes, because the operating system has less bookkeeping to do. But the flexibility of shared memory comes at a cost: if data is accessed by multiple threads, the application programmer must ensure that the view of the data seen by each thread is consistent whenever it is accessed. The issues surrounding sharing data between threads are covered in chapters 3, 4, 5, and 8 of this book. The problems are not insurmountable, provided suitable care is taken when writing the code, but they do mean that a great deal of thought must go into communication between threads.

The much lower overhead of multithreading compared to multiple processes means that, despite the potential problems with shared memory, multithreading is the favored approach to concurrency in mainstream languages, C++ included. In addition, the C++ standard does not provide any intrinsic support for interprocess communication, so applications that use multiple processes must rely on platform-specific APIs. This book therefore focuses exclusively on concurrency through multithreading, and future references to concurrency assume that it is implemented with multiple threads.

Now that we have a basic understanding of what concurrency is, let's look at why you would use it.
