MPI Parallel Program Development and Design - 1. Parallel Computers

Source: Internet
Author: User
Course introduction:

Message Passing Interface (MPI) is currently the most important parallel programming tool and environment; almost all major parallel computer vendors support it. MPI unifies three important and mutually conflicting goals: functionality, efficiency, and portability. This is an important reason for MPI's success.

SIMD/MIMD Parallel Computers:

Instructions and data are the two basic aspects involved in solving a problem on a computer: what operations the computer should execute, and what objects those operations are applied to. Although computers have developed enormously, this division into instructions and data still holds an important position and role, which is why it remains in use. A computer that can execute multiple instructions at the same time, or process multiple data items at the same time, can be called a parallel computer.

Based on whether a parallel computer executes multiple instructions simultaneously, processes multiple data items simultaneously, or both, parallel computers can be divided into SIMD (Single-Instruction Multiple-Data) parallel computers and MIMD (Multiple-Instruction Multiple-Data) parallel computers.

Example: how a SIMD computer works.
A SIMD computer applies the same instruction to different data items at the same time, as in the array assignment
A = A + 1
On a SIMD parallel machine, a single add instruction can add 1 to every element of array A at once. Array (or vector) operations are therefore especially suitable for execution on SIMD parallel computers, which support this form of operation directly and implement it efficiently.
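For comparison, here is a minimal C sketch of the same element-wise operation written as an ordinary loop; on a SIMD machine (or with a vectorizing compiler) all the additions in the loop body can be carried out by a single vector instruction. The array contents and size are illustrative assumptions, not part of the original example.

    #include <stdio.h>

    #define N 8

    int main(void)
    {
        int a[N] = {0, 1, 2, 3, 4, 5, 6, 7};

        /* Element-wise A = A + 1: on a SIMD machine every element
           is incremented by the same instruction at the same time. */
        for (int i = 0; i < N; i++)
            a[i] = a[i] + 1;

        for (int i = 0; i < N; i++)
            printf("%d ", a[i]);
        printf("\n");
        return 0;
    }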

An example illustrating how a MIMD computer works.
A MIMD computer uses multiple different instructions to operate on different data items at the same time. For example, the arithmetic expression
A = B + C + D - E + F * G
can be rewritten as
A = (B + C) + (D - E) + (F * G)
If the addition (B + C), the subtraction (D - E), and the multiplication (F * G) each have a corresponding execution unit, then these three different computations can be carried out at the same time.
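As an illustration only, the following C sketch uses three POSIX threads as stand-ins for the three independent execution units; each thread executes a different instruction stream on different data, which is the essence of MIMD operation. The variable values are made-up examples.

    #include <pthread.h>
    #include <stdio.h>

    double B = 1, C = 2, D = 7, E = 3, F = 2, G = 5;
    double sum_bc, diff_de, prod_fg;

    /* Three different operations on three different pairs of operands. */
    void *add_bc(void *arg) { sum_bc  = B + C; return NULL; }
    void *sub_de(void *arg) { diff_de = D - E; return NULL; }
    void *mul_fg(void *arg) { prod_fg = F * G; return NULL; }

    int main(void)
    {
        pthread_t t1, t2, t3;

        /* The three sub-expressions are evaluated concurrently. */
        pthread_create(&t1, NULL, add_bc, NULL);
        pthread_create(&t2, NULL, sub_de, NULL);
        pthread_create(&t3, NULL, mul_fg, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        pthread_join(t3, NULL);

        /* Combine the partial results: A = (B + C) + (D - E) + (F * G) */
        double A = sum_bc + diff_de + prod_fg;
        printf("A = %g\n", A);  /* (1+2) + (7-3) + (2*5) = 17 */
        return 0;
    }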

SPMD/MPMD Parallel Computers
Although the terms SIMD and MIMD are still widely used, new ways of organizing parallel computers have appeared. By analogy with the division above, but based on whether the programs (rather than the instructions) executed at the same time are the same, the concepts of SPMD (Single-Program Multiple-Data) parallel computers and MPMD (Multiple-Program Multiple-Data) parallel computers have been proposed.
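MPI programs are typically written in exactly this SPMD style, so a minimal C sketch may help: every process runs the same executable, and the rank returned by MPI_Comm_rank is what differentiates the data each copy works on. The data values here are illustrative assumptions.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* one program ...        */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* ... many processes,    */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* each with its own rank */

        /* Same code, different data: each process works on its own value. */
        int my_data = rank * 100;
        printf("process %d of %d handles data %d\n", rank, size, my_data);

        MPI_Finalize();
        return 0;
    }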

Classification of Parallel Computers:

Parallel computers with shared memory
In a shared-memory parallel computer, the processing units exchange information by accessing the shared memory, which also serves to coordinate the processors' handling of parallel tasks. Programming for shared memory is relatively simple to implement, but the shared memory itself is often a major bottleneck for performance, and especially for scalability.
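To make the contrast concrete, here is a minimal C sketch of the shared-memory style, using OpenMP purely as a familiar example (OpenMP is an assumption here, not part of this book's MPI material): the threads cooperate simply by reading and writing the same shared array, with no explicit communication.

    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static double a[N];
        double sum = 0.0;

        /* All threads read and write the same shared array; the runtime
           divides the iterations among them. Coordination happens through
           shared memory, not through explicit messages. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            a[i] = i * 0.5;

        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %f\n", sum);
        return 0;
    }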
Parallel computers with distributed memory
In a distributed-memory parallel computer, each processing unit has its own local memory. Because there are no common storage units, the processors exchange information, and coordinate and control one another's execution, by passing messages. This is the storage organization assumed by the message-passing parallel programming model introduced in this book. It is not hard to see that communication has an important impact on the performance of distributed-memory parallel computers, and writing the necessary message-passing statements is what makes parallel program design difficult on such machines. Nevertheless, this type of parallel computer is widely used because of its excellent scalability and high performance.
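A minimal sketch of such message passing in MPI, the model this book introduces: with no common memory, the only way process 1 can obtain a value held by process 0 is for process 0 to send it explicitly. Run with at least two processes, e.g. mpirun -np 2.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            int value = 42;  /* data in process 0's local memory */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int value;
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);  /* received over the network */
            printf("process 1 received %d from process 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }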
Parallel computers with distributed shared memory
Parallel computers with distributed shared memory combine the features of the previous two and are an important development direction for the new generation of parallel computers. Most popular cluster architectures use this structure. By strengthening the computing capability of each local node and turning it into a so-called "supernode", this design not only raises the computing capability of the whole system but also improves its scalability, making it easier to build very large computing systems quickly. See the figure below:


[Figure: structure of a parallel computer with distributed shared memory]
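On such machines a common programming style, sketched here as an assumption rather than anything introduced so far, is hybrid: message passing (MPI) between supernodes and shared-memory threads (OpenMP) within each supernode.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int provided, rank;

        /* One MPI process per supernode, with thread support requested. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        {
            /* Shared memory inside the node ... */
            printf("node %d, thread %d\n", rank, omp_get_thread_num());
        }

        /* ... message passing between nodes would go here. */
        MPI_Finalize();
        return 0;
    }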

What are the steps for solving a physical problem on a parallel computer?
Multi-level mapping: the physical problem is mapped to a computational model, the model to a parallel algorithm, the algorithm to a parallel program, and the program, finally, to the parallel computer that executes it.


I have never studied parallel computers before, so I still need to find some material to read and see whether I can understand this.

 
