Analysis and comparison of five main multi-core parallel programming methods


With the arrival and spread of the multicore era, the traditional single-threaded serial programming model is giving way to parallel programming. There are currently five major parallel programming models; the following summarizes, analyzes, and compares them.

1. MPI

MPI (Message Passing Interface) is a message-passing library standard published by the MPI Forum; it is a specification rather than a language, and implementations exist for C/C++/Fortran. It is a message-passing programming model whose services are built on interprocess communication. MPI provides a platform-independent standard for writing message-passing programs, so programs written against it are practical, portable, efficient, and flexible, and need little or no change to move between implementations. Major MPI implementations include MPICH and Open MPI.

Advantages: MPI runs on anything from a single-core/multicore CPU to a cluster, coordinating parallel computation across multiple hosts, so it scales well; it is used on everything from PCs to the world's TOP10 supercomputers.

Disadvantages: First, because it is based on message passing, the programmer must explicitly partition and distribute the computation and explicitly manage message passing and synchronization, so it is hard to parallelize a serial program incrementally. Second, coordinating the computation through interprocess communication lowers parallel efficiency, increases memory overhead, and makes programs harder to write and debug.

Reference: MPI Forum

2. OpenMP

OpenMP (Open Multi-Processing) is a specification for parallel programming published by the OpenMP ARB. It is an extension built on top of a serial language and can currently be used with C/C++/Fortran.

OpenMP consists of three parts: compiler directives, a runtime library, and environment variables. Its language model assumes that the unit of execution is a thread sharing a single address space; in other words, OpenMP is a fork/join programming model. The fork/join parallel mechanism works as follows:

Fork/join parallel mechanism: before a parallel region, the serial thread forks a team of parallel threads; at the end of the parallel region, it waits for all parallel tasks to finish (join) and then resumes serial execution.

OpenMP is commonly used in two ways: one is to parallelize a serial program loop by loop through simple fork/join; the other is the single-program-multiple-data (SPMD) style, in which every thread executes the same code on its own share of the data.
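The loop-level fork/join pattern maps directly onto a single pragma. The sketch below sums a vector in parallel; compile with `-fopenmp` to enable the directive (without it the pragma is simply ignored and the loop runs serially, producing the same result):

```cpp
#include <vector>

// Fork/join in practice: the serial thread forks a team at the pragma,
// the loop iterations are divided among the threads, each thread keeps a
// private partial sum (the reduction clause), and all threads join and
// combine their partial sums before the function returns.
double parallel_sum(const std::vector<double>& v) {
    double sum = 0.0;
#pragma omp parallel for reduction(+ : sum)
    for (long i = 0; i < static_cast<long>(v.size()); ++i) {
        sum += v[i];
    }
    return sum;
}
```

This also illustrates incremental parallelization: the serial loop is unchanged, and removing the pragma recovers the original program.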

Advantages: First, it is a shared-memory model, so the programmer does not have to partition and distribute data, which makes parallel programs easier to develop. Second, it is well suited to SMP systems. Third, it mainly targets loop-level parallelism, which makes incremental parallelization easy.

Disadvantages: First, OpenMP is only suitable for shared-memory (SMP) architectures. Second, because it mainly targets loop-level parallelism, it does not fit every application. Third, writing, correctness debugging, and performance tuning of OpenMP programs can be complex.

Reference: OpenMP ARB

3. Intel IPP

Intel IPP (Integrated Performance Primitives) is Intel's second-generation integrated performance function library. Intel releases an IPP version tuned for each new multicore processor; the library is designed for multicore architectures and provides optimized routines for mathematics, signal processing, audio and video, image processing and coding, strings, and cryptography.

Advantages: It is a highly optimized library with high execution efficiency.

Disadvantages: It is specific to Intel processors, and code in some domains is inconvenient to port.

Reference: Intel IPP Product introduction

4. Intel TBB

Intel TBB (Threading Building Blocks) is a C++ template library from Intel for creating reliable, portable, and scalable parallel programs. It specializes in letting programmers write parallel C++ at a high level of abstraction while keeping the code portable.

Advantages: Portable and scalable.

Disadvantages: Its performance is not as high as IPP's.

Reference: Intel TBB Product Introduction

5. MapReduce

MapReduce is a distributed programming model developed by Google for processing massive data sets on large clusters. The programmer supplies a map function, which turns input records into intermediate key/value pairs, and a reduce function, which merges all intermediate values that share a key.
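The model is usually illustrated with word count. The sketch below is plain single-process C++ standing in for the two user-supplied phases (a real MapReduce framework would distribute many map and reduce tasks across a cluster): map emits (word, 1) pairs, the shuffle groups pairs by key, and reduce sums the counts per key.

```cpp
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Word count in the MapReduce style, illustrative only.
std::map<std::string, int> word_count(const std::vector<std::string>& docs) {
    // Map + shuffle: for each word, emit the pair (word, 1); grouping the
    // pairs by key is the shuffle phase.
    std::map<std::string, std::vector<int>> grouped;
    for (const std::string& doc : docs) {
        std::istringstream in(doc);
        std::string word;
        while (in >> word) grouped[word].push_back(1);
    }
    // Reduce: merge all values that share a key by summing them.
    std::map<std::string, int> counts;
    for (const auto& kv : grouped) {
        int total = 0;
        for (int one : kv.second) total += one;
        counts[kv.first] = total;
    }
    return counts;
}
```

In a real framework each document (or file split) would be handled by a separate map task, and each key range by a separate reduce task, on different machines.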

Reference: http://www.mapreduce.org/

In addition, there are other parallel programming models such as X3H5, Pthreads, and HPF, but they are less commonly used.
