Introduction to MPI

When it comes to parallel computing, MPI programming is a topic that cannot be bypassed. MPI (Message Passing Interface) is a cross-language communication protocol for writing parallel programs; it supports both point-to-point and broadcast communication. MPI is a message-passing application programming interface, complete with protocol and semantic specifications that state how its features must behave in any implementation. The goals of MPI are high performance, scalability, and portability, and it remains the dominant model in high-performance computing today. Unlike OpenMP parallel programs, MPI is a parallel programming technique based on message passing. The message-passing interface is a programming interface standard, not a specific programming language; in short, the MPI standard defines a set of portable programming interfaces.
A previous article, "How to configure an MPI parallel programming environment on Win10 + VS2013", describes in detail how to set up MPI under Windows 10. Readers who have not yet configured a programming environment are encouraged to read it before continuing (after all, a parallel machine is not something everyone has at hand).

Basic MPI functions

The total number of MPI interface calls is huge, but practical experience with writing MPI programs shows that only a limited number of them are actually needed. Here are the 6 most basic MPI functions.
1. MPI_Init(...);
2. MPI_Comm_size(...);
3. MPI_Comm_rank(...);
4. MPI_Send(...);
5. MPI_Recv(...);
6. MPI_Finalize();
Below, a simple example illustrates the basic use of these 6 MPI functions.

Function introduction

1. int MPI_Init(int *argc, char ***argv)

This should normally be the first MPI function called; it initializes the parallel environment, and the code from here up to the MPI_Finalize() call is executed once in each process.
– Apart from MPI_Initialized(), all other MPI functions must be called after it.
– The MPI system obtains the command-line arguments through argc and argv (which means main must take these parameters, otherwise an error occurs).

2. int MPI_Finalize(void)

– Exits the MPI system; every process must call it in order to terminate normally. It marks the end of the parallel code; processes other than the main process end here.
– Serial code can still run afterwards on the main process (rank = 0), but no further MPI functions may be called (not even MPI_Init()).
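
The following is a minimal sketch of how MPI_Init() and MPI_Finalize() bracket the parallel region (the printed text is illustrative):

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);                /* first MPI call: initialize the parallel environment */
    printf("Hello from an MPI process\n"); /* executed once in every process */
    MPI_Finalize();                        /* last MPI call: every process must reach it */
    return 0;                              /* only serial, non-MPI code may follow */
}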

3. int MPI_Comm_size(MPI_Comm comm, int *size)

– Obtains the number of processes, size.
– The communicator comm specifies a communication space and, with it, the set of processes that share that space and make up the communicator's group.
– The call returns the number of processes contained in the group of the communicator comm.

4. int MPI_Comm_rank(MPI_Comm comm, int *rank)

– Obtains the rank of the calling process in the communication space, i.e. its logical number within the group (the rank is an integer between 0 and p-1 and serves as the process's ID).
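
A minimal sketch combining the two calls (the printed wording is illustrative):

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int size, rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* number of processes in the group */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank: 0 .. size-1 */
    printf("I am process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}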

5. int MPI_Send(void *buff, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)

– void *buff: the address of the data you want to send.
– int count: the number of data items to send (note: not the byte length; to send one int, fill in 1 here, while sending the string "hello" means filling in 6, because the terminating '\0' of a C string needs one extra element).
– MPI_Datatype datatype: the type of the data to send; you must use one of the MPI-defined datatypes (e.g. MPI_INT, MPI_CHAR), whose full list is easy to find online and is not repeated here.
– int dest: the destination process number; fill in the rank of the process you want to send to.
– int tag: the message tag; the receiver must specify the same tag to receive this message.
– MPI_Comm comm: the communicator, indicating within which group the message is sent.
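
A minimal sketch of a send, assumed to sit inside an initialized MPI program on a non-zero rank, with rank 0 posting a matching receive (the tag value 99 is illustrative):

char message[100];
strcpy(message, "Hello world!");                 /* data to send */
MPI_Send(message, strlen(message) + 1, MPI_CHAR, /* count includes the '\0' */
         0,                                      /* dest: send to rank 0 */
         99,                                     /* tag: must match the receiver's */
         MPI_COMM_WORLD);                        /* communicator */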


6. int MPI_Recv(void *buff, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)

– void *buff: the address of the buffer in which to store the received message.
– int count: the capacity of the receive buffer in data items (note: not the byte length; as with MPI_Send, one int means 1 and the string "hello" needs 6). It is an upper bound on the length of the data received; the exact number of items actually received can be obtained by calling MPI_Get_count.
– MPI_Datatype datatype: the type of the data to receive, again one of the MPI-defined datatypes.
– int source: the rank of the process from which the message is to be received.
– int tag: the message tag; it must equal the tag used by the sender for the message to be received.
– MPI_Comm comm: the communicator.
– MPI_Status *status: the message status. When the receive returns, the variable pointed to by this parameter holds information about the message actually received, including the rank of the source process, the message tag, the number of data items, and so on.
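
A minimal sketch of a matching receive on rank 0, including the MPI_Get_count call mentioned above (the source rank, tag, and buffer size are illustrative and must match the sender):

char message[100];
MPI_Status status;
int actual_count;
MPI_Recv(message, 100, MPI_CHAR,  /* buffer holds at most 100 chars */
         1,                       /* source: expect the message from rank 1 */
         99,                      /* tag: the same value the sender used */
         MPI_COMM_WORLD, &status);
MPI_Get_count(&status, MPI_CHAR, &actual_count); /* exact number of chars received */
printf("Received %d chars: %s\n", actual_count, message);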

Example

With the basic functions covered, let's work through an example to reinforce our understanding of them.

#include <stdio.h>
#include <string.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int numprocs, myid, source;
    MPI_Status status;
    char message[100];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    if (myid != 0) {
        /* every process other than rank 0 sends a message */
        strcpy(message, "Hello world!");
        MPI_Send(message, strlen(message) + 1, MPI_CHAR, 0, 99, MPI_COMM_WORLD);
    } else {
        /* myid == 0: process 0 receives a message from each other process */
        for (source = 1; source < numprocs; source++) {
            MPI_Recv(message, 100, MPI_CHAR, source, 99, MPI_COMM_WORLD, &status);
            printf("Received the message sent by process %d: %s\n", source, message);
        }
    }
    MPI_Finalize();
    return 0;
} /* end main */

The result of running the program is shown below.

[Figure: execution results]

As can be seen, when the author ran with four processes, processes 1-3 sent messages and process 0 received and printed them; when the author ran with eight processes, processes 1-7 sent messages and process 0 received and printed them.


[Figure: message sending schematic]
This article uses the standard blocking send and receive. Message passing is the defining feature of MPI and the most difficult part of learning it, so some important notes on the parameters of the send and receive functions follow.

1. The information MPI uses to identify a message contains four fields:

– Source: determined implicitly by the sending process, whose rank identifies it uniquely.
– Destination: determined by the dest parameter of the send function.
– Tag: determined by the tag parameter of the send function; valid values range from 0 to an implementation-defined upper bound UB (the standard guarantees UB is at least 32767).
– Communicator: defaults to MPI_COMM_WORLD.
  – Group: finite, of size N, and ordered by rank: [0, 1, 2, ..., N-1].
  – Context: a kind of "super tag" used to identify the communication space.

2. Use of buffer

The buffer must be able to hold at least count items of the type specified by datatype. If the receive buffer is too small, the incoming message overflows it and an error results.
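
A small sketch of the sizing rule, with illustrative values, assumed to run inside an initialized MPI program:

int buf[10]; /* the buffer provides room for the full count below */
MPI_Recv(buf, 10, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
/* Passing count = 10 while buf were declared as int buf[5] would overflow
   the array; a sender transmitting more than 10 ints would instead trigger
   an MPI truncation error. */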

3. Message Matching

– Matching is by parameters: the receive's (source, tag, comm) against the send's (dest, tag, comm).
– source == MPI_ANY_SOURCE: receive data from any sender (a message from any source).
– tag == MPI_ANY_TAG: match a message with any tag value.

4. In a blocking message, source == dest is not allowed, otherwise it will cause a deadlock.
5. Message passing is restricted to a single communication domain.
6. The send function must specify a unique receiver.
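
A minimal sketch of the wildcard matching described in point 3 above, assumed to run inside an initialized MPI program (buffer size and printed wording are illustrative):

char buf[100];
MPI_Status status;
/* accept a message from any sender, with any tag */
MPI_Recv(buf, 100, MPI_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);
/* the actual envelope can be read back from the status object */
printf("message from rank %d with tag %d\n",
       status.MPI_SOURCE, status.MPI_TAG);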

Original article: https://www.jianshu.com/p/2fd31665e816
