Author: Menglong Tan; Email: tanmenglong_at_gmail; Twitter/weibo: @crackcell; Source: http://blog.crackcell.com/posts/2013/07/15/mpi_quick_start.html
Table of Contents
- 1 Preface
- 2 Development environment settings
- 3 Learn by example
- 3.1 Example 1: Hello World
- 3.2 Code Structure
- 3.3 Some basic APIs
- 3.3.1 Initialize the environment: MPI_Init
- 3.3.2 Check whether initialized: MPI_Initialized
- 3.3.3 Terminate the environment: MPI_Finalize
- 3.3.4 Get the number of processes: MPI_Comm_size
- 3.3.5 Get the current process ID: MPI_Comm_rank
- 3.3.6 Get the host name where the program runs: MPI_Get_processor_name
- 3.3.7 Terminate all processes of a communicator: MPI_Abort
- 3.4 Example 2: A little more complicated
- 3.5 Basic Communication API
- 3.5.1 Message data types
- 3.5.2 Point-to-point communication API
- 3.6 Example 3: Blocking message passing
- 3.7 Collective Communication API
- 3.7.1 Block until other tasks in the same group complete: MPI_Barrier
- 3.7.2 Broadcast messages: MPI_Bcast
- 3.7.3 Scatter messages: MPI_Scatter
- 3.7.4 Gather messages: MPI_Gather
- 3.8 Groups and communicators
1 Preface
For some reason, introductory tutorials for MPI seem to be scarce and unclear. I read through a few tutorials today and summarize here the basics you need to get started.
2 Development environment settings
Environment: Debian Sid. Install the development environment:
$ sudo apt-get install openmpi-bin openmpi-doc libopenmpi-dev gcc g++
3 Learn by example
3.1 Example 1: Hello World

#include <iostream>
#include <mpi/mpi.h>

using namespace std;

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    cout << "hello world" << endl;
    MPI_Finalize();
    return 0;
}
Compile:
$ mpicxx -o hello.exe hello.cpp
Run:
$ mpirun -np 10 ./hello.exe
- The -np 10 parameter launches 10 copies of the program.
3.2 Code Structure
Looking back at the code, the structure of an MPI program is generally:
- Header files and global definitions
- Initialize the MPI environment: MPI_Init()
- Distributed code
- Terminate the MPI environment: MPI_Finalize()
- End
3.3 Some basic APIs
3.3.1 Initialize the environment: MPI_Init

#include <mpi.h>
int MPI_Init(int *argc, char ***argv)
3.3.2 Check whether initialized: MPI_Initialized

#include <mpi.h>
int MPI_Initialized(int *flag)
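A minimal sketch of how the flag can be used; the guard pattern here is my own illustration, not from the original examples:

#include <mpi.h>

int main(int argc, char *argv[]) {
    int flag = 0;
    MPI_Initialized(&flag);       /* non-zero once MPI_Init has been called */
    if (!flag) {
        MPI_Init(&argc, &argv);   /* MPI_Init must be called at most once */
    }
    MPI_Finalize();
    return 0;
}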
3.3.3 Terminate the environment: MPI_Finalize

#include <mpi.h>
int MPI_Finalize(void)
3.3.4 Get the number of processes: MPI_Comm_size
Gets the number of processes in a communicator:

#include <mpi.h>
int MPI_Comm_size(MPI_Comm comm, int *size)

If the communicator is MPI_COMM_WORLD, this is the number of processes available to the current program.
3.3.5 Get the current process ID: MPI_Comm_rank

#include <mpi.h>
int MPI_Comm_rank(MPI_Comm comm, int *rank)
3.3.6 Get the host name where the program runs: MPI_Get_processor_name

#include <mpi.h>
int MPI_Get_processor_name(char *name, int *resultlen)
3.3.7 Terminate all processes of a communicator: MPI_Abort

#include <mpi.h>
int MPI_Abort(MPI_Comm comm, int errorcode)
3.4 Example 2: A little more complicated
#include <stdio.h>
#include <mpi/mpi.h>

int main(int argc, char *argv[]) {
    char hostname[MPI_MAX_PROCESSOR_NAME];
    int task_count;
    int rank;
    int len;
    int ret;

    ret = MPI_Init(&argc, &argv);
    if (MPI_SUCCESS != ret) {
        printf("start MPI fail\n");
        MPI_Abort(MPI_COMM_WORLD, ret);
    }

    MPI_Comm_size(MPI_COMM_WORLD, &task_count);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(hostname, &len);
    printf("task_count = %d, my rank = %d on %s\n", task_count, rank, hostname);

    MPI_Finalize();
    return 0;
}
Run it:
$ mpirun -np 3 ./hello3.exe
task_count = 3, my rank = 0 on crackcell-vm0
task_count = 3, my rank = 1 on crackcell-vm0
task_count = 3, my rank = 2 on crackcell-vm0
3.5 Basic Communication API
- MPI provides buffering for messages
- Messages can be sent in a blocking or non-blocking manner
- Ordering: MPI guarantees that the receiver receives messages in the same order the sender sent them
- Fairness: MPI does not guarantee scheduling fairness; programmers must guard against process starvation themselves
3.5.1 Message data types
For portability, MPI defines its own message data types; see footnote 1 for details.
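Some of the commonly used types and the C types they correspond to (a partial list; see footnote 1 for the full table):
- MPI_CHAR: signed char
- MPI_INT: signed int
- MPI_LONG: signed long int
- MPI_FLOAT: float
- MPI_DOUBLE: double
- MPI_BYTE: 8 binary digits (untyped bytes)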
3.5.2 Point-to-point communication API
- Blocking send: MPI_Send

int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)

- Non-blocking send: MPI_Isend

int MPI_Isend(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)

- Blocking receive: MPI_Recv

int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)

- Non-blocking receive: MPI_Irecv

int MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request *request)
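Example 3 below only demonstrates the blocking calls, so here is a minimal sketch of the non-blocking pair, assuming the standard MPI_Waitall call to complete outstanding requests (run it with -np 2; extra ranks simply skip the exchange):

#include <stdio.h>
#include <mpi/mpi.h>

int main(int argc, char *argv[]) {
    int rank;
    int peer;
    char in_msg;
    char out_msg = 'x';
    MPI_Request reqs[2];
    MPI_Status stats[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank < 2) {
        peer = 1 - rank;
        /* post the receive and the send; neither call blocks */
        MPI_Irecv(&in_msg, 1, MPI_CHAR, peer, 1, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&out_msg, 1, MPI_CHAR, peer, 1, MPI_COMM_WORLD, &reqs[1]);
        /* computation could overlap with communication here */
        MPI_Waitall(2, reqs, stats);   /* block until both requests complete */
        printf("task %d: got '%c' from task %d\n", rank, in_msg, stats[0].MPI_SOURCE);
    }

    MPI_Finalize();
    return 0;
}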
3.6 Example 3: Blocking message passing
#include <stdio.h>
#include <mpi/mpi.h>

int main(int argc, char *argv[]) {
    int task_count;
    int rank;
    int dest;
    int src;
    int count;
    int tag = 1;
    char in_msg;
    char out_msg = 'x';
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &task_count);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (0 == rank) {
        dest = 1;
        src = 1;
        // send a character to task 1, then wait for its reply
        MPI_Send(&out_msg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
        MPI_Recv(&in_msg, 1, MPI_CHAR, src, tag, MPI_COMM_WORLD, &status);
    } else if (1 == rank) {
        dest = 0;
        src = 0;
        // wait for the character from task 0, then send one back
        MPI_Recv(&in_msg, 1, MPI_CHAR, src, tag, MPI_COMM_WORLD, &status);
        MPI_Send(&out_msg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
    }

    MPI_Get_count(&status, MPI_CHAR, &count);
    printf("task %d: recv %d char(s) from task %d with tag %d\n",
           rank, count, status.MPI_SOURCE, status.MPI_TAG);

    MPI_Finalize();
    return 0;
}

3.7 Collective Communication API
- Collective communication must involve all processes in the same communicator
- Types of collective communication operations:
- Synchronization: a process waits until all other members of the group reach the synchronization point
- Data movement: broadcast and scatter/gather operations
- Collective computation: one member collects data from the other members and performs an operation on it
3.7.1 Block until other tasks in the same group complete: MPI_Barrier

#include <mpi.h>
int MPI_Barrier(MPI_Comm comm)
3.7.2 Broadcast messages: MPI_Bcast

#include <mpi.h>
int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
3.7.3 Scatter messages: MPI_Scatter

#include <mpi.h>
int MPI_Scatter(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
3.7.4 Gather messages: MPI_Gather

#include <mpi.h>
int MPI_Gather(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
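To tie these calls together, here is a sketch of my own (the buffer sizes and MPI_FLOAT element type are illustrative choices): the root scatters an array, every task doubles its slice, and the root gathers the results back in rank order.

#include <stdio.h>
#include <mpi/mpi.h>

#define N 4  /* elements per task; the 64-element buffers assume at most 16 tasks */

int main(int argc, char *argv[]) {
    int task_count, rank, i;
    float send[64], slice[N], result[64];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &task_count);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (0 == rank) {  /* only the root needs to fill the full array */
        for (i = 0; i < N * task_count; i++) send[i] = (float)i;
    }

    /* every task receives its own N-element slice of send */
    MPI_Scatter(send, N, MPI_FLOAT, slice, N, MPI_FLOAT, 0, MPI_COMM_WORLD);

    for (i = 0; i < N; i++) slice[i] *= 2.0f;  /* local computation */

    MPI_Barrier(MPI_COMM_WORLD);  /* purely to demonstrate MPI_Barrier */

    /* root collects the processed slices back, ordered by rank */
    MPI_Gather(slice, N, MPI_FLOAT, result, N, MPI_FLOAT, 0, MPI_COMM_WORLD);

    if (0 == rank) {
        for (i = 0; i < N * task_count; i++) printf("%.1f ", result[i]);
        printf("\n");
    }

    MPI_Finalize();
    return 0;
}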
For more APIs, see footnote 2.
3.8 Groups and communicators
- A group is an ordered set of processes; each process in it has a unique integer identifier (its rank)
- A communicator organizes a set of processes that need to communicate with each other. MPI_COMM_WORLD contains all processes
In short, a group is used to organize a set of processes, while a communicator associates the communication relationships between them.
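MPI_Comm_split is not covered above, but it is the standard call for carving MPI_COMM_WORLD into smaller communicators; a minimal sketch:

#include <stdio.h>
#include <mpi/mpi.h>

int main(int argc, char *argv[]) {
    int world_rank, sub_rank, color;
    MPI_Comm sub_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* same color -> same new communicator: evens in one, odds in the other */
    color = world_rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &sub_comm);

    MPI_Comm_rank(sub_comm, &sub_rank);  /* rank within the new, smaller group */
    printf("world rank %d -> color %d, sub rank %d\n", world_rank, color, sub_rank);

    MPI_Comm_free(&sub_comm);
    MPI_Finalize();
    return 0;
}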
Footnotes:
1 MPI tutorial, https://computing.llnl.gov/tutorials/mpi/#Point_to_Point_Routines
2 Collective communication, https://computing.llnl.gov/tutorials/mpi/#Collective_Communication_Routines
Date: Mon Jul 15 11:55:20 2013
© Menglong Tan
"Go" MPI Getting Started