Point-to-point communication requires a matching send/receive pair.
There are 12 send calls in total: one group of 4 for the blocking mode and two groups of 4 for the non-blocking modes (non-persistent and persistent), summarized in the table below.
Category | Send | Receive | Description
Blocking communication | MPI_Send, MPI_Bsend, MPI_Rsend, MPI_Ssend | MPI_Recv, MPI_Irecv, MPI_Recv_init | If the receive uses MPI_Irecv or MPI_Recv_init, the MPI_Request object must be tested for completion.
Non-blocking communication (non-persistent) | MPI_Isend, MPI_Ibsend, MPI_Irsend, MPI_Issend | MPI_Recv, MPI_Irecv, MPI_Recv_init | The MPI_Request object must be tested and waited on for completion.
Non-blocking communication (persistent, i.e. repeated) | MPI_Send_init, MPI_Bsend_init, MPI_Rsend_init, MPI_Ssend_init | MPI_Recv, MPI_Irecv, MPI_Recv_init | Same as above.
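To make the persistent (repeated) category and the MPI_Request testing mentioned in the table concrete, here is a minimal sketch, not taken from the original article; it assumes exactly two processes, and all variable names are illustrative. The persistent requests are built once and then restarted on every iteration.
#include <mpi.h>
#include <iostream>
int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int sendval = 0, recvval = -1;
    int peer = (rank == 0) ? 1 : 0;   // assumes exactly two processes
    MPI_Request reqs[2];
    // Build the persistent send/receive requests once...
    MPI_Send_init(&sendval, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Recv_init(&recvval, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &reqs[1]);
    for (int iter = 0; iter < 3; iter++) {
        sendval = rank * 10 + iter;
        // ...then restart and complete them on every iteration.
        MPI_Startall(2, reqs);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        std::cout << "rank " << rank << " iter " << iter << " got " << recvval << std::endl;
    }
    MPI_Request_free(&reqs[0]);
    MPI_Request_free(&reqs[1]);
    MPI_Finalize();
    return 0;
}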
The send functions follow the naming pattern MPI_*send: B stands for buffered, R for ready, S for synchronous, and I for immediate (non-blocking). Without any modifier, MPI_Send is the standard mode.
Data transmission process of message communication:
A. The sender calls MPI_*send to send data;
B. The MPI environment extracts the data from the send buffer and assembles the message;
C. The assembled message is sent to the target process;
D. The receiver receives a matching message and unpacks it into the receive buffer.
2.1 Blocking Communication
Blocking communication means that the send call on the sender side completes only with the cooperation of the matching Recv call on the receiver side.
· Standard communication mode
For details, see the example below: a call in the standard blocking mode.
// Eg2: calls in the standard blocking mode
#include <mpi.h>
#include <iostream>
using namespace std;
const int buf_size = 10;   // buffer length; the value is illustrative
int main(int argc, char* argv[])
{
    int myid, numprocs, proid;
    int sb[buf_size], rb[buf_size];
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    for (int i = 0; i < buf_size; i++) { sb[i] = myid + 1; }
    if (myid == 0) proid = 1;
    if (myid == 1) proid = 0;
    if (myid == 0) {
        cout << "process " << myid << " of " << numprocs << " trying send..." << endl;
        MPI_Send(sb, buf_size, MPI_INT, proid, 1, MPI_COMM_WORLD);
        cout << "process " << myid << " of " << numprocs << " trying recv..." << endl;
        MPI_Recv(rb, buf_size, MPI_INT, proid, 1, MPI_COMM_WORLD, &status);
    }
    if (myid == 1) {
        cout << "process " << myid << " of " << numprocs << " trying recv..." << endl;
        MPI_Recv(rb, buf_size, MPI_INT, proid, 1, MPI_COMM_WORLD, &status);
        cout << "process " << myid << " of " << numprocs << " trying send..." << endl;
        MPI_Send(sb, buf_size, MPI_INT, proid, 1, MPI_COMM_WORLD);
    }
    cout << "hello, process " << myid << " of " << numprocs << endl;
    MPI_Finalize();
    return 0;
}
· Buffered communication mode
This mode is mainly used to decouple the send from the matching receive in blocking communication.
APIs used (see the sketch after this list):
MPI_Pack_size: compute the buffer size needed for each message;
MPI_Buffer_attach: attach a buffer to be used for communication;
MPI_Bsend / MPI_Recv: send and receive the data;
MPI_Buffer_detach: detach the communication buffer.
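A minimal sketch of the buffered mode, assuming exactly two processes and a single-int message; buffer sizing and variable names are illustrative, not from the original article.
#include <mpi.h>
#include <cstdlib>
#include <iostream>
int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int msg = rank, pack_size = 0;
    // Compute the space one int message needs, then add the per-message
    // overhead MPI_BSEND_OVERHEAD before attaching the buffer.
    MPI_Pack_size(1, MPI_INT, MPI_COMM_WORLD, &pack_size);
    int buf_len = pack_size + MPI_BSEND_OVERHEAD;
    void* buf = std::malloc(buf_len);
    MPI_Buffer_attach(buf, buf_len);
    if (rank == 0) {
        // Returns as soon as the message is copied into the attached buffer.
        MPI_Bsend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        std::cout << "rank 1 received " << msg << std::endl;
    }
    // Detach blocks until all buffered messages have been delivered.
    void* detached;
    int detached_len;
    MPI_Buffer_detach(&detached, &detached_len);
    std::free(detached);
    MPI_Finalize();
    return 0;
}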
· Ready communication mode
Data can be sent only after the receiver has posted the matching receive and is ready. (The send call itself looks exactly like the standard send, but it additionally tells the MPI environment that the receive has already been posted, so the data can be sent directly.)
API used: MPI_Rsend;
When necessary, the processes must synchronize at a suitable point, e.g. with MPI_Barrier(MPI_COMM_WORLD), so the sender knows the receive has been posted, as in the sketch below.
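A minimal sketch of the ready mode, assuming exactly two processes; the receiver posts MPI_Irecv before the barrier so that MPI_Rsend is legal afterwards. Variable names are illustrative.
#include <mpi.h>
#include <iostream>
int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int msg = 42;
    MPI_Request req;
    if (rank == 1) {
        // Post the receive first, so it is guaranteed to exist before the barrier.
        MPI_Irecv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
    }
    // Every process passes the barrier only after rank 1 has posted its receive.
    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 0) {
        // Ready send: legal only because the matching receive is already posted.
        MPI_Rsend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        std::cout << "rank 1 received " << msg << std::endl;
    }
    MPI_Finalize();
    return 0;
}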
· Synchronous communication mode
Whether or not the receiver has started the receive, the sender may start the send at any time. However, the send can complete only after the receiver has started the matching receive and begun receiving the data.
API used: MPI_Ssend;
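A minimal sketch of the synchronous mode, assuming two processes; the only change from the standard example is swapping MPI_Send for MPI_Ssend, whose completion guarantees the receive has started.
#include <mpi.h>
#include <iostream>
int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int msg = rank;
    if (rank == 0) {
        // MPI_Ssend returns only after rank 1 has started the matching receive,
        // so its completion also tells us the receive has begun.
        MPI_Ssend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        std::cout << "rank 0: receiver has started receiving" << std::endl;
    } else if (rank == 1) {
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
    return 0;
}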
Note: the communication modes differ only in how they send; receiving is done with MPI_Recv in all of them.
Recommended usage: the ready mode suits short messages, and the synchronous mode suits long messages. An MPI implementation that ignores these performance optimizations may implement the ready mode exactly like the standard mode.
2.2 Non-blocking Communication
Overlapping communication with computation can greatly improve performance, especially on systems with dedicated communication hardware. Multithreading is one important way to achieve this overlap; non-blocking communication operations are another, as in the sketch below.
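A minimal sketch of overlapping computation with non-blocking communication, assuming exactly two processes; the loop is a placeholder workload and all names are illustrative.
#include <mpi.h>
#include <iostream>
int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int sendval = rank, recvval = -1;
    int peer = (rank == 0) ? 1 : 0;   // assumes exactly two processes
    MPI_Request reqs[2];
    // Start the communication, then keep computing while it is in flight.
    MPI_Isend(&sendval, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&recvval, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &reqs[1]);
    double local = 0.0;
    for (int i = 0; i < 1000000; i++)   // placeholder computation overlapped with communication
        local += i * 0.5;
    // Wait for both requests to finish before using recvval.
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    std::cout << "rank " << rank << " received " << recvval << std::endl;
    MPI_Finalize();
    return 0;
}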
More: http://blog.donews.com/me1105/archive/2011/02/15/129.aspx